#StackBounty: #node.js #amazon-s3 #aws-sdk #aws-sdk-nodejs #wasabi-hot-cloud-storage Image corrupted from s3

Bounty: 50

This is my upload code (Node):

// generate a uuid string
const uuid = uuidv4(); // type string

// upload to wasabi

fs.readFile(file.tempFilePath, "utf8", (err, data) => {
    if (err) {
        throw (err);
    }
    // upload the object
    s3.upload(
        {
            Body: data,
            Key: uuid + "-" + file.name,
            Bucket: "files",
            Metadata: { 'ext': ".png" },
            // ContentLength: file.size
        }, (err, data) => {
            if (err) {
                console.log(err);
                throw (err);
            }
            console.log(data);
        });
});

The object actually does get to the bucket. It has more than 0 bytes. It is about half the size of the source image, but I'm just assuming someone is compressing it (Wasabi or something).
I am using express-fileupload and my config for it is:


app.use(fileUpload({

    createParentPath: true,

    limits: {
        fileSize: 2 * 1024 * 1024 * 1024 //2MB max file(s) size
    },

    useTempFiles: true,

    tempFileDir: "/tmp/",

}));

Eventually I download the file directly from the Wasabi web client and try to open it in an image viewer; every viewer says there is an error in the file or that it doesn't support the file type (which is .png, so it is supported).
The image is only ~150 KB.
Essentially the file gets there, but it's corrupted.
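
For comparison, here is a minimal Python/boto3 sketch of the same upload done with the file read as raw bytes instead of decoded text; the bucket name is taken from the question, but the file path and object key are illustrative. Passing a binary file through a text decode/encode round trip is the classic way a PNG ends up "uploaded but corrupted", so the point of the sketch is simply that no character encoding is involved:

import boto3

s3 = boto3.client("s3")  # endpoint/credential configuration omitted

# Read the temp file as raw bytes; no "utf8" (or any) text decoding step.
with open("/tmp/upload-temp-file", "rb") as f:
    s3.put_object(
        Bucket="files",
        Key="some-uuid-image.png",      # illustrative key
        Body=f.read(),
        Metadata={"ext": ".png"},
    )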


Get this bounty!!!

#StackBounty: #amazon-s3 #amazon-kinesis #amazon-kinesis-firehose kinesis data firehose log data is in encrypted form

Bounty: 200

I created an AWS cross-account log data sharing setup with subscriptions by following this link.

After creating the Kinesis stream, I created Kinesis Data Firehose delivery streams to save the logs in an S3 bucket.

Log files are being created in the S3 bucket, but in encrypted form.


And on the sender side there is no KMS key ID.


How can I see the logs?

I am also not able to decode it manually as base64.

Updated:
I found that the log files stored in the S3 bucket have "Content-Type: application/octet-stream". When I update the content type to "text/plain"...
Is there any way to set the content type at the bucket level, or to configure it in the Kinesis data stream or Firehose?


Is there any way to set the content type on the Kinesis stream, or to set a default content type for the S3 folder?
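
If the unreadable content is actually the gzip-compressed payload that CloudWatch Logs subscriptions deliver to Firehose (a common cause of exactly this symptom, though only an assumption here), a short Python sketch for inspecting one of the delivered objects looks like this; the bucket and key names are illustrative:

import gzip
import boto3

s3 = boto3.client("s3")

# Illustrative bucket/key; use an object that Firehose actually wrote.
obj = s3.get_object(Bucket="my-log-bucket", Key="2021/01/01/delivery-file")
raw = obj["Body"].read()

# CloudWatch Logs subscription data arrives gzip-compressed, so the object
# looks like binary noise until it is decompressed.
print(gzip.decompress(raw).decode("utf-8"))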


Get this bounty!!!

#StackBounty: #amazon-s3 #file-transfer Is there a way to make CrossFTP or DragonDisk access path-style s3 server?

Bounty: 50

I'm trying to find a way to speed up the transfer of 95 GB of data to my company's object storage.

I'm transferring the files via node/aws-sdk, but it's too slow.
I already tried ManagedUpload and multipart upload, with the same result (too slow).

After that I found s3fs… same thing.

So now I discovered CrossFTP and DragonDisk, which can access S3 services, but both use virtual-hosted-style URLs (bucket.site.com) and my corporate server uses path-style.

Is there a way to configure one or both of them to use path-style?

Also, does anyone know a faster way to transfer the files? Or any other FOSS S3 clients for Linux?

If I could upload the metadata with the files, that would be a plus.
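
The question is about GUI clients, but for reference this is roughly what path-style addressing looks like in code; a sketch using Python/boto3 with a hypothetical endpoint, since the corporate endpoint and bucket names are not given. The TransferConfig settings illustrate the kind of multipart parallelism that usually helps large transfers, and ExtraArgs shows where per-object metadata goes:

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Path-style means https://storage.example.com/bucket/key rather than
# https://bucket.storage.example.com/key (the endpoint here is hypothetical).
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",
    config=Config(s3={"addressing_style": "path"}),
)

# Larger parts and more concurrent uploads usually speed up bulk transfers.
transfer = TransferConfig(multipart_chunksize=64 * 1024 * 1024, max_concurrency=10)

s3.upload_file(
    "local/file.bin", "my-bucket", "remote/file.bin",
    Config=transfer,
    ExtraArgs={"Metadata": {"source": "migration"}},  # per-object metadata
)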


Get this bounty!!!

#StackBounty: #redirect #amazon-s3 #aws-lambda #amazon-cloudfront #aws-lambda-edge Cloudfront, lambda @ edge, S3 redirect objects,

Bounty: 50

I am building an S3 URL redirect, nothing special, just a bunch of zero-length objects with the WebsiteRedirectLocation metadata filled out. The S3 bucket is set to serve static websites, the bucket policy is set to public, etc. It works just fine.

HOWEVER: I also want to lock down certain files in the bucket, specifically some HTML files that are used to manage the redirects (like adding new redirects). With the traditional setup I can both use the redirects and serve the HTML page just fine. But in order to lock it down, I need to use CloudFront and Lambda@Edge as in these posts:

https://douglasduhaime.com/posts/s3-lambda-auth.html

http://kynatro.com/blog/2018/01/03/a-step-by-step-guide-to-creating-a-password-protected-s3-bucket/

I have modified the Lambda@Edge script to only prompt for a password IF the admin page (or one of its assets like CSS/JS) is requested. If the requested path is something else (presumably a redirect file), the user is not prompted for a password. And yes, I could also set a behavior rule in CloudFront to decide when to use the Lambda function to prompt for a password.

And it kind of works. When I follow these instructions and visit my site via the CloudFront URL, I do indeed get prompted for a password when I go to the root of my site, the admin page. However, the redirects will not work. If I try to load a redirect, the browser just downloads the object instead.

Now, in another post someone suggested that I change my CloudFront distribution's origin to the S3 bucket WEBSITE endpoint, which I think also means changing the bucket back to website mode and public. That sucks, because the bucket is then accessible outside of the CloudFront policy, which I do not want. Additionally, CloudFront no longer automatically serves the specified index file, which isn't the worst thing.

SO: is it possible to lock down my bucket and serve it entirely through CloudFront with Lambda@Edge, BUT also have CloudFront respect those redirects instead of just prompting a download? Is there a setting in CloudFront to respect the headers? Should I set up different behavior rules for the different files (HTML vs. redirects)?
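
For what it's worth, one workaround that is sometimes suggested for this situation is an origin-response Lambda@Edge trigger that turns S3's x-amz-website-redirect-location header into a real 301, since the REST endpoint returns that header but does not perform the redirect the way the website endpoint does. A rough Python sketch, assuming the header actually reaches the trigger in this setup:

def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # The S3 REST endpoint returns the redirect target as a header rather
    # than issuing a 301 itself; promote it to a real redirect if present.
    redirect = headers.get("x-amz-website-redirect-location")
    if redirect:
        response["status"] = "301"
        response["statusDescription"] = "Moved Permanently"
        headers["location"] = [{"key": "Location", "value": redirect[0]["value"]}]

    return response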


Get this bounty!!!

#StackBounty: #amazon-s3 #amazon-cloudfront Cloudfront not caching missing pages

Bounty: 50

To explain the problem: I have an S3 bucket with a static site and CloudFront as the CDN. On S3, the index and error documents are both index.html. So when I go to subdomain.example.com, I get served index.html and a Hit from CloudFront.

However, my static page is a Vue app with a router and the default path is /en, so when I reload the page at subdomain.example.com/en I get an Error from CloudFront. The same happens if I try to refresh it after it got a Hit the first time. Everything else (.css, .js, .img, ...) is cached fine.

I have S3 connected as the origin like this:

Origin Domain Name: s3.eu-central-1.amazonaws.com
Origin Path: /subdomain.example.com
Origin ID: subdomain.example.com
Minimum Origin SSL Protocol: TLSv1
Origin Protocol Policy: HTTP Only
Origin Response Timeout: 30
Origin Keep-alive Timeout: 5
HTTP Port: 80
HTTPS Port: 443

On CloudFront I also have custom error responses for 400, 403 and 404, all pointing to /index.html with response code 200.

Any ideas what I am doing wrong?



Get this bounty!!!

#StackBounty: #security #amazon-s3 #terraform How can you set up secure Terraform state storage for different infrastructure layers usi…

Bounty: 50

Context

After reading a lot about Terraform and playing with it in minor projects, I’d like to start using it in a real, production environment.

As the environment is mostly in AWS, I'd go for the S3 backend, but I'm open to changing this.

Task

I'd like to have separate Terraform projects (states) per infrastructure layer. Clearly, the top layers should be able to access the output of lower layers. I can use the Terraform remote state data source to get this data.

I’ve seen different setups around the internet.

Setup #1

|--globals
|--modules
|--infrastructure1
|  |--layer1
|  |  |--layer2

Setup #2

|--globals
|--modules
|--infrastructure1
|  |--layer1
|  |--layer2

Setup #3

Everything above has its own separate git repo.

Question

  • What would be the recommended code organisation for this?
  • What access rights do I have to add to the lower layers’ S3 buckets to keep their state safe, but still allow Terraform remote state to access it?


Get this bounty!!!

#StackBounty: #python-3.x #amazon-s3 #aws-lambda Extract specific column from csv file, and convert to string in Lambda using Python

Bounty: 50

I'm trying to get all email addresses in a comma-separated format from a specific column. This is coming from a CSV temp file in Lambda. My goal is to save that file in S3 with only one column containing the email addresses.

This is what the source data looks like:

[screenshot of the source CSV]

Here is my code:

#open file and extract email address
with open('/tmp/maillist.csv', 'w') as mail_file:
    wm = csv.writer(mail_file)
    mail_list = csv.reader(open('/tmp/filtered.csv', "r"))
    for rows in mail_list:
        ','.join(rows)
        wm.writerow(rows[3])
bucket.upload_file('/tmp/maillist.csv', key)

I was hoping to get a result like this:

[screenshot of the expected output]

But instead, I’m getting a result like this:

[screenshot of the actual output]

I also tried this code:

#open file and extract email address
mail_list = csv.reader(open('/tmp/filtered.csv', "r"))
with open('/tmp/maillist.csv', 'w') as mail_file:
    wm = csv.writer(mail_file)
    wm.writerow(mail_list[3])
bucket.upload_file('/tmp/maillist.csv', key)

But I get this error instead:

Response:
{
  "errorMessage": "'_csv.reader' object is not subscriptable",
  "errorType": "TypeError",
  "stackTrace": [
    "  File "/var/task/lambda_function.py", line 68, in lambda_handlern     wm.writerow(mail_list[3])n"

Any help is appreciated.
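
For what it's worth, a possible approach (a sketch, assuming the addresses really are in column index 3 and the first row is a header) is to wrap the single value in a list before writing it, so csv.writer treats it as one cell instead of iterating over the characters of the string:

import csv

with open('/tmp/filtered.csv', newline='') as src, \
     open('/tmp/maillist.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    next(reader, None)               # skip the header row, if any
    for row in reader:
        writer.writerow([row[3]])    # one-element list -> one cell per row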


Get this bounty!!!

#StackBounty: #ruby #amazon-web-services #amazon-s3 #encryption #aes How to use S3 SSE C (Server Side Encryption with Client Provided K…

Bounty: 250

I’m trying to upload a file to S3 and have it encrypted using the SSE-C encryption options. I can upload without the SSE-C options, but when I supply the sse_customer_key options I’m getting the following error:

ArgumentError: header x-amz-server-side-encryption-customer-key has field value "QkExM0JGRTNDMUUyRDRCQzA5NjAwNEQ2MjRBNkExMDYwQzBGQjcxODJDMjM0\n\nMUE2MTNENDRCOTcxRjA2Qzk1Mg=", this cannot include CR/LF

I'm not sure if the problem is with the key I'm generating or with the encoding. I've played around with different options here, but the AWS documentation is not very clear. The general SSE-C documentation says you need to supply an x-amz-server-side-encryption-customer-key header, which is described like this:

Use this header to provide the 256-bit, base64-encoded encryption key
for Amazon S3 to use to encrypt or decrypt your data.

However, if I look at the Ruby SDK documentation for uploading a file, the 3 options have slightly different descriptions:

  • :sse_customer_algorithm (String) — Specifies the algorithm to use to when encrypting the object (e.g.,
  • :sse_customer_key (String) — Specifies the customer-provided encryption key for Amazon S3 to use in
  • :sse_customer_key_md5 (String) — Specifies the 128-bit MD5 digest of the encryption key according to RFC

(I didn’t copy that wrong, the AWS documentation is literally half-written like that)

So the SDK documentation makes it seem like you supply the raw sse_customer_key and that it would base64-encode it on your behalf (which makes sense to me).

So right now I’m building the options like this:

  sse_customer_algorithm: :AES256,
  sse_customer_key: sse_customer_key,
  sse_customer_key_md5: Digest::MD5.hexdigest(sse_customer_key)

I previously tried doing Base64.encode64(sse_customer_key) but that gave me a different error:

Aws::S3::Errors::InvalidArgument: The secret key was invalid for the
specified algorithm

I’m not sure if I’m generating the key incorrectly or if I’m supplying the key incorrectly (or if it’s a different problem altogether).

This is how I’m generating the key:

require "openssl"
OpenSSL::Cipher.new("AES-256-CBC").random_key
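
For comparison, in Python's boto3 the raw 32-byte key is passed as-is and the SDK takes care of the base64 encoding and the MD5 digest headers itself; a sketch with illustrative bucket and object names:

import os
import boto3

s3 = boto3.client("s3")
key = os.urandom(32)            # 256-bit customer-provided key, raw bytes

s3.put_object(
    Bucket="my-bucket",         # illustrative names
    Key="example-object",
    Body=b"hello",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,         # boto3 base64-encodes this and adds the MD5 header
)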


Get this bounty!!!

#StackBounty: #python-3.x #amazon-s3 #aws-lambda #aws-lambda-layers Unable to import module 'lambda_function': No module named …

Bounty: 50

START RequestId: 3d5691d9-ad79-4eed-a26c-5bc3f1a23a99 Version: $LATEST
Unable to import module 'lambda_function': No module named 'pandas'
END RequestId: 3d5691d9-ad79-4eed-a26c-5bc3f1a23a99

I’m using Windows 7 as the host OS.

What I want to do

I simply want to use pandas in the AWS Lambda environment, just like I use it in the Windows environment. I am looking for a simple solution for Lambda.

What I have tried so far

  • Installed Xubuntu on a virtual box.
  • Then installed python 3.6 on it.
  • Then I installed pandas for Python 3.6 in Xubuntu.
  • Thereafter, I copied the folder contents in Xubuntu at location '/usr/local/lib/python3.6/site-packages/pandas' to my host OS.
  • In the host OS, I created the folder path like, python/lib/python3.6/site-packages/.
  • I then zipped the folder using 7-Zip and uploaded it to the S3 bucket. I have given public access to this folder in the S3 bucket.
  • In Lambda, the Python function is built for version 3.6. The Python script name is lambda_function.py. The Lambda function handler name is lambda_handler(). The code snippet looks like this:

import pandas as pd
import numpy as np  # also needed, for np.random below

def lambda_handler(event, context):

    dates = pd.date_range('2019001', periods=6)

    df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))
    print(df)
  • The handler is named lambda_function.lambda_handler. I have given the Lambda role the AWSLambdaFullAccess permission.
  • The timeout is set to 0 min and 3 sec.
  • The test event looks like this:

    {
    "key1": "This will be printed if all OK"
    }

I have tried the following solutions:

  • Tried precompiled Linux-compatible binaries for pandas & numpy from here, with no luck.
  • In Lambda, changed the handler info to python_filename.function_name. In my case that was lambda_function.lambda_handler; this failed with the "No module named 'pandas'" error.
  • Placed the Lambda function in the root folder, zipped the folder using 7-Zip and uploaded it to the S3 bucket. In my case, I placed the function at python\lib\python3.6\site_packages\lambda_function.py; this failed with the "No module named 'pandas'" error.
  • Already tried these related solutions posted on SO: 1, 2, 3, 4, 5, 6.

Note: I do not want to use Docker, because I do not know how to use it and I'm not willing to learn it, as I'm exasperated now. I'm coming from a Windows environment (it sucks, I now know).

Any ideas on how to get this to work?
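
One thing that may be worth double-checking (an assumption based on the description, not a confirmed diagnosis) is the layout inside the zip, because Lambda is picky about it. For a layer, the packages have to sit under a top-level python/ directory; for a plain deployment package, they sit at the zip root next to lambda_function.py. Roughly:

# Layer zip (attach the layer to the function):
python/pandas/...
python/numpy/...
# or equivalently:
python/lib/python3.6/site-packages/pandas/...

# Deployment-package zip (no layer):
lambda_function.py
pandas/...
numpy/...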


Get this bounty!!!