#StackBounty: #django #amazon-web-services #amazon-s3 #django-views #django-forms Django Form upload to AWS S3 with Boto3 results in em…

Bounty: 50

I have a multipart/form-data form that should upload a file to an S3 bucket using Boto3.

Everything appears to work as expected, but in the end the file in the S3 bucket has 0 bytes.

forms.py:

class DropOffFilesForm(forms.ModelForm):
    file = forms.FileField(
        error_messages={'invalid_type': _("Please upload only PDF, JPG, PNG, XLS(x), DOC(x) files")},
        validators=[FileTypeValidator(
            allowed_types=['application/pdf', 'image/*', 'application/msword',
                           'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
                           'application/vnd.ms-excel',
                           'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'],
            allowed_extensions=['.pdf', '.png', '.jpg', '.doc', '.docx', '.xls', '.xlsx'])],
        required=False)
    description = forms.CharField(widget=forms.Textarea, label=_("Description"), required=False)

    def clean_description(self):
        data = self.cleaned_data['description']
        return data

    class Meta:
        model = DropOffFiles
        exclude = ['filename',]
        fields = ['file', 'description']

views.py:

file_form = DropOffFilesForm(request.POST, request.FILES)

if file_form.is_valid():

    file_form = file_form.save(commit=False)

    s3 = boto3.client('s3', region_name=settings.AWS_ZONE,
                      aws_access_key_id=settings.AWS_KEY,
                      aws_secret_access_key=settings.AWS_SECRET)

    file = request.FILES['file']

    if file:

        file_type = file.content_type
        extension = file.name.split(".")[-1]

        # construct file name and location
        filename = calculate_md5(file) + '.' + extension

        # save the file
        try:
            response = s3.upload_fileobj(file, 'bucketname', filename)
        except ClientError as e:
            logging.error(e)
            return False

        file_form.filename = filename
        file_form.dropoff = dropoff
        file_form.save()

Happy for any suggestions.
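
One thing worth checking (an assumption, since calculate_md5 is not shown): if calculate_md5 reads the uploaded file to compute the hash, the file's read position is left at the end, so the following upload_fileobj call reads nothing and S3 ends up with a 0-byte object. A minimal sketch of a hash helper that rewinds the file afterwards:

import hashlib

def calculate_md5(uploaded_file, chunk_size=8192):
    # hash the uploaded file in chunks
    md5 = hashlib.md5()
    for chunk in uploaded_file.chunks(chunk_size):
        md5.update(chunk)
    # rewind so the later s3.upload_fileobj(file, ...) starts at byte 0;
    # without this, the upload begins at EOF and writes an empty object
    uploaded_file.seek(0)
    return md5.hexdigest()

Alternatively, calling file.seek(0) in the view right before s3.upload_fileobj(file, 'bucketname', filename) would have the same effect.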


Get this bounty!!!

#StackBounty: #python #amazon-s3 #pytorch #boto3 How to save a Pytorch Model directly in s3 Bucket?

Bounty: 100

The title says it all – I want to save a PyTorch model in an S3 bucket. What I tried was the following:

import boto3

s3 = boto3.client('s3')
saved_model = model.to_json()
output_model_file = output_folder + "pytorch_model.json"
s3.put_object(Bucket="power-plant-embeddings", Key=output_model_file, Body=saved_model)

Unfortunately this doesn’t work, as .to_json() only exists for TensorFlow (Keras) models. Does anyone know how to do it in PyTorch?
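
One possible approach (a sketch, not necessarily the only way): serialize the model's state_dict with torch.save into an in-memory buffer and upload that buffer with boto3. The bucket name and output_folder come from the question; the .pt key name is just an assumption.

import io

import boto3
import torch

s3 = boto3.client('s3')

buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)   # serialize the weights into the in-memory buffer
buffer.seek(0)                           # rewind before reading the bytes back out

output_model_file = output_folder + "pytorch_model.pt"
s3.put_object(Bucket="power-plant-embeddings", Key=output_model_file, Body=buffer.getvalue())

Loading it back would then be a matter of downloading the object into a BytesIO buffer and calling model.load_state_dict(torch.load(buffer)).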


Get this bounty!!!

#StackBounty: #ssl #https #amazon-s3 #amazon-cloudfront Unable to access S3 static website using https

Bounty: 50

I have followed all the steps from https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-requests-s3/ but my website still cannot be accessed using https.

  1. My certificate was issued to the domain admin.studentqr.com successfully as displayed here

  2. My cloudfront seems to be working fine too

  3. And I’ve updated my admin.studentqr.com record in Route 53 to point to the cloudfront

  4. My domain is registered under Wix, so I have the CNAME for admin.studentqr.com there pointed to my S3

But still, I can only access http://admin.studentqr.com and not https://admin.studentqr.com

Is there anything I'm still missing?


Get this bounty!!!

#StackBounty: #javascript #node.js #express #amazon-s3 #aws-sdk AWS SDK file upload to S3 via Node/Express using stream PassThrough – f…

Bounty: 200

It’s pretty straightforward. Using this code, any image file that is uploaded is corrupt and cannot be opened. PDFs seem fine, but I noticed it’s injecting values into text-based files. The file size in S3 is correct, not zero as if something went wrong. I’m not sure if it’s a problem with Express, the SDK, or a combination of both? Is it Postman? I built something similar in a work project in March of this year, and it worked flawlessly. I no longer have access to that code to compare.

No errors, no indication of any problems.

const aws = require("aws-sdk");
const stream = require("stream");
const express = require("express");
const router = express.Router();

const AWS_ACCESS_KEY_ID = "XXXXXXXXXXXXXXXXXXXX";
const AWS_SECRET_ACCESS_KEY = "superSecretAccessKey";
const BUCKET_NAME = "my-bucket";
const BUCKET_REGION = "us-east-1";

const s3 = new aws.S3({
    region: BUCKET_REGION,
    accessKeyId: AWS_ACCESS_KEY_ID,
    secretAccessKey: AWS_SECRET_ACCESS_KEY
});

const uploadStream = key => {
    let streamPass = new stream.PassThrough();
    let params = {
        Bucket: BUCKET_NAME,
        Key: key,
        Body: streamPass
    };
    let streamPromise = s3.upload(params, (err, data) => {
        if (err) {
            console.error("ERROR: uploadStream:", err);
        } else {
            console.log("INFO: uploadStream:", data);
        }
    }).promise();
    return {
        streamPass: streamPass,
        streamPromise: streamPromise
    };
};

router.post("/upload", async (req, res) => {
    try {
        let key = req.query.file_name;
        let { streamPass, streamPromise } = uploadStream(key);
        req.pipe(streamPass);
        await streamPromise;
        res.status(200).send({ result: "Success!" });
    } catch (e) {
        res.status(500).send({ result: "Fail!" });
    }
});

module.exports = router;

Here’s my package.json:

{
  "name": "expresss3streampass",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "aws-sdk": "^2.812.0",
    "cookie-parser": "~1.4.4",
    "debug": "~2.6.9",
    "express": "~4.16.1",
    "morgan": "~1.9.1"
  }
}

UPDATE:

After further testing, I noticed plain-text files are being changed by Postman. For example, this source file:

{
    "question_id": null,
    "position_type_id": 1,
    "question_category_id": 1,
    "position_level_id": 1,
    "question": "Do you test your code before calling it "done"?",
    "answer": "Candidate should respond that they at least happy path test every feature and bug fix they write.",
    "active": 1
}

…looks like this after it lands in the bucket:

----------------------------472518836063077482836177
Content-Disposition: form-data; name="file"; filename="question.json"
Content-Type: application/json

{
    "question_id": null,
    "position_type_id": 1,
    "question_category_id": 1,
    "position_level_id": 1,
    "question": "Do you test your code before calling it "done"?",
    "answer": "Candidate should respond that they at least happy path test every feature and bug fix they write.",
    "active": 1
}
----------------------------472518836063077482836177--

I have to think this is the problem. Postman is the only thing that changed in this equation, from when this code first worked for me. My request headers look like this:

[screenshot of the Postman request headers]

I was the one who had originally added the "application/x-www-form-urlencoded" header. If I use that now, I end up with a file that has 0 bytes in the bucket.
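
For what it's worth, the boundary lines and Content-Disposition header in the stored object suggest the raw multipart body is being piped straight to S3: req is the unparsed request stream, so whatever Postman sends as multipart/form-data (boundaries, part headers and all) is exactly what lands in the bucket, and binary files get corrupted in the process. A rough sketch of one way around that, using busboy (an extra dependency, not in the package.json above) to pull out just the file part before piping:

// sketch only: parse the multipart body and pipe only the file's bytes to S3
// (busboy ~1.x API; reuses the uploadStream helper defined above)
const busboy = require("busboy");

router.post("/upload", (req, res) => {
    const bb = busboy({ headers: req.headers });
    bb.on("file", (fieldName, fileStream, info) => {
        const { streamPass, streamPromise } = uploadStream(info.filename);
        fileStream.pipe(streamPass);
        streamPromise
            .then(() => res.status(200).send({ result: "Success!" }))
            .catch(() => res.status(500).send({ result: "Fail!" }));
    });
    req.pipe(bb);
});

Sending the request body as raw binary instead of form-data, and keeping req.pipe(streamPass) as-is, would be another option, since then there is no multipart framing to strip.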


Get this bounty!!!

#StackBounty: #node.js #amazon-s3 #aws-sdk #aws-sdk-nodejs #wasabi-hot-cloud-storage Image corrupted from s3

Bounty: 50

This is my upload code (Node):

// generate a uuid string
const uuid = uuidv4(); // type string

// upload to wasabi
fs.readFile(file.tempFilePath, "utf8", (err, data) => {
    if (err) {
        throw (err);
    }
    // upload the object
    s3.upload(
        {
            Body: data,
            Key: uuid + "-" + file.name,
            Bucket: "files",
            Metadata: { 'ext': ".png" },
            // ContentLength: file.size
        }, (err, data) => {
            if (err) {
                console.log(err);
                throw (err);
            }
            console.log(data);
        });
});

The object actually does get to the bucket. It has more than 0 bytes. It is about half the size of the source image, but I'm just assuming something is compressing it (Wasabi or something).
I am using express-fileupload and my config for it is:


app.use(fileUpload({
    createParentPath: true,
    limits: {
        fileSize: 2 * 1024 * 1024 * 1024 //2MB max file(s) size
    },
    useTempFiles: true,
    tempFileDir: "/tmp/",
}));

Eventually I download the file directly from the Wasabi web client and try to open it in image viewers, and they all say there is an error in the file or that they don't support the file type (which is .png, so it should be supported).
The image is only ~150 KB.
Essentially, the file gets there but it's corrupted.
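
A detail that may explain the corruption: fs.readFile(..., "utf8", ...) decodes the PNG's bytes as UTF-8 text, and invalid byte sequences get replaced during that decode, so the data handed to s3.upload is no longer the original image. A sketch of the same upload without the text-decoding step, streaming the temp file instead:

// sketch: pass the raw bytes through without a "utf8" decode step
const fs = require("fs");

s3.upload(
    {
        Body: fs.createReadStream(file.tempFilePath), // raw bytes from the temp file
        Key: uuid + "-" + file.name,
        Bucket: "files",
        Metadata: { 'ext': ".png" }
    },
    (err, data) => {
        if (err) {
            throw err;
        }
        console.log(data);
    }
);

Dropping the encoding argument from fs.readFile (so the callback receives a Buffer instead of a string) should work as well.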


Get this bounty!!!

#StackBounty: #amazon-s3 #amazon-kinesis #amazon-kinesis-firehose kinesis data firehose log data is in encrypted form

Bounty: 200

I created an AWS cross-account log data sharing setup with subscriptions by following this link.

After creating the Kinesis stream, I created a Kinesis Data Firehose delivery stream to save the logs in an S3 bucket.

The log files are being created in the S3 bucket, but they are in an encrypted (unreadable) form.


And on the sender side there is no KMS key ID.


How can I see the logs?

I am also not able to decode them manually as base64.

Updated:
I found that the log files stored in the S3 bucket have "Content-Type: application/octet-stream". When I update the content type to "text/plain"…
Is there any way to set the content type at the bucket level, or to configure it in the Kinesis data stream or Firehose?


Is there any way to set the content type on the Kinesis stream, or to set a default content type for the S3 folder?
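
One thing that might be worth ruling out (an assumption, since the data itself isn't shown): CloudWatch Logs subscription data is gzip-compressed before it reaches the Kinesis stream, so the objects Firehose writes to S3 can look like unreadable binary even though no KMS encryption is involved. A quick check, with placeholder bucket and key names:

import gzip

import boto3

s3 = boto3.client('s3')
obj = s3.get_object(Bucket='my-log-bucket', Key='path/to/delivered-object')
raw = obj['Body'].read()

if raw[:2] == b'\x1f\x8b':                       # gzip magic bytes
    print(gzip.decompress(raw).decode('utf-8'))  # should be the CloudWatch Logs payload
else:
    print('not gzip; first bytes:', raw[:16])

If the objects turn out to be gzipped, one option would be decompressing after download or adding a Firehose data-transformation step, rather than changing the S3 content type.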


Get this bounty!!!

#StackBounty: #amazon-s3 #file-transfer Is there a way to make CrossFTP or DragonDisk access path-style s3 server?

Bounty: 50

I’m trying to find a way to speed up the transfer of 95 GB of data to my company’s object storage.

I’m transferring the files via node/aws-sdk but it’s too slow.
I already tried ManagedUpload and multipart upload, with the same result (too slow).

After that I found s3fs… same thing.

So now I have discovered CrossFTP and DragonDisk, which can access S3 services, but both use bucket.site.com-style URLs and my corporate server uses path-style.

Is there a way to configure one or both of them to use path-style?

Also, does anyone know a faster way to transfer the files? Or any other Linux S3 FOSS clients?

If I could upload the metadata with the files, that would be a plus.
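
Regarding the node/aws-sdk route, in case it helps: aws-sdk v2 can be pointed at a path-style endpoint directly, and the ManagedUpload behind s3.upload accepts partSize/queueSize options that control how many parts upload in parallel, which is often where the speed goes. A sketch with placeholder endpoint, bucket, and file names:

// sketch (aws-sdk v2): path-style addressing plus more parallel multipart parts
const fs = require("fs");
const AWS = require("aws-sdk");

const s3 = new AWS.S3({
    endpoint: "https://storage.example-corp.com", // placeholder path-style endpoint
    s3ForcePathStyle: true,                       // bucket goes in the path, not the hostname
    signatureVersion: "v4"
});

s3.upload(
    {
        Bucket: "my-bucket",
        Key: "backups/archive.tar",
        Body: fs.createReadStream("archive.tar"),
        Metadata: { source: "node-transfer" }     // custom metadata rides along with the upload
    },
    { partSize: 16 * 1024 * 1024, queueSize: 8 }, // 16 MB parts, 8 uploaded concurrently
    (err, data) => (err ? console.error(err) : console.log(data.Location))
);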


Get this bounty!!!

#StackBounty: #redirect #amazon-s3 #aws-lambda #amazon-cloudfront #aws-lambda-edge Cloudfront, lambda @ edge, S3 redirect objects,

Bounty: 50

I am building an S3 URL redirect: nothing special, just a bunch of zero-length objects with the WebsiteRedirectLocation metadata filled out. The S3 bucket is set to serve static websites, the bucket policy is set to public, etc. It works just fine.

HOWEVER – I also want to lock down certain files in the bucket – specifically some HTML files that serve to manage the redirects (like adding new redirects). With the traditional setup, I can both use the redirects and serve the HTML page just fine. But in order to lock it down, I need to use CloudFront and Lambda@Edge like in these posts:

https://douglasduhaime.com/posts/s3-lambda-auth.html

http://kynatro.com/blog/2018/01/03/a-step-by-step-guide-to-creating-a-password-protected-s3-bucket/

I have modified the Lambda@Edge script to only prompt for a password IF the admin page (or its assets like CSS/JS) is requested. If the requested path is something else (presumably a redirect file), the user is not prompted for a password. And yes, I could also set a behavior rule in CloudFront to decide when to use the Lambda function to prompt for a password.

And it kind of works. When I follow these instructions and visit my site via the CloudFront URL, I do indeed get prompted for a password when I go to the root of my site – the admin page. However, the redirects will not work. If I try to load a redirect, the browser just downloads the object instead.

Now, in another post someone suggested that I change my CloudFront distribution's origin to the S3 bucket WEBSITE endpoint – which I think also means changing the bucket back to website mode and public, which sucks because then it's accessible outside of the CloudFront policy, which I do not want. Additionally, CloudFront no longer automatically serves the specified index file, which isn't the worst thing.

SO – is it possible to lock down my bucket and serve it entirely through CloudFront with Lambda@Edge, BUT also have CloudFront respect those redirects instead of just prompting a download? Is there a setting in CloudFront to respect the headers? Should I set up different behavior rules for the different files (HTML vs. redirects)?
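
For what it's worth, the download behaviour is consistent with CloudFront talking to the S3 REST endpoint: WebsiteRedirectLocation redirects are only performed by the website endpoint, while the REST endpoint just returns the zero-length object with an x-amz-website-redirect-location response header. One possible workaround, sketched below and not tested against this exact setup, is an origin-response Lambda@Edge handler that turns that header into a 301, so the bucket can stay private behind CloudFront:

// sketch of an origin-response Lambda@Edge handler: if S3 returned the
// x-amz-website-redirect-location header, answer with a 301 instead of the empty object
exports.handler = async (event) => {
    const response = event.Records[0].cf.response;
    const redirect = response.headers["x-amz-website-redirect-location"];
    if (redirect && redirect[0] && redirect[0].value) {
        return {
            status: "301",
            statusDescription: "Moved Permanently",
            headers: {
                location: [{ key: "Location", value: redirect[0].value }]
            }
        };
    }
    return response;
};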


Get this bounty!!!

#StackBounty: #amazon-s3 #amazon-cloudfront Cloudfront not caching missing pages

Bounty: 50

So, to explain the problem: I have an S3 bucket with a static site and CloudFront as the CDN. On S3 the index and error documents are both index.html. So when I go to subdomain.example.com I get served index.html and get a Hit from CloudFront.

However, my static page is a Vue app with a router and the default path is /en, so when I reload the page at subdomain.example.com/en I get an Error from CloudFront. The same happens if I try to refresh it after it got a Hit the first time. Everything else (.css, .js, .img, …) is cached OK.

I have S3 connected as the origin like this:

Origin Domain Name: s3.eu-central-1.amazonaws.com
Origin Path: /subdomain.example.com
Origin ID: subdomain.example.com
Minimum Origin SSL Protocol: TLSv1
Origin Protocol Policy: HTTP Only
Origin Response Timeout: 30
Origin Keep-alive Timeout: 5
HTTP Port: 80
HTTPS Port: 443

On CloudFront I also have custom error responses for 400, 403, and 404, all pointing to /index.html with response code 200.

Any ideas what am I doing wrong?



Get this bounty!!!