#StackBounty: #amazon-web-services #pgadmin AWS database: updating a single column adds an enormous amount of data

Bounty: 150

I’m retrieving data from an AWS database using PgAdmin. This works well. The problem is that I have one column that is originally Null and that I set to True after I retrieve the corresponding row. Doing so adds an enormous amount of data to my database.

I have checked that this is not due to other processes: it only happens when my program is running.
I am certain no rows are being added; I have checked the number of rows before and after, and they’re the same.

Furthermore, it only does this when changing specific tables; when I update other tables in the same database with the same process, the database size stays the same. It also does not always increase the database size: only once every couple of changes does the total size increase.

How can changing a single boolean from Null to True add 0.1 MB to my database?

I’m using the following commands to check my database makeup:

To get table sizes:

SELECT
    relname AS "Table",
    pg_total_relation_size(relid) AS "Size",
    pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid)) AS "External Size"
FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;

To get number of rows:

SELECT schemaname,relname,n_live_tup 
  FROM pg_stat_user_tables 
  ORDER BY n_live_tup DESC;

To get database size:

SELECT pg_database_size('mydatabasename')
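
Worth knowing when reading those numbers: PostgreSQL’s MVCC means an UPDATE writes a whole new row version rather than flipping the boolean in place, and the old version sticks around as a dead tuple until vacuum reclaims the space. A query along these lines (same catalog views as above) shows whether dead row versions are accumulating:

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC;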


Get this bounty!!!

#StackBounty: #amazon-web-services #apache-2.4 #virtualhost #elastic-beanstalk Apache (2.4) only serves 1st virtual host found on elast…

Bounty: 50

I have an Elastic Beanstalk server which I am using for my employer’s main site, example.com, and they want me to host one of their ancillary sites on it: go.example.com.

So I just created a new ebextension config to create a second vhost. The problem I found is that Apache (HTTPD) only wants to use the first vhost entry.

Here are my vhosts:

 # I have Apache listening on port 8080 because I have varnish in front of my sites.
    <VirtualHost *:8080>

        ServerName      example.com
        ServerAlias     www.example.com
        ServerAdmin     webmaster@example.com

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example.com
        RewriteRule ^(.*)$ http://www.example.com%{REQUEST_URI} [R=301,L]

        DocumentRoot "/var/app/current/httpdocs"

        <Directory "/var/app/current">
            AllowOverride All
            Require all granted
        </Directory>

        <Directory "/var/app/current/cgi-bin">
            AllowOverride All
            Options None
            Require all granted
        </Directory>

        <Directory "/var/app/current/httpdocs">
            Options FollowSymLinks
            AllowOverride All
            DirectoryIndex index.html index.php
            Require all granted
        </Directory>

    </VirtualHost>
    #go.example.com
    <VirtualHost *:8080>

      ServerName      go.example.com
      ServerAlias     staging.go.example.com
      ServerAdmin     webmaster@example.com

      DocumentRoot    /var/www/go.example.com/httpdocs

      <Directory "/var/www/go.example.com">
          AllowOverride All
          Require all granted
      </Directory>

      <Directory "/var/www/go.example.com/httpdocs">
          Options FollowSymLinks
          AllowOverride All
          DirectoryIndex index.html index.php
          Require all granted
      </Directory>

    </VirtualHost>

So the server will always listen for example.com, and with the above vhost order example.com will serve /var/app/current/httpdocs while go.example.com is just a blank page.

If I swap the vhost order, so go.example.com is first, then example.com serves /var/www/go.example.com/httpdocs. And go.example.com is still a blank page.

Nothing is really jumping out at me, and I don’t have this problem if I build a regular ol’ EC2 instance.
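
A generic diagnostic that may help here: Apache can dump the virtual-host map it actually parsed, which shows whether both *:8080 vhosts (with their ServerName/ServerAlias entries) were loaded at all. On Amazon Linux the binary is typically httpd (an assumption about the image):

sudo httpd -S    # or: sudo apachectl -S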


Get this bounty!!!

#StackBounty: #amazon-web-services #docker #acl #amazon-ecs #vpc Why are "weird" TCP ports required for my AWS ECS app to pul…

Bounty: 500

I am using ECS with an NLB in front. ECS is pulling images from ECR. What I cannot understand is why ECS requires me to open all TCP ports to be able to pull from ECR.

2 621567429603 eni-0f5e97a3c2d51a5db 18.136.60.252 10.0.12.61 443 55584 6 13 6504 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 10.0.12.61 54.255.143.131 44920 443 6 13 5274 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 54.255.143.131 10.0.12.61 443 44952 6 13 6504 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 10.0.12.61 18.136.60.252 55584 443 6 15 5378 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 10.0.12.61 18.136.60.252 55612 443 6 15 5378 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 52.219.36.183 10.0.12.61 443 51892 6 19 11424 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 10.0.12.61 54.255.143.131 44908 443 6 14 1355 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 52.219.36.183 10.0.12.61 443 51912 6 31807 44085790 1537798711 1537798719 ACCEPT OK
2 621567429603 eni-0f5e97a3c2d51a5db 18.136.60.252 10.0.12.61 443 55612 6 12 6452 1537798711 1537798719 ACCEPT OK

My flow logs are above; 10.0.0.0/8 is my VPC’s private address range. Notice, for example, the first line: SRC 18.136.60.252:443 is accessing 10.0.12.61:55584. Why this destination port?

Then the next line: 2 621567429603 eni-0f5e97a3c2d51a5db 10.0.12.61 54.255.143.131 44920 443 6 13 5274 1537798711 1537798719 ACCEPT OK. Why is my ECS requesting data using source port 44920? I am asking so I know how to open the correct ports. Currently, because the ports are so random, I need to open everything.
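
For context on those numbers: 55584 and 44920 are ephemeral source ports the instance picks for its outbound HTTPS connections, so the lines with destination port 55584 are just return traffic on a connection the instance itself opened. Security groups are stateful and allow that return traffic automatically; only stateless network ACLs need the ephemeral range opened for responses, roughly like this (the ACL id is a placeholder):

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=1024,To=65535 \
    --cidr-block 0.0.0.0/0 \
    --rule-action allow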


Get this bounty!!!

#StackBounty: #ios #swift #amazon-web-services #aws-appsync #aws-appsync-ios How to set AWS Appsync request timeout limit || AWSAppSync…

Bounty: 50

I’m using AWS AppSync for the current app I’m developing and facing a serious issue: whenever I fire queries with the AppSync client on a slow internet connection, the request never ends with a callback. I checked over the internet; there is a limited amount of information on this topic, and I also found this issue, which is still open.

This is the code I used to get the response

func getAllApi(completion:@escaping DataCallback){
    guard isInternetAvailabele() else {
        completion(nil)
        return
    }
    // AppSyncManager.Client() is AWSAppSyncClient Object
    AppSyncManager.Client().fetch(query: GetlAllPostQuery(input: allInputs), cachePolicy:.fetchIgnoringCacheData) {
        (result, error) in
        let haveError = (result?.data?.getAllPostings?.responseCode == nil)
        if haveError  {
            print(error?.localizedDescription ?? "")
            completion(nil)
            return
        }

        if result != nil{
            completion(result)
        }else{
            completion(nil)
        }
    }
}

The code works fine with an internet connection, and I have already checked at the top whether there is no internet. But when there is a slow internet connection, or the wifi is connected to a hotspot that I created with my mobile with internet data disabled, the request doesn’t return any callback; it should give a failed alert like we get from other APIs when the request times out.
Is there any support for a request timeout, or did I miss something?

Note: I received these logs in Terminal

Task <06E9BBF4-5731-471B-9B7D-19E5E504E57F>.<45> HTTP load failed (error code: -1001 [1:60])
Task <D91CA952-DBB5-4DBD-9A90-98E2069DBE2D>.<46> HTTP load failed (error code: -1001 [1:60])
Task <06E9BBF4-5731-471B-9B7D-19E5E504E57F>.<45> finished with error - code: -1001
Task <D91CA952-DBB5-4DBD-9A90-98E2069DBE2D>.<46> finished with error - code: -1001
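
For reference, a timeout can usually be injected through the underlying URLSession when the client is built; a minimal sketch, assuming your AWSAppSync version exposes the urlSessionConfiguration parameter (AppSyncEndpointURL and MyApiKeyAuthProvider are placeholders):

let sessionConfig = URLSessionConfiguration.default
sessionConfig.timeoutIntervalForRequest = 30   // seconds; tune to your needs
sessionConfig.timeoutIntervalForResource = 60

// Assumption: this initializer overload exists in your SDK version.
let appSyncConfig = try AWSAppSyncClientConfiguration(
    url: AppSyncEndpointURL,
    serviceRegion: .USEast1,
    apiKeyAuthProvider: MyApiKeyAuthProvider(),
    urlSessionConfiguration: sessionConfig)
let client = try AWSAppSyncClient(appSyncConfig: appSyncConfig)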


Get this bounty!!!

#StackBounty: #javascript #node.js #express.js #mongoose #amazon-web-services Upload .JSON product list to MongoDB and upload image to …

Bounty: 50

I’m using Node/Express/Mongoose to accept a JSON file containing a list of product details. These products are looped through, the images are uploaded to AWS S3, and each product is either accepted or rejected depending on validation. In the end, the accepted products are uploaded to Mongo via Mongoose, and both lists are returned to provide info to the uploader. This is my first time with any of these frameworks, so I’m looking to improve best practices and find potential points of failure.

ProductRoutes.js

const keys = require("../config/keys.js");
const mongoose = require("mongoose");
const requireLogin = require("../middlewares/requireLogin");
var validator = require("validator");
const fileUpload = require("express-fileupload");
var fs = require("fs");
const aws = require("aws-sdk");
const S3_BUCKET = keys.awsBucket;
var path = require("path");

var request = require("request");
aws.config.update({
  region: "us-east-2",
  accessKeyId: keys.awsAccessKey,
  secretAccessKey: keys.awsSecretKey
});

require("../models/Product");

const Product = mongoose.model("product");

function validate(value, type) {
  switch (type) {
    case "string":
      return value && !validator.isEmpty(value, { ignore_whitespace: true });
    case "url":
      // Accept only when the value actually is a URL with an allowed protocol
      return (
        value &&
        validator.isURL(value, {
          protocols: ["https", "http"],
          require_protocol: true
        })
      );
    default:
      return value && !validator.isEmpty(value, { ignore_whitespace: true });
  }
}
function saveImage(url, key) {
  let ext = path.extname(url);
  let params = {
    Key: key + ext,
    Bucket: S3_BUCKET,
    ACL: "public-read"
  };
  return new Promise(function(resolve, reject) {
    request.get(url).on("response", function(response) {
      if (response.statusCode === 200) {
        params.ContentType = response.headers["content-type"];
        var s3 = new aws.S3({ params })
          .upload({ Body: response })
          .send(function(err, data) {
            resolve(data);
          });
      } else {
        // return false;
        reject(false);
      }
    });
  });
}

module.exports = app => {
  app.use(fileUpload());

  app.post("/product/addProduct", requireLogin, async (req, res) => {
    let products = req.files.file.data;
    try {
      products = JSON.parse(products);
    } catch (e) {
      return res
        .status(400)
        .json({ success: false, message: "Invalid JSON product feed" });
    }
    let accepted = [];
    let rejected = [];

    for (const product of products) {
      if (!validate(product.sku, "string")) {
        rejected.push(product);
        continue; // skip this product and keep processing the rest
      }
      if (!validate(product.image_url, "url")) {
        rejected.push(product);
        continue;
      }
      try {
        let result = await saveImage(product.image_url, `${product.owner}/${product.sku}`);
        product.image_url = result.Location;
      } catch (err) {
        // catches errors both in fetch and response.json
        return res.status(400).json({
          success: false,
          message: "Could not upload image",
          error: err
        });
      }

      let upsertProduct = {
        updateOne: {
          filter: { sku: product.sku },
          update: product,
          upsert: true
        }
      };
      accepted.push(upsertProduct);
    }

    // now bulkWrite (note the use of 'Model.collection')
    Product.collection.bulkWrite(accepted, function(err, docs) {
      if (err) {
        return res.status(400).json({
          success: false,
          message: "Something went wrong, please try again"
        });
      } else {
        return res.status(200).json({
          success: true,
          message: "Company successfully created",
          accepted: { count: accepted.length, list: accepted },
          rejected: { count: rejected.length, rejected: rejected },

          affiliate: docs
        });
      }
    });
  });

  app.get("/product/fetchAffiliateProducts", requireLogin, (req, res) => {
    var affiliateId = req.query.affiliateId;

    Product.find({ owner: affiliateId }, function(err, products) {
      if (err) {
        return res.status(400).json({
          success: false,
          message: "Could not find the requested company's products"
        });
      } else {
        return res.status(200).json({
          success: true,
          message: "Products successfully found",
          products: products
        });
      }
    });
  });
};
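
One potential point of failure worth flagging in the route above: if every product fails validation, accepted is an empty array, and bulkWrite rejects an empty operations list. A small guard, as a sketch that would sit just before the bulkWrite call:

if (accepted.length === 0) {
  return res.status(400).json({
    success: false,
    message: "No valid products found in feed",
    rejected: { count: rejected.length, list: rejected }
  });
}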

Product.js (model):

const mongoose = require('mongoose');
const {Schema} = mongoose;

const productSchema = new Schema({

    sku: {type: String, unique: true, required: true},
    name: {type: String, required: true},
    owner: {type: String, required: true},
    image_url: {type: String, required: true}
});

mongoose.model('product', productSchema);

Sample input

[
  {
    "sku": "123",
    "name": "Test Product 1",
    "owner": "Test Company 1",
    "image_url": "https://exmaple.com/src/assets/product1.png"
  },
  {
    "sku": "456",
    "name": "Test Product 3",
    "owner": "Test Company 2",
    "image_url": "https://exmaple.com/src/assets/product2.png"
  },
  {
    "sku": "789",
    "name": "Test Product 3",
    "owner": "Test Company 3",
    "image_url": "https://exmaple.com/src/assets/product3.png"
  }
]

If there are other contextual files or code needed, let me know; this is all that seemed relevant.


Get this bounty!!!

#StackBounty: #node.js #amazon-web-services #amazon-s3 #aws-sdk #acl AWS S3 Generating Signed Urls ''AccessDenied''

Bounty: 100

I am using NodeJs to upload files to AWS S3. I want the client to be able to download the files securely, so I am trying to generate signed URLs that expire after one usage. My code looks like this:

Uploading

const s3bucket = new AWS.S3({
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    Bucket: 'my-bucket-name',
})
const uploadParams = {
    Body: file.data,
    Bucket: 'my-bucket-name',
    ContentType: file.mimetype,
    Key: `files/${file.name}`,
}
s3bucket.upload(uploadParams, function (err, data) {
    // ...
})

Downloading

const url = s3bucket.getSignedUrl('getObject', {
    Bucket: 'my-bucket-name',
    Key: 'file-key',
    Expires: 300,
})

Issue

When opening the URL I get the following:

This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
    <Code>AccessDenied</Code>
    <Message>
        There were headers present in the request which were not signed
    </Message>
    <HeadersNotSigned>host</HeadersNotSigned>
    <RequestId>D63C8ED4CD8F4E5F</RequestId>
    <HostId>
        9M0r2M3XkRU0JLn7cv5QN3S34G8mYZEy/v16c6JFRZSzDBa2UXaMLkHoyuN7YIt/LCPNnpQLmF4=
    </HostId>
</Error>

I couldn’t manage to find the mistake. I would really appreciate any help 🙂
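
For what it’s worth, this “HeadersNotSigned: host” response often shows up when the client signs with Signature Version 2 against a bucket region that requires Version 4. A minimal sketch of forcing v4 in the aws-sdk client (the region value is an assumption; use your bucket’s region):

const s3bucket = new AWS.S3({
    accessKeyId: 'my-access-key-id',
    secretAccessKey: 'my-secret-access-key',
    region: 'us-east-1',      // assumption: match the bucket's region
    signatureVersion: 'v4',   // sign all headers, including host
})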


Get this bounty!!!

#StackBounty: #amazon-web-services #chef #aws-opsworks #test-kitchen How to debug an Opsworks/Chef 11.10.4 cookbook locally on Linux (D…

Bounty: 250

I searched for this for 3 weeks but didn’t find any real answer.

The main goal is to save time to test dev Chef cookbooks locally before deploying on production on AWS.

All I found are some hints about using Ubuntu with Vagrant:

Has anyone managed to run kitchen locally with an Amazon Linux or a CentOS guest, with a repository of Chef cookbooks and a JSON file (the Chef node configuration) as the environment?

My .kitchen.yml file and directory tree:

---
driver:
  # specifies the software that manages the machine. We're using the Vagrant Test Kitchen driver
  name: vagrant

provisioner:
  #  specifies how to run Chef. We use chef_zero because it enables you to mimic a Chef server environment on your local machine. This allows us to work with node attributes and other Chef server feature
  name: chef_zero
  environments_path: './env' # JSON file (node config) is not used !:  env/preprod.json
  client_rb:
    environment: preprod

verifier:
  # specifies which application to use when running automated tests. You'll learn more about automated testing in a future module.
  name: inspec

platforms:
  - name: centos-7

suites:
  - name: default
    run_list:
      # list of cookbooks
      - recipe[nginx::default]
    attributes:
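
For reference, chef_zero expects the environment file referenced above (env/preprod.json) to follow the standard Chef environment format; a minimal sketch, with placeholder attributes:

{
  "name": "preprod",
  "description": "Pre-production environment",
  "chef_type": "environment",
  "json_class": "Chef::Environment",
  "default_attributes": {
    "nginx": {
      "worker_processes": 2
    }
  }
}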

Tree of the repository content without all the files, just directory names (minified):

├── foobar-cookbooks
│   ├── agent_version
│   ├── apache2
│   │   └── templates
│   │       ├── default
│   ├── foobar
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── recipes
│   │   └── templates
│   │       └── default
│   ├── foobar_app_akeneo
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   └── templates
│   │       └── default
│   ├── foobar_app_drupal
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── templates
│   ├── foobar_app_joomla
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   └── recipes
│   ├── Config
│   ├── dependencies
│   │   ├── attributes
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── deploy
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── ebs
│   │   ├── attributes
│   │   ├── files
│   │   │   └── default
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── gem_support
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── haproxy
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── README.rdoc
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── LICENSE
│   ├── memcached
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── mod_php5_apache2
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── mysql
│   │   ├── attributes
│   │   ├── files
│   │   │   └── default
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── nginx
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── opsworks_agent_monit
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       ├── default
│   ├── opsworks_aws_flow_ruby
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── templates
│   │       └── default
│   ├── opsworks_berkshelf
│   │   ├── attributes
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── providers
│   │   ├── recipes
│   │   └── resources
│   ├── opsworks_bundler
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── opsworks_cleanup
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── opsworks_commons
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── providers
│   │   ├── recipes
│   │   └── resources
│   ├── opsworks_custom_cookbooks
│   │   ├── attributes
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── opsworks_ecs
│   │   ├── attributes
│   │   ├── files
│   │   │   └── default
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── templates
│   │       └── default
│   ├── opsworks_ganglia
│   │   ├── attributes
│   │   ├── files
│   │   │   └── default
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── opsworks_initial_setup
│   │   ├── attributes
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── opsworks_java
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       ├── amazon
│   │       ├── default
│   ├── opsworks_nodejs
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── opsworks_rubygems
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── opsworks_shutdown
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── opsworks_stack_state_sync
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── templates
│   │       └── default
│   ├── packages
│   │   ├── attributes
│   │   ├── libraries
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── passenger_apache2
│   │   ├── attributes
│   │   ├── definitions
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── php
│   │   ├── attributes
│   │   │   └── default.rb
│   │   ├── recipes
│   │   ├── specs
│   │   └── templates
│   │       └── default
│   ├── Rakefile
│   ├── README.md
│   ├── ruby
│   │   ├── attributes
│   │   ├── metadata.rb
│   │   ├── recipes
│   │   └── specs
│   ├── scm_helper
│   ├── ssh_host_keys
│   ├── ssh_users
│   ├── test_suite
├── attributes
├── Berksfile
├── chefignore
├── definitions
├── env
├── layers.json
├── metadata.rb
├── recipes
├── spec
├── specs
├── test


Get this bounty!!!

#StackBounty: #c# #amazon-web-services #.net-core #aws-lambda Phantom exception stops execution of child method, but doesn't bubble…

Bounty: 250

I have an AWS Lambda function that executes in response to an SNS message. There is some inexplicable behavior happening where execution suddenly stops in a child method (as if an exception were thrown), but the parent continues executing as normal afterward.

public void InsertEntity(Entity entity)
{
    _context.Entities.Add(entity);

    _logger.Log($"Before SaveChanges, entity.Id is {entity.Id}");

    _context.SaveChanges();

    _logger.Log($"After SaveChanges, entity.Id is {entity.Id}");

    if (entity.Id < 0)
        throw new Exception($"InsertEntity succeeded, but id is invalid: {entity.Id}");
}

public int? ImportEntity(string xml)
{
    var entity = new Entity(xml);

    _entityRepository.InsertEntity(entity);

    _logger.LogLine($"Entity inserted successfully with Id {entity.Id}");

    return entity.Id;
}

The context in this case is a standard DbContext using a PostgreSQL EF provider.

When the entity is first created, the Id is 0. When you add it to the context DbSet, the id becomes a temporary negative integer like -2147392341. After SaveChanges is called, the entity should be given an Id from the database, a positive number.

Any exceptions here should bubble up to the method that calls ImportEntity and it should handle the error. Instead my log file in AWS looks like this:

Before SaveChanges, entity.Id is -2147482647
Entity inserted successfully with Id -2147482647

And the negative value is being returned with no exception thrown at any point. Since I have code inside InsertEntity that specifically checks for a negative value, the only possibility I can see is that SaveChanges is throwing an exception that is causing both the logging and the negative check to not be run. But how does the parent method continue logging and executing and returning a value?

I know my code is being updated on AWS because I added the “Before SaveChanges” logging recently and it’s showing up, and I added all of the logging at the same time.

EDIT:

I added a try/catch directly around the _context.SaveChanges(); and was able to catch the underlying exception. The question still remains of why/how it’s possible for execution to continue in the ImportEntity function even when an exception is being thrown from within InsertEntity.
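
For readers, the try/catch described above looks roughly like this (a sketch; _logger is the same logger used earlier):

try
{
    _context.SaveChanges();
}
catch (Exception ex)
{
    // The provider's underlying exception surfaces here instead of vanishing.
    _logger.Log($"SaveChanges failed: {ex}");
    throw;
}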


Get this bounty!!!

#StackBounty: #ios #objective-c #swift #amazon-web-services #aws-appsync-ios Process for uploading image to s3 with appsync || iOS Apps…

Bounty: 50

I’m working on a new project that requires uploading attachments in the form of images. I’m using DynamoDB and AppSync APIs to insert and retrieve data from the database. As we are new to AppSync and to all the Amazon services and databases we are using for the app, I’m a little bit confused about the authentication process. Right now we are using an API key for authentication, and I have tried these steps to upload an image to S3.

1. Configure the AWSServiceManager with a static configuration:

let staticCredit =  AWSStaticCredentialsProvider(accessKey: kAppSyncAccessKey, secretKey: kAppSyncSecretKey)
let AppSyncRegion: AWSRegionType = .USEast2
let config = AWSServiceConfiguration(region: AppSyncRegion, credentialsProvider: staticCredit)
AWSServiceManager.default().defaultServiceConfiguration = config

2. Upload the picture with this method:

func updatePictureToServer(url:URL, completion:@escaping (Bool)->Void){
    let transferManager = AWSS3TransferManager.default()
    let uploadingFileURL = url
    let uploadRequest = AWSS3TransferManagerUploadRequest()
    let userBucket = String(format: "BUCKET")
    uploadRequest?.bucket = userBucket
    let fileName = String(format: "%@%@", AppSettings.getUserId(),".jpg")
    uploadRequest?.key = fileName
    uploadRequest?.body = uploadingFileURL
    transferManager.upload(uploadRequest!).continueWith(executor: AWSExecutor.mainThread(), block: { (task:AWSTask<AnyObject>) -> Any? in
        if let error = task.error as NSError? {
            if error.domain == AWSS3TransferManagerErrorDomain, let code = AWSS3TransferManagerErrorType(rawValue: error.code) {
                switch code {
                case .cancelled, .paused:
                    break
                default:
                    print("Error uploading: (String(describing: uploadRequest!.key)) Error: (error)")
                }
            } else {
                print("Error uploading: (String(describing: uploadRequest!.key)) Error: (error)")
            }
            completion(false)
            return nil
        }

        _ = task.result
        completion(true)
        print("Upload complete for: (String(describing: uploadRequest!.key))")
        return nil
    })
}

3. And finally, I’m able to see the uploaded image in the S3 bucket.


But I’m concerned about how to save the URL of the image and how to retrieve the image, because I have to make the bucket PUBLIC to retrieve it, and I don’t think that’s a good approach. Also, is it necessary to have a Cognito user pool? We aren’t using a Cognito user pool in our app yet and don’t have much knowledge about it either, and the documents aren’t helping in practical situations because we are implementing this for the first time, so we need a little help.

So, two questions:

  1. The proper procedure to use for uploading and retrieving images with S3 and AppSync (see the sketch below the list for the retrieval side).
  2. Is it necessary to use a Cognito user pool for image uploading and retrieving?
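
On the retrieval side, one option that avoids a public bucket is a pre-signed GET URL built with the same credentials configuration as above; a minimal sketch (bucket and key are the placeholders already used in the upload code):

let urlRequest = AWSS3GetPreSignedURLRequest()
urlRequest.bucket = String(format: "BUCKET")
urlRequest.key = String(format: "%@%@", AppSettings.getUserId(), ".jpg")
urlRequest.httpMethod = .GET
urlRequest.expires = Date(timeIntervalSinceNow: 3600) // valid for one hour

AWSS3PreSignedURLBuilder.default().getPreSignedURL(urlRequest).continueWith { task in
    if let url = task.result {
        print("Presigned download URL: \(url)") // hand this to an image loader
    } else if let error = task.error {
        print("Failed to presign: \(error)")
    }
    return nil
}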

Thanks

Note: Any suggestion or improvement related to AppSync, S3, or DynamoDB will be truly appreciated. Language is not a barrier; I’m just looking for directions, so Swift or Objective-C is no problem.


Get this bounty!!!

#StackBounty: #amazon-web-services #amazon-ec2 #elastic-beanstalk #amazon-cloudformation How to automate EBS encryption with Elastic Be…

Bounty: 50

I am looking to encrypt my root EBS volumes for new EC2 environments that I create. I know that I can do this from the AWS console and from CloudFormation, but would like to be able to do so via an Elastic Beanstalk config file.

I have tried setting the EBS volume in the launch configuration; however, this only creates additional volumes alongside the root volume:

Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  BlockDeviceMappings: [ DeviceName: "/dev/sdf1", Ebs: { Encrypted: true, VolumeSize: 8, VolumeType: gp2}]
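
One avenue that may be worth trying (a sketch, not verified on Beanstalk): map the AMI’s root device name itself instead of an extra device like /dev/sdf1. On recent Amazon Linux AMIs the root device is typically /dev/xvda, but check your AMI:

Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  BlockDeviceMappings:
    # Assumption: /dev/xvda is the root device name of the AMI in use.
    - DeviceName: "/dev/xvda"
      Ebs: { Encrypted: true, VolumeSize: 8, VolumeType: gp2 }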

I have also tried to create a new EBS volume on environment creation; however, I am unsure how to dynamically get the EC2 instance’s logical name (I used MyEC2 here for reference):

Type: AWS::EC2::Volume
Properties:
  AutoEnableIO: true
  AvailabilityZone: { "Fn::GetAtt" : [ "MyEC2", "AvailabilityZone" ] }
  Encrypted: true
  KmsKeyId: mykey
  Size: 8
  VolumeType: gp2

Essentially I need to be able to create a new environment with an encrypted root volume. Any help would be greatly appreciated!


Get this bounty!!!