#StackBounty: #python #amazon-web-services #aws-lambda #aws-api-gateway 500 error while trying to enable CORS on POST with AWS API Gate…

Bounty: 50

I have a response method that looks like this for my Lambda functions:

import json

def respond(err, res=None):
    # Return a Lambda proxy-style response: 400 with the serialized error,
    # or 200 with the serialized result
    return {
        'statusCode': 400 if err else 200,
        'body': json.dumps(err) if err else json.dumps(res),
        'headers': {
            'Access-Control-Allow-Headers': 'content-type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
            'Access-Control-Allow-Methods': 'POST, GET, DELETE',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Credentials': True,
            'Content-Type': 'application/json',
        },
    }

When I test my endpoint with an OPTIONS request from Postman, I get a 500 Internal Server Error. If I test it from the API Gateway console, I additionally get this:

Execution log for request test-request
Wed Jul 05 14:25:26 UTC 2017 : Starting execution for request: test-invoke-request
Wed Jul 05 14:25:26 UTC 2017 : HTTP Method: OPTIONS, Resource Path: /login
Wed Jul 05 14:25:26 UTC 2017 : Method request path: {}
Wed Jul 05 14:25:26 UTC 2017 : Method request query string: {}
Wed Jul 05 14:25:26 UTC 2017 : Method request headers: {}
Wed Jul 05 14:25:26 UTC 2017 : Method request body before transformations: 
Wed Jul 05 14:25:26 UTC 2017 : Received response. Integration latency: 0 ms
Wed Jul 05 14:25:26 UTC 2017 : Endpoint response body before transformations: 
Wed Jul 05 14:25:26 UTC 2017 : Endpoint response headers: {}
Wed Jul 05 14:25:26 UTC 2017 : Execution failed due to configuration error: Output mapping refers to an invalid method response: 200
Wed Jul 05 14:25:26 UTC 2017 : Method completed with status: 500

I’m not really sure what I’m doing wrong. I think I am returning all the right headers. Any help is appreciated.
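For context, the configuration error in the log (“Output mapping refers to an invalid method response: 200”) is raised by API Gateway itself, before the headers above are ever consulted: the OPTIONS method has no 200 method response defined on the gateway side. One way to sidestep gateway-side response mappings entirely is Lambda proxy integration, where the function answers the preflight itself. A minimal sketch (the handler name and header list are illustrative, not from the question):

```python
import json

def handler(event, context):
    # Sketch of a proxy-integration handler that answers CORS preflights itself.
    # With proxy integration there is no output mapping in API Gateway, so the
    # "invalid method response" configuration error cannot occur.
    if event.get('httpMethod') == 'OPTIONS':
        return {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Methods': 'POST, GET, DELETE, OPTIONS',
                'Access-Control-Allow-Headers': 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token',
            },
            'body': '',
        }
    # Normal request path for non-preflight methods
    return {
        'statusCode': 200,
        'headers': {'Access-Control-Allow-Origin': '*'},
        'body': json.dumps({'ok': True}),
    }
```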


Get this bounty!!!

#StackBounty: #amazon-ec2 #amazon-web-services #database-replication Using Amazon EC2 for postgres_fdw and bds replication

Bounty: 50

I’ve been tasked with doing two proofs of concept with Postgres on Amazon RDS: one with postgres_fdw and another with BDS replication. After many searches on the internet, it seems that it is not possible to set up either the replication or postgres_fdw on RDS.

However, someone on the internet (I cannot find the reference) mentioned EC2 as a possible way of setting up a Postgres Foreign Data Wrapper, or even building replication from one Postgres instance, let’s call it Frankfurt, to a second Postgres instance that we will call Seoul.
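On a self-managed EC2 instance you get superuser access, so postgres_fdw itself is just standard SQL once the two instances can reach each other. A minimal sketch of the Frankfurt side (host name, credentials, and table definitions are placeholders, not from the question):

```sql
-- Run on Frankfurt: expose a table that physically lives on Seoul
CREATE EXTENSION postgres_fdw;

CREATE SERVER seoul FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'seoul.example.internal', port '5432', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT_USER SERVER seoul
    OPTIONS (user 'fdw_user', password 'changeme');

CREATE FOREIGN TABLE seoul_orders (id integer, total numeric)
    SERVER seoul OPTIONS (schema_name 'public', table_name 'orders');
```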

Can anyone confirm that both postgres_fdw and BDS replication can be set up on EC2?

Thanks.


Get this bounty!!!

#StackBounty: #powershell #amazon-web-services #continuous-integration #elastic-beanstalk #devops AWS Elasticbeanstalk ebextensions ser…

Bounty: 200

I’ve got an Elastic Beanstalk environment that needs to run a PowerShell script and restart before the application is deployed. According to the documentation, this is supported:

If the system requires a reboot after the command completes, the system reboots after the specified number of seconds elapses. If the system reboots as a result of a command, Elastic Beanstalk will recover to the point after the command in the configuration file. The default value is 60 seconds. You can also specify forever, but the system must reboot before you can run another command.
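The behaviour quoted above corresponds to the waitAfterCompletion key of an .ebextensions command. A sketch of the kind of .config file this implies (the file name and script path are placeholders):

```yaml
# .ebextensions/reboot.config — illustrative sketch only
commands:
  01_prepare:
    command: powershell.exe -ExecutionPolicy Bypass -File C:\scripts\prepare.ps1
  02_reboot:
    command: shutdown /r /t 0
    waitAfterCompletion: forever
```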

However, when I add a reboot command to an .ebextensions .config file, I get the following exception from Elastic Beanstalk:

Error occurred during build: [Errno 4] Interrupted function call

The logs on the server after it has rebooted show that the command was executed, so I assume the error is caused by the restart happening during the app deploy stage.

If I remove the restart command, deploy, wait for the environment to be ready, and then trigger a restart manually, it works fine. But this is obviously not acceptable.

I’ve looked into the deployment-hooks filesystem approach, but that doesn’t work either, and it seems unnecessary given that the documentation says this requirement is supported out of the box.

Does anybody have any ideas?


Get this bounty!!!

#StackBounty: #javascript #node.js #amazon-web-services #express #amazon-s3 Allowing users to upload content to s3

Bounty: 150

I have an S3 bucket named BUCKET in region BUCKET_REGION. I’m trying to allow users of my web and mobile apps to upload image files to this bucket, provided that they meet certain restrictions based on Content-Type and Content-Length (namely, I only want to allow JPEGs smaller than 3 MB to be uploaded). Once uploaded, the files should be publicly accessible.

Based on fairly extensive digging through AWS docs, I assume that the process should look something like this on my frontend apps:

const a = await axios.post('my-api.com/get_s3_id');

const b = await axios.put(`https://${BUCKET}.amazonaws.com/${a.id}`, {
   // ??
   headersForAuth: a.headersFromAuth,
   file: myFileFromSomewhere // i.e. HTML5 File() object
});

// now can do things like <img src={`https://${BUCKET}.amazonaws.com/${a.id}`} />
// UNLESS the file is over 3 MB or not an image/jpeg, in which case I want it to throw errors

where on my backend API I’d be doing something like

import express from 'express';
import aws from 'aws-sdk';
import uuid from 'uuid';

const app = express();

app.post('/get_s3_id', (req, res, next) => {
  // do some validation of the request (i.e. checking user ids)
  const s3 = new aws.S3({region: BUCKET_REGION});
  const id = uuid.v4();
  // TODO do something with s3 to make it possible for anyone to upload pictures under 3 MB that have the s3 key === id
  res.json({id, additionalAWSHeaders});
});

What I’m not sure about is what exact S3 methods I should be looking at.


Here are some things that don’t work:

  • I’ve seen a lot of mentions of a (very old) API accessible with s3.getSignedUrl('putObject', ...). However, this no longer seems to support reliably setting a ContentLength. (See http://stackoverflow.com/a/28699269/251162.)

  • I’ve also seen a closer-to-working example using an HTTP POST with a form-data policy, which is also very old. I guess this might get it done if there are no alternatives, but I am concerned that it is no longer the “right” way to do things; additionally, it does a lot of manual request signing and does not use the official Node SDK. (See http://stackoverflow.com/a/28638155/251162.)
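For what it’s worth, the current aws-sdk for Node does expose s3.createPresignedPost, whose Conditions array can enforce exactly these two restrictions (a content-length-range and an exact Content-Type). The helper below only builds the parameter object and does not call AWS; all names are illustrative:

```javascript
// Build the parameter object for s3.createPresignedPost (illustrative sketch)
function buildPresignedPostParams(bucket, key) {
  return {
    Bucket: bucket,
    Fields: { key, 'Content-Type': 'image/jpeg' },
    Conditions: [
      ['content-length-range', 0, 3 * 1024 * 1024], // reject anything over 3 MB
      ['eq', '$Content-Type', 'image/jpeg'],        // reject non-JPEG uploads
    ],
    Expires: 300, // signed policy expires after 5 minutes
  };
}

// Usage (requires aws-sdk):
//   new aws.S3({region: BUCKET_REGION}).createPresignedPost(
//     buildPresignedPostParams(BUCKET, uuid.v4()),
//     (err, data) => { /* return data.url and data.fields to the client,
//                         which POSTs them as multipart form-data */ });
```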


Get this bounty!!!

#StackBounty: #amazon-ec2 #amazon-web-services #sftp #chroot #vsftpd Enabling ChrootDirectory breaks my SFTP on AWS, gives error for wr…

Bounty: 50

I’m trying to set up an SFTP server on AWS that multiple customers can use to upload data securely. It is important that they are not able to see any other customer’s data, and to do that I need to jail their directories with ChrootDirectory.

My sshd_config has the following:

Subsystem sftp internal-sftp
Match Group sftponly
        ChrootDirectory /home/chroot/ftptest/
        AllowTcpForwarding no
        ForceCommand internal-sftp

If I comment out the ChrootDirectory line, everything works fine, except that you can see all the files on the system. I configured everything based on the instructions here using vsftpd, and I am using SSH keys to control access to each of the customer accounts, as per Amazon’s instructions. I am using the Amazon AMI.

Edit: I changed the chroot directory to /home/chroot/ftptest/ and created directories with the following permissions:

ls -ld / /home /home/chroot /home/chroot/ftptest/
dr-xr-xr-x 25 root    root    4096 Feb 23 03:28 /
drwxr-xr-x  6 root    root    4096 Feb 23 20:26 /home
drwx--x--x  3 root    root    4096 Feb 23 20:27 /home/chroot
drwxr-xr-x  2 ftptest ftptest 4096 Feb 23 20:27 /home/chroot/ftptest/

It’s still not working. In /var/log/secure I see

Authentication refused: bad ownership or modes for directory /home/ftptest

even though /home/ftptest isn’t the directory I am trying to chroot to. Why would it be throwing an error for that directory? Could this be an issue with the ~/.ssh directory?
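One hedged observation: that message comes from sshd’s StrictModes check on the path to authorized_keys, which is resolved from the user’s home directory in /etc/passwd, not from ChrootDirectory, which would explain why /home/ftptest shows up. A diagnostic sketch (the paths assume the passwd entry really points at /home/ftptest):

```shell
# Which home directory does sshd resolve for this user?
grep '^ftptest:' /etc/passwd

# StrictModes refuses group/world-writable directories on this path
ls -ld /home/ftptest /home/ftptest/.ssh /home/ftptest/.ssh/authorized_keys
chmod 755 /home/ftptest
chmod 700 /home/ftptest/.ssh
chmod 600 /home/ftptest/.ssh/authorized_keys
chown -R ftptest:ftptest /home/ftptest
```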


Get this bounty!!!

#StackBounty: #postfix #amazon-web-services #email-server #amazon-ses Rewrite "from" for specific "to" addresses

Bounty: 50

We have a setup where postfix sends mails via Amazon SES relay. All is working fine except email forwards.

While this topic has already been discussed at least here and here, there are still some points I can’t wrap my head around.

The problem is that Amazon SES won’t send emails whose From: address is not verified. So when an internal address wants to forward to an external one and the sender is external as well, the mail will not get sent.

To solve this, we currently use the following config in main.cf:

header_checks = regexp:/etc/postfix/first_header_checks
smtp_header_checks = regexp:/etc/postfix/second_header_checks
sender_canonical_maps = regexp:/etc/postfix/sender_canonical
sender_canonical_classes = envelope_sender
smtpd_data_restrictions = check_sender_access pcre:/etc/postfix/sender_access

With first_header_checks

/^From:(\s)?(.*)/i PREPEND X-Original-From: $2
/^To:(\s)?(.*)$/i PREPEND X-Original-To: $2

second_header_checks

/^From:(.*)/i REPLACE From: <no-reply@verified-domain.com>

sender_canonical

/.*/    user@verified-domain.com

sender_access

/(.*)/  prepend Reply-To: <$1>

This works great for incoming mail. user@external.com sends the mail to me@verified-domain.com and it gets forwarded to new@another-external.com

Reply-To: <user@external.com>
X-Original-To: <me@verified-domain.com>
To: new@another-external.com
From: <no-reply@verified-domain.com>
X-Original-From: <user@external.com>

The problem is that this also happens for outgoing mail from the server. Say me@verified-domain.com sends a mail: the From: gets rewritten to no-reply and a Reply-To is set. This is what I want to fix. The mail headers should only be rewritten for incoming mail that is being forwarded.

I have tried using regular expressions like !/^From:(\s)?(.*@verified-domain.com)/ but so far with no luck.
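For reference, Postfix regexp tables also support if/endif blocks around a set of rules (see regexp_table(5)), so one direction to explore — an untested sketch, using the same placeholder domain — is to make the catch-all sender rewrite conditional on the sender not already being in the verified domain:

```
# sender_canonical — only rewrite senders outside the verified domain
if !/@verified-domain\.com$/
/.*/    user@verified-domain.com
endif
```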


Get this bounty!!!