#StackBounty: #amazon-web-services #mongodb #vpc-peering #aws-fargate How do I resolve a private DNS address from within an AWS Fargate…

Bounty: 100

I’m trying to set up a connection to a MongoDB Atlas database from an AWS Fargate container. The VPC peering is set up and works, and I can successfully connect to the MongoDB Atlas cluster from a bastion within the private subnets of the AWS VPC. However, when I try the same connection from a Fargate task, it fails to connect.

For instance, if I attempt to connect with the following mongo CLI command:

mongo "mongodb+srv://user:password@cluster0.foo0.mongodb.net/database"

Then I get the following error:

MongoDB shell version v4.0.20
connecting to: mongodb://cluster0-shard-00-01.foo0.mongodb.net.:27017,cluster0-shard-00-02.foo0.mongodb.net.:27017,cluster0-shard-00-00.foo0.mongodb.net.:27017/database?authSource=admin&gssapiServiceName=mongodb&replicaSet=atlas-mdt101-shard-0&ssl=true
2020-09-09T13:16:46.295+0000 I NETWORK  [js] Starting new replica set monitor for atlas-mdt101-shard-0/cluster0-shard-00-01.foo0.mongodb.net.:27017,cluster0-shard-00-02.foo0.mongodb.net.:27017,cluster0-shard-00-00.foo0.mongodb.net.:27017
2020-09-09T13:16:56.351+0000 W NETWORK  [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set atlas-mdt101-shard-0
2020-09-09T13:16:56.351+0000 I NETWORK  [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set atlas-mdt101-shard-0. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.
2020-09-09T13:17:11.867+0000 W NETWORK  [js] Unable to reach primary for set atlas-mdt101-shard-0
2020-09-09T13:17:11.867+0000 I NETWORK  [js] Cannot reach any nodes for set atlas-mdt101-shard-0. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.
*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.
2020-09-09T13:17:11.868+0000 E QUERY    [js] Error: connect failed to replica set atlas-mdt101-shard-0/cluster0-shard-00-01.foo0.mongodb.net.:27017,cluster0-shard-00-02.foo0.mongodb.net.:27017,cluster0-shard-00-00.foo0.mongodb.net.:27017 :

The same command works fine from an EC2 instance in a private subnet of the VPC (the same subnets assigned to the ECS task).

I understand that Fargate networking is a bit different; the task is set up with awsvpc as the network mode. The error suggests that a whitelist entry might be needed on the Mongo Atlas side, but I’ve checked this and the task IP falls comfortably within the whitelist range configured on Atlas.

Has anybody tried this with Fargate or anything similar? I would have thought that the VPC peering connection would also be active for the Fargate task, given it is set up in the same VPC and subnets.
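Since the only DNS-sensitive step that differs is the SRV lookup behind mongodb+srv://, one way to separate DNS resolution from raw connectivity is to retry with the equivalent non-SRV seed-list string. The hosts and options below are reconstructed from the shell’s expansion in the error output above (credentials and database name are the same placeholders as before):

```
mongo "mongodb://user:password@cluster0-shard-00-00.foo0.mongodb.net:27017,cluster0-shard-00-01.foo0.mongodb.net:27017,cluster0-shard-00-02.foo0.mongodb.net:27017/database?ssl=true&replicaSet=atlas-mdt101-shard-0&authSource=admin"
```

If this succeeds from the task, the problem is resolving the SRV/TXT records; if it still fails the same way, the peering route table or security groups for the task’s ENI are the more likely culprits.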

Get this bounty!!!

#StackBounty: #nginx #amazon-web-services #elastic-beanstalk NGINX force HTTPS except for some user agents on AWS Elastic Beanstalk

Bounty: 100

I have a partial NGINX config file that gets pulled into the main NGINX config automatically by AWS Elastic Beanstalk. I have two instances running: one EC2 web server running PHP and another that’s an SQS worker.

I had set up my config file to force HTTPS using:

location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;

    return 301 https://$host$request_uri;
}
This worked great for forcing HTTPS, but I was running into issues with both the ELB health monitor failing due to a 301 (it expects a 200) and the SQS queue failing due to a 301 as well. The queue is triggered by a POST from a configured cron job, which seemed not to work with the 301 redirect:

version: 1
cron:
 - name: "schedule"
   url: "/worker/schedule"
   schedule: "* * * * *"

This is using a package that listens for the POST request to that URL and then kicks off jobs.

To fix those issues, I tried checking for the appropriate headers and disabling the redirect when they match:

location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;

    set $redirect_to_https 0;
    if ($http_x_forwarded_proto != 'https') {
        set $redirect_to_https 1;
    }

    if ($http_user_agent ~* '^ELB-HealthChecker/.*$') {
        access_log off;
        set $redirect_to_https 0;
    }

    if ($http_user_agent ~* '^aws-sqsd/.*$') {
        set $redirect_to_https 0;
    }

    if ($redirect_to_https = 1) {
        return 301 https://$host$request_uri;
    }
}
This worked for the ELB health check (200), but the SQS worker still fails, now with a 404.

The only config that works correctly on the SQS worker is a stripped-down version (which I’m currently deploying manually, specifically to the worker):

location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;
}

A screenshot of the successful POST request using the above config was included here (image omitted).

So the question is: how do I set up my config file to skip the HTTPS 301 redirect for both the ELB health check and the SQS queue, without having to deploy separate NGINX config files?
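One idiomatic way to express this kind of multi-condition toggle in nginx is a map block, which avoids stacking if directives inside the location (nginx handles chained if blocks poorly). A sketch, assuming the same header and user-agent values as above; map blocks live at the http level, outside the location:

```nginx
# 1 when the request comes from the health checker or the SQS daemon.
map $http_user_agent $skip_https_redirect {
    default                   0;
    "~^ELB-HealthChecker/"    1;
    "~^aws-sqsd/"             1;
}

# Redirect only plain-HTTP requests that are not from those agents.
# A missing X-Forwarded-Proto header yields "0", which falls to default.
map "$http_x_forwarded_proto$skip_https_redirect" $redirect_to_https {
    default    0;
    "http0"    1;
}
```

Inside the location block, a single `if ($redirect_to_https) { return 301 https://$host$request_uri; }` then replaces the chain of checks.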

Get this bounty!!!

#StackBounty: #java #amazon-web-services #primary-key #aws-iot #x509securitytokenmanager Get security token from AWS Credentials Provider

Bounty: 50

Can somebody explain to me how to implement the first step from this blog post? I can’t find it in the AWS documentation.

In other words, I need to translate a command:

curl --cert eeb81a0eb6-certificate.pem.crt --key eeb81a0eb6-private.pem.key -H "x-amzn-iot-thingname: myThingName" --cacert AmazonRootCA1.pem https://<prefix>.credentials.iot.us-west-2.amazonaws.com/role-aliases/MyAlias/credentials

to Java. How can I do it? I want to use the AWS SDK for this (I prefer a solution without a "custom client to make an HTTPS request").


I tried to use a custom client to make an HTTPS request, but I got stuck when I started exporting my keys to a Java KeyStore (the curl command works fine for me):

$ winpty openssl pkcs12 -export -in eeb81a0eb6-certificate.pem.crt -inkey eeb81a0eb6-private.pem.key -chain -CAfile AmazonRootCA1.pem -name mycompany.com -out my.p12

Error unable to get local issuer certificate getting chain.
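That error comes from the -chain flag: it makes openssl build and embed the full certificate chain, which typically fails when the issuing (intermediate) certificate is not available locally. A sketch that drops -chain and appends the CA with -certfile instead; a throwaway key and certificate stand in here for the real device credentials, so the whole thing runs anywhere:

```shell
# Generate a throwaway key/certificate pair standing in for
# eeb81a0eb6-private.pem.key and eeb81a0eb6-certificate.pem.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=test-thing" -keyout device.key -out device.crt

# Build the PKCS#12 bundle without -chain; with the real files, add
# "-certfile AmazonRootCA1.pem" to include the CA without verifying a chain.
openssl pkcs12 -export -in device.crt -inkey device.key \
    -name mycompany.com -passout pass: -out my.p12

# Confirm the bundle parses.
openssl pkcs12 -in my.p12 -passin pass: -noout
```

The resulting my.p12 can then be imported into a Java KeyStore with keytool -importkeystore.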

Get this bounty!!!

#StackBounty: #amazon-web-services #monitoring #amazon-ecs #amazon-cloudwatch create a CloudWatch Alarm when an ECS service unable to c…

Bounty: 50

If I release a new Docker image with a bug to my ECS service, the service will attempt to start new tasks but will keep the old version running if the new tasks fail to start.

In that scenario, it will sometimes (not always) emit an Event to the bus like:

service xxx is unable to consistently start tasks successfully. For more information, see the Troubleshooting section.

and sometimes it will just emit loads of events like:

service xxx deregistered 1 targets in target-group yyy

I would like a CloudWatch Alarm to fire in this scenario. How can I achieve that?

I cannot see any CloudWatch metrics that track any relevant events that I could use to trigger this Alarm. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch-metrics.html

If the tasks fail to boot, I don’t even get any UnHealthyHostCount metrics on the load balancer target group.

I think I will have to create an EventBridge rule to watch for the event named above, but I can’t see an obvious way to have that rule trigger an alarm. I have set up a rule to forward "WARN" and "ERROR" events to SNS/email, but I don’t always get these events, so I frequently get a restart loop with no alarms firing. 🙁
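For reference, a narrower EventBridge event pattern that matches the impaired-start condition specifically might look like the following (the eventName value is taken from the documented ECS service action events; adjust to taste). A rule cannot set a CloudWatch alarm directly, so the usual route is to target an SNS topic, or to build the alarm on the rule’s own TriggeredRules metric in the AWS/Events namespace:

```json
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Service Action"],
  "detail": {
    "eventType": ["WARN", "ERROR"],
    "eventName": ["SERVICE_TASK_START_IMPAIRED"]
  }
}
```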

Get this bounty!!!

#StackBounty: #python #amazon-web-services #aws-textract Using Textract for OCR locally

Bounty: 50

I want to extract text from images using Python. (The Tesseract library does not work for me because it requires an installation.)

I have found the boto3 library and Textract, but I’m having trouble working with them. I’m still new to this, so can you tell me what I need to do to run my script correctly?

This is my code:

import cv2
import boto3

#img = cv2.imread('slika2.jpg') #this is a jpg file
with open('slika2.pdf', 'rb') as document:
    img = bytearray(document.read())

textract = boto3.client('textract', region_name='us-west-2')

response = textract.detect_document_text(Document={'Bytes': img})  # gives me an error

When I run this code, I get:

botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the DetectDocumentText operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

I have also tried this:

# Document
documentName = "slika2.jpg"

# Read document content
with open(documentName, 'rb') as document:
    imageBytes = bytearray(document.read())

# Amazon Textract client
textract = boto3.client('textract',region_name='us-west-2')

# Call Amazon Textract
response = textract.detect_document_text(Document={'Bytes': imageBytes}) #ERROR


# Print detected text
for item in response["Blocks"]:
    if item["BlockType"] == "LINE":
        print('\033[94m' + item["Text"] + '\033[0m')

But I get this error:

botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the DetectDocumentText operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.

I’m a noob at this, so any help would be appreciated. How can I read the text from my image or PDF file?

I have also added this block of code, but the error is still "Unable to locate credentials":

session = boto3.Session(

Get this bounty!!!

#StackBounty: #amazon-web-services #ssl #springboot Easy AWS deployment of Spring Boot application with reasonable SSL costs

Bounty: 50

I am experimenting with deploying a standalone, executable Spring Boot JAR on AWS with SSL support.

I tried using Elastic Beanstalk, which created an EC2 instance. It uses Route 53 with an Elastic Load Balancer to provide SSL. I deployed it on a URL nobody knows about, and it works.

But the prices are astronomical! Just last month the load balancer alone cost me $16—for nothing! Nobody even knows about the site. I even forgot I had the site for a while. Random hackers have accessed the site more than I have.

For comparison I have a static site served out of S3 and deployed around the world with SSL using CloudFront for pennies a month. Literally. I think last month the cost was $0.50.

I could manually deploy my JAR on a single EC2 instance, but then I would have to set up Apache, set up Let’s Encrypt, keep the instance up to date, etc. Or maybe I could go the Docker route, but I’d still have to maintain the machine, and it’s not clear to me how to get SSL support automatically baked into the Docker image just by deploying it.

I realize the benefit of Elastic Beanstalk’s ability to spin up additional EC2 instances as needed and maintain transparent SSL support across them. But surely there is a cheaper way to get SSL. (If it costs me this much for a site that isn’t used, imagine when I have a few users.)

So let’s start simple: is there an easy way to deploy a Spring Boot application as an executable JAR on AWS with SSL, without configuring and maintaining a machine and manually setting up SSL, but at a reasonable cost? The current situation doesn’t seem much different from when I was deploying Java web sites two decades ago. I thought by now I would be able to just drop a JAR somewhere and get a cheap cloud deployment with SSL. (If a service other than AWS can do this better, please let me know that as well.)

Get this bounty!!!

#StackBounty: #javascript #amazon-web-services #amazon-cloudwatch #amazon-cloudwatchlogs How to submit the simple log with AWS CloudWat…

Bounty: 50

After about an hour of searching, I didn’t find anything about how to submit a simple log to AWS CloudWatch Logs from the frontend. Almost all examples are for Node.js, but I need to submit errors from the frontend, not from the backend. I could not even find which package I should use for the frontend.

To save your time, I prepared a template of the solution:

import { AWSCloudWatch } from "?????";

  // minimal config


  // Submit 'errorMessage' to AWS CloudWatch
  // It would be something like
  // AWSCloudWatch.submit(errorMessage)

Get this bounty!!!

#StackBounty: #javascript #node.js #amazon-web-services #aws-sdk-mock Mock Javascript AWS.RDS.Signer

Bounty: 200

I have a Connection class that is used to connect to an AWS RDS Proxy via IAM authentication. Part of that process is creating a token. I have a function to create the token, but now I’m having a hard time mocking and testing it.

Here is the Connection class with setToken method:

class Connection {
    constructor(username, endpoint, database) {
        this.username = username;
        this.endpoint = endpoint;
        this.database = database;
    }

    setToken () {
        let signer = new AWS.RDS.Signer({
            region: 'us-east-1', // example: us-east-2
            hostname: this.endpoint,
            port: 3306,
            username: this.username
        });

        this.token = signer.getAuthToken({
            username: this.username
        });
    }
}
And here I am trying to mock the return value of AWS.RDS.Signer.getAuthToken()

test('Test Connection setToken', async () => {
    AWSMock.mock('RDS.Signer', 'getAuthToken', 'mock-token');

    let conn = new connections.Connection(



I expected to see "mock-token" as the value for conn.token, but what I get is this:

{
  promise: [Function],
  createReadStream: [Function: createReadStream],
  on: [Function: on],
  send: [Function: send]
}
How can I get AWS.RDS.Signer.getAuthToken() to return a mock token?
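The object received above looks like the AWS.Request-style wrapper that aws-sdk-mock appears to put around mocked methods, which does not match the synchronous way getAuthToken is called here. A cruder but dependable alternative is stubbing the method on the prototype before the Connection is constructed. A self-contained sketch using a stand-in class (with the real SDK, the same patch targets AWS.RDS.Signer.prototype.getAuthToken):

```javascript
// Stand-in Signer keeps this sketch self-contained; the real class would
// sign a request against AWS.
class Signer {
    getAuthToken(options) {
        throw new Error('not stubbed');
    }
}

// Patch the prototype before the code under test constructs its Signer,
// keeping a reference so it can be restored afterwards.
const original = Signer.prototype.getAuthToken;
Signer.prototype.getAuthToken = () => 'mock-token';

const token = new Signer().getAuthToken({ username: 'user' });
console.log(token); // 'mock-token'

// Restore so later tests see the real behaviour.
Signer.prototype.getAuthToken = original;
```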

Get this bounty!!!

#StackBounty: #amazon-web-services #load-balancing #amazon-elb Why does only one of the multiple address records for my ELB work at a s…

Bounty: 150

I’m using an elastic load balancer. When I issue nslookup dualstack.app.elb.us-east-2.amazonaws.com, the output is

Non-authoritative answer:
Name:   dualstack.app.elb.us-east-2.amazonaws.com
Address: 3.xxx.xxx.176
Name:   dualstack.app.elb.us-east-2.amazonaws.com
Address: 18.xxx.xxx.40

I noticed each of these IPs is in a different availability zone and that only one of the IPs is valid at any given time. Making a request to dualstack.app.elb.us-east-2.amazonaws.com/healthcheck via curl only works half of the time. However, making the same request from my browser works 100% of the time, because Chrome has its own method of handling round-robin DNS (related: https://serverfault.com/a/774411, https://serverfault.com/a/852421).

Is this the intended behavior of ELB, that when multiple IPs are present, only one of them is expected to work at a time?

Get this bounty!!!

#StackBounty: #amazon-web-services #amazon-ec2 #load-balancing #nat #lets-encrypt How to work Stun/Turn Server (COturn) Under Aws Netwo…

Bounty: 50

I have just set up a coturn server. It works perfectly fine when using the IP or the domain without the load balancer; it was tested using this online tool:


The problem is when I use a Network Load Balancer: rerouting tcp_udp traffic works on port 80, but TLS on port 443 does not work.

I configured the Network Load Balancer to route TLS traffic on port 443 to the target group, also on port 443. I’m using Let’s Encrypt certificates for domain.com and *.domain.com in the Network Load Balancer, and the same certificates are configured in the turnserver.conf config file.

And this is my config :

server-name=domain.com
realm=domain.com


# Specify the process user and group

And this is what I get from the log:

3170: IPv4. tcp or tls connected to:
3170: session 001000000000003730: TCP socket closed remotely
3170: session 001000000000003730: closed (2nd stage), user <> realm <domain.com> origin <>, local, remote, reason: TCP connection closed by client (callback)

And by the way, I always get a 701 error from the online tool.
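For reference, the TLS-relevant directives in a turnserver.conf for this kind of setup would look roughly like the following (a sketch only; the paths are hypothetical and must point at the same certificate the load balancer presents):

```
listening-port=80
tls-listening-port=443
cert=/etc/letsencrypt/live/domain.com/fullchain.pem
pkey=/etc/letsencrypt/live/domain.com/privkey.pem
```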

Thank you,

Get this bounty!!!