#StackBounty: #php #amazon-web-services #.htaccess #seo AWS- subdomain doesn't support HSTS

Bounty: 500

I am using Semrush, and it shows one notice that is really bothering me and that I want to get rid of: "1 subdomain doesn't support HSTS" for the subdomain www.domain.com. Here is my .htaccess file:

Options -Indexes

<IfModule mod_rewrite.c>
    Options +FollowSymLinks
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^(.*)$ index.php/$1 [L]
    RewriteCond %{HTTPS} !=on
    RewriteCond %{HTTP_USER_AGENT} ^(.+)$
    RewriteCond %{SERVER_NAME} ^example.com$
    RewriteRule .* https://www.%{SERVER_NAME}%{REQUEST_URI} [R=301,L]
    Header add Strict-Transport-Security "max-age=300"
</IfModule>

<IfModule mod_headers.c>
    <If "%{REQUEST_SCHEME} == 'https' || %{HTTP:X-Forwarded-Proto} == 'https'">
        Header always set Strict-Transport-Security "max-age=31536000"
    </If>
</IfModule>

I added this part at the end:

<IfModule mod_headers.c>
    <If "%{REQUEST_SCHEME} == 'https' || %{HTTP:X-Forwarded-Proto} == 'https'">
        Header always set Strict-Transport-Security "max-age=31536000"
    </If>
</IfModule>

But still nothing changes. What am I doing wrong?
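One way to see what clients (and Semrush) actually receive is to fetch the page over HTTPS and inspect the Strict-Transport-Security response header. A quick sketch, not part of the site; the URL in the usage comment is a placeholder:

```python
import urllib.request

def hsts_max_age(header: str):
    """Parse the max-age directive out of a Strict-Transport-Security value."""
    for part in header.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            return int(part.split("=", 1)[1])
    return None

def check_hsts(url: str):
    """Return the parsed max-age of `url`'s HSTS header, or None if absent."""
    with urllib.request.urlopen(url) as resp:
        header = resp.headers.get("Strict-Transport-Security")
    return hsts_max_age(header) if header else None

# Example (placeholder domain):
# print(check_hsts("https://www.example.com/"))
```

If this prints None for the www subdomain, the header is being stripped or never set on that vhost, regardless of what the .htaccess says.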

Get this bounty!!!

#StackBounty: #amazon-web-services #redis #amazon-elasticache aws elasticache redis on notify-keyspace-events event trigger not able ca…

Bounty: 100

I’m using AWS ElastiCache and checking the event trigger on key expiration. I’m checking this in redis-cli; I’ve also added a screenshot.

(screenshot: redis-cli output)

I’ve even added a new parameter group and set the notify-keyspace-events parameter to KEA:

(screenshot: parameter group settings)

but still none of the keyspace notifications can be caught. Am I missing some other configuration needed to catch these events?
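Assuming the parameter group with notify-keyspace-events = KEA is actually assigned to the cluster (creating the group alone is not enough; it has to be attached), a minimal redis-py listener for expiration events could look like the sketch below. The endpoint in the usage comment is a placeholder:

```python
def expired_channel(db: int = 0) -> str:
    """Keyspace-event channel that fires when a key in database `db` expires."""
    return f"__keyevent@{db}__:expired"

def listen_for_expirations(host: str, port: int = 6379):
    import redis  # pip install redis

    r = redis.Redis(host=host, port=port)
    p = r.pubsub()
    p.psubscribe(expired_channel())  # subscribe before the keys start expiring
    for message in p.listen():
        if message["type"] == "pmessage":
            print("expired key:", message["data"])

# Example (placeholder endpoint):
# listen_for_expirations("my-cluster.xxxxxx.cache.amazonaws.com")
```

Note that expiration events are only published when Redis actually notices the key expired (on access or via its background sweep), so a subscriber that connects after the fact sees nothing.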


#StackBounty: #amazon-web-services #tensorflow #python-3.9 AWS Deep Learning AMI with Python3.9

Bounty: 50

I tried using the official AWS Deep Learning AMI.
It is published here: https://aws.amazon.com/marketplace/pp/prodview-d5wlsowr2cimk
(currently version 49.0)

My problem is that it uses Python 3.7 while my code uses Python 3.9.

I'm wondering what I should do. I can upgrade Python on the machine to 3.9, but that will obviously require reinstalling TensorFlow and other libraries, and I wonder whether that would break the optimizations that come by default on this image.

I also couldn't find any other official images with Python 3.9, GPU support, TensorFlow, OpenCV, and the rest.


#StackBounty: #nginx #amazon-web-services #amazon-s3 400 Bad Request errors (infrequent) on public Amazon S3 assets

Bounty: 100

We are hosting S3 public assets (images) under a local path using a reverse proxy from NGINX to S3.

We have noticed periodic errors in our logs (400 errors) which are very infrequent, but are causing issues for visitors. We can tell these are AWS errors since the content type returned is application/xml. Loading these same assets right after the logged error returns the correct response.

I’ve enabled logging on the relevant S3 buckets, but upon inspecting the logs I do not see any 400 errors listed during the timeframes the errors occurred.

  • Would AWS throttle our requests since they are coming from one IP (through the NGINX reverse proxy)?
  • What types of 400 statuses would S3 return for public objects that are valid?
  • Is there another place in the AWS console that would display these 400 errors so we could investigate?

Updated specific example case:

Example of our asset local path:

Public S3 URL:

Example of the NGINX log during logged error:

response_content_type: application/xml

status: 400

content_length: 355 bytes
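Since the 355-byte application/xml body is an S3 error document, capturing it in the proxy and parsing out the `<Code>` element would reveal exactly which 400 S3 is returning. A small sketch; the sample body below is hypothetical (RequestTimeout is one 400 that S3 can return even for valid public objects, e.g. on slow or interrupted socket reads):

```python
import xml.etree.ElementTree as ET

def s3_error_code(xml_body: str):
    """Extract the Code and Message from an S3 XML error document."""
    root = ET.fromstring(xml_body)
    return root.findtext("Code", ""), root.findtext("Message", "")

# Hypothetical example of the kind of body S3 returns with a 400:
sample = """<Error><Code>RequestTimeout</Code>
<Message>Your socket connection to the server was not read from or written to within the timeout period.</Message>
<RequestId>EXAMPLE123</RequestId></Error>"""

code, message = s3_error_code(sample)
print(code, "-", message)
```

Configuring NGINX to log the upstream response body (or mirroring failed requests to a capture endpoint) would let this run on the real error documents instead of a sample.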


#StackBounty: #amazon-web-services #email-server #amazon-route53 #dkim #amazon-ses Amazon SES – Domain Verification Pending

Bounty: 50

I am trying to set up Amazon SES on my Route53 domain (writeurl.com). The domain verification status remains at "pending verification" and does not proceed. The following nslookup command does not return anything, even though the records are created in the Route53 hosted zone.

nslookup -type=txt _amazonses.writeurl.com

What am I doing wrong here? Following are the Route53 records:

(screenshot: Route53 records)


#StackBounty: #angular #amazon-web-services #asp.net-core #aws-lambda #angular5 Error Code: 0 ExceptionMessage: Http failure response f…

Bounty: 50

I am getting the error "Error Code: 0 ExceptionMessage: Http failure response for (unknown URL): 0 Unknown Error" in my Angular 5 project.

However, I am not able to reproduce the issue; I only see it recurring in the Rollbar error log. I tried the same steps on my local machine and also tried different browsers such as Chrome and Safari, but no luck.

I’m using AWS Lambda with .NET Core 2.1, an API Gateway proxy, and Angular 5 on the front end.

Angular code:

getOrgPublicAssetName(id: number): Observable<any> {
    let content = this.appSharedService.getSitecontents();
    if (content !== '')
      return Observable.of(content);
    if (id !== undefined)
      return <Observable<any>>(
        this.http
          .get(this.organizationUrl + '/SiteContent/' + id)
          .map(result => {
            return result;
          })
      );
    else return Observable.of();
}

Following is my startup.cs Configure function, where I am already using CORS.

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    Environment.SetEnvironmentVariable(EnvironmentVariableAWSRegion.ENVIRONMENT_VARIABLE_REGION, Configuration["AWS:Region"]);

    // Create a logging provider based on the configuration information passed through the appsettings.json
    // You can even provide your custom formatting.

    // Enable swagger endpoint if the environment is not production
    if (!IsProductionEnvironment)
    {
        // Enable middleware to serve generated Swagger as a JSON endpoint.
        app.UseSwagger();

        // Enable middleware to serve swagger-ui (HTML, JS, CSS, etc.), specifying the Swagger JSON endpoint.
        app.UseSwaggerUI(c =>
        {
            if (env.EnvironmentName == "Local")
                c.SwaggerEndpoint(SwaggerLocalEndpoint, ApiName);
            else
                c.SwaggerEndpoint(SwaggerEndpoint, ApiName);
        });
    }

    app.UseCors(policy =>
    {
        // TODO: This has to be addressed once we deploy both api and ui under the same domain
    });

    app.UseMvc(routeBuilder =>
        routeBuilder.MapODataServiceRoute("odata", "odata", OrganizationModelBuilder.GetEdmModel()));

    app.UseForwardedHeaders(new ForwardedHeadersOptions
    {
        ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
    });
}


#StackBounty: #django #amazon-web-services #aws-lambda #zappa Request takes too long to Django app deployed in AWS Lambda via Zappa

Bounty: 50

I have recently deployed a Django backend application to AWS Lambda using Zappa.

After the lambda function has not been invoked for some time, the first request takes 10 to 15 seconds to process. At first I thought this was the cold start, but even for a cold start this time is unacceptable. Then, reading Zappa’s documentation, I saw that it enables the keep_warm feature by default, which sends a dummy request to the lambda function every 4 minutes to keep it warm; so this excessive delay in the first response is not due to a cold start.

Then, I started using tools such as AWS X-Ray and Cloudwatch Insights to try to find the explanation for the delay. Here is what I found out:

The invocation that takes a very long time to process is the following:

(screenshot: X-Ray trace of the slow invocation)

Crossed out in red are the names of the environment variables the application uses. They are all defined and assigned values directly in the AWS Console. What I don’t understand is, first, why it takes so long, and second, why it says the environment variables are cast as None. The application works perfectly (apart from the massive delay in the first request), so the environment variables are correctly set somewhere.

This request is made every two hours like clockwork, and also the first time someone invokes the lambda function after some idle time, as seen in the following chart:

(screenshot: chart of invocation durations)

The dots on the x axis correspond to Zappa’s dummy requests that keep the server warm. The elevated dots correspond to the invocation shown in the previous image. Finally, the spike corresponds to a user invocation. The time it took to process is the sum of the time of the long invocation (the one shown in the first image) and the time of the longest HTTP request the client makes to the server. That request was the following:

(screenshot: trace of the slow HTTP request)

It was a regular login request that should be resolved much faster. Other requests that are probably more demanding than this one were resolved in less than 100ms.

So, to sum up:

  1. There is one lambda invocation that takes more than 10 seconds to be resolved. This corresponds to the first image shown. It is done every 2 hours and when a user makes a request to the server after it has been idle for some time.
  2. Some requests take more than 2 seconds to be resolved and I have no idea as to why this could be.
  3. Apart from these previous function invocations, all other requests are resolved in a reasonable time frame.

Any ideas as to why these invocations could be taking so much time are very much appreciated, as I have spent quite some time trying to figure it out on my own and have run out of ideas. Thank you in advance!
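In the absence of a conclusive X-Ray segment, one low-tech way to narrow down where the seconds go is to wrap suspect code paths (settings loading, the first database connection, the login view) in a timing decorator and read the output in CloudWatch Logs. A generic sketch, not code from the app; the names are illustrative:

```python
import functools
import time

def timed(fn):
    """Print how long each call to `fn` takes (shows up in CloudWatch Logs)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            print(f"{fn.__name__} took {time.perf_counter() - t0:.3f}s")
    return wrapper

@timed
def example_view():
    time.sleep(0.01)  # stand-in for the real work (DB query, auth, etc.)
    return "ok"

example_view()
```

Applied to the login view and its dependencies, this would separate time spent inside the Django code from overhead that happens before the handler runs.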

Edit 1 (28/07/21): to further support my suspicion that this delay is not due to a cold start here is the "Segments Timeline" of the function in Cloudwatch/Application monitoring/Traces:

(screenshot: Segments Timeline of the function)

If it were a cold start, the delay should appear in the "Initialization" segment and not in the "Invocation" one.


#StackBounty: #amazon-web-services #github #amazon-s3 How can I use automation in AWS to replicate a github repo to an S3 bucket (quick…

Bounty: 50

I’d like to try to automate replicating a GitHub repo to an S3 bucket (for the sole reason that CloudFormation modules must reference templates in S3).

The quickstart I tried looked like it could do this, but it doesn’t succeed for me, even though GitHub reports that the push via my repository’s webhook succeeded.


I configured these parameters.

I am not sure what to configure for allowed IPs, so I tested with it fully open.

AllowedIps           -
ApiSecret            ****
CustomDomainName     -
ExcludeGit           True
OutputBucketName     -
QSS3BucketName       aws-quickstart
QSS3BucketRegion     us-east-1
QSS3KeyPrefix        quickstart-git2s3/
ScmHostnameOverride  -
SubnetIds            subnet-124j124
VPCCidrRange         -
VPCId                vpc-l1kj4lk2j1l2k4j

I tried manually executing the CodeBuild build as well, but got this error:

COMMAND_EXECUTION_ERROR: Error while executing command:

python3 - << "EOF"
from boto3 import client
import os

s3 = client('s3')
kms = client('kms')
enckey = s3.get_object(Bucket=os.getenv('KeyBucket'), Key=os.getenv('KeyObject'))['Body'].read()
privkey = kms.decrypt(CiphertextBlob=enckey)['Plaintext']
with open('enc_key.pem', 'w') as f:
    print(privkey.decode("utf-8"), file=f)
EOF

Reason: exit status 1

The GitHub webhook page reports this response:

Content-Length: 0
Content-Type: application/json
Date: Thu, 24 Jun 2021 21:33:47 GMT
Via: 1.1 9b097dfab92228268a37145aac5629c1.cloudfront.net (CloudFront)
X-Amz-Apigw-Id: 1l4kkn14l14n=
X-Amz-Cf-Id: 1l43k135ln13lj1n3l1kn414==
X-Amz-Cf-Pop: IAD89-C1
X-Amzn-Requestid: 32kjh235-d470-1l412-bafa-l144l1
X-Amzn-Trace-Id: Root=1-60d4fa3b-73d7403073276ca306853b49;Sampled=0
X-Cache: Miss from cloudfront


#StackBounty: #swift #string #amazon-web-services #character-encoding #amazon-polly AWS Polly – Highlighting special characters

Bounty: 150

I am using the AWS Polly service for text-to-speech, but if the text contains special characters, Polly returns the wrong start and end offsets.

For example, if the text is "Böylelikle", it returns: {"time":6,"type":"word","start":0,"end":11,"value":"Böylelikle"}

But it should start at 0 and end at 10.

I’ve searched the AWS documentation, and it says the start and end values are offsets in bytes, not characters.

My question is: how can I convert these byte offsets to character offsets?

My code is:

builder.continueOnSuccessWith { (awsTask: AWSTask<NSURL>) -> Any? in
    if builder.error == nil {
        if let url = awsTask.result {
            do {
                let txtData = try Data(contentsOf: url as URL)
                if let txtString = String(data: txtData, encoding: .utf8) {
                    let lines = txtString.components(separatedBy: .newlines)
                    for line in lines {
                        let jsonData = Data(line.utf8)
                        let pollyVoiceSentence = try JSONDecoder().decode(PollyVoiceSentence.self, from: jsonData)
                    }
                }
            } catch {
                print("Could not parse TXT file")
            }
        }
    } else {
        print("ParseJSON: \(builder.error!)")
    }
    return nil
}

And to highlight words:

let start = pollyVoiceSentence.start
var end = pollyVoiceSentence.end
let voiceRange = NSRange(location: start, length: end - start)

print("RANGE: \(voiceRange) - Word: \(pollyVoiceSentence.value)")
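Conceptually, converting Polly's byte offsets to character offsets means taking the UTF-8 prefix of the text up to each offset and counting the characters it decodes to. A minimal sketch of the idea in Python (the same logic ports to Swift by slicing the string's UTF-8 view and rebuilding a String from it); it assumes the offset falls on a character boundary, which it should for Polly's word marks:

```python
def byte_to_char_offset(text: str, byte_offset: int) -> int:
    """Count how many characters fit in the first `byte_offset` UTF-8 bytes."""
    return len(text.encode("utf-8")[:byte_offset].decode("utf-8"))

# "Böylelikle" is 10 characters but 11 UTF-8 bytes, since "ö" takes 2 bytes.
text = "Böylelikle"
start = byte_to_char_offset(text, 0)   # 0
end = byte_to_char_offset(text, 11)    # 10, the character offset to highlight
```

With `start` and `end` converted this way, the NSRange above highlights the intended characters instead of drifting past the end of the word.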

