#StackBounty: #windows #amazon-web-services #jenkins "This operation requires an interactive window station" error when launc…

Bounty: 200

I have a Windows EC2 instance which runs a build server for a Unity game, controlled by Jenkins.

When running Unity with the -batchMode command-line flag, I can make the game build successfully.
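For reference, the Jenkins job launches Unity with a command along these lines (the paths and build method name here are illustrative, not my exact values):

"C:\Program Files\Unity\Editor\Unity.exe" -batchMode -quit ^
    -projectPath "C:\jenkins\workspace\my-game" ^
    -executeMethod BuildScript.PerformBuild ^
    -logFile C:\jenkins\unity-build.log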

I’d like to run some automated tests inside Unity that require the physics system to be running, which can’t happen in batch mode. If I remove that command-line parameter, I get this error:

<I> Failed to get cursor position:
This operation requires an interactive window station.

I know the GPU is powerful enough to run the game: if I remote desktop in, I can run it at 30fps.

How do I get my EC2 instance to run a "window station" so that this launches successfully?


Get this bounty!!!

#StackBounty: #amazon-web-services #mongodb-query #nosql #nosql-aggregation #aws-documentdb Retrieving a random document from Amazon Do…

Bounty: 250

To support an application feature, I need to retrieve a single document from a collection in Amazon DocumentDB, and it would not be appropriate to retrieve the same document every time.

The MongoDB documentation states that the $sample aggregation stage can be used to select a number of documents using a pseudorandom cursor. I’ve tried this on a local MongoDB instance, and it does return a randomly selected document, which is what I need.

db.benchmark.aggregate([
    { $sample: { size: 1 } }
])

However, when I try to use this same query on Amazon DocumentDB, instead of returning a random record, it consistently returns the first record in the collection. This doesn’t seem very useful, as it’s the same functionality as $limit. The Amazon documentation indicates that DocumentDB supports the $sample stage, but gives no further information on its implementation.
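In other words, on DocumentDB the query above behaves as if I had written:

db.benchmark.aggregate([
    { $limit: 1 }
])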

Is there a way to get DocumentDB to select a random record using the $sample aggregation stage?


Get this bounty!!!

#StackBounty: #php #amazon-web-services #class #sdk #namespaces AWS S3 PHP SDK, S3Client class not found

Bounty: 50

I am trying to set up a connection to an Amazon S3 storage using their PHP SDK v3.

I am following this guide: https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/getting-started_basic-usage.html

So I installed the SDK using Composer and created a file called ftptest.php (don’t mind the name) that contains this:

<?php
require '/home/printzel/public_html/new/vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;

// Create an S3Client
$s3 = new Aws\S3\S3Client([
    'version' => 'latest',
    'region' => 'nl-ams1'
]);
?>

But when going to the page, I get an HTTP 500 error. When checking my error log, I see this:

[01-Apr-2021 14:01:27 UTC] PHP Fatal error:  Uncaught Error: Class 'Aws\S3\S3Client' not found in /home/printzel/public_html/new/ftptest.php:9
Stack trace:
#0 {main}
  thrown in /home/printzel/public_html/new/ftptest.php on line 9

I included my autoload file as you can see, but for some reason it can’t find the class. Why?

This is currently how my structure looks on my server:

Autoload location:

/home/printzel/public_html/new/vendor/autoload.php

AWS folder location:

/home/printzel/public_html/new/vendor/aws/aws-sdk-php/src

composer.json in my root:

{
    "require": {
        "aws/aws-sdk-php": "^3.176"
    }
}
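For completeness, the SDK was installed with Composer, i.e. something like:

composer require aws/aws-sdk-php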


Get this bounty!!!

#StackBounty: #amazon-web-services Missing AssumeRole Events for AssumedRole Identities

Bounty: 50

What are the reasons I might see AssumedRole identities whose access key IDs don’t have corresponding AssumeRole events in CloudTrail?

For example, I have an event with userIdentity.accessKeyId = ASIAIVOTP66PNEXAMPLE, but I can’t find an AssumeRole event (or AssumeRoleWithSAML or AssumeRoleWithWebIdentity) with responseElements.credentials.accessKeyId = ASIAIVOTP66PNEXAMPLE.
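To be concrete, I am searching along these lines (the time window is illustrative):

aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=AssumeRole \
    --start-time 2021-04-01T00:00:00Z \
    --end-time 2021-04-02T00:00:00Z

and then checking responseElements.credentials.accessKeyId in the returned events.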


Get this bounty!!!

#StackBounty: #amazon-web-services #rest #authentication #jwt #authorization How can I set up ui-less external API auth using AWS?

Bounty: 50

I’m trying to create an external API using AWS API Gateway that will give users access to data stored in multiple databases. The APIs will mostly be accessed through scripts rather than through a web UI.

Are there any AWS services I can use to manage user access to my API?

I’ve read a little bit about Amazon Cognito and OAuth 2, but at a glance those seem more targeted towards cases with a UI for users to interact with. Is there a way to create and manage API keys with AWS?
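Ideally a client script would authenticate with something as simple as an API key header, along these lines (the endpoint and key are hypothetical):

curl -H "x-api-key: MY_API_KEY" \
    https://abc123.execute-api.us-east-1.amazonaws.com/prod/data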

Thanks in advance for your help!


Get this bounty!!!

#StackBounty: #amazon-web-services #kubernetes #amazon-eks Trying to hit EKS cluster from within container

Bounty: 50

Summary: Create EKS cluster. Attempt to run commands from docker container. Get error.

Setup:

Follow the AWS tutorial on setting up an EKS cluster:

  1. Create VPC and supporting infra via CFT
  2. Create IAM role and policy: myAmazonEKSClusterRole/AmazonEKSClusterPolicy
  3. Create EKS cluster via the console logged in as SSO
  4. Wait for cluster to be ready
  5. From laptop/CLI authenticated as SSO, execute

aws eks update-kubeconfig --name my-cluster

  6. Execute kubectl get svc, get good result

  7. Create identity provider in IAM and associate with EKS cluster OpenID connect provider URL

  8. Create CNI role and policy: myAmazonEKSCNIRole

  9. Associate role to cluster via aws eks update-addon command

  10. Create node role myAmazonEKSNodeRole and attach policies: AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly

  11. Create key pair in AWS

  12. Add Node Group to cluster configured with above role and key pair

  13. Wait until node group is active

At this point, if I test with kubectl or helm, I can manipulate the cluster. However, I’m still authenticating as the SSO user and running from my laptop.

Moving on: I want to manipulate the cluster from within a Docker container, so I continue with the following steps.

The EKS cluster is in AWS account B.

  1. Create role in AWS account B (RoleInAccountB). The role has the admin access policy in account B.
  2. Establish trust between account A and account B so that a user in account A can assume the role in account B

On the local computer, outside of the container (SSO authentication):

  1. Download aws-auth-cm.yaml and customize it to add the new role:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - groups:
      - system:masters
      rolearn: arn:aws:iam::<Account B>:role/RoleInAccountB
      username: DOESNTMATTER
  2. Execute kubectl apply -f aws-auth-cm.yaml
  3. Watch nodes to ensure they are ready: kubectl get nodes --watch
  4. Verify config: kubectl describe configmap -n kube-system aws-auth
    Seems fine.
  5. SSH to EC2 in Account A
  6. Run docker container on EC2 (the image has prerequisite dependencies installed, such as the AWS CLI, kubectl, etc.)

From within the container:

  1. Assume Account B role (sketched below, after the error)
  2. Add role to kube config via aws cli

aws eks update-kubeconfig --name my-cluster --role-arn arn:aws:iam::accountB:role/RoleInAccountB

  3. Execute test to check permission to cluster: kubectl get svc

I receive the error: "error: You must be logged in to the server (Unauthorized)"
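For clarity, "assume Account B role" in step 1 above means roughly the following (the session name is illustrative):

aws sts assume-role \
    --role-arn arn:aws:iam::accountB:role/RoleInAccountB \
    --role-session-name eks-test

with the returned credentials exported as AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN before running aws eks update-kubeconfig.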


Get this bounty!!!

#StackBounty: #django #amazon-web-services #amazon-s3 #django-views #django-forms Django Form upload to AWS S3 with Boto3 results in em…

Bounty: 50

I have a multipart/form-data form that should upload to an S3 bucket using Boto3.

Everything appears to happen as expected, but in the end, the file in the S3 bucket has 0 bytes.

forms.py:

class DropOffFilesForm(forms.ModelForm):
    file = forms.FileField(
        error_messages={'invalid_type': _("Please upload only PDF, JPG, PNG, XLS(x), DOC(x) files")},
        validators=[FileTypeValidator(
            allowed_types=['application/pdf', 'image/*', 'application/msword',
                           'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
                           'application/vnd.ms-excel',
                           'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'],
            allowed_extensions=['.pdf', '.png', '.jpg', '.doc', '.docx', '.xls', '.xlsx'])],
        required=False)
    description = forms.CharField(widget=forms.Textarea, label=_("Description"), required=False)

    def clean_description(self):
        data = self.cleaned_data['description']
        return data

    class Meta:
        model = DropOffFiles
        exclude = ['filename',]
        fields = ['file', 'description']

views.py:

file_form = DropOffFilesForm(request.POST, request.FILES)

if file_form.is_valid():

    file_form = file_form.save(commit=False)

    s3 = boto3.client('s3', region_name=settings.AWS_ZONE,
                      aws_access_key_id=settings.AWS_KEY,
                      aws_secret_access_key=settings.AWS_SECRET)

    file = request.FILES['file']

    if file:

        file_type = file.content_type
        extension = file.name.split(".")[-1]

        # construct file name and location
        filename = calculate_md5(file) + '.' + extension

        # save the file
        try:
            response = s3.upload_fileobj(file, 'bucketname', filename)
        except ClientError as e:
            logging.error(e)
            return False

        file_form.filename = filename
        file_form.dropoff = dropoff
        file_form.save()
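calculate_md5 is a small helper of mine that is not shown above; it reads the uploaded file in chunks to build the hash, roughly like this (paraphrased, not my exact code):

import hashlib

def calculate_md5(file):
    # Hash the uploaded file chunk by chunk to avoid loading it into memory
    md5 = hashlib.md5()
    for chunk in file.chunks():
        md5.update(chunk)
    return md5.hexdigest()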

I’m happy for any suggestions.


Get this bounty!!!

#StackBounty: #amazon-web-services #macos #webserver #version #certbot Upgrade NGINX version on macOS

Bounty: 50

I currently have this version of nginx:

nginx -v
nginx version: nginx/1.15.6


Get this bounty!!!

#StackBounty: #amazon-web-services #kubernetes #django #gunicorn #nginx-ingress Django + Gunicorn + Kubernetes: Website down few minute…

Bounty: 100

I am having this issue with Django + Gunicorn + Kubernetes.

When I deploy a new release to Kubernetes, 2 containers start up with my current image. Once Kubernetes registers them as ready, which they are, since the logs show that Gunicorn is receiving requests, the website is down for several minutes. It just times out for roughly 7-10 minutes, until it’s fully available again.


The logs show requests that are coming in and returning 200 responses, but when I try to open the website through the browser, it times out. Also, health checkers like AWS Route 53 notify me that the website is not reachable for a few minutes.

I have tried many things, playing around with Gunicorn workers/threads, etc., but I just can’t get it to switch to a new deployment without downtime.

Here are my configurations (just the parts I think are relevant):

Requirements

django-cms==3.7.4  # https://pypi.org/project/django-cms/
django==2.2.16  # https://pypi.org/project/Django/
gevent==20.9.0  # https://pypi.org/project/gevent/
gunicorn==20.0.4  # https://pypi.org/project/gunicorn/

Gunicorn config

/usr/local/bin/gunicorn config.wsgi \
    -b 0.0.0.0:5000 \
    -w 1 \
    --threads 3 \
    --timeout 300 \
    --graceful-timeout 300 \
    --chdir=/app \
    --access-logfile -

Kubernetes Config

livenessProbe:
  path: "/health/"
  initialDelaySeconds: 60
  timeoutSeconds: 600
  scheme: "HTTP"
  probeType: "httpGet"
readinessProbe:
  path: "/health/"
  initialDelaySeconds: 60
  timeoutSeconds: 600
  scheme: "HTTP"
  probeType: "httpGet"
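These probe settings are in our own deployment config format rather than raw Kubernetes YAML; rendered as a standard Kubernetes httpGet probe, each would look roughly like this (the port is my assumption, taken from the Gunicorn bind above):

livenessProbe:
  httpGet:
    path: /health/
    port: 5000
    scheme: HTTP
  initialDelaySeconds: 60
  timeoutSeconds: 600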

resources:
  limits:
    cpu: "3000m"
    memory: 12000Mi
  requests:
    cpu: "3000m"
    memory: 12000Mi

It’s worth noting that the codebase is roughly 20K lines, which in my opinion is not very large. Also, the traffic is not very high: roughly 500-1000 users on average.

In terms of infrastructure, I am using the following AWS services and respective instance types:

Server instances (3 instances running, hosting 2 pods for my Django application)

m5a.xlarge (4 vCPUS, 16GB Memory)

Database (Amazon Aurora RDS)

db.r4.large (2 vCPUs, 16GB RAM)

Redis (ElastiCache)

cache.m3.xlarge (13.3GB memory)

ElasticSearch

m4.large.elasticsearch (2 vCPUS, 8GB RAM)

I rarely ask questions on Stack Overflow, so if you need more information, please let me know and I can provide it.


Get this bounty!!!

#StackBounty: #object-oriented #matrix #r #macros #amazon-web-services Was this an idiomatic and prudent way to extend R's matrix m…

Bounty: 50

After reading one too many Lisp books, I decided to try extending R’s syntax. My goal was to implement repeated matrix multiplication in a way such that I could write matrix%^%n to produce the result from multiplying matrix by itself n times (where n is a natural number). I produced the following working code that gives the expected outputs, but found it unsatisfactory.
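Concretely, with the test matrix defined below, the intended behaviour is:

testMatrix %^% 0  # identity matrix of the same dimension
testMatrix %^% 3  # testMatrix %*% testMatrix %*% testMatrix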

`%^%` <- function(squareMat, exponent)
{
  stopifnot(is.matrix(squareMat), is.numeric(exponent), exponent %% 1 == 0)
  out <- diag(nrow = nrow(squareMat))
  for (i in seq_len(exponent)) { out <- squareMat %*% out }
  out
}
testMatrix <- matrix(c(0, 10, 20, 30), 2, 2)
lapply(0:3, function(k) testMatrix %^% k)  # tests

For the purposes of this question, I’m happy to ignore the contents of my for loop. I know that it’s not optimised. What I’m more interested in investigating is whether this was a prudent way to extend R’s syntax. I have listed my objections to my code below. If they are valid, how can they be addressed?

  1. The method for checking if exponent is an integer is pathetic. I don’t think that doing better is possible, but I would dearly love to be proven wrong.
  2. My strongest objection is that I believe this problem calls for an S3 or S4 solution, but I do not believe one is possible. isS4(testMatrix) tells me that matrices are not S4 objects, ruling that out. S3 might be possible, but I’m unsure if this will make problem #1 even worse (because S3 is not very strict on types) or require risky hacking of the matrix type.
  3. Because of problem #1, stopifnot does not throw useful error messages.
  4. The job of throwing errors regarding the multiplication is delegated to %*%. I’m OK with that, but I don’t know if I should be.
  5. Again, ignoring the content of the for loop, was for actually the right thing to use? Is there something like an apply family function for "the index of my loop doesn’t actually matter, I just want to run this code seq_len(exponent) times"?


Get this bounty!!!