#StackBounty: #c #http #video-streaming #bare-metal How to create a video stream from a series of bitmaps and send it over IP network?

Bounty: 250

I have a bare-metal application running on a tiny 16-bit microcontroller (ST10) with 10BASE-T Ethernet (CS8900) and a TCP/IP implementation based upon the EasyWeb project.

The application’s main job is to control an LED matrix display for public transport passenger information. It generates display information at about 41 fps, with a configurable display size of e.g. 160 × 32 pixels at 1-bit color depth (each LED can only be either on or off).

Example: (image of sample display content)

There is a tiny web server implemented, which provides the current frame buffer content (equal to the LED matrix display content) as PNG or BMP for download. So I can fetch snapshots with, e.g.:

wget http://$IP/content.png

or

wget http://$IP/content.bmp

or put appropriate HTML code into the controller’s index.html to view it in a web browser.
I could also write HTML/JavaScript code to update that picture periodically, e.g. every second, so that the user can see changes of the display content.

Now, for the next step, I want to provide the display content as some kind of video stream and then put appropriate HTML code into my index.html, or just open that “streaming URI” with e.g. VLC.

As my framebuffer bitmaps are built uncompressed, I expect a constant bitrate.

I’m not sure what’s the best way to start with this.

(1) Which video format is the easiest to generate if I already have a PNG for each frame (but I have that PNG only for a couple of milliseconds and cannot buffer it for a longer time)?

Note that my target system is very resource restricted in both memory and computing power.

(2) Which way for distribution over IP?

I already have some TCP sockets open, listening on port 80. I could stream the video over HTTP using chunked transfer encoding (each frame as its own chunk).
(Maybe HTTP Live Streaming works like this?)

I’ve also read about things like SCTP, RTP, and RTSP, but they look like more work to implement on my target. And since there is also the potential firewall drawback, I think I prefer HTTP for transport.

Please note that the application is coded in plain C, without an operating system or powerful libraries. Everything is coded from scratch, even the web server and the PNG generation.

Edit 2017-09-14: tryout with APNG

As suggested by Nominal Animal, I gave APNG a try.

I extended my code to produce the appropriate fcTL and fdAT chunks for each frame and to serve the resulting bla.apng with the HTTP Content-Type image/apng.

After downloading, that bla.apng plays as expected when opened in e.g. Firefox or Chrome (but not in Konqueror, VLC, Dragon Player, or Gwenview).

Streaming that APNG works nicely, but only with Firefox.
Chrome first wants to download the file completely.

So APNG might be a solution, but with the disadvantage that it currently only works with Firefox.


Get this bounty!!!

#StackBounty: #rest #http #asynchronous #websocket #sinatra Sinatra using a websocket client to respond to a http request

Bounty: 50

I am writing a web server that I would like to be RESTful, but the catch is that it has to interact with another server that communicates exclusively via WebSockets. So, this needs to happen:

  1. A request comes into my Sinatra server from a client
  2. My server opens a web socket to the foreign server
  3. My server asynchronously waits for messages and things from the foreign server until the socket is closed (this should only take two hundred or so milliseconds)
  4. My server sends a response back to the client

I’m sure this is not too complicated to accomplish, but I’m a bit stuck on it. What do you think? A simplified version of what I’ve got is below.

require 'sinatra'
require 'websocket-client-simple'

get '/' do
     ws = WebSocket::Client::Simple.connect('ws://URL...') # stray spaces inside the quotes would break the URL

     ws.on :message do
          puts 'bar'
     end

     ws.on :close do
          # At this point we need to send an HTTP response back to the client. But how?
     end

     ws.on :open do
          ws.send 'foo'
     end

end


Get this bounty!!!

#StackBounty: #centos #http #python CentOS and multiple python website developers

Bounty: 50

I need to be able to support multiple (~100) different users with their own websites on a CentOS-based web server. They need to be able to use Python (v2 & v3) along with Django. I understand that a systemctl restart is required for Apache; that can be arranged by a cron job. However, I have no idea as to the other tips, tricks, and requirements on the admin side. Is there a website that would be of use to me in setting up the server? I understand that each of them could run their own web server (SimpleHTTPServer), but that looks very messy to me.

I would be grateful for any help regarding the issue.


Get this bounty!!!

#StackBounty: #java #file #http #logging Scanning through logs (tail -f fashion) parsing and sending to a remote server

Bounty: 100

I have a task at hand to build a utility which

  1. Scans through a log file.

  2. Rolls over if a log file is reset.

  3. Scans through each line of the log file.

  4. Each line is sent to an executor service and checks are performed: which include looking for a particular word in the line, if a match is found I forward this line for further processing which includes splitting up the line and forming JSON.

  5. This JSON is sent across to a server using a CloseableHttpCLient with connection keep alive and ServiceUnavailableRetryStrategy patterns.

Entry point FileTailReader (started from Main):

   public class FileTailReader implements Runnable {

    private final File file;
    private long filePointer;
    private String url;
    private static volatile boolean keepLooping = true; // TODO move to main class
    private static final Logger logger = LogManager.getLogger(Main.class);
    private ExecutorService executor;
    private List<Future<?>> futures;


    public FileTailReader(File file, String url, ExecutorService executor, List<Future<?>> futures) {
        this.file = file;
        this.url = url;
        this.executor = executor;
        this.futures = futures;

    }

    private HttpPost getPost() {
        HttpPost httpPost = new HttpPost(url);
        httpPost.setHeader("Accept", "application/json");
        httpPost.setHeader("Content-type", "application/json");
        return httpPost;
    }

    @Override
    public void run() {
        long updateInterval = 100;
        try {
            ArrayList<String> batchArray = new ArrayList<>();
            HttpPost httpPost = getPost();
            CloseableHttpAsyncClient closeableHttpClient = getCloseableClient();
            Path path = Paths.get(file.toURI());
            BasicFileAttributes basicFileAttributes = Files.readAttributes(path, BasicFileAttributes.class);
            Object fileKey = basicFileAttributes.fileKey();
            String iNode = fileKey.toString();  // iNode is common during file roll
            long startTime = System.nanoTime();
            while (keepLooping) {

                Thread.sleep(updateInterval);
                long len = file.length();

                if (len < filePointer) {

                    // Log must have been rolled
                    // We can spawn a new thread here to read the remaining part of the rolled file.
                    // Compare the iNode of the file in tail with every file in the dir, if a match is found
                    // - we have the rolled file
                    // This scenario will occur only if our reader lags behind the writer - No worry

                    RolledFileReader rolledFileReader = new RolledFileReader(iNode, file, filePointer, executor,
                            closeableHttpClient, httpPost, futures);
                    new Thread(rolledFileReader).start();

                    logger.info("Log file was reset. Restarting logging from start of file.");
                    this.appendMessage("Log file was reset. Restarting logging from start of file.");
                    filePointer = len;
                } else if (len > filePointer) {
                    // File must have had something added to it!
                    RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
                    randomAccessFile.seek(filePointer);
                    FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
                    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
                    String bLine;
                    while ((bLine = bufferedReader.readLine()) != null) {
                        // We will use an array to hold 1000 lines, so that we can batch-process
                        // in a single thread
                        batchArray.add(bLine);
                        switch (batchArray.size()) {

                            case 1000:
                                appendLine((ArrayList<String>) batchArray.clone(), closeableHttpClient, httpPost);
                                batchArray.clear();
                                break;
                        }
                    }

                    if (batchArray.size() > 0) {
                        appendLine((ArrayList<String>) batchArray.clone(), closeableHttpClient, httpPost);
                    }

                    filePointer = randomAccessFile.getFilePointer();
                    randomAccessFile.close();
                    fileInputStream.close();
                    bufferedReader.close();
                   // logger.info("Total time taken: " + ((System.nanoTime() - startTime) / 1e9));

                }

                //boolean allDone = checkIfAllExecuted();
               // logger.info("isAllDone" + allDone + futures.size());

            }
            executor.shutdown();
        } catch (Exception e) {
            e.printStackTrace();
            this.appendMessage("Fatal error reading log file, log tailing has stopped.");
        }
    }

    private void appendMessage(String line) {
        System.out.println(line.trim());
    }

    private void appendLine(ArrayList<String> batchArray, CloseableHttpAsyncClient client, HttpPost httpPost) {
        Future<?> future = executor.submit(new LocalThreadPoolExecutor(batchArray, client, httpPost));
        futures.add(future);

    }

    private boolean checkIfAllExecuted() {
        boolean allDone = true;
        for (Future<?> future : futures) {
            allDone &= future.isDone(); // check if future is done
        }
        return allDone;
    }

    //Reusable connection
    private RequestConfig getConnConfig() {
        return RequestConfig.custom()
                .setConnectionRequestTimeout(5 * 1000)
                .setConnectTimeout(5 * 1000)
                .setSocketTimeout(5 * 1000).build();
    }

    private PoolingNHttpClientConnectionManager getPoolingConnManager() throws IOReactorException {
        ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
        PoolingNHttpClientConnectionManager cm = new PoolingNHttpClientConnectionManager(ioReactor);
        cm.setMaxTotal(1000);
        cm.setDefaultMaxPerRoute(1000);

        return cm;
    }

    private CloseableHttpAsyncClient getCloseableClient() throws IOReactorException {
        CloseableHttpAsyncClient httpAsyncClient = HttpAsyncClientBuilder.create()
                .setDefaultRequestConfig(getConnConfig())
                .setConnectionManager(getPoolingConnManager()).build();

        httpAsyncClient.start();

        return httpAsyncClient;


                /*.setServiceUnavailableRetryStrategy(new ServiceUnavailableRetryStrategy() {
                    @Override
                    public boolean retryRequest(
                            final HttpResponse response, final int executionCount, final HttpContext context) {
                        int statusCode = response.getStatusLine().getStatusCode();
                        return statusCode != HttpURLConnection.HTTP_OK && executionCount < 5;
                    }

                    @Override
                    public long getRetryInterval() {
                        return 0;
                    }
                }).build();*/
    }


}

I am using an implementation of Rabin Karp for string find:

public class RabinKarp {
    private final String pat;      // the pattern  // needed only for Las Vegas
    private long patHash;    // pattern hash value
    private int m;           // pattern length
    private long q;          // a large prime, small enough to avoid long overflow
    private final int R;           // radix
    private long RM;         // R^(M-1) % Q

    /**
     * Preprocesses the pattern string.
     *
     * @param pattern the pattern string
     * @param R       the alphabet size
     */
    public RabinKarp(char[] pattern, int R) {
        this.pat = String.valueOf(pattern);
        this.R = R;
        throw new UnsupportedOperationException("Operation not supported yet");
    }

    /**
     * Preprocesses the pattern string.
     *
     * @param pat the pattern string
     */
    public RabinKarp(String pat) {
        this.pat = pat;      // save pattern (needed only for Las Vegas)
        R = 256;
        m = pat.length();
        q = longRandomPrime();

        // precompute R^(m-1) % q for use in removing leading digit
        RM = 1;
        for (int i = 1; i <= m - 1; i++)
            RM = (R * RM) % q;
        patHash = hash(pat, m);
    }

    // Compute hash for key[0..m-1].
    private long hash(String key, int m) {
        long h = 0;
        for (int j = 0; j < m; j++)
            h = (R * h + key.charAt(j)) % q;
        return h;
    }

    // Las Vegas version: does pat[] match txt[i..i+m-1] ?
    private boolean check(String txt, int i) {
        for (int j = 0; j < m; j++)
            if (pat.charAt(j) != txt.charAt(i + j))
                return false;
        return true;
    }

    // Monte Carlo version: always return true
    // private boolean check(int i) {
    //    return true;
    //}

    /**
     * Returns the index of the first occurrence of the pattern string
     * in the text string.
     *
     * @param txt the text string
     * @return the index of the first occurrence of the pattern string
     * in the text string; -1 if no such match
     */
    public int search(String txt) {
        int n = txt.length();
        if (n < m) return -1;   // pattern longer than text: no match possible
        long txtHash = hash(txt, m);

        // check for match at offset 0
        if ((patHash == txtHash) && check(txt, 0))
            return 0;

        // check for hash match; if hash match, check for exact match
        for (int i = m; i < n; i++) {
            // Remove leading digit, add trailing digit, check for match.
            txtHash = (txtHash + q - RM * txt.charAt(i - m) % q) % q;
            txtHash = (txtHash * R + txt.charAt(i)) % q;

            // match
            int offset = i - m + 1;
            if ((patHash == txtHash) && check(txt, offset))
                return offset;
        }

        // no match
        return -1;
    }


    // a random 31-bit prime
    private static long longRandomPrime() {
        BigInteger prime = BigInteger.probablePrime(31, new Random());
        return prime.longValue();
    }
}

Here is my RolledFileReader

public class RolledFileReader implements Runnable {

    private static final Logger logger = LogManager.getLogger(RolledFileReader.class);

    private String iNode;
    private File tailedFile;
    private long filePointer;
    private ExecutorService executor;
    private CloseableHttpAsyncClient client;
    private HttpPost httpPost;
    List<Future<?>> futures;

    public RolledFileReader(String iNode, File tailedFile, long filePointer, ExecutorService executor,
                            CloseableHttpAsyncClient client, HttpPost httpPost, List<Future<?>> futures) {
        this.iNode = iNode;
        this.tailedFile = tailedFile;
        this.filePointer = filePointer;
        this.executor = executor;
        this.client = client;
        this.httpPost = httpPost;
        this.futures = futures;
    }

    @Override
    public void run() {
        try {
            inodeReader();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }


    public void inodeReader() throws Exception {
        String fParent = tailedFile.getParentFile().toString();
        File[] files = new File(fParent).listFiles();
        if (files != null) {
            Arrays.sort(files, Collections.reverseOrder()); // Probability of finding the file at top increases
            for (File file : files) {
                if (file.isFile()) {
                    Path path = Paths.get(file.toURI());
                    BasicFileAttributes basicFileAttributes = Files.readAttributes(path, BasicFileAttributes.class);
                    Object fileKey = basicFileAttributes.fileKey();
                    String matchInode = fileKey.toString();
                    if (matchInode.equalsIgnoreCase(iNode) && file.length() > filePointer) {
                        //We found a match - now process the remaining file - we are in a separate thread
                        readRolledFile(file, filePointer);

                    }
                }
            }

        }
    }


    public void readRolledFile(File rolledFile, long filePointer) throws Exception {
        ArrayList<String> batchArray = new ArrayList<>();
        RandomAccessFile randomAccessFile = new RandomAccessFile(rolledFile, "r");
        randomAccessFile.seek(filePointer);
        FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
        BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
        String bLine;
        while ((bLine = bufferedReader.readLine()) != null) {

            batchArray.add(bLine);
            switch (batchArray.size()) {
                case 1000:
                    executor.execute(new LocalThreadPoolExecutor((ArrayList<String>) batchArray.clone(), client, httpPost));
                    batchArray.clear(); // without this, the batch keeps growing and the same lines are resubmitted
                    break;
            }
        }

        if (batchArray.size() > 0) {
            executor.execute(new LocalThreadPoolExecutor((ArrayList<String>) batchArray.clone(), client, httpPost));
        }
    }


}

And my executor service LocalThreadPoolExecutor:

   public class LocalThreadPoolExecutor implements Runnable {
    private static final Logger logger = LogManager.getLogger(Main.class);

    private final ArrayList<String> payload;
    private final CloseableHttpAsyncClient client;
    private final HttpPost httpPost;
    private HttpContext context;
    private final RabinKarp searcher = new RabinKarp("JioEvents");

    public LocalThreadPoolExecutor(ArrayList<String> payload, CloseableHttpAsyncClient client,
                                   HttpPost httpPost) {
        this.payload = payload;
        this.client = client;
        this.httpPost = httpPost;
    }

    @Override
    public void run() {
        try {
            for (String line : payload) {
                int offset = searcher.search(line);
                switch (offset) {
                    case -1:
                        break;
                    default:
                        String zeroIn = line.substring(offset).toLowerCase();
                        String postPayload = processLogs(zeroIn);
                        if (null != postPayload) {
                            postData(postPayload, client, httpPost);
                        }
                }
            }
       // logger.info("Processed a batch of: "+payload.size());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    private String processLogs(String line) {
        String[] jsonElements = line.split("\\|"); // "|" must be escaped for the regex, and "\" for the Java string literal
        switch (jsonElements.length) {
            case 15:
                JSONObject jsonObject = new JSONObject();
                jsonObject.put("customerID", jsonElements[1]);
                jsonObject.put("mobileNumber", jsonElements[2]);
                jsonObject.put("eventID", jsonElements[3]);
                jsonObject.put("eventType", jsonElements[4]);
                jsonObject.put("eventDateTime", jsonElements[5]);
                jsonObject.put("eventResponseCode", jsonElements[6]);
                jsonObject.put("sourceSystem", jsonElements[7]);
                jsonObject.put("clientID", jsonElements[8]);
                jsonObject.put("serverHostName", jsonElements[9]);
                jsonObject.put("serverIPAddress", jsonElements[10]);
                jsonObject.put("serverSessionID", jsonElements[11]);
                jsonObject.put("softwareVersion", jsonElements[12]);
                jsonObject.put("deviceInfo", jsonElements[13]);
                jsonObject.put("userAgent", jsonElements[14]);
                return jsonObject.toString();
        }
        return null;
    }

    private void postData(String data, CloseableHttpAsyncClient client, HttpPost httpPost) throws Exception {

        StringEntity entity = new StringEntity(data);
        httpPost.setEntity(entity);
        Future<HttpResponse> future = client.execute(httpPost, context, null);
     //   HttpResponse response = future.get();
     //   logger.info("Resp is: "+response.getStatusLine().getStatusCode());

    }

}

And finally the Main class:

public class Main {
    private static final Logger logger = LogManager.getLogger(Main.class);
    private static final ExecutorService executor = Executors.newFixedThreadPool(25);
    private static final List<Future<?>> futures = new ArrayList<>();

    private static void usage() {
        System.out.println("Invalid usage");
    }

    public static void main(String[] args) {

        if (args.length < 2) {
            usage();
            System.exit(0);
        }
        String url = args[0];
        String fPath = args[1];

        File log = new File(fPath);
        FileTailReader fileTailReader = new FileTailReader(log, url, executor, futures);

        new Thread(fileTailReader).start(); // Can issue multiple threads with an executor like so, for multiple files


    }

}

The purpose of declaring member variables in Main is that I can later on add ShutdownHooks.

I am interested in knowing how I can make this code faster. Right now I am getting a throughput of 300,000 lines per 8,876 ms, which is not going over well with my peers.

Edit:

I changed the way RandomAccessFile reads from the file and have observed a considerable increase in speed. However, I am still looking for fresh pointers to enhance and optimize this utility:

else if (len > filePointer) {
                    // File must have had something added to it!
                    long startTime = System.nanoTime();
                    RandomAccessFile randomAccessFile = new RandomAccessFile(file, "r");
                    randomAccessFile.seek(filePointer);
                    FileInputStream fileInputStream = new FileInputStream(randomAccessFile.getFD());
                    BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(fileInputStream));
                    String bLine;
                    logger.info("Pointer: "+filePointer+" fileLength: "+len);
                    while ((bLine = bufferedReader.readLine()) != null) {
                        this.appendLine(bLine, httpclient, httpPost);
                    }
                    logger.info("Total time taken: " + ((System.nanoTime() - startTime) / 1e9));
                    filePointer = randomAccessFile.getFilePointer();
                    logger.info("FilePointer reset to: "+filePointer);
                    randomAccessFile.close();
                    fileInputStream.close();
                    bufferedReader.close();
                }

I also added a bit of batch processing to the above snippet (the code from FileTailReader was edited to demonstrate this, in particular the addition of batchArray, which is a list). I see an improvement of 10 seconds; the program now executes in 21-point-something seconds.


Get this bounty!!!

#StackBounty: #angular #http #asynchronous #rxjs #observable Angular Looped HTTP Request Map Value to Responses

Bounty: 50

In my service class I am looping an HTTP GET request and using RxJS forkJoin to combine all the responses into an observable, which I return to my component. For each response that comes back, I need to add two properties to the JSON (readySystem, which is an object, and serviceType, which is a string). The value of each of these is different for each iteration of the loop.

How do I keep/store/retain the values for both and map/add them to the correct response?

With the way I’ve attempted to do it below, the values for both are the same in every response returned in the final observable.

  getServices() {

  for (var x = 0; x < this.service.items.length; x++ ){
        var num = Object.keys(this.service.items[x].links).length;

         for (var key in this.service.items[x].links) {
            var systemName = this.service.items[x].systemName;
            var environment = this.service.items[x].environment;
            var server = this.service.items[x].server;
            var port = this.service.items[x].port;
            var linkName = this.service.items[x].links[key];
            var serviceType = key;
            this.observables.push(
            this.http.get('http://localhost:3000/myapi/get/service?server=' + server + '&service=' + linkName)
            .map((res:Response) => { 
                var output = res.json()
                for (var obj in output) {
                    if (output.hasOwnProperty(obj)){

                        var readySystem = new System(systemName,
                         environment,
                         server,
                         port,
                         linkName);

                    output[obj].System = readySystem;
                    output[obj].serviceType = serviceType;
                    }
                }
                return output;
        })
            );
        }
        };
        return Observable.forkJoin(this.observables);
};

Update: With the suggested code changes provided in the answer below, I get output like:

0: Array(33)
1: System
    systemName: "my system"
    environment: "my environment"
    etc.
2: "myservice"
3: Array(35)
4: System
   etc.
5: "myotherservice"

However, what is needed is:

0: Array(33)
 0: Object
  > System
      systemName: "my system"
      environment: "my environment"
      etc.
   serviceType: "myservice"
 1: Object
  > System
      systemName: "my system"
      environment: "my environment"
      etc.
   serviceType: "myotherservice"
 etc.
1: Array(35)
 0: Object


Get this bounty!!!

#StackBounty: #web-application #http #same-origin-policy #crossdomain #cors Setting Access-Control-Allow-Origin: * when session identif…

Bounty: 50

Is it considered secure for an application to set the header access-control-allow-origin: * if, during normal usage of the application, the client credentials are injected into the headers by the JS code? E.g.:

GET /application/secretStuff

X-Authorization-Key: aaa
X-Authorization-Secret: bbbb

This means that if external malicious code tries to make a call to this URL, it will be able to see the response, but that response will just be an authorization error anyway.

I understand there is at least one important drawback to this, namely the increased attack surface. But I’m looking to understand whether this approach has other major problems.


Get this bounty!!!

#StackBounty: #http #ubuntu #redirect #nginx #https How do you redirect a bare "example.com" domain to "https://example….

Bounty: 500

Title really says it all: I cannot get my domain to redirect from site.com to https://example.com.

My nginx conf is as follows, with sensitive paths removed. Here is a Gist link to my entire nginx setup; this is currently the only enabled domain in my whole nginx:

https://gist.github.com/rublev/c75cc58a5ca051ddafa99c00673ea911

Console output on my local vs my server:

rublev@rublevs-MacBook-Pro ~
• curl -I rublev.io
^C
rublev@rublevs-MacBook-Pro ~
• ssh r
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-75-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

78 packages can be updated.
0 updates are security updates.


Last login: Fri May 12 16:41:35 2017 from 198.84.225.249
rublev@ubuntu-512mb-tor1-01:~$ curl -I rublev.io
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Fri, 12 May 2017 16:41:43 GMT
Content-Type: text/html
Content-Length: 339
Last-Modified: Thu, 20 Apr 2017 20:47:12 GMT
Connection: keep-alive
ETag: "58f91e50-153"
Accept-Ranges: bytes

I am at my wits’ end; I truly have no idea what to do now. I’ve spent weeks trying to get this working.
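For reference, the canonical-redirect pattern in nginx usually takes the shape below. This is a generic sketch, not taken from the linked Gist; the server names and certificate paths are placeholders:

```nginx
# Sketch: redirect the bare domain and www, over both schemes,
# to the canonical https://example.com. Names and paths are placeholders.
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder path
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/example.com.crt;   # placeholder path
    ssl_certificate_key /etc/ssl/example.com.key;   # placeholder path
    # ... actual site configuration ...
}
```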


Get this bounty!!!

#StackBounty: #html #http #firefox #nginx #iframe <iframe> and <object> are both blank, but only in Firefox

Bounty: 150

I am attempting to embed one site into another site. I control both servers, which I will refer to here as “site1.com” (the site in the browser) and “site2.com” (the site I am trying to embed).

HTML embed code

Attempt 1, using iframe tag:


    Unable to display--your browser does not support frames.

Attempt 2, using object tag:


Things I know are not the problem

Secure/insecure mismatch

I’ve read that Firefox will not allow an HTTP embed inside an HTTPS page. Both sites are HTTPS, so there is no mismatch. The loaded resources (CSS, etc.) are also HTTPS, from the same origin, so there is no mixed-content problem.

I have tried setting security.mixed_content.block_active_content to false, in case I was mistaken about this, but the iframe was still blank.

Invalid or untrusted certificates

Both sites are using valid certificates, signed by a proper trusted authority, and are not expired. In fact, we are using a subdomain wildcard certificate, so they are both using the same certificate (they both are in the same subdomain).

X-Frame-Options

The site that I am trying to embed has this response header:

X-Frame-Options: ALLOW-FROM SITE1.COM

Content-Security-Policy

The site that I am trying to embed has this response header (wrapped here for readability):

Content-Security-Policy:
    frame-ancestors https://site1.com;
    default-src 'self';
    script-src https://site1.com 'self' 'unsafe-inline';
    style-src https://site1.com 'self' 'unsafe-inline'

Extra disclosure, possibly not needed – these headers are being generated by a Django application server, using this config and the “django-csp” module.

X_FRAME_OPTIONS = 'Allow-From site1.com'

CSP_FRAME_ANCESTORS = ('https://site1.com',)
CSP_STYLE_SRC = ('https://site1.com', "'self'", "'unsafe-inline'")
CSP_SCRIPT_SRC = ('https://site1.com', "'self'", "'unsafe-inline'")

CORS

My understanding is that CORS is only in play when the request contains an “Origin” header. That doesn’t seem to be happening here. I have also tried addressing CORS by using this header:

Access-Control-Allow-Origin: https://site1.com

But that appears to have no effect.

Ad blocker

I do not have an ad blocker in this Firefox install. I also removed all of my extensions and re-tested after a Firefox restart, the “blank iframe” behavior remains the same with no extensions installed at all.

Observed behavior

I have tested using the following browsers.

  • Google Chrome 58.0.3029.81 (64-bit) (macOS)
  • Safari 10.1 (macOS)
  • Firefox 53.0 (64-bit) (macOS)
  • Microsoft Edge 38.14393.0.0 (Windows 10)

Using Chrome, Safari, and Edge, the frame is shown like I expect – site2.com appears as a box inside of the site1.com page.

Using Firefox, I am shown an empty space of the size specified (600×600). If I used iframe, then there is a black border around it. If I used object, it’s just a blank area with no border.

The most interesting thing is that if I open the developer console and reload the page, I see the requests to fetch site1.com and its CSS and so on, but there are no requests made for site2.com. It isn’t that there is a problem showing site2.com, it is never requested at all.

Also, the developer console shows no errors or warnings about this. If there were an error condition or security exception preventing the loading of the second site, I would expect some sort of warning to be logged.

This has been driving me crazy for a few days. Any suggestions appreciated.


Get this bounty!!!

#StackBounty: #python #datetime #http #django #geospatial Set time zone from cookie

Bounty: 50

Is the set_cookie variable the correct way to signal that the cookie should be set? It does not seem Pythonic to me:

cookie = request.COOKIES.get(TIMEZONE_COOKIE_NAME)
set_cookie = False
if cookie and cookie in pytz.all_timezones_set:
    timezone.activate(pytz.timezone(cookie))
elif request.ip:
    geo = GeoIP2()
    try:
        time_zone = geo.city(request.ip).time_zone
        if time_zone:
            timezone.activate(pytz.timezone(time_zone))
            set_cookie = True
    except GeoIP2Error:
        self.logger.warning('Could not determine time zone of ip address: %s', request.ip)

response = self.get_response(request)
if set_cookie:
    response.set_cookie(TIMEZONE_COOKIE_NAME, time_zone)
return response


Get this bounty!!!

#StackBounty: #web-server #http #proxy Proxy server with customizeable logic of selecting upstream server

Bounty: 100

I’m looking for a proxy server, written in any language, that allows customizing how the upstream server is selected, for example by picking a random upstream server from a list.


Get this bounty!!!