#StackBounty: #debian #kde #url #dolphin How to properly move weblinks from my browser to a folder via drag and drop in Debian with KDE…

Bounty: 50

Besides having my desktop span multiple monitors, I’d also like to be able to move weblinks onto it (or into folders on it).

I’m running Debian 9.1 with KDE and am using the standard Dolphin file manager. My desktop is set to folder view. When moving links onto it, or into folders on it, I get a menu showing the options “Move here”, “Copy here”, “Link here” and “Cancel”. When I choose “Link here” I get a clickable weblink. However, its name is not the page title but only the URL, as can be seen in the screenshot below:

[screenshot: the dropped weblink on the desktop is labeled with the URL instead of the page title]

I need them to be named after the page title. Additionally, if possible, I’d like a link to be created automatically without me having to choose “Link here”. Furthermore, it would be nice if the link were moved into a folder on the desktop when dropped onto one (not just when the folder is opened in the file manager). I’d like to keep Dolphin and KDE because they have otherwise been great choices. Is this possible somehow?
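
For reference, a minimal sketch of what is going on under the hood (my illustration, not part of the original question): the link Dolphin creates is a .desktop file of Type=Link, and the label shown on the desktop comes from its Name= field, so editing that field (or renaming the file with F2) gives the link any title you want. The Name and URL values below are placeholders:

[Desktop Entry]
# KDE displays the Name value as the icon's label on the desktop.
Name=Example Page Title
Type=Link
URL=https://example.com/
Icon=text-html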


Get this bounty!!!

#StackBounty: #nginx #rewrite #url An optimal way of nginx rewrite rule building for pretty URL

Bounty: 50

The web pages are accessed in the following manner:

example.com/?site=website 

In this example, I’m accessing the page website on the example.com domain.

Could anyone suggest an optimal way of writing the rewrite rules, so that the above could be shown and accessed as example.com/website?

nginx, CentOS 7
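
One possible approach, offered as a sketch rather than a verified answer (the /index.php front controller and the regex are assumptions; adjust them to whatever script actually reads the site parameter):

location / {
    # Serve real files and directories as-is; everything else is
    # treated as a pretty URL and handed to the named location below.
    try_files $uri $uri/ @pretty;
}

location @pretty {
    # Internally rewrite /website to /index.php?site=website. Because
    # the rewrite is internal (no redirect), the pretty URL stays in
    # the visitor's address bar.
    rewrite ^/([^/]+)/?$ /index.php?site=$1 last;
}

With this in place, a request for example.com/website should reach the same handler as example.com/?site=website.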


Get this bounty!!!

#StackBounty: #html #email #url #mailto #outlook-web-app Outlook webmail access mailto body is blank

Bounty: 50

I am trying to create a mailto-style compose URL for Outlook (web version).

In the Outlook desktop client everything works perfectly fine, but now I have to implement it for the web version (OWA) as well, and I am struggling with this URL:

<a href='https://company.domain.com/owa/?ae=Item&a=New&t=IPM.Note&to=someone@expample.com&subject=Hello%20again&body=Body%20Text' target=_blank>testy</a>

The body part comes through empty. Any ideas on what’s happening?
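
For what it is worth, here is a small sketch (my own illustration, not from the original post) that builds the same compose URL with every parameter value run through encodeURIComponent, which rules out encoding problems as the cause of the blank body:

// Base compose endpoint taken from the question; the parameter values
// below are the same example values, explicitly encoded.
var base = 'https://company.domain.com/owa/?ae=Item&a=New&t=IPM.Note';
var url = base
    + '&to=' + encodeURIComponent('someone@expample.com')
    + '&subject=' + encodeURIComponent('Hello again')
    + '&body=' + encodeURIComponent('Body Text');
window.open(url, '_blank'); // open the compose window in a new tab

If the body is still blank with guaranteed-correct encoding, the OWA endpoint itself is likely ignoring the body parameter.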


Get this bounty!!!

#StackBounty: #amazon #url Why is "Amazon" written with strange characters in url?

Bounty: 100

When I do a search on Amazon.fr, one of the URL parameters has the value “ÅMÅŽÕÑ”, as in the URL below:

https://www.amazon.fr/s/ref=nb_sb_noss/256-9830746-1362835?__mk_fr_FR=ÅMÅŽÕÑ&url=search-alias%3Daps&field-keywords=Webcam

Any idea why they have this strange parameter there?


Get this bounty!!!

#StackBounty: #url #query-string #parameter Source Querystring Parameter

Bounty: 50

I am trying to use the Source query string parameter so that the user is redirected back after saving changes in the Edit Form.

Here’s the Source URL I am using:

https://mysite/sites/hhc/finance/ap/Lists/APInvoiceForm/EditForm.aspx?ID=76&Source=/sites/hhc/finance/ap/

I paste it into the address bar and hit Enter; it works the first time I press Save, but every time after that it defaults back.

Is this not possible to use with the Edit Form?
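
One thing worth checking (my suggestion, not part of the original question): SharePoint expects the Source value itself to be URL-encoded, so that its slashes are not parsed as part of the outer URL. A quick sketch of building the link that way:

// Hypothetical illustration: encode the redirect target before
// appending it as the Source parameter.
var editForm = 'https://mysite/sites/hhc/finance/ap/Lists/APInvoiceForm/EditForm.aspx';
var source = '/sites/hhc/finance/ap/';
var url = editForm + '?ID=76&Source=' + encodeURIComponent(source);
// url ends with ...?ID=76&Source=%2Fsites%2Fhhc%2Ffinance%2Fap%2F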


Get this bounty!!!

#StackBounty: #linux #directory #raspberry-pi #wget #url Wget Won't Recursively Download

Bounty: 50

I’m trying to copy a forum thread with the following directory structure.

The first page has a URL like this:

https://some.site.com/foo/bar/threadNumber

And the rest of the pages follow this format:

https://some.site.com/foo/bar/threadNumber/page/2
https://some.site.com/foo/bar/threadNumber/page/3
https://some.site.com/foo/bar/threadNumber/page/*

I’m using the command:

wget --recursive --page-requisites --adjust-extension --no-parent --convert-links https://some.site.com/foo/bar/threadNumber

This command can copy any single URL just fine. However, starting from the top-level thread URL, I also want it to fetch all of the /page/* files: no higher directories, and nothing other than the lower /page/ files. I have also thrown --mirror into the mix with no success.

Any ideas why this command isn’t going any lower to download the rest of the pages?
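
One thing worth trying (an assumption on my part, not from the original post): recursive wget honors robots.txt and caps recursion at five levels by default, so lifting both restrictions explicitly can make the /page/* links reachable:

wget --recursive --level=inf -e robots=off --page-requisites --adjust-extension --no-parent --convert-links https://some.site.com/foo/bar/threadNumber

If the site only reveals the /page/2 link after login or via JavaScript, no recursion flags will help and the page URLs would have to be generated explicitly.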


Get this bounty!!!

Fetch GET parameters in JS/jQuery

If you have a URL with some GET parameters as follows:

www.test.com/t.html?a=1&b=3&c=m2-m3-m4-m5 

and need to get the value of each parameter, then below is a nifty piece of code solving your requirement.

JavaScript itself has nothing built in for handling query string parameters (modern browsers now provide URLSearchParams).

You could access location.search, which would give you from the ? character on to the end of the URL or the start of the fragment identifier (#foo), whichever comes first.

You can then parse that string into an object and access QueryString.c, as shown in the snippet below.
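
The original post’s snippet was cut off here; presumably it was along these lines (a reconstruction, not the original code):

// Parse location.search into a QueryString object, decoding each
// name and value and collecting repeated names into arrays.
var QueryString = (function () {
    var result = {};
    var query = window.location.search.substring(1); // drop the leading "?"
    var pairs = query.split("&");
    for (var i = 0; i < pairs.length; i++) {
        var pair = pairs[i].split("=");
        var key = decodeURIComponent(pair[0]);
        var value = decodeURIComponent(pair.slice(1).join("="));
        if (typeof result[key] === "undefined") {
            result[key] = value;                  // first occurrence
        } else if (typeof result[key] === "string") {
            result[key] = [result[key], value];   // second occurrence
        } else {
            result[key].push(value);              // third and later
        }
    }
    return result;
})();

For the example URL above, QueryString.a is "1", QueryString.b is "3", and QueryString.c is "m2-m3-m4-m5".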

Code to Download File from URL in Java

Below is the code to download the complete contents of a URL to your local hard drive.

//IMPORTS
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
//METHOD CODE
    private static void download(String url) throws IOException {

        String fileName = "Path\\to\\File.extn"; //The file that will be saved on your computer
        
        URL link = new URL(url); //The file that you want to download

        //Code to download
        InputStream in = new BufferedInputStream(link.openStream());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[2048];
        int n = 0;
        while (-1 != (n = in.read(buf))) {
            out.write(buf, 0, n);
            System.out.print("|");//Progress Indicator
        }
        out.close();
        in.close();
        byte[] response = out.toByteArray();

        FileOutputStream fos = new FileOutputStream(fileName);
        fos.write(response);
        fos.close();
        //End download code
        System.out.println("#");

    }

Note: this version buffers the entire download in memory (in the ByteArrayOutputStream) before writing it out, so for large files the JVM may throw OutOfMemoryError.
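
A streaming variant (my sketch, not part of the original post) avoids that problem by writing each chunk straight to disk instead of accumulating the whole response in memory:

    private static void downloadStreaming(String url, String fileName) throws IOException {
        URL link = new URL(url); //The file that you want to download
        //try-with-resources closes both streams even if an exception is thrown
        try (InputStream in = new BufferedInputStream(link.openStream());
             FileOutputStream fos = new FileOutputStream(fileName)) {
            byte[] buf = new byte[2048];
            int n;
            while ((n = in.read(buf)) != -1) {
                fos.write(buf, 0, n); //write the chunk directly; nothing builds up in memory
            }
        }
    }

Memory use stays bounded by the buffer size, so the size of the downloaded file no longer matters.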