#StackBounty: #seo #google-search #amp Why is Google showing the outdated contents of www.google.com/amp/s/MYAMPURL?

Bounty: 50

Recently I updated the contents of 100,000+ AMP pages on my server. However, when I search on my mobile, the results Google shows point to the following URL:
https://www.google.com/amp/s/amp.mywebpage.com/foo.html

whose content is completely different from:
https://amp.mywebpage.com/foo.html

How can I tell Google to send users to my website (at ‘amp.mywebpage.com’) instead of showing the outdated content of ‘www.google.com’?
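
For context, Google associates the cached copy with the origin through the canonical/amphtml link pairing, and the cached copy is generally refreshed after Google refetches the origin. A minimal sketch of that pairing (the www.mywebpage.com canonical URL is an assumption; the question only shows the amp. subdomain):

<!-- On the canonical (non-AMP) page, assumed here to be www.mywebpage.com/foo.html -->
<link rel="amphtml" href="https://amp.mywebpage.com/foo.html">

<!-- On the AMP page at amp.mywebpage.com/foo.html -->
<link rel="canonical" href="https://www.mywebpage.com/foo.html">

If this markup is already in place, requesting a recrawl (e.g. via Search Console’s URL Inspection tool) is usually what nudges the cached copy to update.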


Get this bounty!!!

#StackBounty: #seo #user-generated-content How to exclude or tag user-generated content that shares hostname with first-party site

Bounty: 50

I’m running a SaaS application, example.com, which in addition to its landing pages has several pages of "first party" content, for example:

  • example.com/features
  • example.com/pricing
  • example.com/support

Once a customer signs up, the design of the application gives them a subpath to name and use, where they manage custom content. For example:

  • example.com/joes-place
  • example.com/bobs-place

For a real-world example of this pattern, look at GitHub: you sign up and then get github.com/:username.

Challenge: I’m looking for best practices to clearly distinguish first-party content (like /pricing) from third-party content (like /joes-place) when it comes to search and SEO. Specifically:

  • Google has occasionally decided to auto-onebox customers like /joes-place; I’d like it to not do that.
  • I want search engines to keep crawling third-party content, since it’s important to the customers that they show up in search.
  • For vanity/aesthetic reasons, I cannot move third-party content to its own domain (e.g. I want to keep doing what GitHub does).

So far what I’ve done:

  • First-party & customer content use different Google Analytics accounts
  • First-party content is in sitemap.xml; customer content isn’t mentioned at all (see the sketch below).
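
For illustration, a minimal sketch of a sitemap.xml reflecting that split, using the example URLs from above:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/features</loc></url>
  <url><loc>https://example.com/pricing</loc></url>
  <url><loc>https://example.com/support</loc></url>
  <!-- customer paths such as /joes-place and /bobs-place are deliberately omitted -->
</urlset>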

Are there other best practices I should be following here?


Get this bounty!!!

#StackBounty: #localization #seo #canonical-link #canonicalization “The canonical link points to the site root” error

Bounty: 50

I have a website that uses country-specific pages. So for every page, there is a country-specific URL. For example: example.com/au/blog, example.com/us/blog, example.com/uk/blog. This is great as we can show content more relevant to each country.

This idea is the same for our home page: example.com/au, example.com/us, example.com/uk.

When a user goes to a non-country-specific URL (i.e. example.com, example.com/blog), the server falls back to serving more generalised content. On the client, we then show a banner letting the user decide whether they want to go to a country-specific page.

With this in mind, I have added the following meta tags, and I receive the error below when testing with Lighthouse.

<link rel="canonical" href="https://www.example.com/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
<link rel="alternate" hreflang="en-GB" href="https://www.example.comt/uk/">
<link rel="alternate" hreflang="en-US" href="https://www.example.com/us/">
<link rel="alternate" hreflang="en-AU" href="https://www.example.com/au/">
//error
The document does not have a valid rel=canonical. Points to the domain's root URL (the homepage), instead of an equivalent page of content. 

Is this the correct way to inform crawlers that:

  • The site root is the original document
  • The site root doesn’t target any language or locale
  • The alternatives to this page are en-GB, en-US and en-AU

If so, why does Lighthouse complain about this error on the home page? It doesn’t complain about this on any other page.

I am new to canonicalisation and providing alternate-language pages, so I might be missing something obvious.
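
For comparison, here is a sketch of what the /uk/ page itself would typically carry: a canonical pointing at the page’s own URL rather than the root, alongside the same alternate set. Each country page canonicalises to itself; only genuinely equivalent pages should share a canonical.

<link rel="canonical" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
<link rel="alternate" hreflang="en-GB" href="https://www.example.com/uk/">
<link rel="alternate" hreflang="en-US" href="https://www.example.com/us/">
<link rel="alternate" hreflang="en-AU" href="https://www.example.com/au/">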


Get this bounty!!!

#StackBounty: #seo #redirects #301-redirect Redirect shop.mywebsite.com to mywebsite.com/shop

Bounty: 50

I am looking for the cleanest, most SEO-friendly way to redirect a subdomain (i.e. shop.example.com) to a subdirectory on the main domain (i.e. example.com/shop).

Further information

  • I will not keep the hosting where the shop. subdomain site is currently hosted. The contents of shop. will be transferred to the main domain, which sits on a separate hosting platform.
  • I am able to add the subdomain to my existing (main website) hosting and update the DNS A record, since I use Plesk and can add it as a subdomain of the main domain.
  • I do have access to Google Search Console (unsure whether I can notify Google of the change there)

Any advice much appreciated.


Get this bounty!!!

#StackBounty: #seo #google-search #ranking Does Googlebot index base64-encoded images?

Bounty: 50

I was wondering if Google can crawl an image like this:

<img src="data:image/png;base64..." title="relevant title" alt="relevant alt" />

I found this link that says no: Base64 encoded images and availability of their metadata for Googlebot

But it’s been 6 years since that post, and I was wondering whether things have changed. I want to provide the best experience for my users, but not at the cost of my ranking.

In my case I have more than 3 million pages, all of which have "small" images that I would like to embed/inline to improve initial page-load performance. These images also rank well in image and web search, and I don’t want to lose that.
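
For reference, one way to keep a crawlable address for each image while inlining the bytes would be to keep serving the file at a stable URL and list it in an image sitemap. A minimal sketch, with placeholder URLs (whether Google associates the inlined copy with this URL is not guaranteed; this only ensures the image still has a URL to index):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>https://example.com/some-page.html</loc>
    <image:image>
      <image:loc>https://example.com/images/small-image.png</image:loc>
    </image:image>
  </url>
</urlset>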


Get this bounty!!!

#StackBounty: #seo #hreflang How do I prevent keyword cannibalization with hreflang tags?

Bounty: 50

I’m afraid that "keyword cannibalization" will take place when, for example, I have the following two pages on my site: /en/shoe and /en-us/shoe. Can I assume with absolute certainty that Google’s search engine in the United States shows only /en-us/shoe in the SERP when it is clear to the search engine that the person searching for the keyword "shoe" speaks English and is in the United States? That when it is only clear that the person speaks English, only the /en/shoe page is shown? And that when neither is known, the "x-default" page is the only one shown to that person? I understand that this situation – if it can occur at all – can only occur if the hreflang HTML tags are perfectly applied on the site.

If I have understood everything correctly, the perfect application of hreflang should look like this.

You can write the word ‘shoe’ in the following languages:

  • French (fr) : chaussure
  • Spanish (es) : zapato

My fictitious site has the following pages with the corresponding hreflangs:

  • /en/shoe (en) and (x-default)
  • /es/zapato (es)
  • /fr/chaussure (fr)
  • /en-us/shoe (en-us)
  • /en-ca/shoe (en-ca)
  • /en-uk/shoe (en-gb)
  • /es-us/zapato (es-us)
  • /es-ca/zapato (es-ca)
  • /es-uk/zapato (es-gb)
  • /fr-us/chaussure (fr-us)
  • /fr-ca/chaussure (fr-ca)
  • /fr-uk/chaussure (fr-gb)

Each of the pages mentioned above carries the following hreflang structure. (Note that the valid country code for the United Kingdom in hreflang is "gb", not "uk", so the codes below use en-gb, es-gb and fr-gb even though the URL paths keep /en-uk/ etc.)

<head>
  <title>Wha efa</title>
  <link rel="alternate" hreflang="x-default" href="http://example.com/en/shoe" />
  <link rel="alternate" hreflang="en" href="http://example.com/en/shoe" />
  <link rel="alternate" hreflang="es" href="http://example.com/es/zapato" />
  <link rel="alternate" hreflang="fr" href="http://example.com/fr/chaussure" />
  <link rel="alternate" hreflang="en-us" href="http://example.com/en-us/shoe" />
  <link rel="alternate" hreflang="en-ca" href="http://example.com/en-ca/shoe" />
  <link rel="alternate" hreflang="en-uk" href="http://example.com/en-uk/shoe" />
  <link rel="alternate" hreflang="es-us" href="http://example.com/es-us/zapato" />
  <link rel="alternate" hreflang="es-ca" href="http://example.com/es-ca/zapato" />
  <link rel="alternate" hreflang="es-uk" href="http://example.com/es-uk/zapato" />
  <link rel="alternate" hreflang="fr-us" href="http://example.com/fr-us/chaussure" />
  <link rel="alternate" hreflang="fr-ca" href="http://example.com/fr-ca/chaussure" />
  <link rel="alternate" hreflang="fr-uk" href="http://example.com/fr-uk/chaussure" />
</head>

Before I go any further, it is important to note the following. Every page about "shoe" that is written in English has almost exactly the same text, so it’s duplicate text. The same goes for the Spanish and French pages on the subject of "shoe". However, the countries have to be distinguished, because the shoes are the only thing that is country-specific, in a manner of speaking. Using only the /en, /es and /fr variants is therefore not sufficient.

So I am looking for sources and facts that conclusively establish that keyword cannibalization will not take place. Hopefully you can provide them. 🙂

FAQ:
Do you need the /en directory? Is that for customers from Australia and New Zealand? Do the pricing and shipping options even make sense for them? – Stephen Ostermiller♦ 4 hours ago

Very good point, thank you for your comment. In practice I will also have /en-au/shoe and /en-nz/shoe pages. It could be that someone in a non-English-speaking country searches for the keyword "shoe", and those people can then go to /en/shoe. That is the idea behind it. Because what if someone in Pakistan is looking for the keyword "shoe"? That person is not supposed to land on the /en-us/shoe page.

Just so we’re clear: in my question I gave the impression that this is a webshop site, but the idea behind my question is that people should land on the pages that best (!!) match the language they speak (and use to search Google) and where they come from. Sometimes not all of this information is available to the search engine, and then instead of showing /en-us/shoe it can show /en/shoe. So I am willing to make all those different pages for the same keyword, but only if there is no keyword cannibalization.

To sum up: I’m afraid that, with all these different pages with different geo-targets and languages, the Google algorithm will get confused and may show several variants of the same page – /shoe, /en/shoe and /en-us/shoe, for example – in the United States search results for the keyword "shoe". This is the cannibalization I’m so afraid of.


Get this bounty!!!

#StackBounty: #seo #ranking How to optimize keywords against inferior websites

Bounty: 50

I have a website that ranks well for several long-tail keywords. These keywords attract a few customers, but mainly provide good content for readers. Additionally, according to Moz, Ahrefs and others, I have a significantly higher domain rank than my competitors.

My competitors typically offer very thin content, purchased from a vendor that sells websites with the same content but optimized by city – mainly by keyword stuffing.

The problem is I can’t seem to rank for a very competitive long-tail keyword combination against these competitors. I’ve read the following posts here several times, but I’m still not sure what to do next:

Why do some bad websites rank well?

Does long tail keyword rank impact the rank of short tail keywords too?

What are the best ways to increase a site's position in Google?

It’s not a new domain; it has substantially more and better backlinks than my competitors, and decent CTR and bounce rate. Speed shouldn’t be a factor, as my site loads a full 1–2 seconds faster than my competitors’ sites.

My question is: what should I be doing to improve rank for specific keyword combinations? Should I write more pages/content that uses these keywords? I have just a few landing pages with these keywords now, as I’ve tried to avoid being spammy and looking too much like my competition. Is there a different post here I should also be reading for this?


Get this bounty!!!

#StackBounty: #seo #url #google-index #dates #freshness URL structure for specific years in blog

Bounty: 50

I have a URL like this one for an article with good traffic:

example.com/best-x-for-2019

Now I want to update the content for 2020. I made the mistake of putting the year in the slug for the first edition.

What is my best option:

  1. Just change the URL and text from example.com/best-x-for-2019 to example.com/best-x-for-2020
  2. Keep example.com/best-x-for-2019 and add a link to a new URL, example.com/best-x-for-2020, so there is a new indexed page for each year
  3. Other?

The content will be updated every year, mainly prices and some details. Let’s say 30% of the content will be new/rewritten.

The problem with maintaining the same URL is that the content would be about 2020 while the URL still references 2019. And next year the content would be about 2021 with the URL still saying 2019, and so on.

My concern here is just about SEO / SERP.


Get this bounty!!!

#StackBounty: #seo #subdomain #cms #subdirectory Subdomain vs Subfolder for Billing Software; Specifically Knowledgebase, Downloads, St…

Bounty: 50

Intro

“Subdomain vs subfolder” – yes, this question has been asked many, many times, but I will try to ask it in a way that is not considered a duplicate, as my question pertains more to some of the features that most modern billing and automated hosting systems provide.

Please correct me if I am wrong, but I have read that when it comes to search bots, specifically Google, subdomains and subdirectories are treated equally. But some Moz articles describe cases that conflict with what Google claims; specifically, that subdomains can still somewhat act as their own self-contained websites.

A post on Cloudflare puts it this way:

A subdomain is equal and distinct from a root domain. This means that a subdomain’s keywords are treated separately from the root domain.


Current Setup/Question

Now my question concerns some features the billing software offers that I would rather not have separated from the main domain: specifically the knowledgebase, but also a downloads section and a server-status section, albeit to a lesser extent.

Currently I have an FAQ page set up on my main subdomain, www.domain.com (non-www redirects to www). The bulk of my website is on this main site. I have set up a billing system hosted on a different server under a subdomain; let’s call it my.domain.com.

I would really like to start using the knowledgebase built into the billing software CMS instead of the current /faq page, which isn’t running on any sort of CMS. This would make it a lot easier to manage, add and edit the information in these sections of the site.

But do I risk losing the keywords that would normally be associated with my main (www.) website, because now they would be hosted on the subdomain of my billing CMS?

A solution I thought of was to have my billing software on the main www. domain, but in a subdirectory instead of a subdomain. For security reasons the billing CMS would still be hosted on a different server, but I would set up a reverse proxy so that www.domain.com/billing/etc… loads the billing CMS.

Am I way overthinking this?


Direct Questions

  1. Hosting a billing CMS on a subdomain is no issue, but would the billing CMS’s knowledgebase pages be associated with my main domain, or only with the subdomain the billing CMS is accessed from?

  2. Is using a reverse proxy to allow access to the billing CMS under a subfolder instead of from a subdomain a good idea?

  3. Are there problems or issues that I am just not seeing when it comes to where my FAQ/Knowledgebase, Downloads, Status pages are accessed from?


Get this bounty!!!

#StackBounty: #seo #search-engine-indexing #ecommerce How to add additional e-Commerce product SEO metadata when the web page content i…

Bounty: 100

I have some product pages on my e-Commerce website that have variants on them. Each variant has three things that are relevant to this discussion:

  1. A SKU that is unique to our business
  2. A manufacturer part number (MPN)
  3. A URL that correlates to the configuration relevant to that SKU/MPN pairing, such that visiting the URL automatically loads the selected configuration. The URL is something like www.example.org/product1?variant-opt-1=123&variant-opt-2=581, with the query string getting added based on the selection.

When the page loads for a product structured with variants, the page is used as a more general landing area, with the ability to customize the product configuration using some drop-downs on the side. That means that if a crawler visited the page, it wouldn’t see those SKU/MPN pairings.

When these drop-downs change, a different SKU/MPN pairing is loaded via an AJAX callback. There could feasibly be hundreds of variants that all have a SKU/MPN pairing. What I’m wondering is: how can I properly add these pairings to my HTML content so that Google and other indexers can link queries for those SKUs or MPNs to my site? Ideally I wouldn’t have to add all these variants as separate products on my site (which I know would be properly crawled, based on other products structured like this).

Please let me know your suggestions. The e-Commerce site is custom and is not Shopify etc., so I do have a high level of control over the generated HTML.

My current thoughts are:

  1. Add all the variant links to my sitemap (I’d have to change how it is currently generated); at least this way, if every URL is crawled, the SKU and MPN should be picked up properly.
  2. Create some sort of XML data that is loaded into the web page with all three of those fields, something like:
    <tag sku="SKU-1" mpn="MPN-1" url="/unique?variant=12002148" /> I’m just not sure how to “tell” Google that these are relevant for searches.

Mainly, I want searches for the SKU and MPN to link to this page and, if possible, to the unique URL.
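
One way to expose these pairings in the initial HTML – a sketch, not something Google guarantees will surface every variant – is schema.org Product markup with one Offer per variant, so crawlers see the SKU/MPN/URL triples without executing the AJAX callbacks. The first offer below uses the values from the question; the second offer’s values, and the product name, are invented placeholders.

<!-- Sketch only: the second offer’s SKU/MPN/variant values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Product 1",
  "url": "https://www.example.org/product1",
  "offers": [
    {
      "@type": "Offer",
      "sku": "SKU-1",
      "mpn": "MPN-1",
      "url": "https://www.example.org/product1?variant-opt-1=123&variant-opt-2=581"
    },
    {
      "@type": "Offer",
      "sku": "SKU-2",
      "mpn": "MPN-2",
      "url": "https://www.example.org/product1?variant-opt-1=124&variant-opt-2=581"
    }
  ]
}
</script>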

Edit – 2020-04-17: I attempted to fix this by adding all the variant links to my sitemap; however, it turns out Google doesn’t seem to like that. It ignores those links, flagging them as duplicates or canonically ignored.


Get this bounty!!!