#StackBounty: #url #internationalization #translation Is translating a URI completely a good idea?

Bounty: 50

Today I received a request to translate the URIs of the site I was developing. I am not convinced by the suggested approach because it seems error-prone to me. Is translating URI paths a common practice, or is it better to avoid it?

I know that some pieces of a URI may be translated (e.g., WordPress lets the post title appear in the URI, and if you change the language, the title is translated if a translation exists), but a complete URI translation seems wrong to me.

If my URI is www.example.en/en/contacts/headquarter, I can understand that this URI may also be "translated" to www.example.en/it/contacts/sede-centrale, because there are probably two "posts" with those two titles (one in each language) associated with contacting the headquarters.

What I am being asked to do is to translate the URI completely, so www.example.en/en/contacts/headquarter will become www.example.en/it/contatti/sede-centrale, and that is what I don't understand.

I haven't found any examples of such behavior on other sites. I don't see the benefits (a bit of cosmetics, of course, but nothing more), and it will probably become costly to maintain very quickly. (If a translation turns out to be wrong and is already indexed by spiders, what would I have to do, set up a 301 redirect?)
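For what it's worth, if a mistranslated path did get indexed and later had to change, the fix would be a permanent redirect from the old path to the corrected one. A sketch in nginx configuration syntax (assuming nginx; the misspelled path is hypothetical):

```nginx
# Hypothetical: a mistranslated Italian path was indexed before being fixed.
# Redirect it permanently (301) to the corrected URI so spiders update their index.
location = /it/contatti/sede-centrala {
    return 301 /it/contatti/sede-centrale;
}
```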

Get this bounty!!!

#StackBounty: #css #internationalization #page-flow How does CSS writing mode interact with page flow

Bounty: 50

The CSS Writing Modes Level 3 specification establishes logical terms such as "block dimension", which would be vertical for horizontal writing modes (as I’m writing now). Thus "block size" in Western writing would correspond to the physical dimension "height". I understand that part fine.

The specification also defines the "block flow direction" as "the direction in which block-level boxes stack and the direction in which line boxes stack within a block container", and says that the writing-mode property determines the block flow direction. So if Japanese were set in the vertical-rl writing mode, the block flow direction would be horizontal (right to left). And elsewhere, when discussing abstract dimensions, the specification defines the "block axis" as equivalent to the vertical axis in Western writing modes (on this page on which I'm writing, blocks flow vertically), and to the horizontal axis in vertical writing modes.

And this is where I’m not clear about the distinction (if any) between the writing mode logical axes and the overall page flow. Is the overall page flow layout (of the CSS box model) equivalent to the block flow determined by the writing mode?

Here is an example to illustrate my doubt. If there is a page written in Japanese vertically using the vertical-rl writing mode, the "block axis" is the horizontal axis. So does that mean the page flows horizontally? Rather than scrolling down, would a user scroll left to see the rest of the page? Consider a typical landing page with a "hero" at the top of the screen and then various sections below it, with a footer at the bottom. In the vertical-rl writing mode, would the user scroll left to see the sections under the hero?

I guess the question comes down to: is the overall page flow really equivalent to the block flow direction, or does ultimately the page always flow and scroll down regardless of the writing mode?
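A minimal test case makes this easy to try out (a sketch; the selectors and page structure are made up for illustration):

```css
/* Switch the whole document to vertical-rl: text lines run
   top-to-bottom, and block-level boxes stack right-to-left. */
html {
  writing-mode: vertical-rl;
}

/* Ordinary block-level sections; in vertical-rl they stack along
   the horizontal block axis instead of the vertical one. */
.hero,
.section,
.footer {
  display: block;
}
```

In current browsers, a writing-mode set on the root element propagates to the viewport, so with this stylesheet an overflowing document does grow and scroll horizontally (the reader scrolls leftwards to reach the sections "below" the hero), which suggests the block flow direction and the page flow direction are indeed the same thing. That said, this is my reading of observed behavior, not a quote from the specification.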

#StackBounty: #keyboard #libreoffice #keyboard-layout #google #internationalization How to use a virtual keyboard inside Libre Office W…

Bounty: 500

My Dell Latitude laptop has its standard (QWERTY) English-Hebrew keyboard; nothing is customized.

I need to use Thai letters in general and in a Libre Office Writer (LBW) document in particular.

My problem

I thought of installing a Google virtual keyboard, but I am not sure its output would automatically appear inside a document edited with LBW.

My question

How can I use a virtual keyboard inside Libre Office Writer?

That is to ask: this Gmail solution seems nice, but how could I implement this or a similar FOSS and gratis solution (not necessarily by Google) at the OS layer?

#StackBounty: #ruby-on-rails #internationalization #ruby-on-rails-3.2 Why don't my locale settings in number_to_currency work?

Bounty: 100

Per the Rails 3.2 API Docs, to use different locales for number_to_currency, I need to do the following:

<%= number_to_currency(1234567890.506, :locale => :fr) %>

I was expecting the following output:

# => 1 234 567 890,51 €

Even though I literally use that exact call within my app, it does not produce that output.

When I check for the available_locales within my app I get the following:

> I18n.available_locales
=> [:en, :de, :es, :fr, :ja, :pl, :"pt-BR", :ru, :sv, :"zh-CN"]

So it SHOULD work, but it doesn’t.

What am I missing?

Update 1

Per @s3tjan’s comment, I did some digging in that linked Rails issue and that led me to my application.rb where I discovered I18n.enforce_available_locales = false. I changed that to true and restarted the server.

When I tried the above again, I am now getting this error:

ActionView::Template::Error (:fr is not a valid locale):

Not sure how to fix this.

Update 2

So I just realized that I never had a locale file in my config/locales. What I really want is to use GBP (British pounds) for currency, so I added an en-GB.yml file in my config/locales, then restarted my server and console.

In my application.rb, I have the following:

I18n.enforce_available_locales = true

Then I checked my console and got this:

[1] pry(main)> I18n.available_locales
=> [:en, :de, :es, :fr, :ja, :pl, :"pt-BR", :ru, :sv, :"zh-CN", :"en-GB"]
[2] pry(main)> 

So the :"en-GB" was added successfully to my app’s load path.

But when I do this in my view:

<%= number_to_currency(1234567890.506, :locale => :"en-GB") %>

This is the error I get:

ActionView::Template::Error (:"en-GB" is not a valid locale):

So still not working.

Update 3

My en-GB.yml file was taken directly from https://github.com/svenfuchs/rails-i18n/blob/master/rails/locale/en-GB.yml

So it looks exactly like that.
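For reference, number_to_currency reads its settings from the number.currency.format keys of the loaded locale data. A minimal sketch of the relevant part of such an en-GB.yml (trimmed to the keys the helper uses, following rails-i18n conventions):

```yaml
# Minimal en-GB locale data for number_to_currency (sketch).
en-GB:
  number:
    currency:
      format:
        unit: "£"
        format: "%u%n"   # unit placed before the number
        separator: "."
        delimiter: ","
        precision: 2
```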

Internationalization tips — I/O operations

I/O Operations

Whenever text is being read from or written to a file, the encoding should be specified explicitly; otherwise the platform default encoding is used, which varies between systems. (Preferably UTF-8, but keep the OS / language / locale in mind.)

try (FileOutputStream fos = new FileOutputStream("test.txt");
     Writer out = new OutputStreamWriter(fos, "UTF-8")) { // name the charset explicitly
    out.write("some text");
} catch (IOException e) {
    e.printStackTrace();
}
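To make the point concrete, here is a self-contained round-trip sketch using the java.nio.file convenience methods (available since Java 11) rather than the stream wrappers; the file name and sample text are arbitrary:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8RoundTrip {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("i18n-demo", ".txt");
        String text = "naïve 日本語 สวัสดี";
        // Name the charset explicitly on both sides; the platform
        // default varies by OS, language, and locale.
        Files.writeString(file, text, StandardCharsets.UTF_8);
        String readBack = Files.readString(file, StandardCharsets.UTF_8);
        System.out.println(text.equals(readBack)); // true
        Files.delete(file);
    }
}
```

Because the same charset is named for writing and reading, the non-ASCII characters survive the round trip regardless of the platform default.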

Internationalization tips for XML

In order for an XML document to support Unicode, the following declaration needs to appear at the start of the XML:

<?xml version="1.0" encoding="UTF-8"?>

Apart from this, there is a BOM issue when saving XML files with Unicode characters. Many Windows-based text editors add the bytes 0xEF, 0xBB, 0xBF at the start of a document saved in UTF-8 encoding. This set of bytes is the Unicode byte-order mark (BOM), though it is not relevant to byte order in UTF-8. The BOM can also appear if another encoding with a BOM is transcoded to UTF-8 without stripping it.

The presence of the UTF-8 BOM may cause interoperability problems with existing software that could otherwise handle UTF-8, for example:

  • Older text editors may display the BOM as "" at the start of the document, even if the UTF-8 file contains only ASCII and would otherwise display correctly.
  • Programming language parsers can often handle UTF-8 in string constants and comments, but cannot parse the BOM at the start of the file.
  • Programs that identify file types by leading characters may fail to identify the file if a BOM is present, even though the consumer of the file could otherwise skip the BOM. Or, conversely, they will identify the file when the consumer cannot handle the BOM. An example is the UNIX shebang syntax.
  • Programs that insert information at the start of a file will produce a file with the BOM somewhere in the middle of it (this is also a problem with the UTF-16 BOM). One example is offline browsers that add the originating URL to the start of the file.

If compatibility with existing programs is not important, the BOM could be used to identify whether a file is UTF-8 versus a legacy encoding, but this is still problematic due to the many instances where the BOM is added or removed without actually changing the encoding, or where differently encoded files are concatenated together. Checking whether the text is valid UTF-8 is more reliable than relying on the BOM. It is better to omit the BOM when saving Unicode files. One of the solutions, and some discussion surrounding the problem, can be found here.
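Stripping a leading BOM defensively when reading is straightforward. A self-contained sketch in Java (consistent with the I/O tips above; the class and file names are made up for illustration):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class BomStrip {
    // The UTF-8 BOM bytes 0xEF 0xBB 0xBF decode to the single char U+FEFF,
    // so after decoding it is enough to check the first character.
    private static final char BOM = '\uFEFF';

    public static String readWithoutBom(Path file) throws IOException {
        String text = Files.readString(file, StandardCharsets.UTF_8);
        return (!text.isEmpty() && text.charAt(0) == BOM) ? text.substring(1) : text;
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("bom-demo", ".xml");
        // Simulate a Windows editor that prepends the BOM bytes.
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, '<', 'a', '/', '>'};
        Files.write(file, withBom);
        System.out.println(readWithoutBom(file)); // <a/>
        Files.delete(file);
    }
}
```

Note that Java's UTF-8 decoder does not strip the BOM for you, which is why the explicit check is needed.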