December 6th, 2009
In an unguarded moment of misguided enthusiasm (there is no other way to put it) I volunteered to translate a couple of my favorite TED talks. The idea was simple: challenging myself to learn the literary side of translating whole pieces of text would let me get to the innards of the language that is my mother tongue and that I use for conversation. It turns out there was an area I never factored in.
Talks have transcripts, and transcripts are whole blocks of dialogue that feel different under translation than the user interface artifacts that make up the components of the software I translate. Somewhat confused, I turned to the person who does this so often that she is really good at poking holes in any theory I propound. In reality, it was my turn to be shocked. When she translates documents, Runa faces problems far deeper than what I faced while translating transcripts. And her current toolset is woefully inadequate, because it is tuned to the software translation way of doing things rather than to translating documents, transcripts, or other whole pieces of text.
In a nutshell, the problem relates to the breaking of text into chunks that are malleable for translation. More often than not, if the complete text is a paragraph, or at least a couple of sentences, the underlying grammar and construction are built to project a particular line of thought: a single idea. Chunking causes that seamless thread to be broken. Additionally, when using our standard tools, viz. Lokalize/KBabel, Virtaal, Lotte, or Pootle, such chunks of text make coherent translation more difficult because of the need to fit things within tags.
Here’s an example from the TED talk by Alan Kay. It is not representative, but it suffices to give an idea. If you consider it as a complete paragraph expressing a single idea, you would look at something like:
“So let's take a look now at how we might use the computer for some of this. And, so the first idea here is just to how you the kind of things that children can do. I am using the software that we're putting on the 100 dollar laptop. So, I'd like to draw a little car here. I'll just do this very quickly. And put a big tire on him. And I get a little object here, and I can look inside this object. I'll call it a car. And here's a little behavior car forward. Each time I click it, car turn. If I want to make a little script to do this over and over again, I just drag these guys out and set them going.”
Do you see what is happening? If you read the entire text as a block and grasp the idea, the context-based translation that can present the same thing lucidly in your target language starts taking shape.
Now, check what happens if we chunk it the way TED does for translation.
So let's take a look now at how we might use the computer for some of this.
And, so the first idea here is
just to how you the kind of things that children can do.
I am using the software that we're putting on the 100 dollar laptop.
So, I'd like to draw a little car here.
I'll just do this very quickly. And put a big tire on him.
And I get a little object here, and I can look inside this object.
I'll call it a car. And here's a little behavior car forward.
Each time I click it, car turn.
If I want to make a little script to do this over and over again,
I just drag these guys out and set them going.
Take them out of context, and threading the idea together becomes somewhat difficult. At least, it seems difficult to me. So, what’s the deal here? How do other languages deal with similar issues? I am assuming you would not just consider the entire paragraph, translate accordingly, and then slice and dice according to the chunks. That is difficult, isn’t it?
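The mismatch between sentence boundaries and caption chunks is easy to see with a few lines of Python. This is only a sketch of the general problem, not TED's actual pipeline; the 42-character caption width and the naive regex segmentation are my own assumptions:

```python
import re
import textwrap

paragraph = ("So let's take a look now at how we might use the computer "
             "for some of this. And so the first idea here is just to show "
             "you the kind of things that children can do.")

# Sentence-level segmentation keeps each complete thought together.
sentences = re.split(r"(?<=[.?!])\s+", paragraph)

# Subtitle-style chunking cuts wherever the line fills up,
# regardless of grammar.
chunks = textwrap.wrap(paragraph, width=42)

print(sentences[1])  # a full sentence, translatable in context
print(chunks[0])     # an arbitrary fragment, context lost
```

The translator of the second list gets fragments whose verbs and objects may live in neighboring chunks, which is exactly where languages with a different word order start to hurt.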
On a side note, the TED folks could start looking at an easier interface for translation. I could not figure out how one could translate, save as a draft, and return later to pick up from where one left off. It looks like it mandates a single-session, sit-down-and-deliver mode of work. That isn’t how I am used to doing translations in the FOSS world, and it makes the process awkward. Integrating translation memories (which would be helpful for languages with substantial existing work) and auto-translation tools would be sweet too. Plus, they need to create a forum to ask questions; the email address seems unresponsive at best.
November 22nd, 2009
There are two points with which I’d like to begin:
- One, in their Credits to Contributors section, Mozilla (for both Firefox and Thunderbird) state that “We would like to thank our contributors, whose efforts make this software what it is. These people have helped by writing code and documentation, and by testing. They have created and maintained this product, its associated development kits, our build tools and our web sites.” (Open Firefox, go to Help -> About Mozilla Firefox -> Credits, and click on the Contributors hyperlink)
- Two, whether by design or by inadvertent serendipity, projects using Transifex tend to define their portals as “translate.<insert_project_name>.domain_name”. Translation, as an aesthetic requirement, is squarely in the forefront. And, in addition to its enmeshed meaning with localization, the mere usage of the word translation lends an elevated meaning to the action and the end result.
A quick use of the Dictionary applet in GNOME provides the following definition of the word ‘translation’:
The act of rendering into another language; interpretation; as, the translation of idioms is difficult. [1913 Webster]
With each passing day innovative software is released under the umbrella of various Free and Open Source Software (FOSS) projects. For software that is to be consumed as a desktop application, the ability to be localized into various languages makes the difference in wide adoption and usage. Localization (or, translation) projects form important and integral sub-projects of various upstream software development projects.
In somewhat trivial off-the-cuff remarks that make translation appear easier than it actually is, it is often said that translation is the act of rendering into a target language the content available in the source language. However, localization and translation are not merely the replacement of words or phrases in one language (mostly English) with those of another. They require an understanding of the context, the form, the function, and most importantly the idiom of the target language, i.e. the local language. And on top of this, there is the fine requirement that the localized interface be usable while appropriately communicating its message to users of the software, technical and non-technical alike.
There are multiple areas briefly touched upon in the above paragraph, the most important being the interplay of context, subtext, and intertext. Translation, by all accounts, provides a referential equivalence. This is because languages, and word forms, evolve separately. And, in spite of the adoption and assimilation of words from other languages, the core framework of a language remains remarkably unique. Add to this mix the extent to which various themes (technology, knowledge, education, social studies, religion) organically evolve, and there is a distinct chance that idioms and the metadata of words and phrases which are so commonplace in a source language may not be relevant, or present at all, in the target language.
This brings about two different problems. The first: whether to stay true to the source language or to adapt the form to the target language. The second: how far losses in translation are acceptable. The second is somewhat unique. Translations, by their very nature, can add to or augment the content, take away or subtract from it (thereby creating a ‘loss’), or adjust it and hence provide an arbitrary measure of compensation. The amount of improvement or comprehension a translated term can bring is completely dependent on the strength of the local language, and on the grasp of its idiomatic usage that the translator brings to the task at hand. More importantly, it becomes a paramount necessity that the translator be very well versed in the idioms of the source language in addition to being colloquially fluent in the target language.
The first problem is somewhat more delicate: it differs when translating content as opposed to translating strings of the UI. Additionally, it can differ when doing translations for a desktop environment like, for example, Sugar. The known user model of such a desktop provides a reference, a context, that can be drawn on when thinking through the context of the words/strings that need to be translated. A trivial example is the need to stress terms that are more prevalent or commonly used. A pitfall, of course, is that it might make the desktop “colloquial”. And yet, that might be precisely what makes it more user-friendly. This paradox of whether to be source-centric or target-friendly is amplified when it comes to terms that are yet to evolve local equivalents in common usage, terms like “Emulator”, “Tooltip”, or “Iconify” being some trivial and quick examples.
I can pick up the recent example of “Unmove” from PDFMod to illustrate the need to appreciate the evolution of English as a language, and to point to the need for developers to listen to the translators and localization communities. The currently available tools and processes do not allow a proper elaboration of the context of a word. In English, within the context of the action word “move”, it is fairly easy to guess what “Unmove” would mean. In languages where the usage of the action word “move” in the context of an operation on a computer desktop (here’s a quirk: the desktop is itself a metaphor being adopted for use within the context of a computing device) is still evolving, “Unmove” does not lend itself well to translation. Such “absent contexts” are the ones that create a “loss in translation”.
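One existing mechanism for carrying such context along with a string is gettext's msgctxt, exposed in Python 3.8+ as pgettext. A minimal sketch, assuming no catalog is installed (the call then simply falls back to the msgid); the context string itself is my own invention, not anything PDFMod ships:

```python
import gettext

# The context string travels with the msgid into the .pot file, so a
# translator sees "verb, reverses a prior move" alongside "Unmove"
# instead of a bare, ambiguous word.
label = gettext.pgettext("verb, reverses a prior move", "Unmove")

# With no translation catalog installed, pgettext falls back to the msgid.
print(label)
```

The catch, of course, is that developers have to write that context in the first place, which is exactly the feedback loop argued for below.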
The singularity here is that the source language strings can evolve beautifully if feedback is obtained from the translated language about what would improve the software. The trick, perhaps, is how best to document the context of words and phrases to enable a much richer and more useful translated UI, and to work on tooling that can include and incorporate such feedback. For example, there are enormous enhancements that can be trivially (and sometimes non-trivially) made to translation memory or machine translation software so as to enable a much sharper equivalence.
(The above is a somewhat blog-shaped representation of what I would have talked about at GNOME.Asia, had my travel agent not made a major mess of the visa papers.)
November 7th, 2009
I have been using Transifex-based systems for a couple of days/weeks now. And, in line with what I mentioned on my micro-blog, Transifex and Lotte make things really easy. The coolest devel crew makes that happen. And, since they lurk online and engage with their users, every little tweak or improvement that is suggested and considered makes the consumers feel part of the good work they are doing. Good karma and awesome excitement all around.
At some point during the week I’ll put these into tickets as feature enhancements. For the time being, however, here’s a couple:
- Lotte should allow me to click on a file that is not yet translated for my language and add it to the collection. If I recall correctly, the current way to add it is to download the .pot, convert it to the appropriate .po, and upload it with comments, etc.
- Lotte needs to allow “Copy from Source”. This should accelerate translation by removing the extra step of having to actually select, copy, and paste. It comes in handy when translating strings within tags, or brands/trademarks, and so forth.
- Handling and using translation memory could be built into Lotte. For a particular file in a specific language within a project, it could provide suggestions of translated words. In the future, allowing teams to add their glossaries would make it a more powerful tool too. Having said that, I have always wondered what happens when team glossaries are created from files across various projects: is there a license-compatibility soup that could crop up?
- A Transifex installation could provide notifications of new or updated files for the language. This could be limited to files for which the person receiving the notices is the last translator or, ideally, could cover the language as a whole.
- Statistics: providing each language a visual representation of commits over time, or of per-contributor commits, would also be a nice addition.
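The translation memory item above can be prototyped with nothing beyond the standard library. Here is a minimal sketch of similarity-based suggestions; the English/Bengali pairs are invented for illustration, and real tooling would work from proper TM formats (e.g. TMX) rather than a dict:

```python
from difflib import SequenceMatcher

# A tiny translation memory: previously translated source/target pairs
# (the entries are made up for this example).
memory = {
    "Open the file": "ফাইল খুলুন",
    "Save the file": "ফাইল সংরক্ষণ করুন",
    "Close the window": "উইন্ডো বন্ধ করুন",
}

def suggest(source, tm, threshold=0.6):
    """Return (score, source, target) suggestions above the threshold."""
    hits = []
    for src, tgt in tm.items():
        score = SequenceMatcher(None, source.lower(), src.lower()).ratio()
        if score >= threshold:
            hits.append((score, src, tgt))
    return sorted(hits, reverse=True)

# A new string close to an existing entry surfaces the old translation.
for score, src, tgt in suggest("Open the new file", memory):
    print(f"{score:.2f}  {src!r} -> {tgt!r}")
```

A real implementation would also weigh the license question raised above: suggestions assembled from files across projects carry the licenses of those files with them.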
So much for Transifex. In fact, I need to write all of that out in a nicer way so as to allow the possibility of these turning into GSoC projects within Transifex.
Coming to Virtaal. Lokalize is unbearably useless for me (it adds garbled text or whitespace into files when using the stock one supplied with F11). And, before anyone comments: no, I have not filed a bug yet; getting the files done was a bit more important at that specific point. So, mea culpa. But I do check with every yum update, and it is still the same. The specific issue with Virtaal is that each time a new string is loaded for translation, the text input area loses the input method setting, which means it is a constant game of switching back and forth between inputs. Sadly enough, this is the only software that currently works for me (I don’t want to set up a local Pootle/Transifex instance and do web-based translation).