Reductionism, agility and localization

My last blog post, about a newspaper article, turned into a comment stream about ‘supported status’, ‘reducing translation’ and ‘redefinitions’.


All that spurred me to read up on the discussion on Friedel’s blog as well as on the i18n list. All said, it is an interesting discussion and one on which the teams and communities participating in i18n/l10n should provide feedback. It also fits into the current model of defining the ‘supported’ status of a language using application/package translation statistics. The points raised by Friedel and Gil make sense, especially when looked at in the context of the community enthusiasm I described earlier – the ability of teams to sustain a level of translated strings and to refine/review them over a period of time. There is also the implied question from Friedel’s blog about the utility of the statistics – in pure terms, what do the percentages actually mean and indicate?
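As an aside on what those percentages are in mechanical terms: they are typically computed straight from the gettext PO files, as the ratio of translated entries to total entries. A minimal sketch of that computation, assuming the polib library and purely illustrative package names and paths:

```python
# A rough sketch of how the 'percentage translated' statistic is usually
# derived from a gettext PO file, using the polib library. The package
# names and file paths below are illustrative, not real ones.
import polib

def translation_stats(po_path):
    """Return (translated, total, percent) for one PO file."""
    po = polib.pofile(po_path)
    translated = len(po.translated_entries())
    total = translated + len(po.untranslated_entries()) + len(po.fuzzy_entries())
    return translated, total, po.percent_translated()

for package, path in [("gedit", "po/gedit/bn_IN.po"),
                      ("nautilus", "po/nautilus/bn_IN.po")]:
    done, total, pct = translation_stats(path)
    print(f"{package}: {done}/{total} strings, {pct}% translated")
```

The number says how many strings are done per package; it says nothing about which strings matter to the user, which is exactly the gap the rest of this post is about.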


While reductionism – re-defining ‘supported’ status based on a set of packages being localized – would work in the short term, a much deeper re-thinking needs to be in place for the longer run. Developers create the applications and end-users consume them. In between these two groups sit the translation communities, who tend to look at the content to be translated – they look at strings. So there needs to be a system, or method, that enables translators/localizers to look at the strings that must be translated in order to provide a clean, homogeneous experience to the user of a localized desktop. This is the opposite of the current method of approaching the work at the level of an application/package. This blog provides one way of doing this: look at the strings inside the packages that are required to be translated, do that across the set of packages aimed at a release, and thereafter build on the resulting Translation Memory so it can be re-used for translation suggestions offered to translators. That would be a good means of staying “agile” while localization is underway.
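To make that concrete, here is a minimal sketch of such a string-centric pass, again assuming polib; the release package set, language file paths and fuzzy-match cut-off are illustrative assumptions, not part of any existing tool:

```python
# A minimal sketch of the string-centric view described above: pool the
# untranslated strings across a release's package set, de-duplicate
# them, and offer suggestions from a simple Translation Memory built
# out of everything already translated. The paths and the fuzzy-match
# cut-off are illustrative assumptions.
import difflib
import polib

RELEASE_SET = ["po/gedit/bn_IN.po", "po/nautilus/bn_IN.po"]  # hypothetical

# Build a Translation Memory from strings already translated anywhere
# in the release set.
tm = {}
for path in RELEASE_SET:
    for entry in polib.pofile(path).translated_entries():
        tm[entry.msgid] = entry.msgstr

# Pool the untranslated strings across all packages, de-duplicated, so
# that translators work on strings rather than on one package at a time.
pending = {}
for path in RELEASE_SET:
    for entry in polib.pofile(path).untranslated_entries():
        pending.setdefault(entry.msgid, []).append(path)

# Offer a fuzzy TM suggestion for each pending string.
for msgid, needed_by in pending.items():
    close = difflib.get_close_matches(msgid, tm.keys(), n=1, cutoff=0.8)
    suggestion = tm[close[0]] if close else None
    print(f"{msgid!r} -> suggestion: {suggestion!r} (needed by {needed_by})")
```

Pooling by msgid means a string translated once is translated everywhere it recurs across the package set, which is precisely where the Translation Memory starts paying off for consistency.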


Being nimble would also allow teams to quickly build the translations and check them against tests (which need to be written, maintained and evolved) with the aim of improving translation quality. In short, there is a need for an additional layer of software between the Translation Management System and the translators themselves – one that parses the meta-information associated with strings and presents them for translation in a way that helps produce an optimal quality of localized interface. Add to this the ability of the system to build and show the application, and it also provides the advantage of translation review (remember that there is always a high chance of English strings being misinterpreted while translating) and of checking whether the quantum of translation provides a usable experience. Agility via tooling allows iterative runs of translation sprints over a set of strings. While this may be contrary to how teams currently work, it would provide a better way of handling translations even when string freezes are requested to be broken – that would simply be another sprint, with a sign-off via the review tool.
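As an example of the kind of test meant here – one that checks that C-style format placeholders in the English string survive into the translation – a minimal sketch follows. Tools such as pofilter from the Translate Toolkit already perform this class of checks; this sketch, once more assuming polib and a hypothetical PO path, only illustrates the idea:

```python
# A minimal sketch of one automated translation-quality check: verify
# that C-style format placeholders in the English string survive
# unchanged into the translation. pofilter (Translate Toolkit) already
# does this and much more; the PO path here is hypothetical.
import re
import polib

PLACEHOLDER = re.compile(r"%[0-9.$]*[sdif]")

def placeholder_mismatches(po_path):
    """Yield (msgid, msgstr) pairs whose translation drops or alters
    %-style placeholders."""
    for entry in polib.pofile(po_path).translated_entries():
        if sorted(PLACEHOLDER.findall(entry.msgid)) != \
           sorted(PLACEHOLDER.findall(entry.msgstr)):
            yield entry.msgid, entry.msgstr

for msgid, msgstr in placeholder_mismatches("po/gedit/bn_IN.po"):
    print(f"placeholder mismatch: {msgid!r} -> {msgstr!r}")
```

A suite of such checks, run on every sprint, is what would make breaking a string freeze a routine, reviewable event rather than a risk.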

