Tag Archives: Localization

On why the current state of L10n drivers in Mozilla could lead to the unmaking of its communities.

I was reading through two email threads on the dev l10n mailing list for Mozilla and wondered what it would take for the project to actually have a conversation with the language communities.

The threads are here and here.

And, continuing in that context, here is a post I made on G+. I strongly believe that the very fact of this bug being open and yet not bringing forth a discussion is something Mozilla should think about.

If you’d like to comment, I’d request that you use the G+ link above and comment there.

Reductionism, agility and localization

My last blog post about a newspaper article turned into a comment stream about ‘supported status’ and ‘reducing translation’ and ‘redefinitions’.


All that spurred me to read up on the discussion on Friedel’s blog as well as on the i18n list. All said, it is an interesting discussion and one on which the teams and communities participating in i18n/l10n should provide feedback. It also fits into the current model of defining the ‘supported’ status of a language using application/package translation statistics. The points raised by Friedel and Gil make sense, especially when looked at in the context of the community enthusiasm I described earlier – the ability of the teams to sustain a level of translated strings and refine/review them over a period of time. There is also the implied question from Friedel’s blog about the utility of the statistics – in plain terms, what do the percentages actually mean and indicate?


While the reductionism of re-defining supported status based on a set of packages being localized would work in the short term, a much deeper re-thinking needs to be in place for the longer run. Developers create the applications; end-users consume them. In between these two groups sit the translation communities, who tend to look at the content to translate – they look at strings. Thus there needs to exist a system, or method, that enables translators/localizers to look at the strings that must be translated in order to provide a clean, homogeneous experience to the user of a localized desktop. This is the opposite of the current method of looking at it from the level of an application/package. This blog provides one way of doing it: identify the strings inside packages that are required to be translated, do that across the set of packages aimed at a release, and thereafter build on the resulting Translation Memory so that it can be re-used for translation suggestions offered to translators. That would be a good means of staying “agile” while localization is underway.
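
To make the idea a bit more concrete, here is a minimal sketch of the harvest-and-reuse step, assuming the polib library and a local checkout of the .po files for one language; the package names and paths are made up for illustration:

```python
# A minimal sketch of harvesting a cross-package translation memory and
# re-using it, assuming the polib library and a local checkout of the .po
# files for one language. Package names and paths are made up.
import glob
import polib

PACKAGES = ["gnome-shell", "nautilus", "gedit"]  # hypothetical release set

# Pass 1: harvest every translated string into a simple in-memory TM.
tm = {}
for pkg in PACKAGES:
    for path in glob.glob(f"po/{pkg}/*.po"):
        for entry in polib.pofile(path).translated_entries():
            tm.setdefault(entry.msgid, entry.msgstr)

# Pass 2: offer exact-match suggestions for strings still untranslated
# anywhere in the set, instead of making translators start cold.
for pkg in PACKAGES:
    for path in glob.glob(f"po/{pkg}/*.po"):
        po = polib.pofile(path)
        for entry in po.untranslated_entries():
            if entry.msgid in tm:
                entry.msgstr = tm[entry.msgid]
                entry.flags.append("fuzzy")  # leave the final call to a human
        po.save()
```

Re-used entries are flagged fuzzy so that a human still reviews them – re-use should feed suggestions, not bypass the translator.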


Being nimble would also allow the teams to quickly build the translations and check them against tests (which need to be written, maintained and evolved) with the aim of improving translation quality. In short, there is a need for an additional layer of software between the Translation Management System and the translators themselves – one that parses the meta-information associated with strings and presents them for translation in a way that yields an optimal quality of the localized interface. The ability of the system to build and show the application adds the advantage of translation review (remember that there is always a high chance of English strings being misinterpreted while translating) and of checking whether the quantum of translation provides a usable experience. Agility via tooling allows iterative runs of translation sprints over a set of strings. While this may be contrary to how teams currently work, it would provide a better way of handling translations even when string freezes are requested to be broken – that would simply be another sprint with a sign-off via the review tool.
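
As an illustration of the kind of test that could sit in such a layer, here is a rough sketch (again assuming polib, with a hypothetical file path) that flags translations which drop or add printf-style placeholders; real check suites such as the translate-toolkit’s pofilter go much further:

```python
# A rough sketch of one automated check that such a layer could run:
# flag translations that drop or add printf-style placeholders. Assumes
# polib and a hypothetical file path; real suites (for example the
# translate-toolkit's pofilter) cover far more cases.
import re
import polib

PLACEHOLDER = re.compile(r"%\([^)]+\)[sd]|%[sdif]|\{[^}]*\}")

def placeholder_mismatches(po_path):
    """Yield entries whose translation changes the set of placeholders."""
    for entry in polib.pofile(po_path).translated_entries():
        if sorted(PLACEHOLDER.findall(entry.msgid)) != sorted(
            PLACEHOLDER.findall(entry.msgstr)
        ):
            yield entry

for bad in placeholder_mismatches("po/gnome-shell/bn_IN.po"):  # hypothetical
    print(f"check placeholders in: {bad.msgid!r}")
```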


This one is unfunny

On the sidelines of GNOME.Asia 2011 in Bangalore, an article in The Hindu, “This one’s no gnome”, was published. When Srinivasa Ragavan tweeted about it, I mentioned the ill-formed comment.
The paragraph of interest is here:

Excited to be in India, he concedes that community interest here is still on the lower side. Setting this straight is particularly important when it comes to GNOME localisation. “Localisation is a huge challenge here, mainly because there are so many languages, and also because of the way the alphabet font is linked. This is where Free software is critical, because smaller the user base, lesser the chances proprietary firms will take this up.” We count on enthusiastic developers, who are proud of their language and want to preserve it in the digital realm, Mr. Cameron adds. In fact, he points out there are “big business opportunities”, domestic and international, for those who commit themselves to such projects.


Brian mentions community interest in localization being low. On the face of it, this appears to be a conclusion derived from the number of participants in the language communities. The way I’ve learnt to look at localization is to look at the communities and the efforts that sustain themselves. Localization is a steady and incremental process. The communities which take the time and make the effort to reach and maintain their place in and around the “supported” percentage for the individual Indic languages are the ones doing great work.

With each release of GNOME there is an obvious increase in the number of strings up for translation. I don’t have the exact statistics to support this, but I’d guess that a trend-line of the total number of strings in each release, superimposed over the trend-line for the ‘supported percentage’ figure, would show an increase with each release. I’m sure someone actually has this data. This basically means that within a short period of roughly four months (factoring in the code freeze, string freeze and so on), which may or may not overlap with other projects’ releases, the localization teams complete their existing release work, review pending modules and polish up translation consistency – ensuring that with each release of the desktop there is a step towards making it more user-friendly.
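
For whoever does have the data, the comparison is easy to sketch; the release numbers and figures below are invented placeholders purely to show the superimposed trend-lines I have in mind:

```python
# A sketch of the superimposed trend-lines described above. The release
# numbers and figures are invented placeholders; anyone holding the real
# per-release statistics could drop them in here.
import matplotlib.pyplot as plt

releases = ["2.28", "2.30", "2.32", "3.0"]      # hypothetical releases
total_strings = [41000, 43500, 45000, 47500]    # hypothetical string totals
supported_pct = [81, 80, 82, 80]                # hypothetical "supported" %

fig, ax_strings = plt.subplots()
ax_strings.plot(releases, total_strings, marker="o", label="total strings")
ax_strings.set_xlabel("GNOME release")
ax_strings.set_ylabel("strings up for translation")

ax_pct = ax_strings.twinx()                     # second y-axis for the %
ax_pct.plot(releases, supported_pct, marker="s", color="green",
            label="supported percentage")
ax_pct.set_ylabel("supported percentage")

fig.legend(loc="lower right")
plt.show()
```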


That’s why localization is a big challenge. Not because of the number of languages, and certainly not because of the “way the alphabet font is linked”. For what it is worth, the latter bit is more in the realm of internationalization, and there are efforts at multiple levels to ensure that the few remaining outstanding issues get fixed.


This brings us to a small gripe I’ve had about a lot of Free and Open Source Software projects that take their L10n communities and participants for granted. I’ve rarely seen a project board reach out and talk with the participants to figure out what can be done to make things better. For example, would a community doing L10n do more awesome work if it were funded for a two-day translation sprint? Do the language communities have specific requirements on the translation and content infrastructure, or on the translation workflow? Have these issues been brought up in Technical Board meetings? GNOME isn’t the only project which repeatedly fails to do this transparently, but it is among the highly visible FOSS projects which seem to assume that it is an obligation of their volunteer contributors to keep the statistics ticking.

Do we need to look for new software?

In an unguarded moment of misguided enthusiasm (and there is no other way to put it) I volunteered to translate a couple of my favorite TED talks. The idea was simple – challenging myself to learn the literary side of translating whole pieces of text would let me get to the innards of the language that is my mother tongue and that I use for conversation. Turns out there was an area I never factored in.

Talks have transcripts, and they are whole blocks of dialogue which feel very different to translate than the user interface artifacts that make up the components of the software I translate. In some confusion I turned to the person who does this so often that she’s really good at poking holes in any theory I propound. In reality, it was my turn to be shocked. When she translates documents, Runa faces problems far deeper than what I faced with the transcripts. And her current toolset is woefully inadequate, because it is tuned to the software-translation way of doing things rather than to translating documents, transcripts or other pieces of running text.

In a nutshell, the problem relates to breaking text into chunks that are malleable for translation. More often than not, if the complete text is a paragraph or at least a couple of sentences, the underlying grammar and construction are built to project a particular line of thought – a single idea. Chunking breaks that seamless thread. Additionally, when using our standard tools, viz. Lokalize/KBabel, Virtaal, Lotte and Pootle, such chunks of text make coherent translation more difficult because of the need to fit things within tags.

Here’s an example from the TED talk by Alan Kay. It is not representative, but would suffice to provide an idea. If you consider it as a complete paragraph expressing a single idea, you could look at something like:

So let's take a look now at how we might use the computer for some of this. And, so the first idea here is just to how you the kind of things that children can do. I am using the software that we're putting on the 100 dollar laptop. So, I'd like to draw a little car here. I'll just do this very quickly. And put a big tire on him. And I get a little object here, and I can look inside this object. I'll call it a car. And here's a little behavior car forward. Each time I click it, car turn. If I want to make a little script to do this over and over again, I just drag these guys out and set them going.

Do you see what is happening? If you read the entire text as a block and grasp the idea, a context-based translation that can present the same thing lucidly in your target language starts taking shape.

Now, check what happens if we chunk it in the way TED does it for translation.

So let's take a look now at how we might use the computer for some of this.

And, so the first idea here is

just to how you the kind of things that children can do.

I am using the software that we're putting on the 100 dollar laptop.

So, I'd like to draw a little car here.

I'll just do this very quickly. And put a big tire on him.

And I get a little object here, and I can look inside this object.

I'll call it a car. And here's a little behavior car forward.

Each time I click it, car turn.

If I want to make a little script to do this over and over again,

I just drag these guys out and set them going.

Taken out of context, the chunks do make threading the idea together somewhat difficult. At least, it seems difficult to me. So, what’s the deal here? How do other languages deal with similar issues? I am assuming you don’t just take the entire paragraph, translate it accordingly and then slice and dice it according to the chunks. That is difficult, isn’t it?
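
The only workaround I can think of is to stitch the chunks back into whole sentences before translating, so that the translator at least sees a complete thought. Here is a deliberately naive sketch of that regrouping, using an abbreviated version of the chunks above:

```python
# A deliberately naive sketch of the workaround: stitch the chunks back
# into whole sentences so the translator at least sees a complete thought.
# The chunk list is the Alan Kay example above, abbreviated.
chunks = [
    "So let's take a look now at how we might use the computer for some of this.",
    "And, so the first idea here is",
    "just to how you the kind of things that children can do.",
    "I am using the software that we're putting on the 100 dollar laptop.",
]

sentences, current = [], []
for chunk in chunks:
    current.append(chunk.strip())
    if chunk.rstrip().endswith((".", "?", "!")):  # naive sentence boundary
        sentences.append(" ".join(current))
        current = []
if current:                                       # trailing fragment, if any
    sentences.append(" ".join(current))

for sentence in sentences:
    print(sentence)  # translate each whole sentence, then map it back to chunks
```

Of course, the hard part remains mapping the translated sentences back onto the original chunk boundaries – which is exactly where languages with a very different word order suffer.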

On a side note, the TED folks could start looking at an easier interface for translation. I could not figure out how one could translate, save a draft and return later to pick up from where one left off. It looks like it mandates a single-session, sit-down-and-deliver mode of work. That isn’t how I am used to doing translations in the FOSS world, which makes it awkward. Integrating translation memories – which would be helpful for languages with a substantial body of existing work – and auto-translation tools would be sweet too. Plus, they need to create a forum for questions; the email address seems unresponsive at best.

In the company of a ninja

It looks like watching Ninja Assassin hasn’t done Shreyank any good. Else, he would have figured out that it is easy-peasy for a Founder and Chief Ninja like Dimitris Glezos (who is also known as DeltaGamma) to be in Bangalore and elsewhere. Dimitris paid a surprise visit to Pune yesterday and it was fun. It isn’t every day that you get the CEO of a startup to provide you with an in-person repeat of his keynote, with added wisecracks and side-talk too scandalous for a “keynote” 🙂 And that too at a fairly crowded Barista. It was awesome.

In fact, I wanted to talk with him about how massive the momentum built up by Transifex has been. Just two years ago, in 2007, Tx was a GSoC project within The Fedora Project aimed at managing translations from a developer’s perspective. Today, it is a start-up which is hiring employees, relocating to newer offices, has a footprint across a significant portion of upstream community projects and, most importantly, has clients willing to pay for customization and developer services. Tx isn’t only helping translation communities by allowing them to craft their work in peace – it is keeping developers sane with the fire-and-forget model of its architecture. I hear that the PulseAudio and PackageKit developers are strong supporters of Tx. That is tremendous news. Part of the charm of Tx is also that it has been bootstrapped. That should provide hope to developers thinking along the “product” route.

I would say that these two years have done Dimitris good. His focus on the road Tx should take has become more vivid and, he has a deeper insight into the changes he wants to bring about via Indifex. There’s nothing more exciting than keeping a close watch on his team and his company for news that would come up soon. Tx is coming up with a killer set of features in the upcoming releases. That should get the attention of a couple of clients too.

Throughout the afternoon we ended up talking about getting youngsters up to speed to think beyond patches as contributions and to start tuning their thoughts to products. Dimitris opines that patches are excellent jump-off points, but in order to become a valuable contributor one must start thinking about “architecture”, “design”, “roadmap”, “milestones” and all the other issues that form part of theory classes but rarely see implementation in real-life scenarios. In addition, there is also the need to inculcate “CC thinking” in everyday creative work – be it code, content or even hardware and standards (“CC thinking” being a fancy short-hand for thinking about Open Standards, Open Protocols and so forth. In a somewhat twitter-ish way, we compressed it to a meta-statement we both could relate to and agree with).

Dinner and post-dinner with a couple of us was another story. Having a bunch of hard-core “Fedora” folks in the room creates a passion. Sitting back to savor the flames of discussion, and interjecting with a leading viewpoint to keep the debate flowing, is the best way to get action items resolved. Nothing was left untouched – from how to get the best out of *SCos, to mundane stuff like getting feature requests into Tx, OLPC and Sugar, to the general issues within the IT development community in Greece. And of course, the frequent checks on Wikipedia to validate various points in the argument. We could have done with an offline Wiki Reader yesterday 🙂

I think I finally went to sleep at something around 0200 today – which is impossibly past my standard time. There are photos aplenty, though I don’t know who will be uploading them. There was food, there was coffee, cakes, and, there were friends – in short, a nice day.

Pleasant experiences and project loyalty

As a general case, my experience with most of the FOSS projects whose products I consume or contribute to has been very pleasant. Feedback has generally been well received, requests listened to. So, what I am going to write is not very special. But these incidents are striking in themselves.

Some time ago, I was shopping for an offline translation tool. I was fed up with Lokalize’s issues and the fact that it wasn’t letting me do what I wanted to do at that point in time – translate. Additionally, I wasn’t in the mood to actually install a translation content management system to do things. Face it, I am an individual translator, and calling in the big guns to get the job done was a bit silly. So, I turned to Virtaal. Actually, I think I was goaded into giving it a try by Runa.

Virtaal was, at that point in time, not really a good tool 😉 And you can tell from the blog link above that I wasn’t too interested in it. However, since I gave it a chance (you cannot simply ignore a recommendation from her), I ended up running into two issues. One was considerably more annoying than the other and, in effect, was what was putting me off the tool. However, the developers took an interest in getting it fixed and have resolved it in the latest release.

The other bug was resolved in an even more interesting way – over IRC, with hand-holding to obtain the appropriate debug information and then on to editing the file to put in the fix. In the end, the fix might be trivial, but the level of interest and care the team takes in listening to their users is what makes me happy. In this respect, the other development crew I can mention is Transifex. I haven’t met most of them, and yet they keep taking suggestions and reports via every communication channel they are on – blogs, micro-blogs, IM, IRC and Trac. That makes them visible, puts them in the shoes of their users and, I am sure, earns them invaluable karma points.

Yesterday, while helping to close the other bug (I just did the file editing while Walter did all the brain muscling), I felt incredibly happy to be part of a system where it isn’t important who you are or where you are from. What is important is that you have a real desire to develop better software and make useful artifacts for all.

As it goes – “Your mother was right, it is better to share” (link to video).

The post is brought to you by lekhonee v0.8

Context, subtext and inter-text

There are two points with which I’d like to begin:

  • One, in their Credits to Contributors section, Mozilla (for both Firefox and Thunderbird) state that “We would like to thank our contributors, whose efforts make this software what it is. These people have helped by writing code and documentation, and by testing. They have created and maintained this product, its associated development kits, our build tools and our web sites.” (Open Firefox, go to Help -> About Mozilla Firefox -> Credits, and click on the Contributors hyperlink)
  • Two, whether by design or by inadvertent serendipity, projects using Transifex tend to end up naming their portals “translate.<insert_project_name>.domain_name”. Translation, as an aesthetic requirement, is squarely in the forefront. And, in addition to its enmeshed meaning with localization, the mere usage of the word translation lends an elevated meaning to the action and the end result.

A quick use of the Dictionary applet in GNOME provides the following definition of the word ‘translation’:

The act of rendering into another language;  interpretation; as, the translation of idioms is  difficult. [1913 Webster]

With each passing day, innovative software is released under the umbrella of various Free and Open Source Software (FOSS) projects. For software that is to be consumed as a desktop application, the ability to be localized into various languages makes a real difference to wide adoption and usage. Localization (or translation) projects form important and integral sub-projects of various upstream software development projects.

In somewhat trivial, off-the-cuff remarks which make translation appear easier than it actually is, it is often said that translation is the act of rendering into a target language the content available in the source language. However, localization and translation are not merely the replacement of words or phrases in one language (mostly English) with those of another. They require an understanding of the context, the form, the function and, most importantly, the idiom of the target language, i.e. the local language. In addition to this, there is the finer requirement that the localized interface be usable while appropriately communicating the message to the users of the software – technical and non-technical alike.

There are multiple areas briefly touched upon in the above paragraph, the most important of them being the interplay of context, subtext and inter-text. Translation, by all accounts, provides a referential equivalence. This is because languages and word forms evolve separately. And, in spite of the adoption and assimilation of words from other languages, the core framework of a language remains remarkably unique. Add to this mix the extent to which various themes (technology, knowledge, education, social studies, religion) organically evolve, and there is a distinct chance that idioms and the meta-data of words and phrases which are commonplace in a source language may not be relevant, or present at all, in the target language.

This brings about two different problems. The first: whether to stay true to the source language or to adapt the form to the target language. The second: how far losses in translation are acceptable. The second is somewhat unique – translations, by their very nature, have the capacity to add to or augment the content, to take away or subtract from the content (thereby creating a ‘loss’), or to adjust and hence provide an arbitrary measure of compensation. The amount of improvement or comprehension a translated term can bring is completely dependent on the strength of the local language and the grasp of its idiomatic usage that the translator brings to the task at hand. More importantly, it becomes a paramount necessity that the translator be very well versed in the idioms of the source language in addition to being colloquially fluent in the target language.

The first problem is somewhat more delicate – it differs when translating content as opposed to translating strings of the UI. Additionally, it can differ when doing translations for a desktop environment like, for example, Sugar. The known user model of such a desktop provides a reference, a context, that can be used easily when thinking through the context of the words/strings that need to be translated. A trivial example is the need to stress terms that are more prevalent or commonly used. A pitfall, of course, is that it might make the desktop “colloquial”. And yet, that might be precisely what makes it more user-friendly. This paradox of whether to be source-centric or target-friendly is amplified when it comes to terms which are yet to evolve local equivalents in common usage – terms like “Emulator”, “Tooltip” or “Iconify” being some quick, trivial examples.

I can pick up the recent example of “Unmove” from PDFMod to illustrate the need to appreciate the evolution of English as a language, and to point to the need for developers to listen to the translators and localization communities. The currently available tools and processes do not allow a proper elaboration of the context of the word. In English, within the context of the action word “move”, it is fairly easy to guess what “Unmove” would mean. In languages where usage of the action word “move” in the context of an operation on a computer desktop (here’s a quirk – the desktop itself is a metaphor being adopted for use in the context of a computing device) is still evolving, “Unmove” does not lend itself well to translation. Such “absent contexts” are the ones which create a “loss in translation”.

The singular point here is that the source language strings can evolve beautifully if feedback is obtained from the translated languages about what actually improves the software. The trick is perhaps how best to document the context of the words and phrases to enable a much richer and more useful translated UI, and to work on tooling that can include and incorporate such feedback. For example, there are enormous enhancements that can be trivially (and sometimes non-trivially) made to translation memory or machine translation software so as to enable a much sharper equivalence.
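
One concrete thing gettext-based projects can do today is ship disambiguating context with the string itself – a msgctxt plus an extracted developer comment. Here is a small sketch, using polib purely for illustration (in a real project the comment would live in the source code and xgettext would extract it; the wording of the context and comment is hypothetical):

```python
# A sketch of shipping context with a string: msgctxt plus an extracted
# developer comment. polib is used here purely for illustration; in a real
# project the comment would sit in the source code and xgettext would
# extract it. The wording of the context and comment is hypothetical.
import polib

entry = polib.POEntry(
    msgctxt="Edit menu: revert the previous reordering of a page",
    msgid="Unmove",
    msgstr="",
    comment="'Undo the move of the page', not 'make immovable'.",
)

po = polib.POFile()
po.append(entry)
po.save("pdfmod-context-example.po")  # hypothetical file name
```

On the source side, context usually arrives via calls along the lines of pgettext(context, message), which GNU gettext provides and which, if I recall correctly, recent versions of Python’s gettext module expose as well.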

(The above is something of a blog rendition of what I had planned to talk about at GNOME.Asia, had my travel agent not made a major mess of the visa papers.)

Looking forward to some improvements

I have been using Transifex-based systems for a couple of days/weeks now. And, in line with what I mentioned on my micro-blog, Transifex and Lotte make things really easy. The coolest devel crew makes that happen. And, since they lurk online and engage with their users, every little tweak or improvement that is suggested and considered makes the consumers feel part of the good work they are doing. Good karma and awesome excitement all around.

At some point during the week I’ll file these as feature-enhancement tickets. For the time being, here are a couple:

  • Lotte should allow me to click on a file that is not yet translated for my language and add it to the collection. If I recall correctly, the current way to add one is to download the .pot, convert it to the appropriate .po and upload it with comments, etc.
  • Lotte needs to allow “Copy from Source”. This would accelerate translation by removing the extra step of having to actually select, copy and paste. It comes in handy when translating strings within tags, or brand names/trademarks and so forth (a rough offline sketch of the idea follows this list).
  • Handling and using translation memory could be built into Lotte. For a particular file in a specific language within a project, it could perhaps provide suggestions of translated words. In the future, allowing teams to add their glossaries would make it an even more powerful tool. Having said that, I’ve always wondered what happens when team glossaries are created from files across various projects – is there a license-compatibility soup that could crop up?
  • A Transifex installation could provide notifications of new or updated files for a language. This could be limited to the files for which the person receiving the notices is the last translator or, ideally, could cover the language as a whole.
  • Statistics – providing each language with a visual representation of commits over time, or of per-contributor commits, would also be a nice addition.
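
As promised in the second bullet, here is a rough offline sketch of what “Copy from Source” amounts to, assuming polib and a hypothetical file path; seeded entries are flagged fuzzy so nothing is silently shipped as “translated”:

```python
# A rough offline sketch of "Copy from Source" (second bullet above),
# assuming polib and a hypothetical file path. Seeded entries are flagged
# fuzzy so nothing is silently shipped as "translated".
import polib

po = polib.pofile("myproject/po/bn_IN.po")  # hypothetical path
for entry in po.untranslated_entries():
    entry.msgstr = entry.msgid              # seed with the source string
    if "fuzzy" not in entry.flags:
        entry.flags.append("fuzzy")
po.save()
```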

So much for Transifex; in fact, I need to write all of that out in a nicer way so as to allow the possibility of these turning into GSoC projects within Transifex.

Coming to Virtaal. Lokalize has been unbearably useless for me (the stock build supplied with F11 adds garbled text or whitespace into files) and – before anyone comments – no, I haven’t filed a bug yet; getting the files done was a bit more important at that specific point. So, mea culpa. But I do check with every yum update, and it is still the same. The specific issue with Virtaal is that each time a new string is loaded for translation, the text input area loses the input method setting, which means a constant game of switching back and forth between input methods. Sadly enough, this is the only software that currently works for me (I don’t want to set up a local Pootle/Transifex instance and do web-based translation).

Tools of the translation trade

I begin with a caveat – I am a dilettante translator, and hence the tools of my trade (these are the tools I have used in the past or use daily) or the steps I follow might not reflect reality or how the “real folks” do translation. I depend to a large extent on the folks doing translation/localization work for my language and build heavily on their work.

KBabel

I used it only infrequently when it was around in Fedora (it is still available in Red Hat Enterprise Linux 5), but once I got over the somewhat clunky interface, it was a joy to work with. Seriously rugged and well suited to the ways of doing translations, KBabel was the tool of choice. However, it was replaced by Lokalize (more on that later) and so I moved on.

Lokalize

This has so much promise and yet so much left to be desired in terms of stability. For example, a recent quirk I noticed: in some cases, translating files using Lokalize and then viewing them in a text editor shows the translated strings, yet loading them in KBabel or another tool shows the lines as empty. The KBabel -> Lokalize transition within KDE could perhaps have done with a bit of structured requirements definition and testing (I am unaware whether such things were actually done and would be glad to read any existing content on that). Then there’s this quirk with the files in the recent GNOME release – copying across content that is wrapped in tags (an Address string, for example) leaves the copied form as empty space. The alternative is to input the tags again, which is a cumbersome process. There are a number of issues reported against the Lokalize releases, which actually gives me hope, because more issues mean more consumers and hence a need for a stable and functional application.

Virtaal

I have used it very infrequently. The one reason for that is that it takes some time to get used to the application itself. I guess too much sparseness in a UI can sometimes be a factor in shying away from a tool. The singular good point which merits a mention is the “Help”, or documentation, in Virtaal – it is very well done and actually demonstrates how best to use the application for day-to-day translation. This looks to be a promising tool and, with the other parts like translation memory, a terminology creator and so on tagged on, it will have the makings of a strong toolchain.

Pootle

I had initially been reluctant to use a web-based tool for translations. That, however, might have been a function of the early days of Pootle. With the recent Pootle releases, having a web-based translation tool is a real plus. However, it isn’t without its odd flaws – for example, it doesn’t allow one to browse to a specific phrase to translate (in other words, in a 290-line file, if you last left off at line 175, the choices are either to traverse from the start in bunches of 10 or 7, or to traverse from the end until you reach line 176), and the instances of Pootle that I have used don’t have any translation memory or terminology add-ons to provide suggestions.

I have this evolving feeling that having a robust web-based tool would provide a better way of handling translations and, help manage content. That is perhaps one of the reasons I have high expectations from the upcoming Pootle releases and, of course, Lotte.

Irrespective of the tool, some specific things I’d like to see handled include the following. I hope that someone who develops tools to help get translations done takes some time out to talk with the folks doing this daily, to understand the areas which could do with significant improvement.

  • the ability to provide a base glossary of words (for a specific language) and have the system consume it during translation, so as to provide a semblance of consistency (a rough sketch of such a consistency check follows this list)
  • the ability to take as input a set of base glossaries across languages (for example, a couple of Indic language teams do check how other Indic languages have handled a translation) and allow the translator/reviewer the option of choosing any of those glossaries to consult
  • robust translation suggestions, facilitating re-use and increasing consistency
  • a higher level of terminology handling than what is present now
  • stronger spell-checking plumbing
  • storing and displaying the translation history of a file
  • the ability to browse to a specific string/line, which helps a lot when doing review sprints or just translation sprints
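
For the glossary bullet above, a consistency check could start out as small as this sketch – the glossary entries and the file path are hypothetical, and real tooling would need morphology-aware matching rather than plain substring checks:

```python
# A rough sketch of the glossary consistency check from the first bullet.
# The glossary entries and the file path are hypothetical, and real tooling
# would need morphology-aware matching rather than plain substring checks.
import polib

GLOSSARY = {  # source term -> agreed target-language rendering (placeholders)
    "folder": "<agreed bn_IN term for folder>",
    "network": "<agreed bn_IN term for network>",
}

po = polib.pofile("nautilus/po/bn_IN.po")  # hypothetical path
for entry in po.translated_entries():
    for term, agreed in GLOSSARY.items():
        if term in entry.msgid.lower() and agreed not in entry.msgstr:
            print(f"possible inconsistency for {term!r} in: {entry.msgid!r}")
```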

Update: Updated the first line to ensure that it isn’t implied that these are the only tools anyone interested in translation can use. These are tools I have used or use daily.

Update: Updated the “wish-list” to reflect needs across tools, as opposed to implying that they are being requested only of Pootle.

Digital Content in Local Languages: Technology Challenges

I was reading through an article of the same name by Vasudeva Varma. Barring a whopper of a statement, the author does a reasonable job of pointing out some of the areas that need to be worked on. To begin with, however, let’s take that statement:

For example, Hindi is rendered properly only on Windows XP and beyond. Though there are some efforts to create Indic versions of the Linux, largely there is very little support for Indian languages.

It is a bit out of context, but nevertheless it is worth pointing out that one would have expected a bit more accuracy from the author – especially because the availability of Indian languages and their ease of use on Linux distributions have improved significantly. Folks who use an Indian-language Linux desktop on a regular basis for their usual workflow are fairly unanimous that “things do work”. In fact, it would have been nicer if the author had taken the time to test a few Linux distributions in native-language mode to identify the weak points. Most of the upstream projects have very active native-language sub-projects with a significant number of participants from Indian language communities – translate.fedoraproject.org, l10n.gnome.org and l10n.kde.org are the ones that come to mind immediately.

At a larger level, I would wholeheartedly agree with the author that there exist gaps which need to be filled. For example, with the desktop and applications getting localized, there is an urgent need for “cookbook”-like documentation in native languages, primarily for desktop applications. There is an even greater need to improve existing work on the following:

  • spell checkers
  • dictionaries
  • OCR

for the various Indic languages, so as to enable a more wholesome usage of desktop applications. Sadly enough, a large bulk of the work around the above three bits is still “in captivity” at the various R&D initiatives across institutes in India, with not much hope of being made available under an appropriate license that would allow integration into FOSS applications.
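
For the spell-checker item, some plumbing can be wired up today, assuming a hunspell dictionary for the language exists and the pyenchant bindings are installed – and that availability is precisely the gap for several Indic languages:

```python
# A minimal sketch of spell-checking plumbing, assuming a hunspell
# dictionary for the language is installed and the pyenchant bindings are
# available -- both are assumptions, and that availability is exactly the
# gap for several Indic languages.
import enchant

checker = enchant.Dict("hi_IN")  # hypothetical locale choice

def review(words):
    """Print suggestions for any word the dictionary does not recognise."""
    for word in words:
        if not checker.check(word):
            print(word, "->", checker.suggest(word)[:3])

review(["उदाहरण", "उदहारण"])  # the second is a deliberate misspelling
```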

The other part of the equation is the folks who create or collate content, i.e. the writers and the publishers. To a large extent, there is a dearth of local-language content in any large volume on the Internet. And while it could once be said that the difficulty with Linux and Indian languages was a show-stopper, that isn’t really so any more. “Better search” has been a buzzword for a while, but until “better” is actually quantified, it isn’t impossible to get along with what is available right now. The primary barriers in input methods, display/rendering and printing have been largely overcome, and the tools that allow content to be created in Indian languages are somewhat more encoding-aware than before. With projects like Firefox taking an active interest in getting things going around Indic, I would hazard a guess that things will get better.

Which brings us to the Desktop Publishing folks. I have talked many times about them and the need to figure out their requirements. Suffice to say, the DTP tools need to handle Indic text far better than they do now. And we probably have our work cut out there.