On FAmSCo again

It seems that I keep writing about the FAmSCo 🙂 I was reading Joerg Simon’s post on the Membership statistics and wondered if the FAmSCo has considered the following aspects:

  • the load on each Ambassador Mentor, i.e. how many candidates each of them is mentoring in a specific period of time (a rough sketch of measuring this follows the list)
  • whether there is a need to sponsor and approve new Mentors
  • whether there is a need to focus on regions that have only a single Ambassador, or none at all
  • the pattern, if any, in the reasons why candidates’ applications to become a Fedora Ambassador are rejected
  • whether there is a need, and a way, for the FAmSCo to get back in touch with such candidates and see if they can be coached to become Ambassadors (rejected once isn’t rejected forever)
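
Most of these are simple aggregations once the raw data is available. As a minimal sketch – the CSV export, its column names and the file name are entirely hypothetical; the real data would come from the Fedora Account System and the mentoring queue – the mentor-load question reduces to a count per mentor over a period:

```python
# Hypothetical sketch: count candidates per mentor from a CSV export with
# columns "mentor", "candidate" and "date" (ISO format). Not a real FAS API.
import csv
from collections import Counter
from datetime import date

def mentor_load(path, since):
    """Return a Counter of mentor -> number of candidates filed on/after `since`."""
    load = Counter()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if date.fromisoformat(row["date"]) >= since:
                load[row["mentor"]] += 1
    return load

if __name__ == "__main__":
    for mentor, count in mentor_load("ambassador-queue.csv", date(2011, 1, 1)).most_common():
        print(f"{mentor}: {count} candidates")
```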

I had earlier written about a few different things FAmSCo could look into. These are interesting times. The Wikipedia Ambassador/Campus Ambassador program seems to be partly based on the benefits derived from the structured workflow within the Fedora Ambassadors process. FAmSCo has an opportunity to reach out and collaborate to share knowledge about the process and at the same time incorporate suggestions which prepare the Ambassadors for higher achievements. More importantly, it would provide FAmSCo with a clearer way to measure its own success.

Reductionism, agility and localization

My last blog post about a newspaper article turned into a comment stream about ‘supported status’ and ‘reducing translation’ and ‘redefinitions’.

All of that spurred me to read up on the discussion on Friedel’s blog as well as on the i18n list. All said, it is an interesting one and something that the teams and communities participating in i18n/l10n should provide feedback on. It also fits into the current model of defining the ‘supported’ status of a language using application/package translation statistics. The discussion started by Friedel and Gil makes sense, especially when seen in the context of the community enthusiasm I described earlier – the ability of the teams to sustain a level of translated strings and refine/review them over a period of time. There is also the implied point from Friedel’s blog about the utility of the statistics – in pure terms, what do the percentages actually mean and indicate?
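
To make that question concrete, here is a minimal sketch of what the current model amounts to: a language is ‘supported’ if its aggregate translation percentage over a chosen package set crosses a threshold. The package names, string counts and the 80% cutoff below are illustrative assumptions, not any project’s actual criteria.

```python
# Illustrative sketch: derive a language's "supported" status purely from
# per-package translation statistics. Package set, counts and the cutoff
# are hypothetical placeholders.
SUPPORTED_CUTOFF = 80.0

def supported_status(per_package_stats, cutoff=SUPPORTED_CUTOFF):
    """per_package_stats maps package name -> (translated, total) string counts."""
    translated = sum(t for t, _ in per_package_stats.values())
    total = sum(n for _, n in per_package_stats.values())
    percent = 100.0 * translated / total if total else 0.0
    return percent, percent >= cutoff

stats = {"gtk+": (4200, 4500), "nautilus": (900, 1200), "evolution": (5100, 7800)}
percent, supported = supported_status(stats)
print(f"{percent:.1f}% translated -> {'supported' if supported else 'not supported'}")
```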

While reductionism in the area of re-defining supported status based on a set of packages being localized would work in the short term, a much deeper re-thinking needs to be in place for the longer run. Developers are the ones who create the applications, and end-users consume those applications. In between these two groups are the translation communities, who tend to look at the content to translate – they look at strings. Thus there needs to exist a system or method that enables translators/localizers to look at the strings that must be translated in order to provide a clean, homogeneous experience to the user of a localized desktop. This is the opposite of the current method of looking at it from the level of an application/package. This blog provides one way of doing this. Looking at the strings inside packages that are required to be translated, doing that across a set of packages aimed at a release, and thereafter building on the Translation Memory created so that it can be re-used for translation suggestions offered to translators would be a good means of being “agile” when localization is underway.
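
One rough sketch of the Translation Memory piece – assuming the third-party polib library and a handful of PO files on disk (the file names here are made up) – would collect translated strings across packages and offer fuzzy suggestions for new ones:

```python
# Minimal sketch: build a cross-package Translation Memory from PO files and
# offer fuzzy suggestions for untranslated strings. File paths are hypothetical.
import difflib
import polib

def build_tm(po_paths):
    """Map source string -> existing translation, across a set of packages."""
    tm = {}
    for path in po_paths:
        for entry in polib.pofile(path):
            if entry.translated():
                tm.setdefault(entry.msgid, entry.msgstr)
    return tm

def suggest(tm, msgid, cutoff=0.8):
    """Return TM suggestions for a new string, best matches first."""
    matches = difflib.get_close_matches(msgid, tm.keys(), n=3, cutoff=cutoff)
    return [(match, tm[match]) for match in matches]

tm = build_tm(["gtk+.hi.po", "nautilus.hi.po", "evolution.hi.po"])
for source, translation in suggest(tm, "Unable to open the selected file"):
    print(f"{source!r} -> {translation!r}")
```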

Being nimble would also allow the teams to quickly build the translations and check them against tests (which need to be written, maintained and evolved) with the aim of improving translation quality. In short, there is a need for an additional layer of software between the Translation Management System and the translators themselves, one that parses the meta-information associated with strings and presents them for translation in a way that can provide an optimal quality of the localized interface. Add to this the ability of the system to build and show the application, and you also gain the advantage of translation review (remember that there is always a high chance that English strings can be misinterpreted while translating) and of checking whether the quantum of translation provides a usable experience. Agility through tooling allows iterative translation sprints over a set of strings. While this may be contrary to how teams currently work, it would provide a better way of handling translations even when string freezes are requested to be broken – it would just be another sprint with a sign-off via the review tool.
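
A bare-bones sketch of that intermediate layer – again assuming polib, with the checks and the file name purely illustrative – would read each entry’s context and extracted comments and flag simple quality problems before the string ever reaches a reviewer:

```python
# Illustrative sketch of the "additional layer": inspect each PO entry's
# meta-information (msgctxt, extracted comments) and run a couple of simple
# quality checks. Real tooling would go much further than this.
import re
import polib

PLACEHOLDER = re.compile(r"%[sd]|\{\w+\}")

def check_entry(entry):
    """Return a list of problems found in one translated entry."""
    problems = []
    if entry.msgstr:
        if sorted(PLACEHOLDER.findall(entry.msgid)) != sorted(PLACEHOLDER.findall(entry.msgstr)):
            problems.append("placeholder mismatch")
        if entry.msgid.endswith("\n") != entry.msgstr.endswith("\n"):
            problems.append("trailing newline mismatch")
    return problems

for entry in polib.pofile("nautilus.hi.po"):
    issues = check_entry(entry)
    if issues:
        context = entry.msgctxt or entry.comment or "(no context)"
        print(f"{entry.msgid!r} [{context}]: {', '.join(issues)}")
```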

This one is unfunny

On the sidelines of GNOME.Asia 2011 at Bangalore, an article in The Hindu, “This one’s no gnome”, was published. When Srinivasa Ragavan tweeted about it, I mentioned the ill-formed comment.
The paragraph of interest is here:

Excited to be in India, he concedes that community interest here is still on the lower side. Setting this straight is particularly important when it comes to GNOME localisation. “Localisation is a huge challenge here, mainly because there are so many languages, and also because of the way the alphabet font is linked. This is where Free software is critical, because smaller the user base, lesser the chances proprietary firms will take this up.” We count on enthusiastic developers, who are proud of their language and want to preserve it in the digital realm, Mr. Cameron adds. In fact, he points out there are “big business opportunities”, domestic and international, for those who commit themselves to such projects.

Brian mentions that community interest in localization is low. On the face of it, this appears to be a conclusion derived from looking at the number of participants in the language communities. The way I’ve learnt to look at localization is to try and see the communities and the efforts that sustain it. Localization is a steady and incremental process. The communities which take the time and make the effort to reach and maintain their place in and around the “supported” percentage for the individual Indic languages are the ones doing great work.

With each release of GNOME there is an obvious increase in the number of strings up for translation. I don’t have the exact statistics to support this, but I would guess that a trend-line generated from the total number of strings available in each release, super-imposed on the trend-line for the ‘supported percentage’ figure, would show an increase with each release. I’m sure someone actually has this data. This basically means that within a short period of ~4 months (factor in the code freeze, string freeze etc.), which may or may not also overlap with other project releases, the localization teams end up completing their existing release work, reviewing pending modules and polishing up translation consistency, ensuring that with each release of the desktop there is a step towards making it more user-friendly.
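
Purely for illustration – the release labels, string counts and percentages below are invented, since as said I don’t have the real numbers – the comparison I have in mind is just two trend-lines over the same release axis:

```python
# Sketch of the trend-line comparison with made-up figures: total translatable
# strings per release vs. a language team's "supported" percentage.
def slope(xs, ys):
    """Least-squares slope of y over x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

releases = ["2.28", "2.30", "2.32", "3.0"]     # release labels (hypothetical axis)
x = list(range(len(releases)))                 # 0, 1, 2, 3
total_strings = [41000, 43500, 45200, 47800]   # invented totals
supported_pct = [78.0, 80.5, 81.0, 82.5]       # invented team percentages

print(f"strings grow by ~{slope(x, total_strings):.0f} per release")
print(f"coverage grows by ~{slope(x, supported_pct):.2f} points per release")
```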

That’s why localization is a big challenge. Not because of the number of languages, and certainly not because of the “way the alphabet font is linked”. For what it is worth, the latter bit is more in the realm of internationalization, and there are efforts at multiple levels to ensure that the remaining few outstanding issues get fixed.

This brings us to a small gripe I’ve had about a lot of Free and Open Source Software projects that take their L10n communities and participants for granted. I’ve rarely seen a project board reach out and talk with the participants to figure out what can be done to make things better. For example, would a community doing L10n do more awesome work if it were funded for a 2-day translation sprint? Do the language communities have specific requirements on the translation and content infrastructure, or on the translation workflow? Have these issues been brought up in Technical Board meetings? GNOME isn’t the only project which repeatedly fails to do this transparently, but it is among the highly visible FOSS projects which seem to assume that it is an obligation of their volunteer contributors to keep the statistics ticking.

D-Link Wireless N 150 USB Adapter woes

I got myself one of these: the D-Link DWA 125 Wireless N 150 USB Adapter. Turns out it doesn’t get detected/work on Fedora 14. Has anyone been able to get it working on a similar distribution? Or is there a document that says what I should be trying to do? I couldn’t seem to find one myself.