Category Archives: Catchall Scribbles

Those jottings which could not be clipped with any other papers…

On FAmSCo again

It seems that I keep writing about the FAmSCo 🙂 I was reading Joerg Simon’s post on the Membership statistics and wondered if the FAmSCo has considered the following aspects:

  • the load on each Ambassador Mentor, i.e. how many candidates they are mentoring in a given period of time
  • whether there is a need to sponsor and approve new Mentors
  • whether there is a need to focus on regions which have a single Ambassador, or none at all
  • the pattern, if any, in the reasons why candidates’ applications to become a Fedora Ambassador are rejected
  • whether there is a need for, and a way by which, the FAmSCo can get back in touch with such candidates and see if they can be coached to become Ambassadors (once rejected isn’t rejected forever)

I had earlier written about a few different things FAmSCo could look into. These are interesting times. The Wikipedia Ambassador/Campus Ambassador program seems to be partly based on the benefits derived from the structured workflow within the Fedora Ambassadors process. FAmSCo has an opportunity to reach out and collaborate to share knowledge about the process and at the same time incorporate suggestions which prepare the Ambassadors for higher achievements. More importantly, it would provide FAmSCo with a clearer way to measure its own success.

Reductionism, agility and localization

My last blog post about a newspaper article turned into a comment stream about ‘supported status’ and ‘reducing translation’ and ‘redefinitions’.


All of that spurred me to read up on the discussion on Friedel’s blog as well as the i18n list. All said, it is an interesting discussion and one that the teams and communities participating in i18n/l10n should provide feedback on. It also fits into the current model of defining the ‘supported’ status of a language using application/package translation statistics. The points raised by Friedel and Gil make sense, especially when looked at in the context of the community enthusiasm that I described earlier – the ability of the teams to sustain a level of translated strings and refine/review them over a period of time. Although there’s also the implied bit from Friedel’s blog about the utility of the statistics – in pure terms, what do the percentages actually mean and indicate?


While reductionism – re-defining supported status based on a set of packages being localized – would work in the short term, a much deeper re-thinking needs to be in place for the longer run. Developers create the applications; end-users consume them. In between these two groups sit the translation communities, who tend to look at the content to translate – they look at strings. Thus there needs to exist a system or method that enables translators/localizers to look at the strings that must be translated in order to provide a clean, homogeneous experience to the user of a localized desktop. This is the opposite of the current method of looking at it from the level of an application/package. This blog provides one way of doing this. Looking at the strings inside packages that are required to be translated, doing that across a set of packages aimed at a release, and thereafter building on the Translation Memory so created to re-use it for translation suggestions offered to translators, would be a good means of being “agile” while localization is underway.
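As a rough sketch of the string-level reuse described above (the strings, translations and cutoff value here are invented for illustration, not taken from any real package), a translation memory can start out as nothing more than a dictionary of previously translated strings with fuzzy lookup for near matches:

```python
from difflib import get_close_matches

# A toy translation memory: source string -> translation.
# The entries are invented examples, not real package strings.
tm = {
    "Open a file": "একটি ফাইল খুলুন",
    "Save the document": "নথিটি সংরক্ষণ করুন",
    "Close the window": "উইন্ডোটি বন্ধ করুন",
}

def suggest(source, tm, cutoff=0.6):
    """Return (matched source, translation) for the closest TM entry,
    or None when nothing is similar enough."""
    matches = get_close_matches(source, tm.keys(), n=1, cutoff=cutoff)
    if matches:
        return matches[0], tm[matches[0]]
    return None

# An exact hit reuses the stored translation; a near-match such as
# "Open the file" surfaces as a fuzzy suggestion from "Open a file".
print(suggest("Open a file", tm))
print(suggest("Open the file", tm))
print(suggest("Quit", tm))
```

Real TM tooling is considerably more sophisticated than this, but even this level of matching is enough to surface a suggestion when a string close to an already-translated one turns up in another package.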


Being nimble would also allow the teams to quickly build the translations and check them against tests (which need to be written, maintained and evolved) with the aim of improving translation quality. In short, there is a need for an additional layer of software between the Translation Management System and the translators themselves – one that parses the meta-information associated with strings and presents them for translation in a way that can yield an optimal quality of the localized interface. Add to this the ability of the system to build and show the application, and you also gain the advantage of translation review (remember that there is always a high chance that English strings can be misinterpreted while translating) and of checking whether the quantum of translation provides a usable experience. Agility via tooling allows iterative runs of translation sprints over a set of strings. While this may be contrary to how teams currently work, it would provide a better way of handling translations even when string freezes are requested to be broken – it would simply be another sprint with a sign-off via the review tool.
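A minimal sketch of such an intermediate layer (the records and flags below are invented, loosely modelled on the metadata a PO entry carries) could be a filter that builds a sprint queue from per-string metadata instead of per-package totals:

```python
# Each record mimics what a PO entry carries: the source string,
# its current translation, and flags such as "fuzzy".
# These records are invented for illustration.
strings = [
    {"msgid": "Print", "msgstr": "ছাপান", "flags": []},
    {"msgid": "Print preview", "msgstr": "", "flags": []},
    {"msgid": "Page setup", "msgstr": "পৃষ্ঠা", "flags": ["fuzzy"]},
]

def sprint_queue(strings):
    """Strings a translation sprint should handle first:
    untranslated entries, then fuzzy ones needing review."""
    untranslated = [s for s in strings if not s["msgstr"]]
    fuzzy = [s for s in strings if s["msgstr"] and "fuzzy" in s["flags"]]
    return untranslated + fuzzy

for s in sprint_queue(strings):
    print(s["msgid"])
```

The point of the sketch is only the selection step: the sprint works off string-level state, and the same queue can be built across any set of packages aimed at a release.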


This one is unfunny

On the sidelines of GNOME.Asia 2011 at Bangalore, an article in The Hindu, “This one’s no gnome”, was published. When Srinivasa Ragavan tweeted about it, I mentioned the ill-formed comment.
The paragraph of interest is here:

Excited to be in India, he concedes that community interest here is still on the lower side. Setting this straight is particularly important when it comes to GNOME localisation. “Localisation is a huge challenge here, mainly because there are so many languages, and also because of the way the alphabet font is linked. This is where Free software is critical, because smaller the user base, lesser the chances proprietary firms will take this up.” We count on enthusiastic developers, who are proud of their language and want to preserve it in the digital realm, Mr. Cameron adds. In fact, he points out there are “big business opportunities”, domestic and international, for those who commit themselves to such projects.


Brian mentions community interest in localization being low. On the face of it, this appears to be a conclusion derived from looking at the number of participants in the language communities. The way I’ve learnt to look at localization is to look at the communities and the efforts which they sustain over time. Localization is a steady and incremental process. The communities which take the time and make the effort to reach and maintain their place in and around the “supported” percentage for the individual Indic languages are the ones doing great work.

With each release of GNOME there is an obvious increase in the number of strings up for translation. I don’t have the exact statistics to support this, but I suspect a trend-line generated from the total number of strings available in each release, superimposed over the trend-line for the ‘supported percentage’ figure, would show an increase with each release. I’m sure someone actually has this data. This basically means that within a short period of ~4 months (factor in the code freeze, string freeze etc.), which may or may not also overlap with other project releases, the localization teams end up completing their existing release work, reviewing pending modules and polishing up translation consistency, ensuring that with each release of the desktop there is a step towards making it more user-friendly.
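To make that concrete (the string counts below are invented, not actual GNOME figures), the ‘supported percentage’ can only hover at the same level across releases if the team keeps absorbing the new strings each cycle:

```python
# Hypothetical totals of translatable strings per release and the
# number a team has translated; all figures are invented.
releases = [
    ("3.0", 40000, 32400),
    ("3.2", 43000, 34800),
    ("3.4", 46500, 37700),
]

for name, total, translated in releases:
    pct = 100.0 * translated / total
    print(f"GNOME {name}: {translated}/{total} = {pct:.1f}%")
# The percentage stays near the "supported" line only because the
# team translates the freshly added strings every cycle as well.
```

A flat-looking trend-line therefore hides a steadily growing amount of work per release.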


That’s why localization is a big challenge. Not because of the number of languages and certainly not because of the “way the alphabet font is linked”. For what it is worth, the latter bit is more in the realms of internationalization and there are efforts at multiple levels to ensure that the remaining few outstanding issues get fixed.


This brings us to a small gripe I’ve had about a lot of Free and Open Source Software projects which take their L10n communities and participants for granted. I’ve rarely seen a project board reach out and talk with the participants to figure out what can be done to make things better. For example, would a community doing L10n do more awesome work if it were funded for a 2-day translation sprint? Do the language communities have specific requirements of the translation and content infrastructure, or of the translation workflow? Have these issues been brought up in Technical Board meetings? GNOME isn’t the only project which repeatedly fails to do this transparently, but it is among the highly visible FOSS projects which seem to assume that it is an obligation of its volunteer contributors to keep the statistics ticking.

D-Link Wireless N 150 USB Adapter woes

I got myself one of these: the D-Link DWA 125 Wireless N 150 USB Adapter. Turns out it doesn’t get detected/work on Fedora 14. Has anyone been able to get it working on a similar distribution? Or is there a document describing what I should be trying to do? I couldn’t seem to find one myself.

Of mentors and mentoring

While reading through the mailing list archives I chanced across a new Mentoring Proposal for Fedora Ambassadors. The list has seen some discussion going on around the topic of “How to be a mentor” and, the current proposal is part of a thread about New Ambassador Mentors.

To me a mentor is a “trusted counselor who serves as a teacher” and, mentoring or, mentorship is a personal developmental relationship in which a more experienced or more knowledgeable person helps a less experienced or less knowledgeable person (this is from the Wikipedia article which I’d recommend as a reading material).

Why would Ambassadors need a mentor anyway? There are two answers. The simple, cop-out answer is that “everyone does”. The more complex and somewhat thought-provoking answer is what drove the then FAmSCo folks to think through this issue: the Fedora Project puts the Ambassadors squarely in a public-facing role. Over a period of time the profile of the fine folks who stood up and signed up for an Ambassador role varied. Given the complexity and depth of issues that the project brings forth and the need to always “be excellent” resonating through every activity within the project, it was a good idea to request some of the older/wiser/more experienced heads to spend some time coaching the newer ranks. At no point in time was this responsibility thrust upon unwilling hands and yet, at the same time, these groups of mentors spent an inordinate amount of time ensuring that, as the number of Ambassadors increased, time and effort was invested in maintaining the high standards.

Additionally, FAmSCo has made it quite clear that it is agreeable to looking at newer mentors, which is why there is a reasonably clear path available to any Ambassador who wishes to work with a current mentor and thus be peer-reviewed and accepted as a mentor. Having a group of one’s peers review one’s performance and skills, especially soft skills, is indeed a daunting experience. However, each of the newer mentors has been an excellent Ambassador and would eventually become a wonderful coach as well. In that context I somewhat like Christoph’s response. And, while the process might seem very “secretive” to a few (it isn’t, if you check the workflow), it works because of the formal workflow that it has – including the fact that discussions about new mentors have a section where the contributions of the Ambassador are discussed and the mentor peers provide their comments.

I don’t see a reason to keep a list of mentors-in-waiting. And I certainly disagree with the disingenuous hint that being a ‘mentor’ is an honor or a special title (do the mentors get a special button? :)).

Mentoring, in my book, is a responsibility, and it pleases me to see Ambassadors who take the time and make the effort to coach new Ambassadors, and who also take the time to select new mentors, thus helping the project recognize talent and appreciate contributions. Everyone can, and should, help the other person find their feet within the project and encourage contributions. Coupling this facet of a FOSS project with the idea that ‘mentor’ is a title is not only plain wrong now but wrong forever. And saying that someone who volunteers to spend time and effort to coach and help another person become a better contributor doesn’t possess any special skills (what skills are special anyway?) is also being facile. I could draw analogies from various everyday situations at home where “this role doesn’t require special skills” would lead to volatile situations, but you understand what I am talking about.

It is not in the special skills. It is in the special person.

How would you accelerate the adoption of OLPC in India?

OLPC News has an article with the original headline (in fact I took the lazy way out and re-used it). It seems to be posted by ‘Guest Writer’, but the footer of the article says that “Satish Jha is the President and CEO, OLPC India”, so I guess OLPC India is in some form involved with the content it carries.

It is an interesting piece. There’s another interesting thread on a mailing list here.

I would have expected it to talk more about the possibilities of doing OLPC stuff in India rather than becoming a somewhat neither-here-nor-there kind of non-committal response to the $35 device that the Ministry of HRD so loudly released. To understand what can bring about the adoption of OLPC India, one would have to probably go back to a post I wrote some time back.

The problem that was highlighted still remains. There is no community of any form, shape or sort around the OLPC in India, in comparison to the OLPC efforts/initiatives and deployments in other countries (the nations that are so eloquently held up as shining examples of OLPC success). There is a significant lack of a downstream community of volunteers and participants and, more importantly, a lack of any sort of publicly discussed plans as to whether any educational institute would volunteer students for a while to keep the deployments going forward. Then of course there is the added discourse around the availability of the actual XO hardware.

When I met Dr. Nagarjuna at GNUnify (that’s February this year), he indicated that he was actively looking at using the Sugar Desktop Environment on standard COTS desktops, available much more easily from vendors, because there wasn’t much clarity about the how and when of the hardware availability. In fact, this has been a murmur for a while now – what, specifically, is the value-add of the hardware if the desktop environment is available via a standard Linux desktop/distribution? This is where an active group of developers working on activities that would be useful in the context of the deployment is a good thing to have. And for that to happen, there needs to be work on building a downstream community – contributors who use the artifacts provided by OLPC and Sugar to develop their own thing.

A distinct advantage that OLPC/XO/Sugar has is brand recognition. Anyone who is even peripherally involved in doing things around Free and Open Source Software in India knows these names. They may not fully understand the depth of work or the roadmap of the individual projects, but the name recognition is a jump-off point that should be utilized much more. For example, in a place like the College of Engineering Pune, which has a fairly active mailing list for FOSS-related stuff, holding a 2-day event with the aim of getting work started on new or unmaintained activities, and teaching the basics of testing/QA, would probably be more useful than just wishing for a community to grow. I am fairly certain that there are other institutions like CoEP where day-long or similar camps can be organized. Why aren’t they happening? On that I have no clue.

A question, a survey, a conversation and some feedback

During the recent elections Richard Stallman had a specific question for the candidates. Copying from the archives, here’s the question:

Here is a question for the candidates.
To advance to the goal of freedom for software users, we need to develop good free software, and we need to teach people to value and demand the freedom that free software offers them. We need to advance at the practical level and at the philosophical level.
GNOME is good free software, and thus contributes at the practical level. How will candidates use the user community’s awareness of GNOME to contribute to educating the community about freedom?
At a stretch, the question is similar in theme to the questions/concerns around GNOME and Free Software ideals that come up from time to time. I recall reading similar questions during earlier elections; it isn’t specifically new or something that has come out of the blue.

I came to know of the survey when I read the micro-blog from Lefty. And then we had a bit of a conversation.

Personally, I don’t feel comfortable about the survey.

The line of reasoning is as follows – as a member of the GNOME Foundation, one has the right to express one’s opinion about the direction and focus of the Foundation by supporting the appropriate (set of) candidate(s). From the perspective of a Foundation, it is perfectly valid to focus on areas which are aligned with the very reason for the Foundation and the project to exist. In fact, focussing only on those areas wouldn’t and shouldn’t be taken amiss. In short, the Foundation can choose what it works on in the near or long-term future, or choose not to. As long as such goals and tasks do not appear to be detrimental to the cause of Free and Open Source Software, things should work out nicely.

I hasten to add that similar should be the focus of the Free Software Foundation as well. The survey attempts to somewhat codify these implicit responsibility areas, and I do get the feeling that the specific question

“In what way would you ordinarily refer to “an operating system based on a Linux kernel and using mainstream, mainly community-developed components and applications”? (Distributions representing such include Debian, Gentoo, Fedora, Open SuSE, etc. Android does not qualify, nor does WebOS, etc.)”

is implicitly divisive – an “us/them” meme that has been festering on the foundation-list for a while now.

I did not participate in the survey. I don’t want to. I’d rather GNOME focus on being an excellent desktop environment with the strong technical and technology focus it had back when I started using it as my primary desktop. The Board needs to work out its focus and work on the project’s future with much more rigor than it does now. To me the survey is just a passing distraction. Mildly entertaining but probably not productive.

A new lekhonee-gnome release

Sometime after the release announcement, Kushal asked me to use it to post a blog and see how things are. It took me a while to get to the blog (primarily because prolonged typing causes my finger joints immense pain and, it is easier to walk over to where Kushal is and do the “you seriously consider this a feature” thing ;))

The new release is a re-write in Vala which came as a surprise to me since Kushal was toying with Vala only a couple of nights before the release. However, that did mean that a couple of us were the lab-rats in the “release early, release often and release private” game that he plays before pushing it to the build system. That is especially fun because at some point during the game all the lab-rats end up having different private builds which expose unique sets of bugs.
I wonder when this becomes a default option on Fedora.
Things I like about the release include
  • the unordered list creation button
  • newer icon set
  • the ability to not barf on a wrong auth entry for the blog
  • right click for spell checking
  • save as drafts

The bits I look forward to are

  • ability to handle multiple blogs or, account management
  • better WYSIWYG rendering (the fonts look a bit weird to me)
  • auto-save of blogs

If you’d like to see it translated into a language of your choice, sign up here.

Student, Contributor, Ambassador

I often hear good things about the strength of the Fedora Ambassadors in India. With a group of 110+ people, it does allow one to look at upsides and areas of improvement. But more importantly, it stands as testimony to the hard work that is put in behind the scenes by various individuals and groups within Fedora to make that happen. (Hint: some of the said individuals are also mentors for the Ambassadors in India, so if you chance upon them on IRC, be sure to thank them for doing the job well and doing it with a passion that is unique to folks within Fedora.)

This year we have been able to reach out to a number of events and groups, which helped us take the message of the Four Foundations to them. That has been good. We have also noticed that a larger number of those signing up to become Ambassadors are students, or are just dipping their feet into the FOSS way of doing things. So, here’s the area in which we need to work our hardest.

Earlier I wrote:

Additionally, if during the initial days, the new Ambassadors are encouraged to actively participate in any other part of the project, it should lead to greater involvement and appreciation of the Foundations. This of course has the advantage of helping them build the social connects and network across projects/amongst individuals which is an invaluable part of being an Ambassador. It also builds up the required confidence in the Ambassador to go out and evangelize about contributing back to various projects and upstream. Because, if one has already drunk the Kool-Aid, talking about it is dead simple.

And, it is true. An Ambassador is the face of the project to the external world. It requires people skills but more importantly, it requires an intrinsic knowledge about the project that takes time and effort to build up. Unless an Ambassador takes a keen interest in the various projects within Fedora and, contributes to at least one of them, it is an uphill climb for most. More so for a student who is just learning the ways of FOSS and, gathering experiences via Fedora.

In the coming months, the plan is to put in place a stronger coaching plan for these student contributors so as to tap into their huge talent and capacity to produce stunning results. We have always been surprised by the sheer number of ideas that come up when students are gradually pointed in a direction.

Stay tuned. Exciting stuff is going to happen.

Do we need to look for new software ?

In an unguarded moment of misguided enthusiasm (and there is no other way to put it) I volunteered to translate a couple of my favorite TED talks. The idea was simple – challenging myself to learn the literary side of translating whole pieces of text would let me get to the innards of the language that is my mother tongue and that I use for conversation. Turns out there was an area I never factored in.

Talks have transcripts, and they are whole blocks of dialogue which have a different feel when undergoing translation than the User Interface artifacts that make up the components of the software I translate. In some confusion I turned to the person who does this so often that she’s real good at poking holes in any theory I propound. In reality, it was my turn to be shocked. When she translates documents, Runa faces problems far deeper than what I faced during the translation of transcripts. And her current toolset is woefully inadequate, because it is tuned to the software-translation way of doing things rather than to document/transcript/free-text translation.

In a nutshell, the problem relates to the breaking of text into chunks that are malleable for translation. More often than not the complete text is a paragraph or at least a couple of sentences – the underlying grammar and construction are built to project a particular line of thought, a single idea. Chunking causes that seamless thread to be broken. Additionally, when using our standard tools, viz. Lokalize/KBabel, Virtaal, Lotte, Pootle, such chunks of text make coherent translation more difficult because of the need to fit things within tags.

Here’s an example from the TED talk by Alan Kay. It is not representative, but would suffice to provide an idea. If you consider it as a complete paragraph expressing a single idea, you could look at something like:

So let's take a look now at how we might use the computer for some of this. And, so the first idea here is just to how you the kind of things that children can do. I am using the software that we're putting on the 100 dollar laptop. So, I'd like to draw a little car here. I'll just do this very quickly. And put a big tire on him. And I get a little object here, and I can look inside this object. I'll call it a car. And here's a little behavior car forward. Each time I click it, car turn. If I want to make a little script to do this over and over again, I just drag these guys out and set them going.

Do you see what is happening? If you read the entire text as a block and you grasp the idea, the context-based translation that can present the same thing lucidly in your target language starts taking shape.

Now, check what happens if we chunk it in the way TED does it for translation.

So let's take a look now at how we might use the computer for some of this.

And, so the first idea here is

just to how you the kind of things that children can do.

I am using the software that we're putting on the 100 dollar laptop.

So, I'd like to draw a little car here.

I'll just do this very quickly. And put a big tire on him.

And I get a little object here, and I can look inside this object.

I'll call it a car. And here's a little behavior car forward.

Each time I click it, car turn.

If I want to make a little script to do this over and over again,

I just drag these guys out and set them going.

Take them out of context and it does make threading the idea together somewhat difficult. At least, it seems difficult to me. So, what’s the deal here? How do other languages deal with similar issues? I am assuming you just will not be considering the entire paragraph, translating accordingly and then slicing and dicing according to the chunks. That is difficult, isn’t it?
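The effect is easy to reproduce mechanically. A naive sentence-splitter (a stand-in for whatever segmentation TED’s tooling actually applies, which I can only guess at) turns the paragraph into isolated fragments, each of which a translator then sees without its neighbours:

```python
import re

# A shortened version of the paragraph quoted above.
paragraph = (
    "So let's take a look now at how we might use the computer "
    "for some of this. So, I'd like to draw a little car here. "
    "I'll just do this very quickly."
)

def chunk(text):
    """Naively split a paragraph into sentence-sized segments;
    real subtitle tools also split on timing, even mid-sentence."""
    return [s.strip() for s in re.split(r"(?<=[.?!])\s+", text) if s.strip()]

for segment in chunk(paragraph):
    print(segment)
# Each printed line is what a translator sees in isolation, so the
# pronouns and the running idea that tie the paragraph together are lost.
```

Translating segment-by-segment is exactly the situation described above: the grammar of the target language may want to reorder or merge ideas across fragments, which the chunk boundaries forbid.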

On a side note, the TED folks could start looking at an easier interface for translation. I could not figure out how one could translate, save as a draft, and return again to pick up from where one left off. It looks like it mandates a single-session, sit-down-and-deliver mode of work. That isn’t how I am used to doing translations in the FOSS world, and it makes things awkward. Integrating translation memories, which would be helpful for languages with substantial work done, and auto-translation tools would be sweet too. Plus, they need to create a forum to ask questions – the email address seems to be unresponsive at best.