Recently, Lingoport CEO Adam Asnes wrote an article that appeared in the January issue of MultiLingual magazine. Take a look at the text below.
_________
I’m going to share a vision of software localization that some will resist or refute. This vision changes enterprise translation workflows, internal globalization departments, and vendor landscapes. All this from the humble QA practice, which is often overlooked, runs late, or piles up backlogs.
Linguistic and functional QA for software is important in delivering a quality experience for global users. But the typical software QA process is clumsy and slow compared to most other software development practices. Localization QA is a late waterfall process, despite whatever continuous label the industry uses. There may be exceptions, but at my company, we see this as just about universal. Still, people are in love with their processes and screenshots, and that is exactly what holds enterprises back from timely collaboration. In today’s world, people must be willing to learn, unlearn and learn again as technology and processes shift.
Let’s first address how software Linguistic QA (LQA) often goes:
Linguistic reviewers may navigate an application based on testing instructions. When they find an issue, they create a screenshot, highlight the text to be corrected, open a defect in a bug tracking system, and assign it to a manager, who then has to find which team/dev/repository it may belong to. A developer (who is busy with feature development) must figure out where the error is located for a string in a language they most likely don’t speak and may not even read (think Chinese or Japanese for an English speaker), then send the file with a correction via email to a language service provider. The language service provider team then needs to handle that retranslation: start a workflow, assign the correction, return the correction, send it some time later back to the development team (which may be busy with other tasks), and figure out from the email where to push the file back, before a correction finally lands in the repo. This tends to take days, weeks, or even longer.
This is a horrendous workflow. To be sure, we’ve seen that some companies have found ways to condense some parts of the LQA effort. But at almost all companies, it’s an onerous process that people have gotten used to because they didn’t have much choice.
Alternatives include proxies (always limiting), complex changes to DevOps systems that yield poor results across web, mobile and desktop platforms, or asking developers to capture screenshots and comments and associate them with strings (hard to scale and dependent on people doing the right thing).
The software development world is where we orbit at Lingoport, so that is my perspective. As outsiders to the translation world, we like to think of ourselves as coming up with development-centric ideas that the industry, with its focus on words and word handling, doesn’t see. We focus instead on developer problems like internationalization, continuous localization updates, and, you guessed it, now QA.
When we talk about the needs of software developers working as individuals, in teams and multiplied across an enterprise, scale and speed become clear issues. Developer time is expensive. Product release delays are even more expensive. Poor releases are more expensive still. Missed or bungled opportunities are among the most competitively expensive. And I find that loss is often more persuasive than gain.
The biggest loss software localization teams face is the rejection and friction they experience with development teams.
The biggest cause of that dysfunction is that localization usually doesn’t work the way developers have come to expect integrated services to perform.
Agile development is the absolute norm for most organizations, at least in some form. This has a few conditions that localization teams must heed if they are to eliminate friction within their organization and make their companies global powerhouses.
Agile development requires that coding quality and QA practices are integrated into everyday development. That is not solved by a TMS connector alone. The point is to find software issues early based on analysis and feedback, fix them early, when they are easy and fast to correct, and then move on. As the saying goes: iterate, iterate, and iterate. Demonstrating the success of a new feature, and the feedback mechanism behind it, are essential. Developers should not be waiting until a release has been localized to start finding and fixing issues, or dealing with replacing words.
What developers want:
Developers like and are used to automated processes that help them with repetitive tasks, which in turn limit how often they have to revisit code for fixes later. They are used to systems that automatically help them assess coding quality, assist in code review amongst themselves, find potential security issues, and visually/functionally verify that what they are working on performs as planned.
Internationalization issues that impact localization, like embedded strings, concatenations, locale-unsafe methods, static files and more, can be found via static analysis as an automated process tied to continuous automation. In other words, if a developer hard codes a font that doesn’t work well for Traditional Chinese, that should be indicated in a pull request, a dashboard, their IDE or the like. Developers are used to seeing and fixing these sorts of issues as part of their daily coding and review. It’s usually a fast fix if you know the problem and where it is in the code. If a QA team finds it later based on, for example, a screenshot showing a broken date format, it takes longer to fix. Bug reports, rather than trackable issues in code, mean stopping what they are currently doing (hurting velocity), verifying the bug, figuring out where in the code it exists, fixing it, verifying the fix, and then closing out the bug, which then gets reviewed by others. Multiply that by developers, teams and products across an enterprise. Clearly, fixing it in real time during development is easier. This is the i18n process improvement for which my company has been known for years.
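To make that concrete, here is a minimal, generic Python sketch (not Lingoport’s analyzer or any particular framework) of the kind of pattern static i18n analysis would flag, a hard-coded, concatenated string, next to the externalized, parameterized form that lets each translation control its own word order:

```python
# Problematic: English text is hard coded and concatenated, so word order
# and phrasing are frozen for every locale, and the string never reaches a translator.
def status_line_bad(count, folder):
    return "You have " + str(count) + " files in " + folder

# i18n-friendly: the string lives in a per-locale resource catalog and uses
# named placeholders, so each translation can reorder the pieces as needed.
MESSAGES = {
    "en": {"status_line": "You have {count} files in {folder}"},
    "de": {"status_line": "In {folder} liegen {count} Dateien"},
}

def status_line(locale, count, folder):
    return MESSAGES[locale]["status_line"].format(count=count, folder=folder)

print(status_line("de", 3, "Berichte"))  # In Berichte liegen 3 Dateien
```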
We would like localization to be a “no touch” automated experience for the development teams. This should be the normal expectation by now. Developers should not have to wait on localization to actually see how their software performs in various locales. They should not be finding out later that an address format doesn’t work, or that someone has hard coded a string or concatenated a dialog so that the word order doesn’t work everywhere. Country managers should be able to view development in target languages, right in step with development, well before human localization is complete. In fact, what we call continuous localization, when it depends on human translation, is not really continuous at all. It’s a streamlined process, but it doesn’t continuously move with development. And the whole reason it doesn’t is that LQA is an onerous process.
Think of it this way: if you knew it was easy to update translations during LQA, you could make it just like any other human QA process and have it take place in complete parity with functional QA.
The new workflow would automate trained machine translation that updates with every feature branch development effort. Words get continuous machine updates as changes occur. Human translators could then interact with those translations either through the TMS or, better yet, directly within the UI, both of which are solutions that Lingoport has built.
How much easier would it be if, instead of the long list of activities mentioned earlier, a linguistic tester could simply indicate what needs to be changed, that change were tied to the offending string, and, after approval, it all updated in the source code automatically? The new workflow reduces to inspect, change, and review/accept/reject the change; the source is then updated automatically. That is three streamlined steps that can happen in step with development.
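As a rough sketch of that loop, using hypothetical names rather than any real Lingoport or TMS API, a tester’s correction could stay tied to the offending resource key and be applied to the bundle only once it is approved:

```python
from dataclasses import dataclass

@dataclass
class Correction:
    resource_key: str   # key of the offending string, e.g. "checkout.confirm"
    locale: str         # locale under review
    old_text: str       # current translation shown to the tester
    new_text: str       # tester's proposed replacement
    approved: bool = False

def apply_corrections(bundle: dict, corrections: list) -> dict:
    """Apply only approved corrections whose old text still matches the bundle."""
    updated = dict(bundle)
    for c in corrections:
        if c.approved and updated.get(c.resource_key) == c.old_text:
            updated[c.resource_key] = c.new_text
    return updated

# Inspect -> change -> review/accept -> the resource bundle updates automatically.
bundle_fr = {"checkout.confirm": "Confirmez votre commande"}
fix = Correction("checkout.confirm", "fr",
                 "Confirmez votre commande", "Confirmer la commande", approved=True)
print(apply_corrections(bundle_fr, [fix]))  # {'checkout.confirm': 'Confirmer la commande'}
```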
This was a difficult trick to implement and make work across platform types, programming languages and development eccentricities, but it is a vision we have had for some years.
The full LQA-leveraged workflow goes like this:
- Developers create a branch (or branches) to work on a new feature
- Their code is automatically statically analyzed in real time for i18n issues
- UI strings in resource files are automatically detected, transformed if needed, translated via a favored/trained MT engine, and returned to the source code (see the sketch after this list)
- Developers can see how the application looks and functions in target locales, plus pseudo-translations
- Source strings can be easily edited by the product team and the edits in turn are automatically translated
- Optional: strings that have been machine translated can be reviewed by humans in a supporting TMS
- LQA testers review the application, following manual test script instructions that are already produced for the functional testing teams.
- LQA team updates strings, which are then queued for localization management review
- Upon acceptance of the human translator edits, the resource files are automatically updated in their respective source code repositories.
- The translation edits automatically go full circle back to the MT engine to update the corpus or TM. The MT system gets better with continuous use.
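A minimal sketch of the automated leg of that loop (generic Python, with a pseudo-translation stand-in where a trained MT engine would actually be called) might look like this:

```python
def pseudo_translate(text: str) -> str:
    """Stand-in for the MT call: bracket the string so untranslated or
    hard-coded text is immediately visible when viewing a target locale."""
    return "[!! " + text + " !!]"

def sync_bundle(source_bundle: dict, target_bundle: dict) -> dict:
    """Fill in any source strings missing from a target-locale bundle.
    In the real workflow this step would call the favored/trained MT engine."""
    updated = dict(target_bundle)
    for key, text in source_bundle.items():
        if key not in updated:          # new string added on the feature branch
            updated[key] = pseudo_translate(text)
    return updated

# A developer adds a string on a branch; the pipeline updates the French bundle at once.
en = {"greeting": "Welcome back", "cart.empty": "Your cart is empty"}
fr = {"greeting": "Bon retour"}
print(sync_bundle(en, fr))
# {'greeting': 'Bon retour', 'cart.empty': '[!! Your cart is empty !!]'}
```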
We leave the initial translation to the machines, and put humans where they can be of greater value – actually reviewing the application where they can have input beyond changing 33 words in a table as fast as they can. Functional and linguistic quality is faster and better when review and updates are right there at the tester’s screen and fingertips.
New technologies, new processes and old thinking
With any change in technology and related processes, there is the challenge of changing thinking. It’s hard for people to imagine changes in workflow, and there’s work to be done up front to gain the benefits. For instance, the integrated process I’ve described means getting comfortable with machine translation, which has been making a big impact on the industry anyway. You have to be OK with the MT not being perfect at first, and comfortable with your team catching the issues during LQA. You should put in the work to assemble glossaries and translation memory assets. Final QA edits then also go back automatically to the MT engine or TMS to iteratively improve results. Your current vendors may not be flexible enough.
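As a hedged illustration of why those assets matter, here is a small generic Python sketch (not any particular TMS or MT product) of the usual lookup order: reuse an exact translation-memory hit first, and fall back to machine translation, constrained by glossary terms, only when there is none:

```python
def translate(source: str, tm: dict, glossary: dict) -> str:
    """Prefer an exact translation-memory match; otherwise fall back to MT.
    The MT step is simulated here by substituting approved glossary terms."""
    if source in tm:                          # exact TM hit: reuse the approved translation
        return tm[source]
    translated = source
    for term, approved_term in glossary.items():
        translated = translated.replace(term, approved_term)
    return translated                         # a real MT engine would translate the rest

tm = {"Sign in": "Se connecter"}
glossary = {"cart": "panier"}
print(translate("Sign in", tm, glossary))         # Se connecter (from TM)
print(translate("View your cart", tm, glossary))  # glossary term applied; MT would finish the job
```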
Changing the role and measurement of translation
Globalization managers have to think about hiring a team that becomes part of the regular QA process and is not paid by the word. This is a vendor partnering opportunity. You wouldn’t think of paying a functional QA person by the word, because that doesn’t relate to the task. The same goes for the QA approach I’ve described. In talking to some translators, I’ve found this represents an opportunity. I do find it troublesome that in the world of software translation, the norm is to rely on a translator to take a machine translation of bundles of short strings and rip through them with edits in as little time as possible, for payment by the word. It strikes me that retaining highly educated translation experts to translate a batch of projects averaging, say, 33 words each is a limiting business practice. But being a relied-upon LQA expert who regularly reviews new features and gives locale-specific or language-specific input is better for everyone.
It’s also hard for people to think differently when they see new solutions. An example is showing someone a much faster way to update a string, and yet some people still want traditional screenshots and bug reports. It’s what they are used to being measured by, even if a streamlined process makes those measures irrelevant.
The reader may be wondering where the TMS fits in.
The TMS’s calling is to be the arbiter of translation efforts and resources. The system I have described can operate entirely outside of a TMS, but it can also operate in partnership with one, depending upon the technical integration. The catch is that the workflow I’ve described doesn’t require a CAT tool or translator workbench, since corrections can happen in line with application screens. That said, a translator could review the translations in the traditional source/target paradigm. There are cases for this; in life sciences, for example, you may want a double human-translation process to be absolutely sure of the right translation.
This approach opens up localization to a new class of software providers who might not ordinarily justify localization into many markets due to processing overhead, minimum translation charges for iterations and lack of overall budget. In our experience we’ve seen application providers who were only thinking of one or two target languages leap to 10 or more. In some cases, due to the ease of QA updates, they are having in-country stakeholders and distributors review and update translations. I know that won’t scale well, but it gets a toehold for software teams who represent the future growth of our industry. And language is very much a part of the relationship we have with individual users.