Last September, I wrote a blog post about Slate Desktop, a new customised machine translation (MT) engine developed by Precision Translation Tools that was released in February this year. Since then, I’ve been testing Slate Desktop alongside the SDL Language Cloud Custom MT Engines solution. Here’s a report on my findings so far.
Some background facts
What is a customised machine translation engine?
A generic machine translation engine (think Google Translate) is the result of processing huge bilingual, parallel text corpora. It’s usually free or quite inexpensive for the end user. Customised MT is built on your own translation memories (TMs) and it produces more meaningful translations in your own fields and languages. What you get out depends entirely on what you put in. It’s not free (see below for prices).
How and where do you build your own MT engine?
Both Slate Desktop (Slate) and SDL Language Cloud Custom MT Engine (Language Cloud) need to be fed a TM with at least 90,000-100,000 units. Fewer units may be fine in very narrow fields. The engine-building process is fairly straightforward in both cases, but it’s time-consuming.
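Before committing to a build, it’s worth checking how many units a TM export actually contains. As a rough sketch (the file name is a placeholder, and this assumes a standard TMX export from your CAT tool), the translation units can be counted with a few lines of Python:

```python
import os
import xml.etree.ElementTree as ET

def count_translation_units(tmx_path):
    """Stream through a TMX export and count its <tu> (translation unit) elements."""
    count = 0
    # iterparse streams the file, so even a very large TMX won't exhaust memory
    for _, elem in ET.iterparse(tmx_path, events=("end",)):
        if elem.tag == "tu":
            count += 1
            elem.clear()  # discard the element's content once counted
    return count

# "my_memory.tmx" is a placeholder for your own exported TM
if os.path.exists("my_memory.tmx"):
    units = count_translation_units("my_memory.tmx")
    verdict = "enough" if units >= 90_000 else "probably too few"
    print(f"{units} TUs: {verdict} to train an engine")
```

TMX is plain XML, so the same streaming approach can be extended to filter out very short or duplicate units before training.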
In the case of Slate, the engine is trained and stored on your own machine. Engine training time depends on your computer specs and TM size, but my 120,000-unit TM took about 4 hours with a powerful PC. (I actually left the process running overnight.) Apparently, adding new content and rebuilding an existing engine is just around the corner, which will save considerable time.
With Language Cloud, the Custom MT engine is built, encrypted and stored on SDL secure servers. Engine training time takes about the same time as Slate, but you have to upload the TM and then your request joins a queue, so it may take longer in practice. (My custom engines have taken between 4 hours and 2 days to build.)
In addition to the engine itself, both tools let you force specific terminology during MT look-up by giving priority to a glossary or dictionary.
With Slate this means adding a tab-delimited text file to a specific folder, and with Language Cloud you need to upload a TBX file. A MultiTerm termbase can easily be converted to either file format with the Glossary Converter from the SDL AppStore.
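For the curious, here is a hypothetical sketch of what that conversion involves. In practice the Glossary Converter does this for you; the element layout below follows the basic TBX “martif” skeleton, and the language codes, example terms and file name are just placeholders:

```python
import xml.etree.ElementTree as ET

def glossary_to_tbx(pairs, src_lang="es-ES", tgt_lang="en-GB"):
    """Build a minimal TBX ("martif") tree from (source term, target term) pairs."""
    martif = ET.Element("martif", {"type": "TBX", "xml:lang": src_lang})
    header = ET.SubElement(martif, "martifHeader")
    file_desc = ET.SubElement(header, "fileDesc")
    ET.SubElement(ET.SubElement(file_desc, "sourceDesc"), "p").text = "Converted glossary"
    body = ET.SubElement(ET.SubElement(martif, "text"), "body")
    # one termEntry per glossary row, with one langSet per language
    for src_term, tgt_term in pairs:
        entry = ET.SubElement(body, "termEntry")
        for lang, term in ((src_lang, src_term), (tgt_lang, tgt_term)):
            lang_set = ET.SubElement(entry, "langSet", {"xml:lang": lang})
            ET.SubElement(ET.SubElement(lang_set, "tig"), "term").text = term
    return ET.ElementTree(martif)

# the same rows a tab-delimited Slate glossary would hold: source<TAB>target
pairs = [("promotor", "sponsor"), ("acontecimiento adverso", "adverse event")]
glossary_to_tbx(pairs).write("glossary.tbx", encoding="utf-8", xml_declaration=True)
```

The tab-delimited file Slate expects is the trivially simpler half of the same data: one `source<TAB>target` line per term pair.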
Slate also offers the option of adding a text file for variables, which works like a terminology file, but isn’t language specific.
Both Slate and Language Cloud are integrated in Studio as Automated Translation providers. They can be added to a specific project under Project Settings > Language Pairs > All Language Pairs > Translation Memory and Automated Translation > Add, or under General Settings (File > Options).
Slate needs the free plug-in for Studio 2015 and it doesn’t work in earlier versions. However, it also connects with memoQ, OmegaT and CafeTran, and as a standalone application it can be used to pretranslate XLF and other file formats.
Slate Desktop costs $549.00 for a permanent license and it comes with a 30-day trial. You can build any number of engines and add any number of terminology files. Slate can be installed on two machines and it’s easy to transfer your engines from one to another.
The SDL Language Cloud Custom MT Engine package costs $90.00/month and you can try it out for free for 30 days. The subscription allows you one customised engine and one dictionary. It works on any machine where SDL Trados Studio is installed and the package includes other features that don’t require Studio at all, such as MS Office add-ins.
When I started looking at customised machine translation solutions last year, I was aware that much of my work wouldn’t be suitable at all.
Articles for medical journals, for example, are written in long-winded sentences in my source language and need considerable reworking and rewriting in English. They discuss new procedures that I haven’t translated before, so not even my TMs are particularly useful, let alone machine translation.
However, clinical trials are quite repetitive, have standardised terminology and should be written clearly, in fairly short sentences. These three aspects make this field a potential candidate for customised machine translation. I thought that informed consent forms, ethics committee letters and back translations of these documents would be good texts to try out with my customised MT engines.
Happily, since Slate and Language Cloud are based on my own TMs, they don’t make the typical mistakes that Google Translate makes. They know that in clinical trials, promotor in Spanish is not promoter in English:
Short, simple segments are generally easy for my custom machine engines. Here, both alternatives are fine:
(Although Language Cloud didn’t realise that análisis was plural)
Longer segments are much more problematic:
(Here, Language Cloud does a better job than Slate).
Sometimes, length has no bearing on the result:
(Here, Slate manages just one of these drug names, whereas Language Cloud gets them spot on.)
(Here, Language Cloud took too long to look up a segment that was not too long or difficult to understand, while Slate almost got it spot on.)
Understandably, convoluted source segments are non-starters:
(Best to start from scratch here)
Both tools have problems dealing with upper case segments:
Slate has problems with initial capitals and end punctuation:
Square brackets trigger rogue code in Slate:
Language Cloud times out if the look-up takes too long:
Language Cloud embeds tags correctly; Slate omits all tags:
Unfortunately, however, the time-out issue sometimes means that tags are included, but everything else is omitted:
Feedback from the developers
Slate Desktop has several technical issues, as seen in the screenshots above. I asked Slate’s developer, Tom Hoar, what is being done to sort them out, and he provided the following responses:
1. Lack of tags: Yes, Slate Desktop v1.x removes all tags from the source segment and does not attempt to place them in the target. Our roadmap includes support to clone XLIFF 1.x and 2.x inline elements and place them as closely in the target element as possible. Our timeline and release version are TBD. As an interim step, we could clone the tags and simply place them at the end of the target segment.
2. End punctuation inconsistencies: Possibly fixed with a bug-fix implemented in the upcoming v1.1.
3. Rogue code: These are “escape sequences” that temporarily replace the open/close square brackets. A bug in Slate Desktop allowed them to leak through to the end-user (i.e. not temporary). Another customer reported this and we fixed it in the upcoming v1.1 release.
4. Upper-case usage: Slate Desktop restores target language casing according to the casing and spacing found in the TMs’ target segments, without regard to the source language casing. “Fixing” this example with a broad rule that copies source language casing could “break” otherwise desirable results for other language pairs, products or contexts. [We will probably try to] fix this specific use-case – where the entire source language segment is upper case – using a rules-based rather than a corpus-based approach, to simply ensure the target follows the source casing. As Slate Desktop’s user interface matures, we will add user-configurable options to enable/disable different features like this one.
The ability to make these kinds of fine-tuning choices shows that Slate Desktop is a hybrid (statistical and rules) system. The statistical Moses system is at its core, and rules-based pre-processing and post-processing can accommodate a wide range of modifications. Translators with a personal interest in experimenting with these features can contact me. Eventually, we envision an SD community that creates and shares their experiments as features for everyone.
SDL Language Cloud Custom MT Engines has ironed out most technical issues (apart from the upper case bug) but it has some serious logistic issues. I asked my Language Cloud contact at SDL, David Pooley, for some feedback and he responded as follows:
1. Upper-case usage: This appears to be a peculiarity with the engine training as our machine translation engines usually return translations in sentence case. If you translate the same text through FreeTranslation.com then you would get a translation of “Internal revision number:” which, even then, is potentially in the wrong case for your requirements. I will raise this observation with the engineering team and we will investigate in due course. When we do implement fixes, since SDL Language Cloud is a SaaS product, you’ll get them automatically and you’re always using the latest version.
David also mentioned (in case anyone doesn’t know) that you can toggle upper and lower case text by selecting it and pressing Shift+F3.
2. Server connection and time-out issues: We have a number of solutions built on SDL Language Cloud, which include the integration with SDL Trados Studio as well as the SDL Translate mobile app and FreeTranslation.com. Recently it appears that we have become the victim of our own success, with a big increase in traffic, some of which is being generated by users looking to abuse our free translation offerings. We are aware of this and are taking steps to:
- Prevent users abusing our service
- Increase our server infrastructure to deal with the increased demand
- Ensure that the SDL Trados Studio integration is more robust and deals gracefully with failed connections to SDL Language Cloud
3. Price. [My comment: There’s a huge range of subscription options, but only the very top one, Specialist, offers a single custom engine for $90/month. It seems that you’re not targeting this project at freelance translators at all at this price?]
We created this package for the freelance market. We realize the price is higher than the other packages, but the benefit of being able to train a custom engine is significant. We are also currently working on new features and functionality that will be more appealing for freelance translators so watch this space. I will, however, illustrate an ROI calculation that you may find surprising (and I’ll err on the low side for some of these figures). A freelancer translating 250 words per hour at $0.05 per word would be earning $12.50 per hour. If using a personalized engine increases that productivity by 20% then the new hourly revenue is $15.00 (an increase of $2.50 per hour) and the $90 would be recouped in 36 hours which is roughly one week and yields up to $270 “profit” for the remainder of the month. The 250 words, $0.05 per word and 20% increase are conservative numbers and it’s entirely possible that the cost would be recouped much quicker.
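David’s break-even arithmetic is easy to verify. A minimal sketch using his stated assumptions (250 words/hour, $0.05/word, a 20% productivity gain, a $90/month subscription):

```python
def hours_to_recoup(words_per_hour=250, rate_per_word=0.05, gain=0.20, monthly_cost=90.0):
    """Hours of MT-assisted work needed before the subscription pays for itself."""
    base_hourly = words_per_hour * rate_per_word   # $12.50/hour without MT
    extra_per_hour = base_hourly * gain            # $2.50/hour extra with MT
    return monthly_cost / extra_per_hour

print(hours_to_recoup())  # prints 36.0, i.e. roughly one working week
```

The same function also shows how sensitive the claim is to its inputs: halving the assumed gain to 10% doubles the break-even point to 72 hours, nearly half a month of billable time.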
Slate Desktop and SDL Language Cloud customised MT engines are new products that are still being developed and improved. My conclusions are based on the current builds.
Both tools produce useful suggestions for simple segments in my carefully-selected fields. I don’t pretranslate whole files with them, but set them to automatic look-up when my TMs don’t find a match above 85%. In the past, I’ve always set this threshold to 70%, but TM hits from 70-85% need considerable rewriting in any case and my customised MT engines tend to be more useful in this fuzzy bracket.
I think Language Cloud has the edge over Slate in terms of translation quality. Here, for example, it’s worth post-editing the Language Cloud suggestion, whereas the Slate version has substantial deviations and needs to be started from scratch:
Slate has solved the dichotomy between confidentiality and machine translation because the entire process takes place on your local machine. Client confidentiality cannot be breached.
With Language Cloud, my translation engine is stored in an encrypted environment and my source segments aren’t accessed by anyone else. But the cloud will never be as secure as my own computer.
On my machine, Slate look-ups take between one and three seconds, depending on segment length. Slate has the edge over Language Cloud not only in look-up times but also in making quick tweaks to a terminology file. Adding a couple of terms or variables for a specific project is a breeze.
Language Cloud performed well when I started testing it several months ago, but right now the lag is very significant (when it doesn’t time out completely).
Productivity / ROI
I personally don’t feel that my productivity increases by 20% as SDL suggest. For me, a customised MT engine is an additional tool in my toolkit, and it’s the sum of all these tools that makes me highly productive in terms of output per hour. Many years of experience also play a significant part in my productivity.
All in all, I’m happy to be on board the MT train. I can’t wait to find out which station comes next.
Next station will be very interesting Emma, and it’s not so far away now – watch this space!
You should be in marketing, Daniel, not product management 🙂
Interesting stuff, and even though I don’t use Trados, I’ve been vaguely considering getting Slate, so the info purely about Slate itself is useful – thanks for your efforts. Something of a contrast with the review here, https://www.facebook.com/IAPTI/posts/1144298255630197:0, in case you haven’t seen it. There again, the fields/languages are different and it’s early days. Bit surprised at the low quality of longer segments. MemoQ already makes a fair fist of short segments, so it’s hard to see where Slate gains much at this point.
Hi Charlie, Yes, I read Loek’s post when he published it on another FB group. I can see Loek’s points – you do get “treacherous” errors sometimes, but that’s part and parcel of any MT system. I can spot them OK, so they don’t worry me.
I don’t agree with his conclusions. A customised MT engine isn’t about me giving discounts to anyone. It’s about getting a machine to help me with the more boring parts of my work a bit more quickly, so that I can concentrate on the complex parts that machines can’t begin to solve for me 🙂
Absolutely, and what’s more I detest talk of “slavery” in this context where we all have complete freedom to walk away from any situation. I detest it so much I almost didn’t post the link, in order not to give it more publicity. But the comments above the conclusions are worth reading from the point of view of other people’s experience with the actual product.
Thank you, Charlie, for posting that link. Despite our differences, I thoroughly enjoyed working with Loek. The experience even helped us fix a bug, so his evaluation work was a service to all SD customers. I commented on FB and asked Loek to describe his refund process.
One thing Loek didn’t mention was how much time/effort he had invested in setting up memoQ’s segment assembly with his “terminology database, which is greatly catered to the stuff I translate. It contains words, phrases, verb conjugations, et cetera, all case-sensitive.” Clearly, his Facebook post reflects his pride in that effort, and rightfully so. More often than not, we hear from customers that these CAT features are too complicated and time-consuming to setup, so they never benefit from them.
In contrast, Emma created an engine in ~4 hours. Loek said it took 1.5 hours to generate his engine on, in his words, “a monster laptop with 64 GB of RAM and lots of other bells and whistles”.
These are only some of the ways everyone’s experiences will vary. While I have no control over anyone’s exact linguistic results, our goal is to provide a consistent, reliable and trustworthy customer experience.
Thanks for a very interesting article! But I think David Pooley is a bit off the mark: Not only because the productivity increase of 20% may be too high an estimate, as you state yourself, but also because probably few people will use that single custom engine for more than a part of their work. It may still be a worthwhile investment, but the profit he calculates “for the remainder of the month” will hardly be so.
As I stated in my reply to Emma, those numbers are quite conservative. The hourly translation rate (250) should be higher and the price per word ($0.05) is rock bottom. Using our internal translators, we’ve done our own analysis for productivity gain for editing from scratch vs. post-editing MT output and we have seen increases up to 40% so I thought 20% was reasonably comfortable for illustration. I do accept, however, that when used in conjunction with TM and given that you won’t be using the engine all of the time then you may not see the same results.
Thank you for taking the time to test these two tools and share your conclusions with us. Very interesting!
Very interesting, Emma. I’ve always felt like the SDL Language Cloud price options aren’t quite right, not necessarily in terms of price, but in terms of the combinations you can choose, for example the number of dictionaries and industry engines available with the various options. I haven’t tried the high-end option yet, for example, but I’m discouraged by seeing that you don’t get any industry engines and only one dictionary with the single custom engine option. I think it would be better if there were a sort of “build-up” scheme where you can choose the things you need rather than having to settle for pre-packaged combos. I would also agree that a 20% productivity increase sounds too high, and while less experienced translators may see the highest productivity boosts from MT, I think they may also be less inclined to spend much on MT solutions, but that may be just my perception. I’ve been considering Slate for a while, since reading your first post about it, but on most days I can’t decide whether it’s worth the investment.
The concept of MT, especially SMT, always puzzled me.
Algorithms do not translate. Algorithms solve mathematical problems, and even the best of them can go only as far as the mathematical model allows. Turning translation, a cognitive process, into a mathematical problem never seemed the right approach to me. To put it in mathematical terms: translation is a one-to-many function, while algorithms in general excel at solving many-to-one functions.
This is what SMT does. It takes bitext corpora and, through mathematical and statistical calculations and manipulations, tries to solve translation as a many-to-one function. In other words, it is only as good as the bitext data it is built on, because all it does is try to “reverse engineer” existing translations.
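As a toy illustration of that point (this is not how Moses or any production SMT system works; real engines use word alignment and phrase tables, and the tiny bitext below is invented), translation candidates can be “learned” purely by counting co-occurrences:

```python
from collections import Counter

# a tiny invented bitext in the clinical-trials domain
bitext = [
    ("el promotor del ensayo", "the sponsor of the trial"),
    ("el promotor notificará", "the sponsor will notify"),
    ("el comité ético", "the ethics committee"),
]

def association_scores(bitext):
    """Count source/target word co-occurrences across sentence pairs."""
    pair_counts, tgt_counts = Counter(), Counter()
    for src, tgt in bitext:
        src_words, tgt_words = set(src.split()), set(tgt.split())
        for t in tgt_words:
            tgt_counts[t] += 1
        for s in src_words:
            for t in tgt_words:
                pair_counts[s, t] += 1
    return pair_counts, tgt_counts

def best_translation(word, pair_counts, tgt_counts):
    """Pick the target word most strongly associated with `word`."""
    candidates = [
        (count / tgt_counts[t], count, t)
        for (s, t), count in pair_counts.items()
        if s == word
    ]
    return max(candidates)[2]  # highest ratio wins, ties broken by raw count

pair_counts, tgt_counts = association_scores(bitext)
print(best_translation("promotor", pair_counts, tgt_counts))  # prints "sponsor"
```

The counts pick out “sponsor” rather than “the” only because each score is normalised by how common the target word is overall; exactly as argued above, the output can never be better than the bitext it reverse engineers.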
This brings us to the argument: “Yes, MT doesn’t translate (therefore should be more appropriately called Language Transformation Algorithm or Machine pseudo-Translation, in my humble opinion), but it could be useful as a productivity tool”.
This argument is too generalized and oversimplified in my opinion. Productivity depends on so many factors: experience, specialization level, one’s own quality standards, and general workflow, to name a few. But even in a controlled environment in which all the many variables are equal, if one has small, poor, or otherwise not very meaningful TMs, the quality of the MT engine will match them, no matter how good the algorithm is claimed to be. And if one has large, properly managed, and meaningful TMs (and termbases and other resources such as customized segmentation rules and non-translatable variables), I just don’t see the big intrinsic advantage over using the TMs.
Indeed, if you consider the MT engine as a “fuzzy match engine” (which is another argument I often hear), it could, in some circumstances, fill the current gap in the 70%-80% fuzzy band, where matches are typically not very helpful to work with linguistically but may contain the right terminology and converted/localized numbers and units. However, I suspect that (even rather basic) improvements to TM sub-segmentation logic could yield comparable results.
And then there is the issue of defining productivity. MpT might save one some typing (or give one the satisfaction of not staring at an empty cell with a blinking cursor), but it shouldn’t automatically be assumed that it saves time or increases output. It creates different error patterns that must be accounted for and adjusted to, and as a result it could negatively affect the translation process, as one focuses on identifying and correcting known MpT issues instead of on the translation itself.
MpT is just a tool, and like all tools it can benefit the user, but it can also put a user (especially an unqualified or irresponsible one) at risk.
Another apt term for SMT is “concordance search engine”, because the engine searches all concordance combinations in the source language to find the most frequent corresponding combination in the target language.
Emma – thank you for the informative and fair report.
I would be curious to hear your opinion of lilt.com. It takes a different approach, one more like autocomplete, and it adapts as you translate. We use it regularly in Transpiral, and some of the more savvy freelancers we introduced to it now use it on their own projects. An editor from Rolling Stone (the brother of an ex-colleague), who translates articles from the U.S. magazine into French, stayed with me in Dublin last weekend for a Bruce Springsteen concert, and he swears by it, so it seems to be helpful on non-technical text too.
Like Language Cloud any data uploaded is private but unlike Language Cloud and the MT Autosuggest plugin / feature, the Lilt suggestions come from the full multi-gigabyte language model on the server and not just the single target sentence proposal from the MT engine. We found this makes a big difference.
Thank you for your comment, John.
Lilt’s feature of combining its own MT engine with my resources sounds good. Similar to CafeTran, I believe?
I’ve looked very briefly at Lilt. I uploaded my tmx (this process was not straightforward – several attempts reported 0 segments after uploading) and translated some real-life files. I also ran through the same test file I used for this blog post (i.e., confidential data removed, etc.)
I thought about posting the results here, but decided against it because it’s not fair for me to compare 2 systems I’ve used for several months with one I’ve tried for a few days.
But since you’re asking, my immediate reaction was feeling out of control in a new interface, online, without Studio’s many features. I like to see and control tags, run a Regex search, add comments, filter by varied criteria, use tracked changes, run a customised QA check… the list goes on and on.
Slate has the big advantage that you can integrate it with your own CAT tool (no learning curve) and it enhances that tool’s features.
My very brief foray into Lilt showed it offered similar quality to Slate and Language Cloud, without all the benefits that I get from working in a fully developed desktop environment.
Emma, thank you for sharing your experiences. We have learned a lot and SD becomes a better product for everyone. I’d like to share some things that may not be so obvious.
SD actually allows customers to create & use an unlimited number of engines. If your TMs have sufficient TUs per job type, you can create different engines for each job. If a translator works in multiple language pairs, he/she can make & use multiple engines for each pair. All this at no extra cost.
This opens huge opportunities for those who want to experiment. For example, in your testing ground you can make engines to test which TMs might be useful for which job types. You might find a combination that works with your medical journal articles. Your educated guess is logical, but I’ve learned you don’t really know until you try.
We made SD so there’s no cost barrier to experimenting. As your TMs grow, you can add the new segments to SD’s inventory and regenerate new engines. They learn from the new segments and improve over time.
Furthermore, since you own the engines, you can share them with colleagues for free or profit (hopefully). Unlike sharing TMs, the recipient can’t reuse them for another purpose.
Hi Emma, thank you for your very informative post! It seems that SD can be a valuable alternative to cloud solutions. Since PTT continues improving Slate and customer support is very good, all technical issues should be solved in due time.
I agree with Tom’s words that “you don’t really know until you try”. I have tried it and I’m satisfied with the first results. My engine was built from a TM of ~68,500 TUs, and I’m on a rather slow machine (i3, 4 GB RAM). Nevertheless, Slate gave me 40% acceptable suggestions, and 13% of them were very good. That means I’ll continue to use Slate in my daily work. I recommend that everyone try it and share their results!
Thanks for your comment, Igor. I agree that technical issues will be solved sooner or later. Customer support couldn’t be better, and all my questions have been answered within a few hours, even at the weekend!
It’s good to hear that you’re getting useful results with smaller TMs and a slower machine. I echo your recommendation for people to try Slate out, especially with the 30-day money-back guarantee.