I feel the need, the need for… machine translation
Following an interesting late-night discussion with fellow linguists, I feel the need to chat a little more about MT. The argument was that you cannot translate unless you understand meaning. I don’t believe that’s true. I believe the number of possible word and letter combinations is finite – very large, but finite nonetheless. So, all you need is a computer system that can handle an incredible amount of data – say, billions of lines of text – plus some kind of clever AI engine that calculates probabilities. If we can eloquently translate the weather forecast – and I have not cross-checked this, so let’s just say that we can – then why shouldn’t a system be able to tackle a much more complex text?
Enter Google. A recent NYT article states that Google is using a so-called statistical approach and a few hundred billion words to create a model of a language (Source). This sounds very plausible to me. Now, there is an obvious flaw in this design: if the system checks thousands or millions of passages and their human-generated translations, who is to say that the human translation was flawless to begin with? But when I read the MT translation of The Little Prince, it is almost – eerily – better than its human equivalent! To be fair, the other MT translations in that same article did not impress me at all – but The Little Prince did.
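At its core, the statistical approach the article describes is counting: from a huge corpus, estimate how likely one word is to follow another, and prefer translations whose word sequences score high. Here is a minimal sketch of that idea in Python – a toy bigram model, where the tiny corpus, the function name, and the lack of smoothing are all my own simplifications, not Google's actual method:

```python
from collections import Counter

# Toy corpus standing in for the "few hundred billion words"
# a real system would use.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count each word pair (bigram) and each word's occurrences
# as the first element of a pair.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def bigram_prob(w1, w2):
    """Estimate P(w2 | w1) from raw counts (no smoothing)."""
    if unigrams[w1] == 0:
        return 0.0
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("the", "cat"))  # "the" occurs 4 times, once before "cat"
print(bigram_prob("sat", "on"))   # "sat" is always followed by "on"
```

With billions of words instead of a dozen, these probabilities become good enough to rank candidate translations – no understanding of meaning required, which is exactly the point I was arguing.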
So, in conclusion, I believe that decent-quality MT translation is not that far off, maybe in the next 10-15 years. But don’t quit your day job just yet – a sophisticated or literary text will always need to be proofread and fact-checked by a human. Change is the only constant – we just need to adapt and change the way we, as translators, work. And that’s not necessarily a bad thing.