I don't understand this. It's only six thousand words and it's the polishing that takes the time. How would it have taken weeks to do the initial draft?
And I don't have any skill in Russian, but I would say that his translation is not good, or at least was not thoughtfully made, based solely on the fact that he did not write the author's name in it.
Building up an epistemology isn't just recreational; ideally it's done for good reasons that are responsive to scrutiny, standing firm on important principles and, where necessary, conciliatory in response to epistemological conundrums. In short, such theories can be resilient and responsible, and facts based on them can inherit that resilience.
So I think it completely misses the point to think that "facts imply epistemologies" should have the upshot of destroying any conception of access to authoritative factual understanding. Global warming is still real, vaccines are still effective, sunscreen works, dinosaurs really existed. And perhaps, more to the point in this context, there really are better and worse understandings of the fall of Rome or the Dark Ages or Pompeii or the Iraq war.
If being accountable to the theory-laden epistemic status of facts means throwing the stability of our historical understanding into question, you're doing it wrong.
And, as it relates to the article, you're doing it super wrong if you think that creates an opening for a notion of human intuition that is fundamentally non-informational. I think it's definitely true that AI as it currently exists can spew out linguistically flat translations, lacking such things as an interpretive touch, or an implicit literary and cultural curiosity that breathes the fire of life and meaning into language as it is actually experienced by humans. That's a great and necessary criticism. But.
Hubert Dreyfus spent decades insisting that there were things "computers can't do", and that those things were represented by magical undefined terms that speak to ineffable human essence. He insisted, for instance, that computers performing chess at a high level would never happen because it required "insight", and he felt similarly about the kind of linguistic comprehension that has now, at least in part, been achieved by LLMs.
LLMs still fall short in critical ways, and losing sight of that would involve letting go of our ability to appreciate the best human work in (say) history, or linguistics. And there's a real risk that "good enough" AI can cause us to lose touch with such distinctions. But I don't think it follows that you have to draw a categorical line insisting such understanding is impossible, and in fact I would suggest that's a tragic misunderstanding that gets everything exactly backwards.
Certainly some facts can imply a certain understanding of the world, but they don't require that understanding in order to remain true. The map may require the territory, but the territory does not require the map.
“Reality is that which, when you stop believing in it, doesn't go away.” ― Philip K. Dick
"This score captures if there is nontrivial AI usage that successfully completes activities corresponding to significant portions of an occupation’s tasks."
Then the author describes how their job qualitatively matches its AI applicability score, given that they use AI to do most of their work for them.
If there's a lot of unmet demand for low-priced high-quality translation, translators could end up having more work, not less.
On the other hand, one day they will replace human beings. And secondly, if something like translation (or, in general, any mental work) becomes too easy, then we also run the risk of increasing the amount of mediocre work. Fact is, if something is hard, we'll only spend time on it if it's really worthwhile.
The same thing has happened with phone cameras. Yes, they make some things more convenient, but they have also produced a mountain of mediocrity, which isn't free to store (it requires energy and hence pollutes the environment).
Haven't even read it completely, but in contrast to the countless submissions regurgitating badly thought-out meta arguments about AI-supported software engineering, it actually seems to elaborate on some interesting points.
I also think that the internet as a primary communication and mass medium + generative AI evokes 1984, very strongly.
I don't believe the current race to build AI is actually about any productivity gains (which are questionable at best).
I believe the true purpose of the outsized AI investments is to make sure the universal answer machine will give answers that conform to the ideology of the ruling class.
You can read hints of that in statements like the Trump AI Action Plan [0], but also things like the Llama 4 announcement. [1]
[0] "Ensure that Frontier AI Protects Free Speech and American Values" - https://www.whitehouse.gov/wp-content/uploads/2025/07/Americ...
[1] "It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet." https://ai.meta.com/blog/llama-4-multimodal-intelligence/
Yeah, no. I find it funny how everyone from other specialties takes offence when their piece of "advanced" whatever gets put on a list, yet they have absolutely no issue making uninformed, inaccurate, and oversimplified remarks like "averaging machines".
Brother, these averaging machines just scored gold at IMO. Allow me to doubt that whatever you do is more impressive than that.
Well-put
And that is exactly why translators are getting replaced by ML/AI. Companies don't care about quality; that's why customer support was the first thing axed. Companies see it only as a cost.
This is exactly why tech companies want to replace those jobs with LLMs.
The companies control the models, the models control the narrative, the narrative controls the world.
Whoever can get the most stories into the heads of the masses runs the world.
pavel_lishin•2d ago
My mother reads books mostly in Russian, including books by English-speaking authors translated into Russian.
Some of the translations are laughably bad; one recent example had to translate "hot MILF", and just translated "hot" verbatim - as in the adjective indicating temperature - and just transliterated the word "MILF", as the translator (or machine?) apparently just had no idea what it was, and didn't know the equivalent term in Russian.
As a mirror, I have a hard time reading things in Russian - I left when I was ten years old, so I'm very out of practice, and most of the cultural allusions go straight over my head as well. A good translation needs to make those things clear, either in the translated text itself, or via footnotes that explain things to the reader.
And this doesn't just apply to linguistic translation - the past is a foreign country, too. Reading old texts - any old texts - requires context.
Joker_vD•1h ago
Well, "горячий" does have the figurative meaning "passionate" (and by extension, "sexy") in Russian, just as "hot" does in English. Heck, English is even worse in this regard: "heated argument", seriously? Not only does an argument not have a temperature, you can't change it either (since it doesn't exist)! Yet the phrase exists just fine, and it translates to Russian as "hot argument" with no problem.
No comments on "MILF" though. But I wouldn't be surprised if it actually entered the (youth/Internet) slang as-is: many other English words did as well.