IMO a small blog website is not going to get pulled up for this - it's about the author making a point. They're entitled to do so, of course.
Well, maybe not the typical engineering blog, but if you're a puritan, some posts/texts from Aphyr probably reach borderline "adult content", so I'm not that surprised Aphyr would rather play it safe and make a point at the same time.
Someone noting it is unavailable in the UK.
Someone posting an archive.is link.
Someone asking why the above posted an archive link to a static site.
An answer that it is because the content is otherwise unavailable in the UK.
Do we really need to see this every single time?
I realize I am not adding to the real discussion either, but Jesus Christ, this is irritating. Can we get a new rule that an author posting their own content, knowing it is unavailable in the UK, has to post their own archive link and explain why as part of the submission?
Relax, not everyone sees every article every day.
And obviously a way to filter in/out those flags.
[Author blocks link to avoid potentially being in violation of the law]
You ask the author to willingly provide a link and again potentially be in violation of the law
You do not see the irony in your question?
The biggest failure of imagination, I think, is the assumption we'd use humans for most (or *any*) of these jobs--for example, the work of the haruspex is better left to an LLM that can process the myriad of internal states (this is the mechanical interpretation field).
I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'. Roles that carry accountability will involve directing, providing context to, and verifying the output of agents, much like how millions of workers today know basic computer skills and Microsoft Office.
In my opinion, how at-risk a job is in the LLM era comes down to:
1: How easy is it to construct RL loops to hill-climb on performance?
2: How easy is it to construct an LLM harness to perform the tasks?
3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?
Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
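To make 1) and 2) concrete, here's a minimal sketch of the kind of harness I mean for sandboxed coding tasks - sample candidates from a model, score them with a verifiable reward (here, a test suite run under pytest), keep what passes. `generate` is a hypothetical stand-in for whatever model API you'd actually use:

    import pathlib
    import subprocess
    import tempfile

    def generate(prompt: str) -> str:
        """Hypothetical model call; swap in your actual LLM client."""
        raise NotImplementedError

    def reward(candidate: str, test_code: str) -> float:
        """Verifiable reward: 1.0 iff the generated code passes the tests."""
        with tempfile.TemporaryDirectory() as tmp:
            pathlib.Path(tmp, "solution.py").write_text(candidate)
            pathlib.Path(tmp, "test_solution.py").write_text(test_code)
            result = subprocess.run(["pytest", tmp], capture_output=True)
            return 1.0 if result.returncode == 0 else 0.0

    def best_of_n(prompt: str, test_code: str, n: int = 8) -> str | None:
        """Crude hill-climbing: sample candidates until one passes."""
        for _ in range(n):
            candidate = generate(prompt)
            if reward(candidate, test_code) == 1.0:
                return candidate
        return None

The catch, per 3), is that only the sandboxed slice of the job has a verifiable reward like this; the accountability and relationship parts of the role don't fit in the loop.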
On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human data companies produce data that is somewhat contrived, by definition of having people grade outputs on a contracting platform; plus there seems to be no upper bound on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the model's ability to find fresh context without error.
If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.
In the early days, it was going to turn banks from billion dollar businesses to million dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.
Earlier tech revolutions were unpredictable too... But at least retrospectively they made sense.
It's not that clear what the core activities of our economy even are. It's clear at the micro level, but as you zoom out it gets blurry.
Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.
This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" category fall outside of government control. And in countries where bribery/lobbying is legalized or ignored, they have the funds to capture the courts.
The government gains partial control, and the people under its control get partial protection.
"oh I'm sorry your hospital burned down mr plantiff but the electrician was following his professional rules so his liability is capped at <small number> you'll just have to eat this one"
My implementation speed and bug-fixing of my typed code used to be the bottleneck - now I just think about an implementation and it then exists. As long as I thought about the structure/input/output/testability and logic flow correctly and made sure I included all that information, it just works, nicely, with tests.
The Unix philosophy works well with LLMs too - you can have software that does one thing and only one thing well, fits in the model's context, and does not lead to haphazard behavior.
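As a toy illustration of the granularity I mean (names invented for the example) - a piece small enough that its entire contract fits in a prompt, with the test doubling as the spec:

    def dedupe_preserving_order(items: list[str]) -> list[str]:
        """Remove duplicates while keeping first-seen order.

        Does one thing, has no hidden state, and the whole contract
        fits in a model's context alongside its test.
        """
        seen: set[str] = set()
        out: list[str] = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

    def test_dedupe_preserving_order() -> None:
        assert dedupe_preserving_order(["b", "a", "b", "c"]) == ["b", "a", "c"]
        assert dedupe_preserving_order([]) == []

At this size a model has nowhere to hide haphazard behavior.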
Now my day essentially revolves around delivering (and improving how I deliver) concentrated engineering thinking, which in my opinion is the purest part of the engineering profession itself. I like it quite a lot.
Though something I half-miss is using my own software as I build it to get a visceral feel for the abstractions so far. I've found that testability is a good enough proxy for "nice to use" since I think "nice to use" tends to mean that a subsystem is decoupled enough to cover unexpected usage patterns, and that's an incidental side-effect of testability.
One concern I have is that it's getting harder to demonstrate ability.
e.g. GitHub profiles were a good signal, though one that nobody cared about unless the hiring person was an engineer who could evaluate it. But now that signal is even more rubbish. Even READMEs and blog posts are becoming worse signals, since they don't necessarily showcase your own communication skills anymore, nor how you think about problems.
The GitHub code itself may be irrelevant, but is the product KISS/UNIX? Or is it a demonstration of a complete lack of discipline about what "features" should be added? If you see something that has multiple weakly related or completely irrelevant features strung together, that's saying something. Additionally, AI will often create spaghetti structures and require human shepherding to ensure the structure remains sound.
Same with communication. I have AI smell; I know if something is AI slop. In my current job, docs I send with the expectation that others will read them are always prefaced with -- this section typed 100% by aperocky -- and I dispense with grammar and spelling checks for added authenticity. I'll then add -- following section is AI generated -- to mark the end of my personal writing.
I think that is the way to go in the future. I pass intentional thinking into AI, not the other way around. There is knowledge flowing back, for sure, but only humans possess intention, at least for now.
Yup. I've spotted former coworkers who I know for a fact can barely write in their native language, let alone in English, working for AWS and writing English-language technical blog posts in full AI-ese. Full of the usual "it's not X, it's Y", full of AI-slop. Most of the text is filler, with a few tidbits of real content here and there.
I don't know about before, but now blog posts have become more noise than signal.
I remember those days fondly and often wish I could return to them. These days it's not uncommon to go a couple days without writing a meaningful amount of code. The cost of becoming too senior I suppose.
They can take their 20+ years of experience and use it to build working systems in the gaps between meetings now. Previously they would have to carve out at least half a day of uninterrupted time to get something meaningful done.
How long do you think it'll take for the AI trend to mostly automate the parts of your job that still make you excited?
Everyone thinks it won't be them, it will be others that will be impacted. We all think what we do is somehow unique and cannot be automated away by AI, and that our jobs are safe for the time being.
We're literally trying to build an intelligence to replace us.
Whether the tool is too powerful or even ethical to use is an orthogonal discussion, in my opinion. Taken to the extreme, nuclear weapons still need someone to fire or drop them. (We should always have discussions on safety and ethics!)
Isn't this addressed explicitly in TFA, in section "meat shields"?
As for the rest, if you predict even the jobs described in TFA will be obsoleted by future LLMs+tools, then the future is even more dire than predicted by Aphyr, right? Fewer jobs for humans to do.
Edit: Further, the only time "meat" appears is in the phrase "meat shield", which is an analogy that is very apt relative to the crux of the article.
Edit 2: "People" appears 13 times!
A company like Amazon doesn't treat its warehouse workers as human beings. Workers are seen as disposable: forced to piss in bottles, forced to work around the corpses of their collapsed coworkers, paid the absolute minimum possible, and replaced the second they don't operate like a perfect, unfailing machine. You aren't viewed as a human; you are a tool. Cattle. A piece of meat they are forced to retain because a robot isn't quite capable of doing your task yet.
The article's use of "meat shields" isn't any different. Humans are going to be hired for the sole reason of taking accountability for actions dictated by AI. They are there only because the company can't put blame on a machine and will be sued to oblivion if there's nobody to blame at all. Your existence as a person is irrelevant, they are just interested in someone with a heartbeat they can blame when stuff inevitably goes wrong.