I don't understand these words. Does "AI-native workflow" mean vibe coding?
I am now seeing a lot of roles asking for "AI-enabled engineers". And I am not sure what that means either. I am sort of afraid to ask because the answer will probably confuse me even more. Maybe it's my understanding of what LLMs are and how they work that makes these words mean very little to me.
Cheaper, younger people who don't think vibe coding is bad.
Is this a good idea? Probably not.
It would be like hiring a junior to lead a team. They're the worst choice for that role.
"...In practical terms, GM is looking for people who know how to build with AI from the ground up — designing the systems, training the models, and engineering the pipelines — not just use AI as a productivity tool."
Man, the only advice I can give people is do not sacrifice time with your loved ones for a company that doesn’t give a shit. Your kid is only going to graduate once. Those family vacations are priceless in the long run. Hell, I take time off to hang out with my dogs now and then. The job can wait.
I've been down quite a few rabbit holes like that, which made me think that a lot of major 'issues' appear to be meticulously engineered to protect a certain set of interests at the expense of others.
It's like: "Damn, houses are expensive, I'm going to live in a caravan." Then you realize you can't park it on your own land without council approval... Then you find out that the council will never approve it due to it "negatively impacting the charm of the area."
Then you become homeless and realize that you can't legally put your tent anywhere and all the camping sites in the wilderness which you used to go to as a child now charge you fees to stay there and have rangers patrolling constantly (paid for by your own tax money you used to pay). Also, you can't get a job without an address and it's a literal catch-22... Then if you lose hope and start doing drugs, bad actors (possibly sponsored by foreign states) put fentanyl in the drug supply to finish you off. Then the media fully covers it up by distracting people with slop.
People are dying and it is covered up in the most targeted, effective way imaginable... They are not only killed, they are blamed for what is a system failure on the way out. "Should have gotten a job," or "Shouldn't have done drugs." And the people doing the most blaming and defending the system are passive-income shareholders who have a lot of time on their hands, sit at home all day, and further rig the politics in their favour. It's cooked all the way down.
It's like the dystopian book "Brave New World" is looking pretty good by comparison to where we're heading. At least in BNW, the "savages" had a designated reserve they could escape to.
It's really annoying how at some ASIL levels you need 100% code coverage from unit tests. With AI, all you have to do is get your agent to generate the tests! Likewise with all the MISRA C requirements. Need your cyclomatic complexity to be less than 10? It's just one prompt away! Now your spaghetti code can easily satisfy the safety requirements with much less effort.
AI can't be held accountable, so it shouldn't be writing the tests that determine whether car systems function correctly.
I hear this all the time. Why does it matter? Punishing a human for making a mistake does not prevent mistakes, nor does it undo the harm of the mistake. A human saying "my bad, I messed up" and an AI saying "my bad, I messed up" are equally worthless, in a functional sense.
Tell the family of the person killed by a semi truck driver who showed up to work drunk or high: "Don't worry - the driver went to jail! Accountability prevented anything bad from happening!"
Accountability alone fails to prevent deadly mistakes millions of times a day; millions of mistakes are avoided daily through process, redundancy, independent review, and formal methods.
"Accountability prevents mistakes" is a comforting delusion. In reality, accountability is only marginally related to whether or not mistakes are made.
A human also knows they might get punished if they mess up badly enough, which might make them think twice before doing something bad. For an AI there is a reward, but there is no risk.
So while both might lie, only the human will worry about being found out. That makes a difference.
What you are describing is a hypothetical "rational person". In real life, even the most rational people you know do completely irrational things routinely.
The Therac-25 engineers were accountable. The 737 MAX engineers were accountable. Accountability is doing much less work in the safety story than you seem to think.
The real work is done by process, redundancy, independent review, formal methods. None of these inherently require someone to be penalized for making mistakes, and penalizing people for making mistakes is a demonstrably, empirically unreliable mechanism for preventing mistakes.
If you want 100% coverage, you just autogenerate the test cases. LLMs can't properly check MISRA requirements, so they're really just a layer on top of existing automated checkers. Same for complexity metrics: the code doesn't get merged if it violates the threshold (or it's a vendor dependency you won't touch anyway).
If you care about the spirit of the rules, LLMs don't make that big a difference. If you don't care, there were already ways to game them. Either way they're an incremental change, not what I'd call a godsend.
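To make the "automated checker" point concrete: a complexity gate is just a static check that runs in CI and fails the build over a threshold. Here's a toy McCabe-style counter in Python for illustration only (the function name and threshold are mine; real pipelines use dedicated tools like lizard or pmccabe, and C code rather than Python):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Toy McCabe-style estimate: 1 + the number of branch points.

    Illustration only -- this ignores match/case, ternaries, and
    comprehensions, which real checkers also count.
    """
    complexity = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.ExceptHandler)):
            complexity += 1  # each branch adds one independent path
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1  # 'a and b and c' adds two
    return complexity

# A CI gate then simply rejects anything over the limit:
snippet = "def f(x):\n    if x:\n        return 1\n    return 0"
assert cyclomatic_complexity(snippet) <= 10
```

The point being: the gate only measures structure. An agent that inlines tests or flattens branches to satisfy the number hasn't made the code any safer, which is exactly the "spirit of the rules" distinction above.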
To say nothing of their cars.
Firing people with institutional knowledge? So what? It's going to improve profits short-term.
Never mind that the update cycle seems to be 6-10 months for changes like "You can now reset your radio presets directly from the radio settings menu", while bugs with things like temperature control never get fixed.
I would love to read from the engineers about why that stuff is the way it is; hmm, that might be good spelunking. I really must be missing something that makes it harder than I think it should be.
Carbage in, carbage out.