The problem is that they are directionally correct: it would be bad to have a patchwork of laws around AI. But the alternative is leaving it to Congress, which has consistently shown an inability to thoughtfully regulate or reform anything; it just passes mega spending bills and raises the debt limit.
Why would that be bad? And for whom?
Wouldn't it be better to have a variety of laws around something new, and figure out over time what is optimal? Wouldn't this be better than having one set of laws that can be more easily compromised via regulatory capture? Why the common assumption that bigger and more uniform is better? Is that to encourage bigger companies and bigger profits? Has that been a good thing?
Nor does the deficit (and at least a dozen other big issues)
The term "performative bad faith" comes to mind...
Right now, the US and China are in an AI war. The US is doing everything it can to stop China from making progress on AI, as if it were a nuclear bomb. And it just might be that consequential in 10 years.
Where I am now is past the "3 sleepless nights" of 'Co-Intelligence' fame.
If you haven't seen a properly contexted (50k-100k tokens, depending on the size of the project(s)) LLM work in a code repo, then you have no idea why so many of us are terrified. LLMs are already taking jobs. My company laid off 7% of its workforce directly because of LLMs' impact. I say that not because the CEO said it, but because I see it in my day-to-day. I'm a Principal Engineer and I just don't have need of Juniors anymore. They used to be super useful because they were teachable: after some training you could offload more and more work to them and free up your time for harder problems.
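For anyone who hasn't seen it, here's a minimal sketch of what "properly contexted" can mean in practice: pack relevant repo files into one prompt up to a token budget. The extension list and the chars-per-token heuristic below are illustrative assumptions, not anyone's canonical setup.

```python
import os

# Rough heuristic: ~4 characters per token for typical source code (assumption).
CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 80_000  # somewhere in the 50k-100k range mentioned above

SKIP_DIRS = {".git", "node_modules", "venv", "__pycache__"}
EXTENSIONS = {".py", ".ts", ".tf", ".md", ".sql"}  # tune for your repo

def pack_repo_context(repo_root: str, budget: int = TOKEN_BUDGET) -> str:
    """Concatenate source files into one prompt string until the budget is spent."""
    chunks, used = [], 0
    for dirpath, dirnames, filenames in os.walk(repo_root):
        # Prune directories we never want in context.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in sorted(filenames):
            if os.path.splitext(name)[1] not in EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                text = open(path, encoding="utf-8").read()
            except (UnicodeDecodeError, OSError):
                continue
            cost = len(text) // CHARS_PER_TOKEN
            if used + cost > budget:
                return "\n".join(chunks)  # budget spent; stop here
            chunks.append(f"=== {os.path.relpath(path, repo_root)} ===\n{text}")
            used += cost
    return "\n".join(chunks)
```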
With MCPs, LLMs aren't limited to the editor window anymore. My models update my JIRA tickets for me and rip content from the wiki into its markdown memory bank, which is kept in the repo and accelerates everyone's work. They connect to databases to find out schemas and example column data. Shit, as I'm typing this, one is deploying a new version of a container to ECR/ECS/Fargate with Terraform for a little project I'm working on.
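To make that concrete: here's a minimal sketch of an MCP server exposing a single JIRA tool, assuming the official `mcp` Python SDK (`pip install mcp requests`) and JIRA Cloud's v2 REST API. The environment variable names are placeholders.

```python
import os
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jira-tools")

JIRA_BASE = os.environ["JIRA_BASE"]  # e.g. https://yourorg.atlassian.net (placeholder)
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])

@mcp.tool()
def comment_on_ticket(issue_key: str, body: str) -> str:
    """Post a comment on a JIRA issue (e.g. a progress update from the model)."""
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/comment",
        json={"body": body},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return f"Commented on {issue_key}"

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio; point your editor/agent at this script
```

Once an editor or agent is connected to a server like this, "update my JIRA ticket" stops being copy-paste and becomes a tool call.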
I believe we are in the very early days of this technology. I believe that over the next ten years, we are going to be inundated with new potential for LLMs. It's impossible to foresee all the changes this is going to bring to society. The use cases are going to explode as each tiny new feature or new mode evolves.
My advice is to get off the sidelines and level up your skills to include LLM integrations. Understand how they work, how to use them effectively, and how to program system integrations for them... agents especially! Agents can be highly effective for many use cases. For instance: an agent that watches a JIRA board for new tickets containing prompts to be executed in certain repos, then executes the prompt and creates a PR for the changes, all in a context that is fully aware of your environment, deployment, CI/CD, secrets management, etc.
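A rough Python sketch of that JIRA-watching agent, to show how little glue it takes. The `llm-task` label, the JQL, the `coding-agent` CLI, and the environment variables are all hypothetical stand-ins; `gh pr create` is the real GitHub CLI, but swap it for your forge of choice.

```python
import os
import subprocess
import time
import requests

JIRA_BASE = os.environ["JIRA_BASE"]  # placeholders, as in the sketch above
AUTH = (os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"])
JQL = 'labels = "llm-task" AND status = "To Do"'  # hypothetical convention

def fetch_new_tickets() -> list[dict]:
    """Poll JIRA for tickets tagged for the agent."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": JQL, "fields": "summary,description"},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]

def run_agent_and_open_pr(issue: dict) -> None:
    """Run a headless coding agent on the ticket's prompt, then open a PR."""
    key = issue["key"]
    prompt = f'{issue["fields"]["summary"]}\n\n{issue["fields"]["description"] or ""}'
    branch = f"agent/{key.lower()}"
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    # Hypothetical: any headless coding agent that edits the working tree.
    subprocess.run(["coding-agent", "--prompt", prompt], check=True)
    subprocess.run(["git", "commit", "-am", f"{key}: agent changes"], check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], check=True)
    subprocess.run(["gh", "pr", "create", "--fill", "--head", branch], check=True)

while True:
    for issue in fetch_new_tickets():
        run_agent_and_open_pr(issue)
    time.sleep(60)  # naive polling; a JIRA webhook would be cleaner in production
```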
Anything will be possible sooner than we expect. It's going to impact the poorest people the most. A really cyberpunk reality could be upon us faster than we think, including starving masses struggling to get enough to even survive.
Furthermore, I think we are going to find less and less work for Juniors to do because Seniors are blasting through code at a faster and faster pace now.
I'm not the only one saying that the entry level market is already getting trashed...
There's a reality to contend with here. We all know that software developers have been coming out of school with decidedly substandard skills (and I am being very kind). In that context, the value they might add to an organization has almost always been negative. That is, without substantial training and coaching --which cost time, money and market opportunity-- they can be detrimental to a business.
Before LLMs, you had no alternative. With the advent of capable AI coding tools, the contrast between hiring a person who needs hand-holding and significant training and simply using AI is already significant, and it will become nothing less than massive with the passage of time.
Simply put, software development teams who do not embrace a workflow that integrates AI will not be able to compete with those who do. This is a business forcing function. It has nothing to do with not being able to or not wanting to train newcomers (or not seeing value in their training).
People wanting to enter the software development field in the future (which is here now) will likely have to demonstrate a solid software development baseline and equally solid AI co-working capabilities. In other words, everyone will need to be a 5x or 10x developer. AI alone cannot make you that today. You have to know what you are doing.
I mean, I have seen fresh university CS graduates who cannot design a class hierarchy if their life depended on it. One candidate told me that the only data structure he learned in school was linked lists (I don't know how that's possible). Pointers? In a world dominated by Python and the like, newbies have no clue what's going on in the machine. Etc.
My conclusion is that schools are finally going to be forced to do a better job. It is amazing just how many CS programs are horrible. Sure, the modules/classes they take have the correct titles. What and how they teach is a different matter.
Here's an example:
I'll omit the school name because I just don't want to be the source of (well-deserved, I might add) hatred. When I interviewed someone who graduated from this school, I came to learn that a massive portion of their curriculum was taught using Javascript and the P5js library. This guy had ZERO Linux skills --never saw it in school. His OOP class devoted the entire semester to learning the JUCE library... and nobody walked out of that class knowing how to design object hierarchies, inheritance, polymorphism, etc.
Again, in the context of what education produces as computer scientists, yes, without a doubt, AI will replace them in a microsecond. No doubt about it at all.
Going back to the business argument. There is a parallel:
Companies A, B and C were manufacturing products in, say, Europe. Company A, a long time ago, decides they are brilliant and moves production to China. They can lower their list price, make more money and grab market share from their competitors.
Company B, a year later, having lost 25% of their market share to company A due to pricing pressure, decides to move production to China. To gain market share, they undercut Company A. They have no choice on the matter; they are not competitive.
A year later A and B, having engaged in a price war for market share, are now selling their products at half the original list price (before A went to China). They are also making far less money per unit sold.
Company C now has a decision to make. They lost a significant portion of market share to A and B. Either they exit the market and close the company or follow suit and move production to China.
At this point the only company one could suggest acted based on greed was A during the initial outsourcing push. All decisions after that moment in time were about market survival in an environment caused by the original move.
Company C decides to move production to China. And, of course, wanting to regain market share, they drop their prices. Now A, B and C are in a price war until some form of equilibrium is reached. The market price for the products they sell is now one quarter of what it was before A moved to China. They are making money, but it is a lot tighter than it used to be. All three organizations went through serious reorganizations and reductions in their labor forces.
The AI transition will follow exactly this mechanism. Some companies will be first movers and reap short-term benefits of using AI to various extents. Others will be forced into adoption just to remain competitive. At the limit, companies will integrate AI into every segment of the organization. It will be a do or die scenario.
Universities will have to graduate candidates who will be able to add value in this reality.
Job seekers will have to be excellent candidates in this context, not the status quo ante context.
You may think your job's not at risk because you're a plumber. But you're not realising that you will be competing with millions of new plumbers fleeing AI-decimated industries, which will push wages down dramatically.
And what if China wins on AI, and Huawei can then produce tech gear dramatically superior to, and cheaper than, its global competitors'? Chinese tech would then dominate the globe, giving enormous power and control to the CCP.