There are definitely people abusing AI and lying about what it can actually do. However, crypto and NFTs are pretty much useless, whereas many people (including me) have already increased their productivity using LLMs.
This technology just isn't going away.
Some payment chains are painful: an awful lot of middlemen take a cut, and some impose burdens on the endpoint, like 90-day settlement debts, that could be avoided with some use of tech. None of this is about the hype, just modifications to financial transactions, and they could be done in other ways as well (as could the settlement ideas above).
NFTs follow the same logic as bearer bonds. They're useful in very specific situations of value transfer, and almost nothing else. The use isn't about the artwork on the front; it's the possession of a statement of value. Like bonds, they get discounted: the value is therefore a function of the yield and of the trust in the chain of signatures asserting it's a debt of that value. Not identical, but the concept, stripped of the ego element, isn't that far off.
Please note I think Bored Apes and coins are stupid. I am not attempting to promote the hype.
AI is the same. LLMs are useful; there are functional tools in this. The sheer amount of capital being sunk into venture plays is, however, disconnected from that utility.
The key blockchain requirement is allowing unrestricted node membership. From that flows a dramatic explosion of security issues, performance issues, and N-level deep workarounds.
In the case of a bunch of banks trying to keep each other honest, it's drastically simpler/faster/cheaper to allocate a certain number of fixed nodes to be run by different participants and trusted outside institutions.
One doesn't need to trust every node, just that a majority is unlikely to be suborned, and you'll know in advance which majorities are possible. The bank in Australia probably doesn't want or need to shift some of that responsibility outside the group, onto literally anybody who shows up with some computing power.
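As a minimal sketch (all names are hypothetical, not any real consortium's setup), the permissioned model described above boils down to a fixed, known validator set with acceptance by simple majority, rather than open node membership:

```python
# Hypothetical permissioned-validator check: a fixed, known set of participants,
# where a record is accepted once a simple majority of that set has signed it.
VALIDATORS = {"bank_a", "bank_b", "bank_c", "regulator", "auditor"}

def accepted(signers: set[str]) -> bool:
    """Accept a record if more than half of the known validators signed it."""
    recognized = signers & VALIDATORS              # ignore signatures from unknown nodes
    return len(recognized) > len(VALIDATORS) // 2  # simple majority of the fixed set

print(accepted({"bank_a", "bank_c", "regulator"}))  # True: 3 of 5 known validators signed
print(accepted({"bank_a", "some_stranger"}))        # False: only 1 recognized signature
```

You know exactly which majorities are possible because the membership list never changes unless the participants agree to change it.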
Lasers turn out to be useful for... eye surgery, and pointing at things, and reading bits off plastic discs, and probably a handful of other niche things. There just aren't that many places where what they can do is actually needed, or better than other more pedestrian ways of accomplishing the same thing. I think the same is true of crypto, including NFTs.
It's just like all the ICO, NFT, and other crypto launches, but for all the little things you can do with AI. Everybody or their bot has some new game-changing AI project. It's a tiring mess right now, which I do hope will similarly die down in time.
For clarity, I was a big fan of blockchain before it got bad, and still am for things like ZKPs and proof-of-authority, and I am similarly very excited about what AI enables, but (imo) one cannot easily argue there is not a spam problem that feels similar.
We’ll still have the “best code tooling ever invented” stuff, but if the market is assuming “intellectual workers all replaced”, there’s still a bubble pop waiting for us.
Other than by corrupt criminals and mafia types who have a need to covertly hide cash.
And then the current administration wants the government to 'protect' crypto investors against big losses. Gotta love it.
I’ve got an Argentinian friend who sends crypto to his mother because he pays less than 0.5% in fees and exchange rates instead of close to 5% using the traditional way. From now on I’ll call him a corrupt criminal.
And anyone who lives in a polity whose local currency may be undergoing rapid devaluation/inflation.
And anyone who needs a form of wealth that no local authority is technically capable of alienating them from; i.e., if you need to pack everything into a steamer trunk to escape being herded into cattle cars, you can memorize a seed phrase and no one can stop you from taking your wealth with you.
And any polity that may no longer wish to use dollars as the international lingua franca of trade or as the global foreign-exchange reserve currency, in order to reduce the degree to which its forex reserves prop up the American empire.
Sadly, all of these use cases appear increasingly relevant as time goes on.
You're describing the people that use actual cash to launder and hide, well, cash, and that have done so for centuries, long before crypto had even been invented.
A few web searches on <big bank name> + "money laundering scandal" (e.g. "HSBC money laundering scandal") can offer valuable insights.
There is no doubt crypto processes trillions of dollars of illegal cash. It's way easier than ever before for the illegal-cash industry to wash its money.
The future of the net was closed gated communities long before AI came along. At worst it’s maybe the last nail in the coffin.
AI is, I think, more mixed. It is creating more spam and noise, but AI itself is also fascinating to play with. It’s a genuine innovation and playing with it sometimes makes me feel the way I did first exploring the web.
1. AI slop PRs (sometimes giant). The author responds to feedback with LLM-generated replies and shows little evidence of having given any thought of their own to design decisions or the implementation.
2. (1) often leads me to believe they probably haven't tested it properly or thought about edge cases. As the reviewer, you now have to be extra careful (or just reject it).
3. A rise in students looking for a job or internship who expect that untested, LLM-generated code will earn them points because they have now "dug into the codebase". (I've had cases where they said they hadn't tested the code, but that it should "just work".)
4. People are now even lazier about cleaning up code.
Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.
I've been thinking about this recently. As annoying as all the bots on Twitter and Reddit are, it's not bots spinning up bots (yet!), it's other humans doing this to us.
Well, some of them are, but the bots' bot is spun up by a human (or maybe by bot n+1).
And little bots have lesser bots, and so ad infinitum...
My canned reply now is, "Can you link me to the documentation you're using for this?" It works like a charm; the clanker never responds.
They are. They’ve always been there.
The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.
We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.
It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.
AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.
To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things that waste their time. Intentionally or otherwise.
You can’t just argue that AI can be a benefit therefore everything is fine. The externalities of it, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume will we have much of value left by the time we get there?
This is the reason I get so angry at every pro AI post I see. They never seem to discuss the possible downsides of what they’re doing. How it affects the whole instead of just the individual.
There are a lot of people dealing with those consequences today. This video/article is an example of it.
Social media feels like parks smothered with smog.
It makes you stupid like leaded gas.
We'll probably be stuck with it forever, like PFAS.
What I found in the following week is a pattern of:
1) People reaching out with feature requests (useful)
2) People submitting minor patches of a few lines of code (useful)
3) People submitting larger PRs that were mostly garbage
#1 above isn't going anywhere. #2 is helpful, especially since these are easy to check over. For #3, MOST of what people submitted wasn't AI slop per se, but it just wasn't well thought out, was of poor quality, or was a feature that I simply didn't want in the product. In most cases, I'd rather have a #1 and implement it myself, in the way that I want the code organized, than have someone submit a PR with poorly written code.

What I found is that when I engaged with people in this group, I'd see them post on LinkedIn or X the next day bragging about how they contributed to a cool new open-source project. For me, the maintainer, it was just annoying, and I wasn't putting this project out there to gain the opportunity to mentor junior devs.
In general, I like the SQLite philosophy of "we are open source, not open contribution." They are very explicit about this, but it's important for anyone putting out an open source project to know that you have ZERO obligation to accept any code or feature requests. None.
AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model by which open source projects gain large numbers of dollar contributions, that the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.
For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...
It’s a tragedy-of-the-commons problem. Most of the available money is not controlled by decision makers who are ideologically aligned with open source, so I don’t see why they’d donate any more in the future.
They usually do so because they are critically reliant on a library that’s going to die, think it’s good PR, want to make engineers happy (I don’t think they care about that anymore), or think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).
More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.
2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is going to spend a sizeable fraction of the time reviewing and testing the feature that they would have spent coding it. You’re now back to 1.
Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.
Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.
If AI can make features without humans, why would I, as a profit-maximizing organization, donate that resource instead of keeping it in house? If we’re not gonna have human eyes on it then we’re not getting more secure, I don’t really think there would be any positive PR in it, and keeping it in house denies competitors resources that you now have and they don’t.
While I'd like to believe in the decency and generosity of humans, I don't get the economic case for donating money to the agent behind an open-source project when the person could spend the money on tokens locally themselves and reap the exclusive reward. If it really is just about money, that's the only move that makes sense.
Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.
Usually, getting stuff fixed on main is better than being forced to maintain a private fork.
I think this is true, but misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.
There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.
That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.
Give money to maintainers? No.
Give money to bury maintainers in AI Slop? Yes.
1. When people use LLMs to code, they never read the docs (why would they), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their open source libraries as a result. This is essentially what the Tailwind devs mentioned.
2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to stop accepting pull requests. If they do the latter, then they won't get any outside help (or at least, they will get less). Even if they don't, they now carry a higher burden than before.
I finally got around to trying Claude Code, and the code it generates and the debugging it does are pretty good.
Inb4 some random accuses me of being an idiot or shit engineer lol
Project after project reports wasted time, increased hosting/bandwidth bills, and all-around general annoyance from this UTTER BULLSHIT. But every morning we wake up, and it's still there, with no sign of it ever stopping.
LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect
If you're a skilled dev in a "common" domain, an LLM can be an amazing tool when you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils", and "small automations" that in the past you could never justify writing (a sketch of the kind of thing I mean is below).
I'm not a lawyer or a doctor, and I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain: I can work with it and take over when it goes off the rails.
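To make "minor tools and utils" concrete, here is a hypothetical example of the kind of throwaway script I mean: small, easy to eyeball-verify, and previously never worth the time to write by hand.

```python
# Hypothetical one-off util: print the ten largest files under a directory.
# The sort of thing that's now cheap to generate and quick to sanity-check.
import os
import sys

def largest_files(root: str, count: int = 10):
    """Return (size_bytes, path) pairs for the `count` largest files under `root`."""
    sizes = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                continue  # skip files we can't stat
    return sorted(sizes, reverse=True)[:count]

if __name__ == "__main__":
    for size, path in largest_files(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"{size:>12}  {path}")
```

The point isn't that this is hard; it's that the activation energy to write and test it used to exceed its value, and now it doesn't.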
I'm a long-time Linux user - now I have more time to debug issues, report them, and even open pull requests that I considered too time-consuming in the past. I want to, and now can, spend more time debugging the Firefox issues I see, instead of just dropping them.
I'm still learning to use AI well, and I don't want to submit unverified slop; it's my responsibility to provide a good PR. I'm building my own projects to get the hang of my setup, and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.
Someone can spam me with more AI slop than I can vet, and it can pass any automated filter I can set up.
The solution is probably closed contributions, because figuring out good contributors at scale sounds like figuring out how to hire at scale, which we are horrible at as an industry.
> But it's not improving like it did the past few years.
As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?
1. I write hobby code all the time. I've basically stopped writing it by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to that. I had zero users before and I still have zero users, and that is OK.
2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin; most of the time I'm not even sure it's a defect. Could it be intentional? I don't know. The best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on the first attempt, but maintainers are usually kind to newcomers because, who knows, one of those newcomers could become a maintainer one day. "Not everyone can become a great artist, but a great artist can come from anywhere."
LLMs changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next-token predictor to do the work? To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day, and it is simply not possible to go in depth on everything. We must pick our battles.
I almost forgot what I was trying to say. The bottom line is: if you are doing your own thing like I am, LLMs are great. However, I would ask everyone to have empathy and not spread our diarrhea into other people's kitchens.
If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?