However, two things are very specific to this case:
1- The dev received a donation, which might be a way for a crypto rug-puller to pump a coin. Kind of tangential, but it might be dirty money that the dev accepted. What usually happens is that the famous person is naïve, believes they really deserve the money, and then promotes a coin which gets rug-pulled. That's the basic shape, but it can take many forms, like sending a single prompt about cryptocurrency and causing moltbot to create a new coin.
2- There is a PoW effect in agentic vibe coding, poetically illustrated in GasTown. This parallel makes it possible that there's a very tight relationship between these two worlds.
I guess I really am just that out of touch with “AI” and cryptocurrency.
Dirtbag crypto people will spin up a coin in the name of someone's software product, give the project owner a bunch of coin, and make them feel special, like they're suddenly part of lots of money. Then they astroturf and pump the coin as much as they can before setting up for a rugpull: either the project owner tries to cash out, or the crypto folks finish the job themselves.
Cursor was popular because it was reselling OpenAI at a loss, so for 20 USD / month you could consume 200 USD of tokens per day, but now it's over.
Founders (coin minters) are leaving the ship.
The last ones to leave the ship are going to be left holding the bag.
If someone posts a GitHub link to some LLM tool, clawbot or whatever, you are free to run or fork it. Then some crypto bro creates a clawbot $coin... nobody is forcing you to buy the $coin.
which is likely what’s unknowingly being described here:
> However CLAWD coin tokens are kicking off right now and people are being lured into buying them as the hype grows.
I get that feeling. I suppose it's more about crypto than AI, where the first translates into "pyramid scheme" and the second to "hype".
Any kind of fraud must be rooted in someone's greed. In this case that's FOMO about some presumably magic discovery that's gonna change the world.
So nothing special you might have missed about AI or cryptocurrencies. It's just that those are relatively cheap and accessible technologies to create and transfer (presumed) wealth.
All that's left is serial bullshitters generally not delivering anything real or tangible whatsoever. But of course, them affiliating themselves with whatever is fashionable is entirely in character. That's what serial bullshitters do.
As far as I can see there's little to no overlap in the Venn diagram of crypto tech bro types and AI optimists/utopians. Neither group produces much technology. They mostly just move hot air.
And then there's a rather large crowd of skeptical yet open minded people actually getting some early results using or building various AI tools.
Most AI stuff on HN breaks into the AI bears (it's all bull-shit and going to end in tears, any minute now) and bulls (AGI is imminent and we're all going to be unemployed and then our AI overlords will kill us). And a few occasional rational things in between.
I'm in camp rational. Some cool/useful tools out there. Getting some tangible results using those. Clear and quite rapid progress year on year. Worth keeping up with. I don't worry about employment. I'm quite busy currently. All this AI stuff is generating lots of work and new business potential. And the AIs are not picking up the slack so far. If anything, there's a growing gap between what's possible and what's being realized. That's what opportunity looks like. I see a lot of business potential currently for somebody reasonably handy with AI tools.
some half-baked project that looks cool until you actually try it,
a flood of “look at me I’m first” blog posts and influencers hyping the hell out of it,
people and companies saying they’re building on it because they don’t want to be left behind,
a weird intersection with tokens/coins thrown in as an afterthought because hey, incentives, right? — and suddenly the narrative becomes “pump this thing hard”.
"x is different because we can actually do useful stuff with it" is what every x enthusiast deep in an x bubble or pump n dump says about x.
When the next big tech bubble comes along in 10 - 15 years, there will be people saying exactly what you just said: "NextBigTech you can actually use to build useful things in the world, and NextBigTech thing actually does that building, not just what LastBigTech thing (AI) did, that obviously didn't deliver the utopia it promised".
I wonder what it'll be. AGI? Quantum computing? Brain computer interfaces?
I'd love to pick up this conversation again with you in 15 years.
As for C and C++, there definitely aren’t fewer of them in absolute terms. And even in relative terms they are still incredibly popular.
All of that is beside the point though. The hype around 4GLs wasn’t that they would replace older programming languages. The hype was that they’d replace programming as a profession in general. You wouldn’t need programming specialists because domain experts could handle programming the computer themselves.
This is exactly the same hype around AI coding.
I see the dynamic as follows (be warned, cynical take):
1) there are the youth who are seeking approval from the community - look, I have arrived - like the person building the steaming pile of browser code recently.
2) there are the veterans of a previous era who want to stay relevant in the new tech and show they've still got mojo (GasTown etc.)
In both cases, the attitude is not one of careful deep engineering, craftsmanship or attention to the art, instead it reflects attention mongering.
2010: https://web.archive.org/web/20100226043552/http://www.progra...
Trustless Agents and the Agentic Economy is now in that cycle. Will it stick? Builders gotta build something.
2026: https://agentscan.info/
I am guessing this is sort of ProgrammableWeb 2.0.
Disintermediation is the common thread in all of this.
Will be interesting to see solutions arising for developers to monetize their open-source contributions. Pull Request = Push Demand, so perhaps there should be a cost attached to that, especially knowing that AI will eventually train on it.
Crypto was doing stuff in 2012; it contributed to a huge amount of global remittance payments even then, and probably still does now.
I was working with intelligence agencies, and crypto was being widely used in a variety of crimes too. Both of those are still probably true, and then there's now probably an entire industry shipping literally billions of $ around the world every day as settlement between exchanges in crypto.
As someone who was approached as an expert at the time, I was saying about crypto all the things you're saying to me now.
The point is I was right at the time: crypto was being used, and still is. You're right, AI is being used, and still is.
The problem, or the bubble or the pump/dump/parallel element is that the amount of attention and capital flowing around the area is vastly more than the current use cases and is therefore largely speculative.
This is true in AI too. Yes, people are using it already daily, but if everyone is already using AI for everything, then why do we need a few hundred billion dollars more of datacentres, chips, RAM and powergen? What's that for...? "Future AI stuff..." soooo.... speculative...?
Because everyone is already using AI for everything. That proves its value.
But of course the future isn't evenly distributed yet and only a tiny fraction of 1% of us are using AI all day so far. But once somebody gets converted they don't / can't go back to the old way. And converting them is pretty much instant.
Meanwhile I'm using AI coding agents to build a B2B SaaS.
Meanwhile crypto offers an alt banking platform used by many who have been debanked.
The point I'm making is that crypto exists purely as this alt investment, trading tokens to get rich. I'm skeptical it's actually being used as a currency in any real currency-fashion. For now it seems to be stuck in pump and dump schemes.
Meanwhile, AI is enabling people right now, today, to help build and learn things they normally wouldn't do.
Classic speculative AI developer;)
> Meanwhile, AI is enabling people right now, today, to help build and learn things they normally wouldn't do.
Are you sincerely suggesting nobody ever built or learnt anything from crypto?
I don't know what qualifies as real currency-fashion to you, but you can purchase things and services from many different places. At times when credit card payment processors are down, you can use bitcoin to pay. It doesn't have to replace a currency; it can be used as another way to spend or collect money.
It's also good in situations where you want to accept money without opening yourself up to risk, like a donation button. Selling a product via credit card carries chargeback risk; this method removes that risk.
In practice it works well and is being used. It's not replacing the dollar but it doesn't need to.
AI will offer us a utopia when we've finished rebuilding all of our electricity infrastructure and finally got enough AI datacenters, and stopped muggle humans buying memory and GPUs because AI needs them more.
I'm pretty certain I read the sentence above about crypto sometime around 2015.
AI -> Value is very obvious to me as a developer.
Blockchain -> ? What is the actual value? Something about decentralized finance and not having to trust anyone? And the tradeoff is every transaction costs $10 or more. It was always a dubious proposition with its "value" driven by speculative investment which fueled the hype machine.
Yeah there are parallels in that in all cases people got really excited about something tech and poured a bunch of money in, but the outcomes and actual amount of value derived can be wildly different.
The dotcom bubble was due to all the useless, speculative stuff people were doing with the internet, not the useful bits you referred to that are still around which we use today.
The AI bubble is coming from all the useless, speculative stuff people were doing with AI, not the useful bits you referred to that are still around which we use today.
... You see where I'm going with this, right?
Crypto use cases that are still around and get used today are not hard to find for anyone sincerely wanting to accept that they exist. I've already listed a few in other posts. That's not my point though.
My point is there's a speculative bubble around AI, and that's got a lot of parallels to the speculative bubbles around crypto and dotcom. Everything you've said supports the idea that you're unaware that you're talking from inside a bubble.
> It was always a dubious proposition with its "value" driven by speculative investment which fueled the hype machine.
Explain to me - without speculation or hype - why we still need trillions more datacentres, power, water, money and everything else for AI, if we're already using it and it's already here and we're already getting the most out of it?
- Likely to be a zero-sum game with a single winner. The players that have invested now do not want to be left behind.
- The improvements and capabilities of agents continue to grow. There is no reason to believe this will slow down any time soon
> There is no reason to believe this will slow down any time soon
Now you're trolling, surely?
> Now you're trolling, surely?
Models are getting better every month. Do you disagree?
All investment is kind of speculative: you're betting on the future, but typically for a reason.
A bubble, IMO, is what emerges when lots of people bet on the future purely because they see others betting on the future. People often don't realise they're doing it, like the people building AI SaaS apps. They think they're going to get rich because they think everyone is using the bubble tech.
Most of the apps are rubbish and could be implemented with something other than AI, same as a lot of crypto apps or dotcom websites in the bubble periods.
They look like they're useful in the bubble, because they're getting regular customers (as everyone comes in to try this newfangled AI/Crypto/dotcom tech) but once everyone's tried it, the only people who come back are the ones with the actual use for it, and there's never enough use to support the hype created in bubbles.
Almost every single blockchain "product" (outside of the peer-to-peer trustless currency) could have been a database.
This time the cost of entry of small software products has cratered.
For example, I was able to knock up a tool for a guide-maker for a niche game I play that gets about 500 peak daily players on steam.
The entire motivation for the tool is because I personally struggle to follow their well written guide. It takes a reasonable amount of focus and care to adjust a bunch of settings between "runs" based on the guide as written. Getting one of these wrong can set you back a bunch of time without even realising what went wrong.
These settings have an import/export feature in game, but that only allows for a few saved presets, and isn't easy to share.
So I've made a tool that lets people create, organise and share these presets.
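A share code for presets like this can be as simple as versioned JSON packed into a URL-safe string. This is a hypothetical sketch of that idea, not the actual tool's format:

```python
import base64
import json

def encode_preset(preset: dict) -> str:
    """Pack a settings preset into a URL-safe share code (hypothetical format)."""
    raw = json.dumps(preset, separators=(",", ":"), sort_keys=True).encode()
    return base64.urlsafe_b64encode(raw).decode()

def decode_preset(code: str) -> dict:
    """Recover the preset dict from a share code."""
    return json.loads(base64.urlsafe_b64decode(code.encode()))

# Example round trip with made-up settings.
preset = {"version": 1, "name": "Chapter 3 run", "settings": {"difficulty": 2, "loot": "rare"}}
code = encode_preset(preset)
assert decode_preset(code) == preset
```

Pasting a string like that into chat is usually enough "sharing infrastructure" for a tool with a single-digit user count.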
Literally the only user is likely to be this single guide maker. Possibly a few others might use it to consume their guides.
Without claude-code, it would never have been reasonable for me to invest the time to make the tool. It would have been an idle dream sitting on my "I wish I had the discipline to make this" pile.
But I don't have the discipline to make that kind of project. I'm too easily distracted, and I'd have got bored of the idea before I'd finished establishing all the boilerplate, let alone before ironing out all the bugs. I also don't have the front-end talent to make things look pretty with CSS.
The LLM doesn't get demotivated. It doesn't get bored, and it compressed the building of the prototype down to a day or two. Enough to keep my interest until feedback arrived. A week later, it's shipped with 50+ issues raised and fixed.
Yes, and it's trivial now to look at so many LLM startups and say "that could be a complex if/else statement" or "that could be an Alexa skill" or "I can do that already with my mobile phone".
Everything you've just described about the friction of doing your work, and how AI has solved it, is essentially what crypto promised and delivered for a certain subsector of finance, which is why crypto still has market caps in the trillions.
AI will do the same: make a notable change in a certain subsector of work.
My point isn't that AI is useless, or that it won't add value. It's hugely valuable and will change the world in ways people don't even realise, just like dotcom and crypto did and do. Right now though, the disruption and investment is disproportionate and speculative, which is why it has parallels to crypto and dotcom.
To people in the EU/UK who had free faster payments before Bitcoin was a thing, it never looked like an improvement at all.
The solution to expensive and slow banking was always political, not technical.
Crypto was purely speculative, because it was never solving real problems.
I'm not speculating about problems being solved, I'm out there solving real problems. No-one in "blockchain" ever got to say the same. It was always a promise of things being better. And for many people, things already were better than what was being promised.
AI only solved friction in places where work messed up, like giving developers enough time to program stuff.
> To people in the EU/UK who had free faster payments before Bitcoin was a thing, it never looked like an improvement at all.
To tech companies who were already content with their development team's velocity, AI never looked like an improvement at all.
> The solution to expensive and slow banking was always political, not technical.
The solution to developers not coding fast enough was always political, not technical.
> Crypto was purely speculative, because it was never solving real problems.
AI was purely speculative, because it was never solving any problems. (Sorry, I have to point out that higher up you listed a bunch of problems crypto was solving, and now you're saying it was also purely speculative, which is exactly the parallel with crypto that you were trying to argue against.)
> I'm not speculating about problems being solved, I'm out there solving real problems. No-one in "blockchain" ever got to say the same. It was always a promise of things being better. And for many people, things already were better than what was being promised.
Again, either you're right above when you said crypto solved problems where banking was bad, or you're right here where you're saying blockchain never solved anything.
You're going round in circles trying to find a way that AI isn't like crypto whilst giving more examples of how AI is like crypto.
Remittance, micropayments, unbanked people, unstable economies: all of these did, can and do have problems solved by blockchain.
I think it's funny that you highlight this, because for many blockchains, their native token is the transactional currency also.
Which opens up the possibility for a marketplace around it, as well as an incentive to grift to recoup one's investment.
AFAIK there's no similar market for LLM tokens (the price may fluctuate, but the AI companies set it, and they can't be resold), but the grift works by instead selling the outputs from using the tokens.
Is it grift if I want the output, and it's contributing usefully to the work I am doing now?
I have a filter for this kind of thing in the era of greedmaxxing (get-rich-quick schemes that are not new but change shape pretty often these days): be a late adopter.
To wait is to maximize information and efficiency in execution.
Let me introduce you to the wonderful world of "research." It's what happens when you're willing to spend money on things without immediate, obvious ROI. The real value often comes not from the resulting product, but from the lessons learned along the way. I also don't see what's wrong with showcasing the results of your experiments. How many developers have implemented a toy ray tracer and put it on their personal GitHub? No one in their right mind believes Pixar will use it for their next renderer, but should we conclude those people are inflating their CVs with bait? Or can we acknowledge it's a cool project to undertake, and pulling it off requires real skill? If individuals are welcome to do this, why can't organizations? I want to see more "we did a fun thing, here are the results." There's a playfulness in that approach I find refreshing. Just because it comes from a for-profit company doesn't make it cynical.
I thought only AI bots were born yesterday.
In some sense I just feel like this is another way to gamble, which in general is seeing an unprecedented growth with Polymarket and the likes. There is less faith in white-collar skills making you rich, so you just try your luck.
When the published "lessons" don't match up with what the experiment actually did, that's when people start asking questions. It's not just "boo, it didn't work"; there is a vast mismatch between what the research actually answered and what they claimed it answered.
> The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM.
If I cloned Pixar’s rendering library, called that, and then added ‘built a renderer from scratch’ to my CV, this would be entirely dishonest…
I use LLMs often and don’t hate Cursor or think they’re a bad company. But it’s obvious they are being squeezed and have little USP (even less so than other AI players). They are frankly extremely pressured to make up lies.
I don’t think I’d resist the pressure either, so not on a high horse here, but it doesn’t make it any less dishonest.
Unrelated: for CI, what hardware would people recommend? I'm choosing between a Mac Mini (M4 Pro) and a Mac Studio (M3 Ultra) but haven't dug into the CPU difference yet to understand what would be best. Opinions?
It is like all the garbage papers you find in academia that you need to sift through until you find that one good paper. Needle in a haystack.
2026 will be the year of vibe-code driven enshittification. Github will be the casualty.
I expect once users get burnt enough times, they'll stop adopting the new cool thing until it's been out long enough with consistent releases.
The truth is building a project is like a lottery ticket, and there are hard diminishing returns on time invested in quality in terms of payoff. If I told you you could spend 10x more time for a 2x increase in the probability of success, and you were trying to make a living from your creativity, you would be stupid to spend the extra time; it's a horrible investment.
The people spamming half baked projects that they quickly abandon if they don't get traction are being rational. People like me that grind on unsexy process bottlenecks and try to keep refining into something really nice are the irrational ones.
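The trade-off above is easy to make concrete. With made-up but plausible numbers, 10x the time for 2x the success probability cuts expected payoff per hour by 5x:

```python
# Hypothetical numbers: payoff if the project takes off, base odds, base time.
payoff = 100_000   # dollars if the project succeeds
p_base = 0.01      # success probability for a quick, half-baked release
t_base = 40        # hours for the quick version

ev_quick = p_base * payoff / t_base                   # expected dollars/hour, ship fast
ev_polished = (2 * p_base) * payoff / (10 * t_base)   # 2x odds, 10x time

# ev_quick works out to 5x ev_polished: polishing is the worse hourly bet.
print(ev_quick, ev_polished)
```

Whatever the exact numbers, the ratio only depends on the multipliers: a 2x odds bump for 10x effort always divides the per-hour expected value by 5.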
1. ▲ Moltbook (moltbook.com)
538 points by teej 8 hours ago | hide | 293 comments
2. ▲ Software Pump and Dump (tautvilas.lt)
108 points by brisky 5 hours ago | hide | 25 comments
3. ▲ OpenClaw – Moltbot Renamed Again (openclaw.ai)
256 points by ed 6 hours ago | hide | 110 comments
This is art. Imagine the group of parents at my kids' schools sending 100 to 300 messages per day on different subjects.
The issue is: I also have personal and important chats that I don't want to share with a vibe-coded AI software without any canaries taking the shot first.
And I'm talking as a person that is using almost all my Claude max subscription every week.
But I do verify ALL of the code that I'm delivering. And I'm even using Gemini as an adversarial LLM to review Claude-generated code.
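An adversarial-review loop like that can be sketched generically. The model calls here are stand-ins (any two LLM clients would do); only the prompt plumbing is shown, and the prompt wording is illustrative:

```python
def build_review_prompt(diff: str) -> str:
    """Ask a second model to attack code produced by the first (wording is illustrative)."""
    return (
        "You are an adversarial code reviewer. Find bugs, security issues, "
        "and unjustified assumptions in this diff. Be specific.\n\n" + diff
    )

def adversarial_review(diff: str, reviewer) -> str:
    """`reviewer` is any callable that sends a prompt to a second LLM (e.g. Gemini)."""
    return reviewer(build_review_prompt(diff))

# Stub reviewer for demonstration; in practice this would call a real model API.
fake_reviewer = lambda prompt: "LGTM" if "def " in prompt else "No code found"
print(adversarial_review("def add(a, b): return a + b", fake_reviewer))
```

Keeping the reviewer as a plain callable makes it trivial to swap which model plays the adversary.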
Does that gigantic project set any standards for this?
I was not able to find any in their documentation.
So it's funny indeed, but for now I'm upvoting this one, even though I'm a confident, moderate person.
:)
Often when you don't understand something you feel stupid; but sometimes the reason you don't understand is because somebody's trying to sell something to you, and it's that thing that's stupid, or pointless, or a scam, or all three.
I only skimmed the OpenClaw post, but unless I completely misunderstood the README in their GitHub repo, to me the benefits are stupidly obvious, and I was actually planning to look at it closer over the weekend.
The value proposition I saw is: hooking up one or more LLMs via API (BYOK) to one or more popular chat apps, via a self-hostable control plane. Plus some bells and whistles.
The part about chat integration is something that I wanted to have even before LLMs were a thing, because I hate modern communication apps with a burning passion. All popular IM apps in particular[0] are just user-hostile prisons whose vendors go out of their way to make interoperability and end-user automation impossible. There's too much of that, and for a decade or more I dreamed of centralizing all these independent networks for myself in a single app. I considered working on the problem a few times, but the barriers vendors put up were always too much for my patience.
So here I thought, maybe someone solved this problem. That alone would be valuable.
Having an LLM, especially BYOK, in your main IM app? That's a no-brainer to me too; I think it's a travesty this is not a default feature already. Especially these days, as a parent, I find a good chunk of my IM use involves manually copy-pasting messages and photos to some LLM to turn them into reminders and calendar invites. And that's one of many use cases I have for tight IM/LLM integration.
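Assuming the LLM has already pulled the title and times out of a message, turning that into a calendar invite is the easy, fully deterministic part. A minimal iCalendar sketch (field values are made up):

```python
from datetime import datetime

def make_ics_event(title: str, start: datetime, end: datetime) -> str:
    """Build a minimal iCalendar (RFC 5545) event from fields an LLM extracted."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{title}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Hypothetical event extracted from a school group message.
ics = make_ics_event("School bake sale",
                     datetime(2026, 3, 14, 9, 0),
                     datetime(2026, 3, 14, 12, 0))
```

The resulting `.ics` text imports into basically any calendar app, so the only hard part left is the extraction step the LLM handles.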
So here I thought maybe this project will be a quick and easy way to finally get a sane, end-user-programmable chat experience. Shame to see it might be vaporware and/or a scam.
--
[0] - Excepting Telegram, which has a host of other problems - but I'd be fine living with them; unfortunately, everyone I need to communicate with uses either WhatsApp or Facebook Messenger these days.
My mother's mother would physically drop in unannounced on the people she wanted to talk to, and they'd have tea and chat a while to coordinate events. This was reciprocal. You are probably already wealthy, and your time can be spent however you like; consider not optimizing it anymore.
Genuinely, why are you using your limited time on this earth doing everything in your power to poison serendipity? If texting identical things bores you, you have free time and free will, make it actually personal so neither of you will be bored. Break the social taboo and call! Or share a calendar like a normal parent or neighborhood group.
If one of my friends with school age kids coordinated with me via clearly prompted text I would assume that we were not as close as I thought we were. That I'm a 'target for personal PR' rather than, you know, a person. It would diminish us both.
- Automating the boring part of creating calendar invites and such from messages people send, which half of the time are photos of some announcements. LLMs are already a godsend here.
- Getting up to speed quickly on what's going on in various kindergarten groups I'm in, whenever a bunch of parents who don't work on a traditional schedule decide to have a spontaneous conference in late morning, and generate 100 messages in the group by early afternoon.
Etc.
I'm not trying to avoid communicating with people - on the contrary, I want to eliminate the various inconveniences (more and less trivial) that usually prevent me from keeping up.
I can just imagine that many people won't be using stuff like this to automate copy-pasting etc. but will literally let LLMs handle conversations for them (which will in turn be read by other LLMs).
"You free to chat?" "Always. I'm a bot." "…Same."
This post has been written by a human :)
Having a delegate to deal with communications is something people embrace when they can afford it. "My people will talk to your people" isn't an unusual concept. LLMs could be an alternative to human secretaries, that's affordable to the middle class and below.
I had high hopes for the OpenClaw approach too, but the 'security sirens' you mentioned are real—self-hosting a control plane that bridges to WhatsApp/Messenger is a maintenance nightmare if you actually value your privacy.
I’ve been tracking a project called PAIO (Personal AI Operator) that seems to be attacking this from the exact angle you’re looking for. It’s essentially a privacy-first integration layer that uses a BYOK (Bring Your Own Key) architecture. The goal is to provide that 'one-click' connectivity to the walled gardens (WhatsApp, etc.) without you having to sacrifice your data or build the bridge yourself from scratch.
It’s the first tool I’ve seen that treats AI as a personal 'operator' rather than just another chatbot. Might be worth a look if you’re tired of the manual slog but don't want to risk the security 'fire sirens' of unproven scripts. Have you found any other bridges that actually handle the WhatsApp/FB Messenger side reliably, or is everything still just a 'beta' promise at this point?
On one hand this is pretty obviously dumb but on the other maybe I'm just not 'getting it' and if shit-coin-speculators want to help finance OSS projects (vibe coded or no) why complain about it?
Not with a plan from Anthropic or OpenAI. It seems like using pure API is a status symbol among some developers. Look how much I spend on tokens.
I'm surprised anyone is still holding Bitcoin at this point... I thought everyone finally got with the program that crypto will never amount to anything...
Truly, the market can remain irrational...
Pump == experimentation/innovation; different people look at it differently, so you get a variety of interesting ideas.
Dump == natural consequence of over-supply, in this case whatever is not useful, we will drop.
But to invent/discover new things, new paradigms, we need that Pump.
1. Look at the early age of computers: we had so many different architectures and computer brands with their own hardware, now mostly converged to a couple of architectures
2. Operating systems: at some point everyone was writing operating systems, now converged to primarily 3
3. Programming languages: not converged to a small number of languages, but there were a bunch of them; same with databases
4. Frontend frameworks: converged around React & Vue.
5. Search engines
6. Social networks
We need that Pump
Maybe a bit different but I think it's worth pointing out how this parallels the state of the job market right now.
It is so hard to get hired, with so many moving and diverse frameworks, libraries, and technologies you are expected to know, that it's almost impossible to keep up and stand out.
The only way to do it is to develop "projects" that demonstrate your abilities in each target domain, and in these days of vibe coding these need to be more than sketches: full-fledged applications that can draw real attention to you and, if you're lucky, get on the front page somewhere.
And with vibe coding it can be done relatively quickly.
So we're in this state of new projects, very impressive looking projects, getting posted every day, all the time, and about 1% of them will see any kind of longevity because the vast majority will be dumped as soon as the author gets a job.
This makes it increasingly difficult to select dependencies for downstream work.
I had a guy crash out after I told him that "so and so said Thing was good" was not sufficient to say whether Thing was good or not.
I told him he needed to develop enough skill to determine that for himself or he'd constantly fall for hype.
My dude pasted a ChatGPT list of engineers who had ever said anything about LLMs and was like ARE THEY ALL WRONG??
... did you listen to nothing I said? lol
No one claimed “X said it’s good, therefore it’s good.” The point was that ignoring what experienced people say entirely is just as dumb as following them blindly.
You told me to “think for myself.” Great. Thinking for yourself doesn’t mean pretending expert opinion doesn’t exist. It means weighing it against your own understanding. That’s literally how learning works.
Calling it a “ChatGPT list” is just you dodging the question. If those people are wrong, explain why. If some are right for bad reasons, name them. Laughing and changing the subject isn’t an argument.
You’re shadowboxing a strawman and congratulating yourself for winning.
People who actually understand things can explain how they’re wrong. People who don’t just announce they’ve already reached a conclusion and declare further thought unnecessary.
If that’s your bar, then yes, no other reasoning is required, because none was applied in the first place.
“Crypto = scam” isn’t a filter, it’s a shortcut for people who don’t want to explain themselves. You didn’t analyze anything. You flinched and stopped.
What’s telling is how confident you are while saying nothing. No description of how the scam works. No incentives. No mechanics. Just “trust me, I’ve seen this before.”
If this is what you consider a clear negative signal, then yeah, everything must look very simple from where you’re standing. Simple is doing a lot of work for you here.
An appeal to authority is saying “X is true because this person said so.” That’s not what’s happening here. What’s happening is people treating expert opinion as evidence, not a verdict.
You say you don’t want appeals to authority, then you immediately offer your own opinion and expect people to take it seriously. Why? On what basis? Because it’s your judgment?
That’s the funny part. The moment you state an opinion, you’re asking others to weigh your credibility against someone else’s. You don’t escape authority, you just replace it with yourself.
Yegge’s opinion has weight because of his track record. It can still be wrong. Mine can be wrong. Yours can be wrong. That’s why people compare opinions instead of pretending they live in a vacuum.
Ignoring expert opinion entirely isn’t “independent thinking.” It’s just choosing to be uninformed and calling it a virtue.
Regardless, it’s a lot of words to again say “they are famous, so consider them more seriously” despite the obvious scam being perpetrated via crypto. The appeal to authority is you stating their credentials first, and none of the deductions you claim one should make from merit.
You keep restating a position no one is taking. No one said “they’re famous, therefore right.” That’s something you invented so you don’t have to argue against what was actually said.
Credentials don’t make an argument true. They explain why an opinion isn’t noise. Pretending otherwise doesn’t make you principled, it just makes you incurious.
If there’s an obvious scam, spell it out. If the reasoning is flawed, point to the step where it fails. You haven’t done either.
So far all you’ve contributed is tone policing, motive guessing, and now AI paranoia.
All your replies have a clear AI-slop smell, and you're not giving me any reason to assume otherwise, tbh. It’s more about whether you respect my replies enough to formulate your own answer, but given your appeal to authority, clearly you have no qualms allowing others (senior engineers, AI/LLMs) to determine them for you!
You haven’t engaged with the argument once. You’ve complained about credentials, then tone, then AI, then “respect.” That’s four pivots and zero substance.
I didn’t say “trust this person instead of thinking.” I said experience adds context. You keep pretending those are the same thing because otherwise you’d have to actually respond.
The AI accusation is just embarrassing. It’s the online version of “I can’t refute this, so I’ll imply you cheated.” That might feel clever, but it mostly signals panic.
If you’re as sharp as you seem to think you are, this shouldn’t be hard. Pick a claim. Explain why it’s wrong. Everything else is just noise you’re making to avoid that moment.
I also wasn’t aware that I’m speaking to someone who actually persistently appeals to authority and maintains a list of figures to bow down to: https://news.ycombinator.com/item?id=46783280
As such, this is where I get off this thread train. Seeya!
What actually happened is pretty obvious. You ran out of things to argue, so you switched to archeology. Scroll history, squint hard, invent a story, declare victory, announce departure.
That move isn’t rare. People do it when they don’t want to admit they’ve hit the end of their reasoning but still want to feel like they left on their own terms.
Confident people don’t need a backstory, a diagnosis, and an exit speech. They make the point and keep talking.
You're just quietly running away and hoping no one noticed.