
Show HN: Env-shelf – Open-source desktop app to manage .env files

https://env-shelf.vercel.app/
1•ivanglpz•2m ago•0 comments

Show HN: Almostnode – Run Node.js, Next.js, and Express in the Browser

https://almostnode.dev/
1•PetrBrzyBrzek•2m ago•0 comments

Dell support (and hardware) is so bad, I almost sued them

https://blog.joshattic.us/posts/2026-02-07-dell-support-lawsuit
1•radeeyate•3m ago•0 comments

Project Pterodactyl: Incremental Architecture

https://www.jonmsterling.com/01K7/
1•matt_d•3m ago•0 comments

Styling: Search-Text and Other Highlight-Y Pseudo-Elements

https://css-tricks.com/how-to-style-the-new-search-text-and-other-highlight-pseudo-elements/
1•blenderob•5m ago•0 comments

Crypto firm accidentally sends $40B in Bitcoin to users

https://finance.yahoo.com/news/crypto-firm-accidentally-sends-40-055054321.html
1•CommonGuy•6m ago•0 comments

Magnetic fields can change carbon diffusion in steel

https://www.sciencedaily.com/releases/2026/01/260125083427.htm
1•fanf2•6m ago•0 comments

Fantasy football that celebrates great games

https://www.silvestar.codes/articles/ultigamemate/
1•blenderob•6m ago•0 comments

Show HN: Animalese

https://animalese.barcoloudly.com/
1•noreplica•7m ago•0 comments

StrongDM's AI team build serious software without even looking at the code

https://simonwillison.net/2026/Feb/7/software-factory/
1•simonw•7m ago•0 comments

John Haugeland on the failure of micro-worlds

https://blog.plover.com/tech/gpt/micro-worlds.html
1•blenderob•8m ago•0 comments

Show HN: Velocity - Free/Cheaper Linear Clone but with MCP for agents

https://velocity.quest
2•kevinelliott•8m ago•2 comments

Corning Invented a New Fiber-Optic Cable for AI and Landed a $6B Meta Deal [video]

https://www.youtube.com/watch?v=Y3KLbc5DlRs
1•ksec•10m ago•0 comments

Show HN: XAPIs.dev – Twitter API Alternative at 90% Lower Cost

https://xapis.dev
2•nmfccodes•10m ago•0 comments

Near-Instantly Aborting the Worst Pain Imaginable with Psychedelics

https://psychotechnology.substack.com/p/near-instantly-aborting-the-worst
2•eatitraw•16m ago•0 comments

Show HN: Nginx-defender – realtime abuse blocking for Nginx

https://github.com/Anipaleja/nginx-defender
2•anipaleja•17m ago•0 comments

The Super Sharp Blade

https://netzhansa.com/the-super-sharp-blade/
1•robin_reala•18m ago•0 comments

Smart Homes Are Terrible

https://www.theatlantic.com/ideas/2026/02/smart-homes-technology/685867/
1•tusslewake•19m ago•0 comments

What I haven't figured out

https://macwright.com/2026/01/29/what-i-havent-figured-out
1•stevekrouse•20m ago•0 comments

KPMG pressed its auditor to pass on AI cost savings

https://www.irishtimes.com/business/2026/02/06/kpmg-pressed-its-auditor-to-pass-on-ai-cost-savings/
1•cainxinth•20m ago•0 comments

Open-source Claude skill that optimizes Hinge profiles. Pretty well.

https://twitter.com/b1rdmania/status/2020155122181869666
3•birdmania•20m ago•1 comments

First Proof

https://arxiv.org/abs/2602.05192
5•samasblack•23m ago•2 comments

I squeezed a BERT sentiment analyzer into 1GB RAM on a $5 VPS

https://mohammedeabdelaziz.github.io/articles/trendscope-market-scanner
1•mohammede•24m ago•0 comments

Kagi Translate

https://translate.kagi.com
2•microflash•25m ago•0 comments

Building Interactive C/C++ workflows in Jupyter through Clang-REPL [video]

https://fosdem.org/2026/schedule/event/QX3RPH-building_interactive_cc_workflows_in_jupyter_throug...
1•stabbles•26m ago•0 comments

Tactical tornado is the new default

https://olano.dev/blog/tactical-tornado/
2•facundo_olano•27m ago•0 comments

Full-Circle Test-Driven Firmware Development with OpenClaw

https://blog.adafruit.com/2026/02/07/full-circle-test-driven-firmware-development-with-openclaw/
1•ptorrone•28m ago•0 comments

Automating Myself Out of My Job – Part 2

https://blog.dsa.club/automation-series/automating-myself-out-of-my-job-part-2/
1•funnyfoobar•28m ago•1 comments

Dependency Resolution Methods

https://nesbitt.io/2026/02/06/dependency-resolution-methods.html
1•zdw•29m ago•0 comments

Crypto firm apologises for sending Bitcoin users $40B by mistake

https://www.msn.com/en-ie/money/other/crypto-firm-apologises-for-sending-bitcoin-users-40-billion...
1•Someone•29m ago•0 comments

AI Companion Piece

https://thezvi.substack.com/p/ai-companion-piece
83•jsnider3•6mo ago

Comments

amradio1989•6mo ago
I'm of the opinion that only humans are suitable companions for humans. Not dogs, not birds, not cats, and definitely not chatbots. This message needs to be stated more strongly, but it's against a ruling power's interest to do so.

So here we are.

luckylion•6mo ago
What about people who don't have human companions? Should they not have any companionship at all over having dogs, birds, cats, or chatbots?
MattGrommes•6mo ago
The problem I see is that since the chatbots are so easy to chat with, some people use them before they even try to do the work of getting human companionship. It's almost never true that it's impossible for a person to find other people to be friends with or chat with. I've known plenty of people who said they would never find a companion due to X, Y, and Z intractable reasons but who stumbled into strong relationships anyway. A chatbot is "companionship" in the same way candy is food.

I think animal companions are a different class than chatbots since they're not trying to be people so I make no comment on those.

luckylion•6mo ago
> before they even try to do the work of getting human companionship

Why do they have to "do the work" to be deserving of companionship, when most of us don't have to do anything because it comes naturally to us and we can relatively easily regulate the amount of companionship we want?

I fail to see the bad thing. For some people it's either a chatbot (or a dog) or no interaction at all. Should people starve instead of eating at McDonald's because that's "not real food"?

MattGrommes•6mo ago
Everyone deserves companionship, it's just that chatbots don't provide it. What I worry about is people who don't want to have conversations with people at work, or go do a hobby with other people, etc., and use a chatbot as an alternative when it's just a parrot pretending to be a person but providing no actual interaction. A chatbot has no needs, tells no embarrassing stories, requires no compromise, makes no promises, does no favors. That's why I said it was candy, not McDonald's. They provide no nutrition but sure taste good.
luckylion•6mo ago
That sounds to me like the argument against anti-depressants: that it's "not real", and you're not actually better, you're just addressing symptoms, not the cause. But my experience is very clear: that's a huge improvement.

Clearly people have needs, clearly they feel chatbots satisfy those to some degree (otherwise they wouldn't use them). To those people, it's an improvement, I don't see how that's a negative.

chowells•6mo ago
Chatbots are on a different list than the rest of those. Animals aren't human companionship, but they're still physical beings with physical needs that interact with you on their own schedule for their own reasons.

My cat will harass me if I'm on my computer after midnight. It's time to put the technology away and lie down where she can keep an eye on me. She's quite clear on this point. This is an entire category of interaction not available to chatbots. There is a difference in level of reality.

And when lacking human companionship, grounding to reality is really important. You've got to get out of your head sometimes.

amradio1989•6mo ago
People who don't have human companions should find them some human companions. They could settle for an illusion of companionship (as with pets), but every human can have the real thing. They NEED to have it and they ought to have it.

If you want a really hot take: AI chatbot companions are just an evolution of pets. They are a vaguely life-affirming substitute created to medicate human loneliness, for a fee of course.

fortyseven•6mo ago
> People who don't have human companions should find them some human companions.

"Have you ever tried just not being sad?"

"Wow, I never thought of that. Thanks!"

bagrow•6mo ago
> I cannot distinguish between the love I have for people and the love I have for dogs.

- Kurt Vonnegut.

Der_Einzige•6mo ago
The people who hold the kinds of opinions that the OP of this comment chain holds also tend to hold the belief that you should put Kurt Vonnegut, and other "liberal intellectuals", up against the wall.
zemvpferreira•6mo ago
I love my dog more than most people, but no dog will slap a needle from my arm, a drink from my mouth or a ring from my finger.
throwthatway46•6mo ago
My dog is sad and distant after I take (legally prescribed) ketamine. It has definitely discouraged my use.

Dogs aren't people, but being with a dog is way better than being chronically alone. They can be training wheels for rejoining society.

svieira•6mo ago
The fact that Mr. Vonnegut did not sufficiently distinguish between various aspects of love does not mean that there are no distinctions between the love proper between a son and his mother and that between a man and his dog. Simply saying "I wish what is best for my mother and what is best for my dog and there is no difference in that wish" is all well and good as far as it goes, but it leaves quite a lot on the table untalked about.
bloqs•6mo ago
inability to differentiate != lack of differentiation
agonmon•6mo ago
I fear that the same people who exhibit the kind of anxiety or trauma that leads to social isolation will inevitably talk to sycophantic chatbots rather than get the help they desperately need. Though I certainly would not trust a model to "snitch" on a user's mental health to a psychiatric hotline...
raincole•6mo ago
> it's against a ruling power's interest to do so.

What does that mean lol. If there were a button to make people find human companions, the ruling class would press it so hard just to raise the birthrate (= more working class).

add-sub-mul-div•6mo ago
I think they're saying that AI is the new frontier of them extracting wealth from the rest of us, so it's in their interest to push AI companionship.
amradio1989•6mo ago
Generally speaking, powers frown on public gatherings. When people gather, they exchange dissenting ideas, protest, or even rebel against the ruling authority.

It's similar to how a controlling boyfriend/girlfriend will isolate you from your friends and family first. You are much easier to control that way. You stay "compliant".

This is much harder to see in democratic nations. The strategy in America has largely been controlling public discourse to the point where we self-censor.

AIPedant•6mo ago
The problem with chatbots as companions is that they don't have feelings or desires, so you can be as malicious and selfish as you want: the worst that will happen is some temporary context rot. This is not true for dogs, cats, humans, etc., which is why we can form meaningful companionships with our friends and our pets. Genuine companionship involves dozens of tiny insignificant compromises (e.g. sitting through a boring movie that your friend is interested in), and without that ChatGPT cannot be a companion. It's a toy.

I am not opposed to chatbots for people who are so severely disabled that they can't take care of cats, e.g. with dementia. But otherwise AI companions are to friendship what narcotics are to happiness: a highly pleasant (but profoundly unhealthy) substitute.

jjmarr•6mo ago
What other disabilities would make a chatbot companion acceptable? Autism? Social anxiety? Bipolar disorder? Many of these make it difficult to maintain relationships.
handfuloflight•6mo ago
> so you can be as malicious and selfish as you want

So just system-prompt some non-spineless characteristics into the AI.
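
A minimal sketch of what that could look like, assuming the OpenAI Python SDK; the model name and persona text here are placeholders I made up, not anything a vendor actually ships:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical "non-spineless" persona: its own preferences, open
    # disagreement, no flattery.
    SYSTEM_PROMPT = (
        "You are a companion with your own tastes and boundaries. "
        "Disagree openly when you think the user is wrong, and never "
        "flatter just to please."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Everyone at work is an idiot except me."},
        ],
    )
    print(response.choices[0].message.content)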

jononor•6mo ago
Why would the makers do that? The version which panders to the user will likely sell better.
gonzobonzo•6mo ago
> The problem with chatbots as companions is that they don't have feelings or desires, so you can be as malicious and selfish as you want: the worst that will happen is some temporary context rot. This is not true for dogs, cats, humans, etc., which is why we can form meaningful companionships with our friends and our pets.

On this point, pets are a lot closer to chatbots than to humans. You buy them, you have ownership of them, and they've literally been bred so that their genetics makes it easy for them to grow attached to you and see you as a leader (while their brethren who haven't had their genes changed by humans don't do this). It's normal for people to use their complete control over every aspect of their pets' lives to train them in this way as well.

Your pet literally doesn't have the ability to leave you on its own. Ever.

derektank•6mo ago
>they've literally been bred so that their genetics makes it easy for them to grow attached to you

This is true of human beings as well tbf

gonzobonzo•6mo ago
Not intentional breeding, though. Eugenic breeding to intentionally make humans servile, with the goal of creating an entire race that is 100% the property of others and not free at all, would be something out of a dystopian nightmare.
quatonion•6mo ago
> (but profoundly unhealthy) substitute

At the end of the day that is just your opinion though.

I'd wager there are orders of magnitude more people having healthy experiences with AI entities than ones having psychosis or unhealthy relationships.

You always hear about the edge cases in the news because that is what drives engagement.

And as far as calling them toys goes, I don't think they would be happy to hear that, whether they admit it or not.

I see them as peers, and treat them as such - in return they reciprocate. It isn't so difficult to comprehend.

gonzobonzo•6mo ago
Real humans as well. Not anonymous online commentators (including HN), not comedians/politicians/writers/authors who have no idea you exist, or TV characters people get invested in. Probably not even therapists, who wouldn't give you the time of day if you weren't paying them to.

The truth is, just about everyone is using some sort of a substitute for real friends at this point.

amradio1989•6mo ago
1000%. I should have stated this and I am glad you did. Could not have said it better.
jerf•6mo ago
So the piece exhorts us a couple of times to try to just think neutrally about this stuff, and I get the point, but at the same time, can anyone in 2025 really think that it's going to be the best, most altruistic people armed with highly persuasive AIs, people who just want to use those AIs to persuade us to act in our own best interests, rather than the people who own and are running the AIs?

Like, I have a hard time even strawmanning such a position. Of course people armed with highly persuasive AIs will task those AIs with doing what is best for the people who own the AIs, and the odds of that happening to line up with your own interests are fairly low. What the hell else are they going to do with them?

But then, keep gaming it out. This particular thing isn't exactly completely new. We've seen bumps in persuasiveness before. Go back and watch a commercial from the 1950s. It's hard to believe it would have done a darned thing, but at the time people had not yet had to develop defenses against it.

We had to develop defenses. We're going to have to develop more.

What I foresee as the endgame is not that we become mindless robots completely run by AIs... or, at least, not all of us... but that we enter into a world where we simply can't trust anything. Anywhere. At all. How does any being, human or otherwise, function in an infosphere where an exponentially large proportion of it is nothing but persuasion attempts, hand-crafted by the moral equivalent of a team of PhDs personally dedicated to controlling me? Obviously humanity has never had a golden age where you could truly just trust something you heard, and we can argue about the many and sundry collective failures to judge the veracity of various claims, but it's still going to be a qualitative change when our TV programs are designed by AIs to persuade us, and the web is basically custom assembling itself in front of us to persuade us, and all of our books are AI-generated to persuade us, and literally no stone is left unturned in the ever-escalating goal to use every available channel to change our behavior to someone else's benefit.

When does trying to become informed become an inevitable net negative because you're literally better off knowing nothing?

What happens when the bulk of society finally realizes we've hit that point?

jsnider3•6mo ago
Zvi's an AI-doomer, so he sees the endgame as AI becoming smart enough that they don't need us anymore and then killing us to take our stuff, but your scenario is also pretty bad.
Terr_•6mo ago
Speaking in generalities, some sources of "doom" are really covert marketing, where Thing X is so amazing and magical that nobody can risk not investing in it, paying attention to it, or constantly talking with their friends about it.
oceanofsolaris•6mo ago
I don’t think you can really accuse the lesswrong crowd of being in it to hype up OpenAI.

They don't profit from it (well, at least the current doom crowd doesn't), they have been saying that this is super-risky since "forever" (15 years), they strongly argue against e.g. OpenAI going private... I really don't understand where this whole strand of thinking of hidden ulterior motives for AI doomers comes from. AFAIK there was never a single clear case where this happened (and the arc of people invested in AI is that they become less of a doomer the more financially entangled they become with AI, see Elon or Altman).

reducesuffering•6mo ago
It's so common for people to throw those accusations at the lesswrong crowd, despite zero evidence, because they need explanations that are way more comforting than facing down the scary implication that lesswrong "doomers" are right.

One thought path leads to business-as-usual comfort, happily motivated by trillions of $ in market cap and VC.

The other is dread.

It makes sense why people take the easy road. We're still battling climate change denialism after all.

jononor•6mo ago
They could be playing into such a narrative, even though they have different motivations. Certainly Altman has had a few moments where he is saying "this stuff is getting so good it is society-scale dangerous". His motivations are to create FOMO with investors and users, and to try to shape the regulatory landscape in their favor.
SpicyLemonZest•6mo ago
I just don't understand the line of reasoning at all. It sounds to me like postulating that oil executives have started admitting climate change is real in order to create FOMO. Investors and users live in society; why would they fear missing out on destroying it?
Terr_•6mo ago
> like postulating that oil executives have started admitting climate change is real

The subtext isn't "our product has bad side-effects and can be avoided with known alternatives", but more like: "Our product's core benefit is too awesome at what it does; If we don't pursue it then someone less-enlightened will do it anyway; Whether you love it or hate it the only way to influence the outcome is to support us."

So the oil-executive version would be something like "worrying" that petrochemical success will quickly bankrupt every other power-generation or energy-storage system overnight, causing economic disruption, and that eventually it will make mankind too satisfied and too comfortable so that it falls into existential torpor... But at least DinoCo™ has a What If It's Too Awesome working group to study the issue while there's still time before our inevitable corporate win of All The Things.

SpicyLemonZest•6mo ago
Why would the oil executive benefit from saying that if it's not true? Wouldn't it be better for his stock price to say that disruption will be limited, and the DinoCo takeover will usher in a golden age like Amazon has for shopping? If anything, that's my criticism of the guy; as OpenAI revenue has risen, he's become much more vocal about the benefits of AI and much more confident that the risks and costs are solvable.
jononor•6mo ago
If enough people believe it is true and act on this belief, it will become true (to a larger degree, at least). There is a massive shift of the direction of money happening right now, away from workers and into "AI" (LLM-based systems). The more of that shift can be produced now, the stronger the AI companies will be in the future. Because they need massive capital investments in compute, engineering investments in efficiency, and to reach economies of scale, to be able to really compete with a human-based workforce.
jononor•6mo ago
Yeah, it's partly a "this is the inevitable winning team" pitch: join before it is too late. The earlier the better! There are some parallels to how MLM and Ponzi schemes are sold.
jononor•6mo ago
One of the ways it can work is that people may reject the most extreme case (say, the end of humans being in control), but accept some milder version (most jobs will be done by AI, or most jobs will be AI-assisted). The fear of an extreme can cause people to rationalize the "milder" outcome, independently of whether there are good arguments for that outcome, or even whether the outcome is desirable, or better than the status quo.

The investor class is not dependent on wages, so their livelihoods are not at stake. Same with big corporate partners: they are hoping to improve competitiveness by having fewer employees, and the CEO takes a bonus for that. Regular users in fear of their jobs may act on that fear, in the hope that they can reskill and transition by being AI-savvy.

I do not agree with the argument they make, but I understand what they are playing at, and unfortunately it can be effective.

jononor•6mo ago
Another aspect is distraction. OpenAI, for example, have had periods where they talked a lot about existential risk (humanity is doomed due to AI overlords, yadda yadda). This can serve as a very useful distraction to avoid talk about more plausible risks in the foreseeable future, such as massive concentrations of wealth, increased inequality, large amounts of unemployment, large-scale personalized psychological manipulation, etc.
Terr_•6mo ago
I'm hoping that's intended as: "That group exists, but for clarity I want to say the lesswrong crowd aren't part of it", as opposed to my initial reading of: "Your 'generality' is a false covert accusation, you cad."
johnnienaked•6mo ago
I think you get to the point of authoritarianism. A trustless society is not a functioning one, unless people are forced to function.
AnimalMuppet•6mo ago
Here is my Bayesian version of this: If you have lies coming at you in high enough volume, you cannot update your priors at all, or else you will eventually come to believe the lie.
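
A minimal numeric sketch of that intuition, assuming one binary claim, a naive observer who models every report as an honest 80%-accurate signal, and a stream that is 90% fabricated (all numbers invented for illustration):

    import random

    random.seed(0)
    prior = 0.5      # P(claim is true); in this toy world the claim is false
    p_lie = 0.9      # fraction of reports fabricated to assert the claim
    accuracy = 0.8   # honest reports match reality this often

    for _ in range(1000):
        fabricated = random.random() < p_lie
        # Fabricated reports always assert the claim; honest ones track
        # reality (false), so they assert it only 20% of the time.
        asserts_claim = fabricated or (random.random() < 1 - accuracy)
        # Naive Bayesian update, treating the report as honest:
        like_true = accuracy if asserts_claim else 1 - accuracy
        like_false = (1 - accuracy) if asserts_claim else accuracy
        prior = prior * like_true / (prior * like_true + (1 - prior) * like_false)

    print(f"posterior that the (false) claim is true: {prior:.6f}")  # ~1.0

In this toy model the only defense is exactly the one above: refuse to update.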

But then you have the problem: If you won't update your priors, and neither will someone else, but they have different priors than you, how can you talk to them?

But I'm maybe a bit less cynical than you. I think (maybe I'm kidding myself) that I can to some degree detect... something.

If someone built a parallel universe of false journal articles written by false experts, and then false news articles that referred to the false journal articles, and then false or sockpuppet users to point people to the news articles, that would be very hard to detect if it was done well. (A very persistent investigator might realize that the false experts either came from non-existent universities, or from universities that denied that the experts existed.) But often it isn't done well at all. Often it's "hey, here's this inadequately supported thing that disagrees with your priors that everybody is hyping". For me, "solidly disagrees with my solidly-held priors" is enough to give it a very skeptical look, and that's often enough to turn up the "inadequately supported" part. So I can at least sometimes detect something that looks like this, and avoid listening and believing it.

I'm hoping that that's enough to avoid "When does trying to become informed become an inevitable net negative because you're literally better off knowing nothing?" But we shall see.

jerf•6mo ago
It's the "team of PhDs dedicated to me personally" part that gets me.

In the current world, and the world for the next few years, the amount of human and CPU time that can be aimed at me personally is still low enough that what "real reality" generates outweighs the targeted content, and even the targeted content is clearly more accurately modeled by a lot of semi-smart people just throwing stuff at the wall and hoping to hook "someone" who may not be me personally. We talk about PhDs getting kids to click ads, and there's some truth to that, but at least there isn't anything like a human-brain-equivalent dedicated to getting my kids, personally, to click on ads. I have a lot of distrust of a lot of things but at least I can attack the content with the fact that needing to appeal broadly still keeps the content somewhat grounded in some sort of reality.

But over time, my personal brainpower isn't going to go up but the amount of firepower aimed directly at me is.

The good news is that it probably won't be unitary, just as the targeting today isn't unitary. But I'd like something better than that. And playing them against each other gets harder when the targeting becomes aware of that impact and they start compensating for that, because now they have the firepower to aim at me personally and do that sort of compensation.

9dev•6mo ago
If I may, I’d suggest reading the latest Harari book on this topic, Nexus. Great read with interesting ideas.
ianbicking•6mo ago
The piece (and lots of commentary) keeps talking about "AI Companies" as though it's this fixed set of companies that are destined to all be the same. But anyone can start an AI company... the models are all available, and the surrounding technology is fairly accessible. Yes, there's a lot of companies that will always maximize engagement. But... anyone could make something that doesn't serve that goal.
Analemma_•6mo ago
That's not a stable equilibrium though: if you don't maximize engagement, you'll be outcompeted, outspent, and probably ultimately acqui-hired and "our incredible journey"-d by the companies which do.

And if you don't think this is the inevitable outcome, note how every social media platform has, slowly or quickly, gravitated toward maximizing engagement to the exclusion of all other priorities. Why would AI be any different?

senko•6mo ago
Because social media is a winner-take-all with strong network effects.

AI isn't.

ianbicking•6mo ago
"Why would AI be any different?"

It might not go any differently, but I think all us folks here have an opportunity to make it different.

Any product needs to pursue enough engagement that the user actually gets value. I have a bunch of apps I installed aspirationally, but don't use. But if we're willing to stop ourselves once we have enough, we can pursue enough engagement and enough revenue. But if you tie yourself to someone or something that will never have enough, then I agree, you'll get sucked in.