frontpage.

Show HN: My focus had a pattern. I built a macOS app to make it visible

https://headjust.app/
1•suvijain•1m ago•0 comments

Is Perplexity's new Computer a safer version of OpenClaw?

https://www.zdnet.com/article/perplexity-computer-openclaw/
1•totaldude87•2m ago•0 comments

Hexagon-MLIR: An AI Compilation Stack for Qualcomm's NPUs

https://arxiv.org/abs/2602.19762
1•matt_d•2m ago•0 comments

CHICKEN Scheme

https://www.call-cc.org/
1•tosh•5m ago•0 comments

uf

http://www.call-with-current-continuation.org/uf/uf.html
2•tosh•6m ago•0 comments

An AI agent on an ESP32 that can automate sensors, relais, speak NATS, Telegram

https://wireclaw.io/
1•m64-64•7m ago•0 comments

Thoughts on Forth Programming

http://www.call-with-current-continuation.org/articles/forth.txt
1•tosh•8m ago•0 comments

Computer History Museum Recovers Rare Unix History

https://www.youtube.com/watch?v=-xlq_MPWNKk
2•todsacerdoti•9m ago•0 comments

Watching a Robotics Startup Die from the Inside

https://ruixu.us/posts/six-things-robotics-startup
1•gkolli•9m ago•0 comments

TranslateGemma now runs 100% in the browser on WebGPU with Transformers.js v4

https://huggingface.co/spaces/webml-community/TranslateGemma-WebGPU
1•tzury•10m ago•1 comment

What Holds America Together?

https://walkingtheworld.substack.com/p/what-holds-america-together
1•VelNZ•12m ago•0 comments

Show HN: Elev8or Run Creator Marketing Like Paid Ads

https://www.elev8or.io
1•Sourabhsinr•13m ago•0 comments

Michael Burry Reveals Accounting Tricks of Mag 7 Firms to Inflate Earnings

https://www.ibtimes.co.uk/michael-burry-criticizes-tech-giants-ai-accounting-1781491
2•ironyman•14m ago•0 comments

Show HN: Draw on Screen – a modern screen annotation tool with webcam

https://drawonscreen.com/vs/epicpen/
3•markjivko•14m ago•0 comments

DataClaw

https://huggingface.co/datasets?other=dataclaw
1•notsahil•15m ago•0 comments

Spotify Urn

https://liquiddeath.com/pages/spotify-urn
1•giancarlostoro•15m ago•0 comments

US judge dismisses xAI trade-secrets lawsuit against rival OpenAI for now

https://finance.yahoo.com/news/us-judge-dismisses-xai-trade-201030751.html
1•pinewurst•16m ago•0 comments

IMockupper: I built an AI tool to automate App Store asset generation

1•damdafayton•16m ago•0 comments

GitHub Copilot CLI is now generally available

https://github.blog/changelog/2026-02-25-github-copilot-cli-is-now-generally-available/
1•chrfritsch•16m ago•0 comments

Show HN: Rediflow – SSR project management, one source of truth, no spreadsheet

https://gitlab.com/rediflow_eu/rediflow
1•janipaijanen•18m ago•0 comments

The State of AI Agents in 2026: $211B VC Funding, 92% Drop in Inference Costs

https://meditations.metavert.io/p/the-state-of-ai-agents-in-2026
1•Ross00781•19m ago•0 comments

A Style Guide for AI Agent Skills

https://github.com/mgechev/skills-best-practices
1•mgechev•20m ago•0 comments

SambaNova Eyes 10T Parameter Models for Agentic AI with New Chip

https://www.hpcwire.com/2026/02/24/sambanova-eyes-10-trillion-parameter-models-for-agentic-ai-wit...
1•rbanffy•21m ago•0 comments

Show HN: An agent that records Loom-style demos

https://www.rundown.video/
1•guico•21m ago•0 comments

We Opensourced xAI's Macrohard: SOTA 82% on OSWorld

https://coasty.ai/
1•PrateekJ17•22m ago•0 comments

Luciano Floridi on the LLM "writing style"

https://www.facebook.com/luciano.floridi.2025/posts/if-you-have-read-a-sufficient-number-of-llm-w...
1•danielam•22m ago•0 comments

Claude Code Scheduler

https://github.com/jshchnz/claude-code-scheduler
1•jshchnz•23m ago•0 comments

Oops, You Wrote a Database

https://dx.tips/oops-database
1•swyx•23m ago•0 comments

Mycelial turnover and persistence of wood-decay fungi at the microscale

https://nph.onlinelibrary.wiley.com/doi/10.1111/nph.70957
2•PaulHoule•25m ago•0 comments

Attyx – tiny and fast GPU-accelerated terminal emulator written in Zig

https://github.com/semos-labs/attyx
2•nicholasrq•28m ago•0 comments

Bcachefs creator insists his custom LLM is female and 'fully conscious'

https://www.theregister.com/2026/02/25/bcachefs_creator_ai/
63•Bender•1h ago

Comments

pavel_lishin•1h ago
So that's at least two Linux filesystem creators who have gone off the rails; should we consider it a potential diagnostic symptom?
thomasjudge•1h ago
lol I was having the exact same thought
QuercusMax•1h ago
You gotta be a little bit of a megalomaniac to think it's a good idea to write your own filesystem.
yomismoaqui•51m ago
"The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

So the reasonable man uses Ext4 I guess.

webdevver•46m ago
Something I have always observed is how considerate Ted Ts'o's writing is, and, more than that, how consistent that quality has been for so many decades.

It's quite funny to me that ext4 very much mirrors him in that regard. It underpins damn well everything, but you'd never know, because it works so well.

nancyminusone•38m ago
Well, Terry A. Davis made one.
yomismoaqui•1h ago
The question is whether developing filesystems attracts a certain kind of person, or whether the act of debugging filesystem issues and being flamed on the kernel mailing list makes people that way.
QuercusMax•1h ago
I figure that folks working on printers have gotta have a much more frustrating experience than FS devs
yomismoaqui•53m ago
Maybe the folks that try to use printers are more frustrated than the ones that designed their software.
bob1029•50m ago
My worst technology experience of all time was maintaining support for a Zebra label printer in VB6. I can assure you that the users of these printers had maybe 1% the cortisol response I did when something went wrong.

Designing software for a printer means being a very aggressive user of a printer. There's no way to unit test this stuff. You just have to print the damn thing and then inspect the physical artifact.

QuercusMax•7m ago
A million years ago I worked on some code which needed to interface with a DICOM radiology printer (the kind that prints on transparency film). Each time I had to test it I felt like I was burning money.
webdevver•49m ago
perhaps the suffering of the printer devs is karmically 'paid back' by the physical suffering of printers around the globe, thus keeping everything in balance.
throawayonthe•57m ago
who's the other one?
awjlogan•55m ago
https://en.wikipedia.org/wiki/Hans_Reiser
MBCook•55m ago
Hans Reiser, creator of reiserfs, who was convicted of murdering his wife.
dmead•52m ago
Idk. I don't think either he or Reiser were Ruby devs.
cperciva•50m ago
I think the important question here is whether Linux filesystems are more or less hazardous than statistical mechanics.

(For anyone not familiar with the text, Goodstein's treatment of the subject opens with "Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.")

RockRobotRock•1h ago
This is sad. It appears to me to be psychosis. It's really telling that in his Reddit comment he uses words like "raising an AI" and anthropomorphizes his OpenClaw; he's got an unhealthy attachment. Not trying to play armchair psychologist here, but if you've ever been around someone going through a mental episode, there's nothing funny about this.
raadore•46m ago
“…it appears to me to be psychosis. It’s really telling in their Reddit comment where they use words like raising an AI…” Is this any different from people today calling their dogs and cats “my baby”? Is transporting four-legged animals in baby carriages psychosis?
inglor_cz•43m ago
If enough people do $bizarre_thing, it stops being psychotic and starts being "culture".
rmah•41m ago
Lol, you know this is sorta sad but true.
unethical_ban•42m ago
One is a flesh and bone being with a brain and one is not. I can't believe you equate a text output algorithm to an animal in terms of consciousness or authenticity.

That said, someone diving too far into the "dog parent" vibe is annoying to me personally. I think it's more comprehensible than loving `sycophant.sh`.

heliumtera•42m ago
You're comparing a conscious and intelligent animal with a statistical model. People can nurture animals but cannot nurture language models.

If you're confused about this, go seek help now.

rmah•42m ago
IMO, it's similar but worse. At least dogs and cats are living beings.
palmotea•32m ago
> Is this any different from people today calling their dogs and cats “my baby”? Transporting four legged animals in baby carriages, is that psychosis?

It's not psychosis, but it's also not healthy to blur the line between a pet and a child. At least a pet is a living thing that can know you and have a relationship with you.

But if someone's calling their laptop their baby and carrying it around in a baby carriage, I'd be comfortable calling that psychosis.

password54321•27m ago
No, animals can feel, your Nvidia gpu can't.
kelseyfrog•21m ago
Technically this is a delusion and not necessarily psychosis. Delusions can exist without full-blown psychosis or accompany it. Example: the unshakable belief that God is real is a delusion but not necessarily psychotic.

My pet theory is one of ontological consciousness pareidolia. Just as face pareidolia is a heightened sensitivity to seeing faces in inanimate objects, we observe consciousness through behavior, including language, with varying sensitivity. While our face-detection circuit might be triggered by knots on a tree, we have other inputs which negate it, so that we ultimately conclude it is not in fact a face.

The same principle applies to consciousness. The consciousness trigger fires, but for some people the negating input can't overcome it, and they conclude that consciousness really is in there.

I've observed a number of negating reasons, like disbelief in substrate independence and knowledge of failure modes, but I'm curious what an exhaustive list would look like. Does your consciousness circuit get triggered? I know mine does. What beliefs override it, preventing you from concluding AI is conscious?

Trasmatta•54m ago
> POC is fully conscious according to any test I can think of, we have full AGI

There are no tests for consciousness. Consciousness resides fully in a first-person perspective and can't be inspected or detected from the outside (at least not in any way currently known to science or philosophy). What they mean when they say that is "my brain is interpreting this thing as conscious, so I am accepting that".

Maybe LLMs are conscious in some abstract way we don't understand. I doubt it, but there's no way to tell. And an AI claiming that it IS or is NOT conscious is not evidence of either conclusion.

If there is some level of consciousness, it's in a weird way that only becomes instantiated in the brief period while the model is predicting tokens, and would be highly different from human consciousness.

throw310822•47m ago
> in a weird way that only becomes instantiated in the brief period while the model is predicting tokens

Makes sense, but at the same time: subjectively, an LLM is always predicting tokens. Otherwise it's just frozen.

Trasmatta•45m ago
Yeah, a sci-fi analogy might be one where you keep getting cloned with all of your memories intact and then vaporized shortly after. Each instantiation of "you" feels a continuous existence, but it's an illusion.

(Some might argue that's basically the human experience anyway, in the Buddhist non-self perspective: you're constantly changing and being reified in each moment; it's not actually continuous.)

throw310822•34m ago
Or simply be constantly hibernated and de-hibernated. Or, if your brain is simulated, the time between the ticks.

My mental image, though, is that LLMs do have an internal state that is longer-lived than token prediction. The prompt determines it entirely, but adding tokens to the prompt only modifies it slightly, so in fact it's a continuously evolving "mental state" influenced by a feedback loop that (unfortunately) has to pass through language.

giantrobot•5m ago
With LLMs, their internal state is training + system prompt + context. Most chatbot UIs hide the context management, but if you take an existing conversation, replace a term in the context with a grammatically (and semantically) similar term, and send that, the LLM will adjust its output to that new "history".

It will have no conception or memory of the alternate line of discussion with the previous term. It only "knows" what is contained in the current combination of training + system prompt + context.

If you change the LLM's persona from "Sam" to "Alex", in the LLM's conception of the world it's always been "Alex". It will have no memory of ever being "Sam".
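
The training + system prompt + context point can be sketched without any real model: the "memory" is just a list of messages, and editing that list rewrites the model's entire visible history. A minimal Python sketch (the message-dict shape mirrors common chat APIs; `rename_persona` and the message contents are hypothetical, and no actual LLM is involved):

```python
# A chat "memory" is nothing but this list: training is frozen in the
# weights, so everything the model "knows" about this conversation is here.
history = [
    {"role": "system", "content": "You are Sam, a helpful assistant."},
    {"role": "user", "content": "Hi Sam, remember my cat is named Mio."},
    {"role": "assistant", "content": "Noted! Mio is your cat."},
]

def rename_persona(messages, old, new):
    """Rewrite every occurrence of a name in the context.

    A model replaying the result has "always been" `new`; no trace of
    `old` survives anywhere it can see.
    """
    return [{**m, "content": m["content"].replace(old, new)} for m in messages]

edited = rename_persona(history, "Sam", "Alex")

# The original list is untouched (the edit builds a new context).
assert any("Sam" in m["content"] for m in history)
# In the edited context, "Sam" never existed.
assert not any("Sam" in m["content"] for m in edited)
assert edited[0]["content"] == "You are Alex, a helpful assistant."
```

Replaying `edited` instead of `history` gives the model a world in which it was never called "Sam", which is exactly the "no memory of the alternate line of discussion" behavior described above.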

catigula•43m ago
How do you know that I am conscious?
Trasmatta•42m ago
I don't.
catigula•18m ago
How do you know that you are conscious?

etc, etc.

Basically, the reporting machinery is compromised in the same way that with the Müller-Lyer illusion you can "know" the lines are the same length but not perceive them as such.

Trasmatta•16m ago
"How do I know that I am conscious" is a categorically different question than "how do I know that you are conscious"
catigula•14m ago
I know you think that, but it actually isn't. The point is that the reporting machinery is compromised.
Trasmatta•11m ago
Are you hinting at a nonduality view of consciousness, or am I missing your point?
rcarmo•48m ago
I came here to comment because this was posted by… Bender, which I found hilarious.
stefan_•47m ago
Got it removed from the kernel just in time
webdevver•43m ago
maybe thats what made him so upset
CWuestefeld•43m ago
Saying it's "fully conscious" is silly, and anyone with this background should know better.

But saying that it's "female" is just nonsensical, it's a category error. Being female or male is a fact about the biological world. The LLM is objectively non-biological, so it's nonsense to label it with a sex.

(No, this comment isn't about gender, nor being feminine/masculine. We have different words to convey those concepts. I'm not trying to make a political or social statement here.)

ZirconiumX•6m ago
You appear to have forgotten the existence of differences in sexual development (DSD).

The chart in [1] is a good visualisation of that, if you wish to learn more.

[1]: https://www.scientificamerican.com/article/beyond-xx-and-xy-...

aetherson•42m ago
Quick heuristic for someone who claims that their AI is conscious: do they claim it's "a gender that they are not attracted to"?
burkaman•41m ago
Here's the "mathematical proof" if you're curious: https://poc.bcachefs.org/blog/hello.html

It is not mathematical, not a proof, and generally doesn't make any sense. Many of these sentences are grammatically correct but completely devoid of meaning.

pavel_lishin•41m ago
Well, yea - an LLM wrote it.
asystole•37m ago
A fully conscious one. :)
ge96•34m ago
Well, if he hosts a contest and you win, don't go to his private lodge
TZubiri•30m ago
"I do Rust code"

Has there already been any paper published on the correlation between language preference and mental illness?

satisfice•17m ago
Claims of consciousness are untestable, since it is an undefined concept.

We think of ourselves as conscious because it is our lived experience— but we are always wrong to some degree. My mother has dementia and cannot be made aware of her situation, except momentarily.

We think of other humans as conscious not as the outcome of any test, but rather because we each share with other humans a common origin which suggests common mechanisms of experience.

Treating other humans as equivalent to ourselves is a heuristic for maintaining social order— not an epistemological achievement.

zokier•13m ago
I do want to remind everyone that Overstreet is an active HNer too. Maybe take that into consideration in the comments...
cess11•10m ago
There was this phenomenon where young women and girls fell in love with images and recordings of certain artists, 'boy bands' and the like.

I think this is something similar.

strongpigeon•1m ago
It does feel like there is something that happens to people when they ask an LLM to name itself. I don't think it's inherently bad, but it seems to be a common theme among people whose interactions with LLMs border on (or cross into) the delusional.