
Say No to Palantir in Europe

https://action.wemove.eu/sign/2026-03-palantir-petition-EN
135•Betelbuddy•1h ago•25 comments

Overestimation of microplastics potentially caused by scientists' gloves

https://news.umich.edu/nitrile-and-latex-gloves-may-cause-overestimation-of-microplastics-u-m-stu...
331•giuliomagnifico•6h ago•146 comments

Miasma: A tool to trap AI web scrapers in an endless poison pit

https://github.com/austin-weeks/miasma
168•LucidLynx•5h ago•105 comments

Building a Mostly IPv6 Only Home Network

https://varunpriolkar.com/2026/03/building-a-mostly-ipv6-only-home-network/
37•arhue•4d ago•35 comments

Police used AI facial recognition to wrongly arrest TN woman for crimes in ND

https://www.cnn.com/2026/03/29/us/angela-lipps-ai-facial-recognition
69•ourmandave•1h ago•32 comments

Founder of GitLab battles cancer by founding companies

https://sytse.com/cancer/
1242•bob_theslob646•22h ago•242 comments

LinkedIn uses 2.4 GB RAM across two tabs

224•hrncode•7h ago•156 comments

Technology: The (nearly) perfect USB cable tester does exist

https://blog.literarily-starved.com/2026/02/technology-the-nearly-perfect-usb-cable-tester-does-e...
176•birdculture•3d ago•75 comments

Show HN: Create a full language server in Go with 3.17 spec support

https://github.com/owenrumney/go-lsp
41•rumno0•4d ago•9 comments

The Failure of the Thermodynamics of Computation (2010)

https://sites.pitt.edu/~jdnorton/Goodies/Idealization/index.html
26•nill0•2d ago•1 comment

AI overly affirms users asking for personal advice

https://news.stanford.edu/stories/2026/03/ai-advice-sycophantic-models-research
715•oldfrenchfries•1d ago•570 comments

I turned my Kindle into my own personal newspaper

https://manualdousuario.net/en/how-to-kindle-personal-newspaper/
113•rpgbr•2d ago•40 comments

CSS is DOOMed

https://nielsleenheer.com/articles/2026/css-is-doomed-rendering-doom-in-3d-with-css/
431•msephton•19h ago•102 comments

Alzheimer's disease mortality among taxi and ambulance drivers (2024)

https://www.bmj.com/content/387/bmj-2024-082194
185•bookofjoe•15h ago•121 comments

Sinclair Microvision (1977)

https://r-type.org/articles/art-452.htm
37•joebig•2d ago•15 comments

Lat.md: Agent Lattice: a knowledge graph for your codebase, written in Markdown

https://github.com/1st1/lat.md
72•doppp•7h ago•32 comments

OpenBSD on Motorola 88000 Processors

http://miod.online.fr/software/openbsd/stories/m88k1.html
130•rbanffy•2d ago•17 comments

Nonfiction Publishing, Under Threat, Is More Important Than Ever

https://newrepublic.com/article/207659/non-fiction-publishing-threat-important-ever
35•Hooke•3d ago•22 comments

I decompiled the White House's new app

https://thereallo.dev/blog/decompiling-the-white-house-app
584•amarcheschi•1d ago•212 comments

The Epistemology of Microphysics

https://www.edwardfeser.com/unpublishedpapers/microphysics.html
5•danielam•4d ago•0 comments

Show HN: Public transit systems as data – lines, stations, railcars, and history

https://publictransit.systems
36•qwertykb•8h ago•11 comments

A Verilog to Factorio Compiler and Simulator (Working RISC-V CPU)

https://github.com/ben-j-c/verilog2factorio
123•signa11•3d ago•12 comments

Further human + AI + proof assistant work on Knuth's "Claude Cycles" problem

https://twitter.com/BoWang87/status/2037648937453232504
238•mean_mistreater•21h ago•159 comments

I Built an Open-World Engine for the N64 [video]

https://www.youtube.com/watch?v=lXxmIw9axWw
432•msephton•1d ago•74 comments

What if AI doesn't need more RAM but better math?

https://adlrocha.substack.com/p/adlrocha-what-if-ai-doesnt-need-more
124•adlrocha•7h ago•67 comments

A laser-based process that enables adhesive-free paper packaging

https://www.fraunhofer.de/en/press/research-news/2026/march-2026/sealing-paper-packaging-without-...
112•gnabgib•17h ago•46 comments

Android’s new sideload settings will carry over to new devices

https://www.androidauthority.com/android-sideload-carry-over-3652845/
131•croemer•19h ago•187 comments

The Hackers Who Tracked My Sleep Cycle

https://glama.ai/blog/2026-03-26-the-hackers-who-tracked-my-sleep-cycle
36•statements•2d ago•4 comments

OpenCiv1 – open-source rewrite of Civ1

https://github.com/rajko-horvat/OpenCiv1
172•caminanteblanco•21h ago•62 comments

Linux is an interpreter

https://astrid.tech/2026/03/28/0/linux-is-an-interpreter/
227•frizlab•23h ago•55 comments

Police used AI facial recognition to wrongly arrest TN woman for crimes in ND

https://www.cnn.com/2026/03/29/us/angela-lipps-ai-facial-recognition
67•ourmandave•1h ago

Comments

jqpabc123•1h ago
AI is a liability issue waiting to happen. And this is just another example.
garyfirestorm•1h ago
It’s a tool. Used incorrectly, it will lead to errors. Just like a hammer: used incorrectly, it could hit the user's finger.
happytoexplain•1h ago
There is enormous variability in how hard a tool is to use correctly, how likely it is to go wrong, and how severe the consequences are. AI has a wide range on all those variables because its use cases vary so widely compared to a hammer.

The use case here is police facial recognition. Not hitting nails. The parent wasn't saying "AI is a liability" with no context.

mikkupikku•1h ago
When somebody uses a tool to hurt somebody, they need to be held accountable. If I smack you with a hammer, that needs to be prosecuted. Using AI is no different.

The problem here is incidental to the tool; it was done by the cops and therefore nobody will be held accountable.

tovej•28m ago
Systems are also a tool. Whoever institutes and helps build the system that systematically results in harm is also responsible.

That would be the vendors, the system planners, and the institutions that greenlit this. It would also include the larger financial-tech circle that is trying to drive large-scale AI adoption, like Peter Thiel, who sees technology as an "alternative to politics", i.e. a way to circumvent democracy [1].

[1] https://stavroulapabst.substack.com/p/techxgeopolitics-18-te...

suzzer99•1h ago
Dynamite is a tool. But we don't hand it out to anyone who wants to play with it.
mikkupikku•1h ago
We used to until quite recently. Anybody could buy dynamite at the hardware store. We had to end this because of criminals using it to hurt people.
jqpabc123•1h ago
Look for AI to follow a similar trajectory over time.
mikkupikku•58m ago
Yes, regulation is inevitable.
jfengel•36m ago
Regulation is impossible. The AI barons literally control the federal government, so not even state regulations get tried.
GaryBluto•51m ago
Impossible at this point. You cannot download dynamite.
jfengel•37m ago
Except this time the criminals are police.
skeeter2020•1h ago
AI feels closer to a firearm than a hammer when assessing law enforcement's ability to quickly do massive, unrecoverable harm.
jqpabc123•1h ago
Used incorrectly will lead to errors.

Only one small little problem --- there is no way to tell if you are using it "correctly".

The only way to be sure is to not use it.

Using it basically boils down to, "Do you feel lucky?".

The Fargo police didn't get lucky in this case. And now the liability kicks in.

jfengel•38m ago
Now the "qualified" immunity kicks in.
jqpabc123•30m ago
We will find out. But relying on AI is likely to cost the city of Fargo in one way or another. They say they have already stopped using AI and returned to good old-fashioned human investigation.

https://www.lawlegalhub.com/how-much-is-a-wrongful-arrest-la...

nkrisc•31m ago
Some basic investigatory police work (the kind they did before AI) would have revealed the mistake before an innocent woman’s life was destroyed.
jqpabc123•24m ago
Yes. But doing the investigation negates much of the incentive for using AI.

Look for similar to play out elsewhere --- using unreliable tools is not a good, responsible business plan. And lawyers are just waiting to press the point.

bornfreddy•14m ago
AI can provide leads. Someone still needs to verify them and decide.
tgv•56m ago
This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.
cyanydeez•51m ago
The tool, like Google search, is likely biased towards returning results regardless of confidence.
MattDaEskimo•44m ago
What kind of outcome results from misuse? Clearly a hammer's misuse has very little in common with a global, hivemind network used in high-stakes campaigns.

Now, if I misused a hammer and it hurt everyone's thumb in my country, then maybe what you said would have some merit.

Otherwise, I'd say it's an extremely lazy argument.

gtowey•44m ago
It's the opposite, it's absolution from liability. "The AI did it" is the ultimate excuse to avoid accepting responsibility and consequences.
jqpabc123•39m ago
Courts are already refusing to accept this excuse.

https://pub.towardsai.net/the-air-gapped-chronicles-the-cour...

Hizonner•38m ago
... which is why the institutions that assign responsibility and consequences need to make it really clear that excuse won't fly. With illustrative examples.
mitchbob•1h ago
Earlier discussion (405 comments):

https://news.ycombinator.com/item?id=47356968

casey2•1h ago
Now cruel people wield a two-tiered shield. It's not an accident that this happened to a woman, but make no mistake they are coming for men next.
jstanley•59m ago
You think they deliberately chose to do this to a woman? Why?
cyanydeez•51m ago
Probably just reading the room, with states like Texas making abortion illegal and allowing random citizens to enforce that.

Famously, abortions are a woman thing.

Anyway, looking through the facts, it's just some random woman. There's better evidence that these facial recognition systems are much worse with minorities than across genders.

An interesting bias is own-gender bias: https://pmc.ncbi.nlm.nih.gov/articles/PMC11841357/

Racial bias:

https://mitsloan.mit.edu/ideas-made-to-matter/unmasking-bias...

Miss rates:

https://par.nsf.gov/servlets/purl/10358566

Although you can probably interpret the facts differently, we've seen how any search function gets enshittified: once people get used to searching for things, they tend to select something that returns results over something that fails to return results.

Rather than blaming themselves, users blame the search tool. As such, any search system over time will bias towards returning results (e.g., Outlook) rather than accuracy.

So if these systems more easily miss certain classes of people (women, minorities), those people are more likely to be surfaced as inaccurate matches, while men have a higher chance of being screened out with confidence.

That's how I interpret this two-second comment.

oopsiremembered•31m ago
Money quote from someone quoted in the article:

"[I]t’s not just a technology problem, it’s a technology and people problem."

I can't. I just can't.

firefoxd•25m ago
Without even looking at the AI part, I have a single question: Did anybody investigate? That's it.

Whether it was AI that flagged her, a witness who saw her, or her IP address appearing in the logs: did anybody bother to ask her, "Where were you on the morning of July 10th, between 3 and 4pm?" But that's not what happened; they saw the data and said, "We got her."

But this is the worst part of the story:

> And after her ordeal, she never plans to return to the state: “I’m just glad it’s over,” she told WDAY. “I’ll never go back to North Dakota.”

That's the lesson? Never go back to North Dakota. No: challenge the entire system. A few years back it was a kid accused of shoplifting [0]. Then a man was dragged away while his family cried [1]. Unless we fight back, we are all guilty until cleared.

[0]: https://www.theregister.com/2021/05/29/apple_sis_lawsuit/

[1]: https://news.ycombinator.com/item?id=23628394

tlogan•14m ago
This is a weak or misleading story about AI.

First, the detective used the FaceSketchID system, which has been around since around 2014. It is not new or uniquely tied to modern AI.

Second, the system only suggests possible matches. It is still up to the detective to investigate further and decide whether to pursue charges. And then it is up to the court to issue the warrant.

The real question is why she was held in jail for four months. That is the part I do not understand. My understanding is that there is a 30-day limit (the requesting state must pick up the defendant within 30 days). Regarding the individual involved, Angela Lipps, she has reportedly been arrested before, so it is possible she was on parole. Maybe they were holding her because of that?

Can someone clarify how that process works?