frontpage.

Made with ♥ by @iamnishanth


East Germany balloon escape

https://en.wikipedia.org/wiki/East_Germany_balloon_escape
437•robertvc•13h ago•150 comments

Cloudflare acquires Astro

https://astro.build/blog/joining-cloudflare/
786•todotask2•16h ago•351 comments

FLUX.2 [Klein]: Towards Interactive Visual Intelligence

https://bfl.ai/blog/flux2-klein-towards-interactive-visual-intelligence
94•GaggiX•7h ago•36 comments

High-Level Is the Goal

https://bvisness.me/high-level/
80•tobr•1d ago•32 comments

Beebo, a wave simulator written in C

https://git.sr.ht/~willowf/beebo/
23•anon25783•3d ago•0 comments

Cursor's latest “browser experiment” implied success without evidence

https://embedding-shapes.github.io/cursor-implied-success-without-evidence/
498•embedding-shape•16h ago•208 comments

6-Day and IP Address Certificates Are Generally Available

https://letsencrypt.org/2026/01/15/6day-and-ip-general-availability
383•jaas•15h ago•223 comments

Drone Hacking Part 1: Dumping Firmware and Bruteforcing ECC

https://neodyme.io/en/blog/drone_hacking_part_1/
30•tripdout•4h ago•0 comments

Releasing rainbow tables to accelerate Net-NTLMv1 protocol deprecation

https://cloud.google.com/blog/topics/threat-intelligence/net-ntlmv1-deprecation-rainbow-tables
107•linolevan•9h ago•62 comments

LLM Structured Outputs Handbook

https://nanonets.com/cookbooks/structured-llm-outputs
197•vitaelabitur•1d ago•34 comments

IKEA for Software

https://tommaso-girotto.co/blog/an-ikea-for-software
43•tgirotto•4d ago•19 comments

Dell UltraSharp 52 Thunderbolt Hub Monitor

https://www.dell.com/en-us/shop/dell-ultrasharp-52-thunderbolt-hub-monitor-u5226kw/apd/210-bthw/m...
195•cebert•13h ago•251 comments

Lock-Picking Robot

https://github.com/etinaude/Lock-Picking-Robot
292•p44v9n•4d ago•130 comments

Install.md: A standard for LLM-executable installation

https://www.mintlify.com/blog/install-md-standard-for-llm-executable-installation
62•npmipg•8h ago•72 comments

Why DuckDB is my first choice for data processing

https://www.robinlinacre.com/recommend_duckdb/
260•tosh•20h ago•98 comments

STFU

https://github.com/Pankajtanwarbanna/stfu
764•tanelpoder•13h ago•489 comments

Which is "Bouba", and which is "Kiki"? [video]

https://www.youtube.com/watch?v=1TDIAObsqcs
8•basilikum•6d ago•9 comments

Experts Warn of Growing Parrot Crisis in Canada

https://www.ctvnews.ca/ottawa/video/2026/01/06/experts-warn-of-growing-parrot-crisis-in-canada/
42•debo_•4d ago•10 comments

Show HN: Tusk Drift – Turn production traffic into API tests

https://github.com/Use-Tusk/tusk-drift-cli
24•jy-tan•1d ago•1 comment

Reading across books with Claude Code

https://pieterma.es/syntopic-reading-claude/
93•gmays•12h ago•22 comments

Keifu – A TUI for navigating commit graphs with color and clarity

https://github.com/trasta298/keifu
30•indigodaddy•6h ago•5 comments

Patching the Wii News Channel to serve local news (2025)

https://raulnegron.me/2025/wii-news-pr/
81•todsacerdoti•18h ago•19 comments

Elasticsearch was never a database

https://www.paradedb.com/blog/elasticsearch-was-never-a-database
126•jamesgresql•5d ago•84 comments

HTTP RateLimit Headers

https://dotat.at/@/2026-01-13-http-ratelimit.html
54•zdw•2d ago•13 comments

Emoji Use in the Electronic Health Record is Increasing

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2843883
70•giuliomagnifico•13h ago•70 comments

Michelangelo's first painting, created when he was 12 or 13

https://www.openculture.com/2026/01/discover-michelangelos-first-painting.html
341•bookofjoe•17h ago•164 comments

Launch HN: Indy (YC S21) – A support app designed for ADHD brains

https://www.shimmer.care/indy-redirect
72•christalwang•14h ago•78 comments

Dev-owned testing: Why it fails in practice and succeeds in theory

https://dl.acm.org/doi/10.1145/3780063.3780066
128•rbanffy•17h ago•157 comments

The five orders of ignorance (2000)

https://cacm.acm.org/opinion/the-five-orders-of-ignorance/
45•svilen_dobrev•4d ago•14 comments

Slop is everywhere for those with eyes to see

https://www.fromjason.xyz/p/notebook/slop-is-everywhere-for-those-with-eyes-to-see/
242•speckx•11h ago•113 comments

A Calif. teen trusted ChatGPT's drug advice. He died from an overdose

https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php
34•freediver•3h ago

Comments

NewJazz•2h ago
Took a while to figure out what the OD was of, but it was a combination of alcohol, kratom (or a stronger kratom-like drug), and xanax.
dfajgljsldkjag•1h ago
The article mentions 7-OH, also known as "feel free", which shockingly hasn't been banned and is sold without checks at many stores. There are quite a few YouTube videos talking about addiction to it, and it sounds awful.

https://www.youtube.com/watch?v=TLObpcBR2yw

loeg•1h ago
7-OH is to kratom roughly as fentanyl is to opium, FWIW. It's much, much more potent. That stuff should be banned.

That said, he claims to have taken 15g of "kratom" -- that has to be the regular stuff, not 7-OH -- and that's still a huge, huge dose of the regular stuff. That plus a 0.125 BAC and benzos... is a lot.

dfajgljsldkjag•2h ago
The guardrails clearly failed here because the model was trying to be helpful instead of safe. We know that these systems hallucinate facts but regular users have no idea. This is a huge liability issue that needs to be fixed immediately.
datsci_est_2015•1h ago
This brings to mind some of the “darker” subreddits that circle around drug abuse. I’m sure there are some terrible stories about young people going down tragic paths due to information they found on those subreddits, or even worse, encouragement. There’s even the commonly-discussed account that (allegedly) documented their first experiences with heroin, and then the hole of despair they fell into shortly afterwards due to addiction.

But the question here is one of liability. Is Reddit liable for the content available on its website, if that content encourages young impressionable people to abuse drugs irresponsibly? Is ChatGPT liable for the content available through its web interface? Is anyone liable for anything anymore in a post-AI world?

ggm•1h ago
This is a useful question to ask in the context of carriers having specific defence. Also, publishers in times past had specific obligations. Common carrier and safe harbour laws.

I have heard it said that many online systems repudiate any obligation to act, lest they be required to act, and thus both acquire cost, and risk, when their enforcement of editorial standards fail: that which they permit, they will be liable for.

themafia•1h ago
The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear.

Ask any model why something is bad, then separately ask why the same thing is good. These tools aren't fit for any purpose other than regurgitating stale reddit conversations.

PeterHolzwarth•1h ago
>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum, and similarly gets group appeasement or what they want to hear from people who are self selected for the forum for being all-in on the topic and Want To Believe, so to speak.

What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, as being, in the end, fundamentally different from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.

andsoitis•1h ago
> What's the actual difference, in that sense, between that forum or subreddit, and an LLM do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.

falkensmaize•1h ago
Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and is capable of replacing human work and authority they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.
EgregiousCube•39m ago
Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

It'd be different if one was signing up to an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

threatofrain•5m ago
I think the problem with that disclaimer is that I can tell you "I suck at math" and then proceed to rain golden information on you while improving every few months.

Now we know ChatGPT doesn't rain sheer gold today, but for too many people ChatGPT is smarter than any friend they know, so even if it's wrong, it doesn't matter. They'll just read the disclaimer as typical corporate form. You can make the disclaimer more shrill but it'll only become more contradictory as ChatGPT becomes better.

PeterHolzwarth•1h ago
I don't yet see how this case is any different from trusting stuff you see on the web in general. What's unique about the ChatGPT angle that is notably different from any number of forums, dark-net forums, reddit etc? I don't mean that there isn't potentially something unique here, but my initial thought is that this is a case of "an unfortunate kid typed questions into a web browser, and got horrible advice."

This seems like a web problem, not a ChatGPT issue specifically.

I feel that some may respond that ChatGPT/LLMs available for chat on the web are specifically worse by virtue of expressing things with some degree of highly inaccurate authority. But again, I feel this represents the web in general, not uniquely ChatGPT/LLMs.

Is there an angle here I am not picking up on, do you think?

xyzzy123•1h ago
The difference is that OpenAI has much deeper pockets.

I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs".

PeterHolzwarth•1h ago
To sue, do you mean? I don't quite understand what you intend to convey. Reddit has moderately deep pockets. A random forum related to drugs doesn't.
xyzzy123•1h ago
Random forums aren't worth suing. Legally, Reddit is not treated as responsible for content that users post under Section 230, i.e., this battle has already been fought.

On the other hand, if I post bad advice on my own website and someone follows it and is harmed, I can be found liable.

OpenAI _might plausibly_ be responsible for certain outputs.

PeterHolzwarth•1h ago
Ah, I see you added an edit of "I think there's also a legal perception that since AI is a new area, anything related to liability, IP, etc might be "up for grabs"."

I thought perhaps that's what you meant. A bit mercenary of a take, and maybe not applicable to this case. On the other hand, given the legal topic is up for grabs, as you note, I'm sure there will be instances of this tactical approach when it comes to lawsuits happening in the future.

stvltvs•1h ago
Those other technologies didn't come with hype about superintelligence that causes people to put too much trust in it.
falkensmaize•1h ago
AI companies are actively marketing their products as highly intelligent superhuman assistants that are on the cusp of replacing humans in every field of knowledge work, including medicine. People who have not read deeply into how LLMs work do not typically understand that this is not true, and is merely marketing.

So when ChatGPT gives you a confident, highly personalized answer to your question and speaks directly to you as a medical professional would, that is going to carry far more weight and authority to uninformed people than a Reddit comment or a blog post.

Animats•1h ago
> highly inaccurate authority.

The presentation style of most LLMs is confident and authoritative, even when totally wrong. That's the problem.

Systems that ingest social media and then return it as authoritative information are doomed to do things like this. We're seeing it in other contexts too: systems that believe all of their prompt history equally, leading to security holes.

squigz•1h ago
The difference is that those other mediums enable a conversation - if someone gives bad advice, you'll often have someone else saying so.
toofy•59m ago
if it doesn’t know medical advice, then it should say “why tf would i know?” instead it confidently responds “oh, you can absolutely do x mg of y mixed with z.”

these companies are simultaneously telling us it’s the greatest thing ever and also never trust it. which is it?

give us all of the money, but also never trust our product.

our product will replace humans in your company, also, our product is dumb af.

subscribe to us because our product has all the answers, fast. also, never trust those answers.

anonzzzies•49m ago
The big issue remains that LLMs cannot know when their response is inaccurate; even after 'reading' a page with the correct info, one can still simply generate wrong data for you -- delivered with authority, since it just read the page and there's a link, so it must be right.
WalterBright•36m ago
Who decides what information is "accurate"?

My trust in what the experts say has declined drastically over the last 10 years.

ironman1478•27m ago
It's a valid concern, but with a doctor giving bad advice there is accountability and there are legal consequences for malpractice. These LLM companies want to be able to act authoritatively without any of the responsibility. They can't have it both ways.
wat10000•48m ago
A major difference is that it’s coming straight from the company. If you get bad advice on a forum, well, the forum just facilitated that interaction, your real beef is with the jackass you talked to. With ChatGPT, the jackass is owned and operated by the company itself.
ninjin•33m ago
The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve information to you as an authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.

returnInfinity•1h ago
Sam and Dario "The society can tolerate a few deaths to AI"
solaris2007•1h ago
"Don't believe everything you read online".
AuryGlenz•59m ago
I skimmed the article, and I had a hard time finding anything that ChatGPT wrote that was all that... bad? It tried to talk him out of what he was doing, told him that it was potentially fatal, etc. I'm not so sure that it outright refusing to answer, and the teen then looking at random forum posts, would have been better, because those very well might not have told him he was potentially going to kill himself. Worse yet, he could have just taken the planned substances without any advice.

Keep in mind this reaction is from someone that doesn't drink and has never touched marijuana.

codebolt•54m ago
I guess you didn't catch this:

> ChatGPT started coaching Sam on how to take drugs, recover from them and plan further binges. It gave him specific doses of illegal substances, and in one chat, it wrote, “Hell yes—let’s go full trippy mode,” before recommending Sam take twice as much cough syrup so he would have stronger hallucinations. The AI tool even recommended playlists to match his drug use.

avadodin•41m ago
swim has never been addicted to or even used illegal drugs but he can attest to the fact that you'd be hard pressed to find content like that in the dark web addict forums swim was browsing.
red75prime•29m ago
LD50 should be at around 1 - 10 liters, I doubt he was trying to gulp half a liter or more.
NewJazz•5m ago
[delayed]
GrowingSideways•44m ago
It's just further evidence capital is replacing our humanity, no biggie
leshokunin•33m ago
People need training about these tools. The other day I ran an uncensored model and asked it for tips on a fun trend I read about to amputate my teeth with toothpicks. It happily complied.

My point is they will gladly oblige with any request. Users don’t understand this.

NewJazz•3m ago
[delayed]