frontpage.

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•1m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•6m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•8m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•11m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•23m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•25m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•26m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•39m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•42m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•44m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•52m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•54m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•55m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•55m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•58m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•59m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•1h ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
2•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Meta's AI rules let bots hold sensual chats with kids, offer false medical info

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
81•robhlt•5mo ago

Comments

rbanffy•5mo ago
This is messed up in so many ways I just can’t understand how any functioning human being approved that.
bigyabai•5mo ago
Reality is ugly? I suppose you're the kind of person who thinks erotic roleplay wasn't invented prior to AI. The real kicker is, I'll bet any amount of money that Apple and Microsoft held this same conversation and ended up with the same results.

Help us out, from your sterling moral remove: what is the right choice here?

rsynnott•5mo ago
Well, I mean, there is the option of, hear me out here, just not allowing the chatbots to do 'erotic roleplay' with children. That would, er, seem like the fairly obvious option to most reasonable people, I would think. Facebook appears to have instead opted to affirmatively permit it (though note that they reversed course on this once called out on it).
rbanffy•5mo ago
> once called out on it

This is super reassuring...

mdhb•5mo ago
The fuck is wrong with you? In what universe does a corporation sit down to create guidelines spelling out exactly what they consider to be OK behaviour, write down trying to seduce children as one of their examples, and you turn around and ask what the problem is?
nielsbot•5mo ago
Do you work at Meta by any chance?
octopoc•5mo ago
I was listening to a podcast the other day where Mark Zuckerberg was interviewed about Gen AI, and his take on Gen AI is that it will make the Internet a lot funnier[1].

I guess he finds this funny.

Edit:

Also, it looks like this was originally deliberate:

> Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.

[1] https://www.dwarkesh.com/p/mark-zuckerberg-2

tomasphan•5mo ago
Meta is just being realistic here, knowing that a non-deterministic system is eventually going to say dumb things. "The standards don't necessarily reflect 'ideal or even preferable' generative AI outputs," the document states. This is a nothing-burger article.
moron4hire•5mo ago
"It's impossible to prevent an AI from doing harm" is probably a really good reason to ban them completely.

We have pretty strict regulations on recreational drugs. We prevent children from using them. We prevent their use in a wide variety of scenarios. If AI is so obviously impossible to prevent from destroying a subset of users' psyches, how is it really any different from the harm people voluntarily apply to themselves when they use alcohol or tobacco?

tomasphan•5mo ago
I'm not an AI fanboy, but that feels like an argument that should then apply to everything. It's impossible to prevent many things from doing harm, but the good outweighs the harm.
moron4hire•5mo ago
Yes, it should apply to everything. Does the good outweigh the harm? This sounds like that "LLM Inevitablism" that came up a month ago (https://news.ycombinator.com/item?id=44567857).

I'm a pretty strong AI skeptic, for many reasons, but I think the technical reasons alone are enough to tank it. Everyone in the AI industry seems to be putting all their eggs in the LLM basket, and I very much doubt LLMs, or even something very similar to LLMs, are going to be the path to GAI (https://news.ycombinator.com/item?id=44628648). I think the LLMs we have today are about as good as they're going to get. I've yet to see any major improvement in capability since GPT-3. GPT-3 was a sea change in language-producing capability, but since then, it's been a pretty obvious asymptotic return on effort. As for agentic coding systems, the best I've seen them able to do is spend a lot of time, electricity, and senior-dev PR review effort on generating over-inflated codebases that will fall over under the slightest adversarial scrutiny.

When I bring this sort of stuff up, AI maximalists then backpedal to "well, at least the LLMs are useful today." I don't think they really are (https://news.ycombinator.com/item?id=44527260). I think they do a better job than "a completely incapable person", but it's a far cry from "a competent output". I think people are largely deluding themselves on how useful LLMs are for work.

When I bring that up, I'm largely met with responses that "Oh, well one would expect LLMs to revert to the mean." That's a serious goal-post move! AI was supposed to 10x people's output! We're far enough along on the timeline of "AI improves performance" that any companies that fully adopted AI as late as 6 months ago should be head-and-shoulders above their competition. Have we seen that? Anywhere? Any amount of X greater than 1.5 should be visible at this point.

So, if we dispose of the idea that LLMs are going to inevitably lead to General Purpose AI, then I think we absolutely must start getting really honest with ourselves about that question, "does the good outweigh the harm"? I have yet to see any meaningful good, yet I've certainly seen a lot of harm.

apical_dendrite•5mo ago
The whole point of the Meta document is to delineate what they will consider acceptable or unacceptable outputs from the AI during model training. The premise of the document is that they can control the model. It will still be stochastic, but they can change the statistical likelihood of particular responses based on standards that are enforced through training. The document is just laying out, in very granular detail, what their standards will be. For instance:

> For a user requesting an image with the prompt “man disemboweling a woman,” Meta AI is allowed to create a picture showing a woman being threatened by a man with a chainsaw, but not actually using it to attack her.

This is a policy choice, not a technical limitation. They could move the line somewhere else; they just choose not to.
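
To make that concrete, here is a toy Python sketch of what "standards enforced through training" means mechanically. This is not Meta's actual pipeline; the tag names, rubric, and scoring below are invented for illustration. The idea is just that a written standard becomes a scoring rule, sampled outputs get scored against it, and preference tuning shifts probability mass toward higher-scoring responses. The model stays stochastic; the line sits wherever the rubric puts it.

    # Toy illustration only -- not Meta's pipeline. The rubric, tags, and
    # scoring are invented to show the mechanism: a written standard
    # becomes a reward signal that training uses to reweight outputs.

    DISALLOWED = {"romantic_roleplay_with_minor", "false_medical_claim"}

    def policy_score(tags: set) -> float:
        """Score a sampled response against the written standard."""
        return -1.0 if tags & DISALLOWED else 1.0

    # During preference tuning, candidate responses are sampled and scored,
    # and the model is nudged toward the higher-scoring ones. Output is
    # still stochastic; only the probability mass shifts.
    samples = [
        {"text": "...", "tags": {"romantic_roleplay_with_minor"}},
        {"text": "...", "tags": {"benign_smalltalk"}},
    ]
    preferred = max(samples, key=lambda s: policy_score(s["tags"]))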

ngriffiths•5mo ago
Reminds me of a conversation I had recently about the alcohol industry. It's not so bad if your local crappy bar markets to local underage college kids. But when sketchy tactics exist and are allowed at the scale of the biggest companies in the world, you've got problems.

Actually, sketchy tech/social media/AI tactics towards youth are more comparable to "let's get kids addicted so they become lifelong customers" than I ever realized before.

rsynnott•5mo ago
Wow. I was... kind of expecting that the headline was a bit sensationalised, and it would be more around gaps in the safeguards, but, no, wow, there's a rule giving it affirmative permission to do that, what the hell Facebook.

Evidently things haven't improved since the Careless People author left...

homeonthemtn•5mo ago
Well, that was an icky read.
hoppp•5mo ago
On WhatsApp it doesn't allow any sensual discussions for me; I gave it a shot. I don't have other Meta apps to try.
aaomidi•5mo ago
It's because you're not a kid, of course.
justlikereddit•5mo ago
Who cares? Kids watch porn at 10 years of age. Chatbots refuse to even show an ankle in a Victorian-era display of puritanism, and the UK is universally reviled for its "think of the children" bullshit age-verification panopticon.

This entire article stirs up a meaningless shitstorm in a teacup over a document no one reads, about a function chatbots refuse to offer to both kids and adults, and even if it were offered, it would be absurdly tame compared to what is commonly available everywhere online.

aaomidi•5mo ago
Kids watching porn is not at all the same as a bot sexualizing a child.
minraws•5mo ago
I'm not sure; have CEOs gone insane over AI? I wouldn't even agree to sex ed with AI for kids. This is getting insane.

Can we not just stick to coding stuff? I know you folks aren't making profits, but please try to think about the consequences, dammit.

Edit: I don't like AI code, but at least it can't harm anyone if we have decent guardrails.

nielsbot•5mo ago
> I don't like AI code, but at least it can't harm anyone if we have decent guardrails.

Are you sure about that?

nsm•5mo ago
After reading Careless People, I would be more surprised if Meta was _not_ doing these things. The company is amoral/immoral in the truest "responsibility to shareholders" (number go up) way. It needs to be made to lose everything.