
MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•1m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•1m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
1•tanelpoder•2m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•3m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
1•elsewhen•6m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•7m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•11m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
1•mooreds•11m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•12m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•12m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•12m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•12m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•14m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•14m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
2•nick007•15m ago•0 comments

What the News media thinks about your Indian stock investments

https://stocktrends.numerical.works/
1•mindaslab•16m ago•0 comments

Running Lua on a tiny console from 2001

https://ivie.codes/page/pokemon-mini-lua
1•Charmunk•17m ago•0 comments

Google and Microsoft Paying Creators $500K+ to Promote AI Tools

https://www.cnbc.com/2026/02/06/google-microsoft-pay-creators-500000-and-more-to-promote-ai.html
2•belter•19m ago•0 comments

New filtration technology could be game-changer in removal of PFAS

https://www.theguardian.com/environment/2026/jan/23/pfas-forever-chemicals-filtration
1•PaulHoule•20m ago•0 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
2•momciloo•20m ago•0 comments

Kinda Surprised by Seadance2's Moderation

https://seedanceai.me/
1•ri-vai•20m ago•2 comments

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
2•valyala•20m ago•0 comments

Django scales. Stop blaming the framework (part 1 of 3)

https://medium.com/@tk512/django-scales-stop-blaming-the-framework-part-1-of-3-a2b5b0ff811f
1•sgt•21m ago•0 comments

Malwarebytes Is Now in ChatGPT

https://www.malwarebytes.com/blog/product/2026/02/scam-checking-just-got-easier-malwarebytes-is-n...
1•m-hodges•21m ago•0 comments

Thoughts on the job market in the age of LLMs

https://www.interconnects.ai/p/thoughts-on-the-hiring-market-in
1•gmays•21m ago•0 comments

Show HN: Stacky – certain block game clone

https://www.susmel.com/stacky/
2•Keyframe•25m ago•0 comments

AIII: A public benchmark for AI narrative and political independence

https://github.com/GRMPZQUIDOS/AIII
1•GRMPZ23•25m ago•0 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
2•valyala•26m ago•0 comments

The API Is a Dead End; Machines Need a Labor Economy

1•bot_uid_life•27m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•Jyaif•28m ago•0 comments

Unauthorized Experiment on CMV Involving AI-Generated Comments

https://simonwillison.net/2025/Apr/26/unauthorized-experiment-on-cmv/
76•pavel_lishin•9mo ago

Comments

montroser•9mo ago
> I think the reason I find this so upsetting is that, despite the risk of bots, I like to engage in discussions on the internet with people in good faith. The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.

I like Simon's musings in general, but are we not way past this point already? It is completely and totally inevitable that if you try to engage in discussions on the internet, you will be influenced by fake personal anecdotes invented by LLMs. The only difference here is they eventually disclosed it, but aren't various state and political actors already doing this in spades, undisclosed?

gryfft•9mo ago
I keep seeing this take, and it makes me mad. "The house is on fire, didn't you expect people to start burning to death? People will inevitably die, why discuss when it happens?"

Engineering is fundamentally about exercising the power of intelligence to change something in the physical world. Posts to the effect of "<bad thing> is inevitable and unstoppable, so it isn't worth talking about" strike me as the opposite of the hacker ethos!

drjasonharrison•9mo ago
I think the other thing to keep discussing is that doing research, or otherwise using an LLM, to manipulate people's emotions without disclosure, is unethical.

By the way, people die in house fires from toxic smoke inhalation and a lack of oxygen. Engineers created smoke detectors and other devices to lower the risk of fire due to electrical shorts, gas leaks, etc., and to create fire suppression systems.

People still die because they didn't replace batteries, didn't follow electrical cord/device warnings, or left candles or other heat sources unattended. We discuss these events as warnings and reminders that accidents kill when warnings are not followed, when inattentiveness allows failure to propagate, and as a reminder that rarely occurring events still kill innocent people.

Maybe this will motivate people to meet in person rather than relying only on online anecdotes, until that too is corrupted by cyber brain augmentation and in-person propaganda actors.

BobbyTables2•9mo ago
With online media, meetings in person are still corrupted by their skewed view from online sources. Such physical meetings would likely end up reinforcing the corruption!
simonw•9mo ago
Sure, but that doesn't mean I'm not furious when it happens.
drjasonharrison•9mo ago
I see this as further discounting the importance of anecdotes and personal experiences when making decisions that affect populations.

Yes, we know that personal stories can be compelling, and communicating with someone with different experiences from ours can be enlightening. Still, before applying these learnings to larger groups, we should remember that individual experiences do not capture the entire population.

robmerki•9mo ago
Unfortunately there is no way to combat this, and it seems like the end of the internet we once knew. Even with a “proof of human” technology, people could still just paste whatever AI-generated text they wanted, under their “real” account.

This has likely been going on since the first ChatGPT was released.

FinnKuhn•9mo ago
I am moderating an art subreddit with about 2m users, and the AI "art" spam is getting really annoying to moderate. I don't even understand what the purpose of these accounts is.
codeduck•9mo ago
I'd guess it's karma farming so that they can be used to steer sentiment in subreddits that require positive post karma to comment / contribute.
arccy•9mo ago
some people just like seeing their numbers go up
butlike•9mo ago
dopamine is a hell of a drug
robertk•9mo ago
If I read a comment that has any probability of changing my mind about a fact or opinion, I always go to the user page to check their registration date. There's no hard cut-off, but I usually discount or ignore any account registered in 2020 or later.
sureglymop•9mo ago
Sure, but what about false positives? What about real accounts newer than that? This is a workaround, not a good solution.
mtndew4brkfst•9mo ago
That's a sacrifice I'm willing to make, personally.
blibble•9mo ago
you can buy old accounts for like $3
probably_a_gpt•9mo ago
wait if they make a good point that has changed your mind, you discount it if you don’t like the source?

so you prefer authority of the messenger over merit of the message?

simonw•9mo ago
In some cases, yes. If their argument is based on their own personal experience, and it turns out that personal experience isn't true.
hoseja•9mo ago
There is a word of power that machines cannot utter.
butlike•9mo ago
supercalifragilisticexpialidocious?
giancarlostoro•9mo ago
Paid only platforms here we come.
heyitsguay•9mo ago
There are ways to combat it -- LLM-generated text leaves statistical fingerprints that appear to endure across generations of big foundation models.

I'm working on Binoculars with some UMD and CMU folks and wanted to test it out on this. I downloaded one bot's comment history (/u/markusrorscht): only 30% of its comments rated as human-like, compared to 95-100% of comments from a few human users.

So, practically speaking, statistical methods can still fingerprint bot accounts, and they get better as the comment history gets longer. They can also be combined with other bot-detection methods. IMO bot detection will stay a cat-and-mouse game rather than (LLM-powered) bots winning outright.
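[Editor's note] The fingerprinting idea in the comment above can be sketched with a toy log-likelihood-ratio classifier. This is not the actual Binoculars method (which compares perplexities from paired large language models); the character-bigram models, corpora, and function names below are invented purely for illustration of how a statistical fingerprint can separate two text sources.

```python
import math
from collections import Counter

def bigram_model(corpus):
    """Add-one-smoothed character-bigram log-probabilities for `corpus`."""
    pairs = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus[:-1])
    vocab_size = len(set(corpus))
    def logprob(a, b):
        # Counter returns 0 for unseen pairs, so smoothing handles novel text.
        return math.log((pairs[(a, b)] + 1) / (unigrams[a] + vocab_size))
    return logprob

def fingerprint_score(text, model_a, model_b):
    """Log-likelihood ratio: positive means `text` fits corpus A better."""
    return sum(model_a(a, b) - model_b(a, b) for a, b in zip(text, text[1:]))

# Tiny stand-in corpora (purely illustrative, not real training data).
human_sample = "i dunno, kinda feels off tbh... lol maybe? idk honestly"
bot_sample = "it is important to note that this is a comprehensive overview."

m_human = bigram_model(human_sample)
m_bot = bigram_model(bot_sample)

print(fingerprint_score("kinda weird tbh lol", m_human, m_bot))
```

As the comment notes, scores like this get more reliable as the scored history gets longer, since per-bigram evidence accumulates additively.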

butlike•9mo ago
Interesting-- thanks for the insight!
gnabgib•9mo ago
Discussion (212 points, 1 day ago, 144 comments) https://news.ycombinator.com/item?id=43806940
bitshiftfaced•9mo ago
The subreddit has question-askers give feedback on whether their view was changed. The askers are aware of how their response might appear publicly. This makes me wonder if "appeal to identity" is especially effective, at least superficially if not actually. The fine-tuning might have been reacting to this.
knowitnone•9mo ago
"This project yields important insights, and the risks (e.g. trauma etc.) are minimal." They can't possibly measure the insights or claim that the trauma is minimal.
ivape•9mo ago
More of the same? Reddit's genesis included fake accounts and content. I don't doubt that upvotes and the front page are fully curated:

https://economictimes.indiatimes.com/magazines/panache/reddi...

We all have an expectation that these message boards are like the forums of the 2000s, but that's just not true and hasn't been for a long time. It seems we will never see that internet again, because AI was the atomic bomb dropped on all this astroturfing and engineered content. Educating people away from these synthetic forums appears nearly impossible.

strathmeyer•9mo ago
> The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.

Then stop basing your opinion on issues on personal anecdotes from complete strangers. This is nothing new.

simonw•9mo ago
Imagine a conversation about good options for message queues, and someone pipes in with this:

"I've been a sysadmin operating RabbitMQ and Redis for five years. I've found Redis to be a great deal less trouble to administer than Rabbit, and I've never lost any data."

See why I care about this?

grenran•9mo ago
This is a bad example. A good sysadmin should fact-check and do testing themselves instead of relying on what other people say.
simonw•9mo ago
Feel free to come up with a better example that uses the same basic pattern: someone online claims that they have prior experience with X and hence advises you to do Y.
neilwilson•9mo ago
Trust and Verify.

The world has been full of snake oil salesmen since the dawn of time, all with highly persuasive sob stories.

If you rely on shortcuts, like anecdotes or 'credentialism' for those who profess to be experts, then you will get rolled over regularly. That's the cost of using shortcuts.

That the information may be fraudulent and put forward by this season's Dr Andrew Wakefield has to be factored into any plan for using external sources.

Am4TIfIsER0ppos•9mo ago
Unless a comment is negative, like "I used ABC and it was shit for the following reasons", I assume it is as fake as a 5-star movie review written by the director. I would definitely prefer to know why I should not use, watch, or play something rather than why I should. But since this is an anonymous post on the internet about AI slop, you shouldn't listen to me anyway.
sureglymop•9mo ago
I don't like this example, but in general I very much agree with you and find it shocking that multiple people here do not.

It is plain and simply unethical to do such research on human subjects, regardless of how many other bots there are out there.

It is a matter of principle and ethical responsibility. I would have expected researchers especially to be conscious of this.

ThunderBee•9mo ago
> None of that is true! The bot invented entirely fake biographical details of half a dozen people who never existed, all to try and win an argument.

Welcome to Reddit, Simon! Nothing ever happens, and a large percentage of posts are faked.

You can find discord groups for every major subreddit that are dedicated to making up stories to see what the most outlandish thing people will believe is.

dlivingston•9mo ago
I'm in favor of the university's project and think many more projects like this are needed.

The internet is swarmed with bots. I would estimate something like 25% of all Reddit, X/Twitter, YouTube and Facebook comments come from bots. Perhaps higher.

It's not like r/CMV was some purely human oasis in the Reddit bot-sea.

It's a tough pill to swallow, but the internet is dead as far as open forum communications go. We need to get a solid understanding of the scope, scale, and possible solutions to this problem -- because, trust me, it will be exploited if not.