frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•6m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•8m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•9m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•22m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•24m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•27m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•35m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•36m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•38m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•38m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•41m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•41m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•46m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•48m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•48m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•49m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•51m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•54m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•56m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
2•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•1 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Wikipedia: Signs of AI Writing

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
61•FergusArgyll•6mo ago

Comments

constantcrying•6mo ago
I think this is actually a bad idea, especially the language and tone part.

You cannot detect AI writing by language and tone; all LLMs are trained and prompted to write in a very particular style, and you can just tell them to write in a different style and they will. What is worse, the default LLM writing style is actually quite common: if you read through that list, you will also see that many of these are very much human errors.

Trying to detect what is and isn't LLM-generated text will only lead to people chasing ghosts: either accusing innocent people or putting faith in text that is the result of more careful prompting.

rgoulter•6mo ago
> You can just tell them to write in a different style and they will.

I'm guessing the priorities are to have contributions which stick to Wikipedia's guidelines. The LLM tendencies cited are in violation of those.

I don't think the game is strictly "we only want human contributions", where you can imagine a sophisticated LLM-user crafting a reasonable contribution which doesn't get rejected.

The "accidental disclosure" section indicates that some of these bad contributions are just very low effort.

supriyo-biswas•6mo ago
Not in this particular case; the point of Wikipedia is to surface objective and factual information (we could debate what "objective" and "factual" mean, but that's a different issue).

The issue with LLMs is that they try to insert a lot of judgement about the subject matter without quantification or comparison. A lot of this is already covered by Wikipedia's other rules, such as those about weasel words, verifiability, etc., but it is useful to have rules that specifically detect AI content and, by proxy, also take out all the bad human writing along with it.

For example, when asked about person X who discovered a method to do Y, an LLM may try to write "As a testament to X's ingenuity, he also discovered method Y, which helps achieve Z in a rapid and effective manner"; it doesn't really matter whether this was written by an LLM, as the style is unsuited for Wikipedia. Instead, one may have to quantify it by writing "He/she discovered method Y, a method to do Z, which was regarded as an improvement over historical methods such as P and Q", with references to X discovering Y and to research that cites that improvement.

LLMs could adopt that latter writing style and cite references, but the issue is that a large market simply wants to use them to decompress documents in order to satisfy the intricacies of the social structures people are embedded in. As an example, someone may want to prove to their manager that they produced a well-researched report; since the manager would have to conduct the same research to know whether it meets their bar, they instead use document length as a proxy. LLMs meet a lot of such use cases, and it would be difficult to take away this "feature".

nunez•6mo ago
This Wikipedia entry covers more than tone and style.

There are small things that LLM-generated content will almost always do. The em dash used to be one of them; transition-word overuse is another; being overly verbose by default is yet another.

That said, I posit that it will get increasingly difficult to keep this page up to date as models get smarter about how they write.

hackermeows•6mo ago
Cool, I just include this in the prompt when writing for the wiki and ask the LLM specifically not to write like this. What am I missing?
serialNumber•6mo ago
The fact that it’s still highly likely to write like this and hallucinate information.
yfvcdycdybguibg•6mo ago
Then the content will fit right in with the rest.
wronex•6mo ago
This is purely anecdotal, but I think I've seen ChatGPT insert special space characters other than the normal space. It also likes to use the different dash characters (en dash, em dash and hyphen) more often than they would appear in normal text.
nialse•6mo ago
Adding to the anecdata: ChatGPT can produce text with a variety of unusual Unicode characters. Possibly for detection.
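
For illustration, here is a minimal Python sketch of the kind of check these two comments describe: scanning a piece of text for unusual Unicode whitespace and dash characters. The character list is an assumption made for the example, not an official or reliable AI-detection signal.

    # Flag unusual Unicode whitespace and dash characters in a text.
    # The character list is illustrative only, not a proven AI signal.
    UNUSUAL_CHARS = {
        "\u00a0": "no-break space",
        "\u2009": "thin space",
        "\u202f": "narrow no-break space",
        "\u2013": "en dash",
        "\u2014": "em dash",
    }

    def flag_unusual_characters(text: str) -> list[tuple[int, str]]:
        """Return (position, description) pairs for flagged characters."""
        return [(i, UNUSUAL_CHARS[ch]) for i, ch in enumerate(text) if ch in UNUSUAL_CHARS]

    print(flag_unusual_characters("A thoughtful point\u2014with a\u00a0twist."))
    # [(18, 'em dash'), (25, 'no-break space')]

As both comments note, a hit from a check like this is weak evidence at best, since humans and ordinary word processors insert these characters too.
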
mnaimd•6mo ago
There are two major problems with Wikipedia doing this:

1. False positives: phrases like "on the other hand" or "not only X but Y" are definitely used by humans. You can't simply accuse others of using AI just by checking whether certain phrases appear in a text (see the sketch after this list). AI itself is trained on text written by humans, so the reason it uses those phrases is that they are more common in its training set.

2. By publishing a set of signs of what seems like AI, they give people the opportunity to just tell the AI which phrases NOT to use. Anyone who prompts an AI can use this list to make its output more human-like, which is, ironically, exactly what Wikipedia itself was trying to stop.
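
To make the false-positive concern concrete, here is a purely illustrative Python sketch of the naive phrase-based check the first point warns against. The phrase list is made up for the example, and a perfectly ordinary human sentence trips it.

    # A naive phrase-based "AI detector" of the kind the comment warns against.
    # The phrase list is a made-up example; matching on it flags ordinary
    # human writing just as readily as LLM output.
    SUSPECT_PHRASES = [
        "on the other hand",
        "not only",
        "in conclusion",
        "delve into",
    ]

    def naive_ai_score(text: str) -> int:
        """Count how many 'suspect' phrases appear; higher supposedly means 'more AI-like'."""
        lowered = text.lower()
        return sum(lowered.count(phrase) for phrase in SUSPECT_PHRASES)

    # A human-written sentence trips the check just as easily:
    print(naive_ai_score("On the other hand, Darwin did not only study finches."))  # 2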

thunderfork•6mo ago
>There are two major problems with Wikipedia doing this:

Doing what, exactly? This is a descriptive, informational page, not a policy.

FergusArgyll•6mo ago
I think a lot of people are missing a crucial point here: the main problem with LLMs (as far as Wikipedia is concerned) is that these ways of writing are biased, weasel-worded, full of puffery, etc., which Wikipedia doesn't want regardless of who wrote it.

Technically speaking, if an LLM can write Wikipedia-style prose and source it correctly, that wouldn't be a problem (imo).

tolerance•6mo ago
I sniff that guidelines like this are going to disenfranchise the language of marketing copy and other consumer-orientated lingo.

The advertisement wave of the future will be similar to when Nike and Virgil Abloh were putting out sneakers that said "SHOE" on them. Or something like that.

The working title of this trend is "Bruxism".

nunez•6mo ago
I'm really glad that this exists. Keeping this up will be challenging, but nobody loves a good challenge more than Wikipedia editors.