frontpage.

Dexterous robotic hands: 2009 – 2014 – 2025

https://old.reddit.com/r/robotics/comments/1qp7z15/dexterous_robotic_hands_2009_2014_2025/
1•gmays•3m ago•0 comments

Interop 2025: A Year of Convergence

https://webkit.org/blog/17808/interop-2025-review/
1•ksec•12m ago•1 comments

JobArena – Human Intuition vs. Artificial Intelligence

https://www.jobarena.ai/
1•84634E1A607A•16m ago•0 comments

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•20m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•22m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•31m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•35m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•36m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•42m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•42m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•43m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•44m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•49m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•1h ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•1h ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•1h ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
2•alexjplant•1h ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•1h ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•1h ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•4 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long"(Sonnet73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•2h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•2h ago•1 comments

Wikipedia Seems Pretty Worried About AI

https://nymag.com/intelligencer/article/wikipedia-contributors-are-worried-about-ai-scraping.html
33•stared•3mo ago

Comments

walterbell•3mo ago
Why do AI bots scrape Wikipedia pages instead of downloading the published full database?
nness•3mo ago
My guess is that the scraping tools are specialized for the web, and creating per-application interfaces isn't cost effective (although you could argue that scraping Wikipedia effectively is definitely worth the effort; then again, given that it's all text content with a robust taxonomy/hierarchy, it might be a non-issue).

My other thought is that you don't want a link showing you scraped anything... and faking browser traffic might draw less attention.

fzeroracer•3mo ago
The rationale I've seen elsewhere is that it saves money. It means you don't need to go to the effort of downloading, storing and updating your copy of the database. You can offload all of the externalities onto whatever site you're scraping.
danielbln•3mo ago
Man, these companies have bazillions in funding and they can't keep some $100 DB in a closet for that. Smh
solarkraft•3mo ago
They could. There’s just no upside in doing so.
walterbell•3mo ago
If they destroy the relatively high-trust internet, the low-trust replacement will require a digital ID for every client, with non-neutral traffic pricing that varies by {business digital ID, content}. No more free geese, even to check whether there is a golden goose worthy of payment.

https://utcc.utoronto.ca/~cks/space/blog/web/WeShouldBlockFo...

SideburnsOfDoom•3mo ago
Sheer laziness?
ectospheno•3mo ago
Money. One requires you to use your hardware and your developers. The other way doesn’t.
jjtheblunt•3mo ago
I tried doing that in summer 2019, and the downloaded formats were at that time proprietary and depended on decoders that felt like a tail-recursive rabbit hole.

In contrast, letting their servers render the content with their proprietary tools yields the sought data, so scraping might still be the pragmatic choice.

NoPicklez•3mo ago
Because that would probably require extra work. Why do that if the crawler already scrapes it in the first place?
twosdai•3mo ago
It's possible that they don't know. I literally didn't know there was a full downloadable db until right now.
walterbell•3mo ago
Even on offline phones! https://kiwix.org
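
For reference, a minimal sketch of what "downloading the published full database" can look like in practice: streaming page titles straight from the pages-articles dump on dumps.wikimedia.org with nothing beyond the Python standard library. The dump URL pattern is the publicly documented one, but the function name and the small title limit are illustrative assumptions, not any particular bot's pipeline.

    # Sketch: read titles from the published Wikipedia dump instead of
    # scraping article pages. Assumes the standard dumps.wikimedia.org
    # layout; standard library only.
    import bz2
    import urllib.request
    import xml.etree.ElementTree as ET

    DUMP_URL = ("https://dumps.wikimedia.org/enwiki/latest/"
                "enwiki-latest-pages-articles.xml.bz2")

    def iter_titles(url=DUMP_URL, limit=10):
        # Stream and decompress on the fly; never hold the multi-gigabyte file in memory.
        with urllib.request.urlopen(url) as resp, bz2.open(resp) as xml_stream:
            seen = 0
            for _, elem in ET.iterparse(xml_stream):
                # Tags carry the MediaWiki export namespace, e.g.
                # "{http://www.mediawiki.org/xml/export-0.11/}title".
                if elem.tag.endswith("}title"):
                    yield elem.text
                    seen += 1
                    if seen >= limit:
                        return
                elem.clear()  # drop parsed elements as we go

    if __name__ == "__main__":
        for title in iter_titles():
            print(title)

The same content is also published in other formats (including the ZIM archives that Kiwix serves offline), so the XML route above is only one of several ways to avoid per-page requests.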
tony-vlcek•3mo ago
If the bottom line is donations - as the article states - why push for getting AI companies to link people to Wikipedia instead of pushing for the companies to donate?
flohofwoe•3mo ago
Because many small donations from individuals are better for Wikipedia's independence than a few big ones from corporations? Eggs vs. baskets, etc.
noir_lord•3mo ago
Case in point: Mozilla.

I love Firefox, I don't love Mozilla - I've no way to donate specifically to Firefox.

janwl•3mo ago
https://archive.is/XGrVL
nkotov•3mo ago
Seems related to another article [1] I've seen recently, which found that e-commerce traffic is also mostly bots.

[1] https://joindatacops.com/resources/how-73-of-your-e-commerce...

ChrisArchitect•3mo ago
[dupe] https://news.ycombinator.com/item?id=45651485
pflenker•3mo ago
They should be. Articles have gotten longer and longer over time; getting an AI summary instead is the logical consequence.
moritzwarhier•3mo ago
Wikipedia is not a company.

They should mainly be worried about their reliability and trustworthiness. They should not worry about article length, as long as the length comes from exhaustiveness and important content is still accessible.

Serving perfectly digestible, easy-to-read bits of information must not be the primary goal of an encyclopedia.

By the way, "AI summaries" routinely contain misrepresentations, misleading sentences, or just plain wrong information.

Wikipedia is (rightly) worried about AI slop.

The reason is that LLMs cannot "create" reliable information about the factual world, and they can only evaluate information based on what "sounds plausible" (or what matches their training priorities).

You can get an AI summary with one of the 100 buttons for this that are built into every consumer-facing product, including common OS GUIs and Web browsers.

Or "ask ChatGPT" for one.