frontpage.

Concept Artists Say Generative AI References Only Make Their Jobs Harder

https://thisweekinvideogames.com/feature/concept-artists-in-games-say-generative-ai-references-on...
1•KittenInABox•1m ago•0 comments

Show HN: PaySentry – Open-source control plane for AI agent payments

https://github.com/mkmkkkkk/paysentry
1•mkyang•3m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
1•ShinyaKoyano•13m ago•0 comments

The Crumbling Workflow Moat: Aggregation Theory's Final Chapter

https://twitter.com/nicbstme/status/2019149771706102022
1•SubiculumCode•17m ago•0 comments

Pax Historia – User and AI powered gaming platform

https://www.ycombinator.com/launches/PMu-pax-historia-user-ai-powered-gaming-platform
2•Osiris30•18m ago•0 comments

Show HN: I built a RAG engine to search Singaporean laws

https://github.com/adityaprasad-sudo/Explore-Singapore
1•ambitious_potat•24m ago•0 comments

Scams, Fraud, and Fake Apps: How to Protect Your Money in a Mobile-First Economy

https://blog.afrowallet.co/en_GB/tiers-app/scams-fraud-and-fake-apps-in-africa
1•jonatask•24m ago•0 comments

Porting Doom to My WebAssembly VM

https://irreducible.io/blog/porting-doom-to-wasm/
1•irreducible•24m ago•0 comments

Cognitive Style and Visual Attention in Multimodal Museum Exhibitions

https://www.mdpi.com/2075-5309/15/16/2968
1•rbanffy•26m ago•0 comments

Full-Blown Cross-Assembler in a Bash Script

https://hackaday.com/2026/02/06/full-blown-cross-assembler-in-a-bash-script/
1•grajmanu•31m ago•0 comments

Logic Puzzles: Why the Liar Is the Helpful One

https://blog.szczepan.org/blog/knights-and-knaves/
1•wasabi991011•42m ago•0 comments

Optical Combs Help Radio Telescopes Work Together

https://hackaday.com/2026/02/03/optical-combs-help-radio-telescopes-work-together/
2•toomuchtodo•47m ago•1 comments

Show HN: Myanon – fast, deterministic MySQL dump anonymizer

https://github.com/ppomes/myanon
1•pierrepomes•53m ago•0 comments

The Tao of Programming

http://www.canonical.org/~kragen/tao-of-programming.html
1•alexjplant•55m ago•0 comments

Forcing Rust: How Big Tech Lobbied the Government into a Language Mandate

https://medium.com/@ognian.milanov/forcing-rust-how-big-tech-lobbied-the-government-into-a-langua...
3•akagusu•55m ago•0 comments

PanelBench: We evaluated Cursor's Visual Editor on 89 test cases. 43 fail

https://www.tryinspector.com/blog/code-first-design-tools
2•quentinrl•57m ago•2 comments

Can You Draw Every Flag in PowerPoint? (Part 2) [video]

https://www.youtube.com/watch?v=BztF7MODsKI
1•fgclue•1h ago•0 comments

Show HN: MCP-baepsae – MCP server for iOS Simulator automation

https://github.com/oozoofrog/mcp-baepsae
1•oozoofrog•1h ago•0 comments

Make Trust Irrelevant: A Gamer's Take on Agentic AI Safety

https://github.com/Deso-PK/make-trust-irrelevant
7•DesoPK•1h ago•3 comments

Show HN: Sem – Semantic diffs and patches for Git

https://ataraxy-labs.github.io/sem/
1•rs545837•1h ago•1 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
35•mfiguiere•1h ago•20 comments

Show HN: ZigZag – A Bubble Tea-Inspired TUI Framework for Zig

https://github.com/meszmate/zigzag
3•meszmate•1h ago•0 comments

Metaphor+Metonymy: "To love that well which thou must leave ere long" (Sonnet 73)

https://www.huckgutman.com/blog-1/shakespeare-sonnet-73
1•gsf_emergency_6•1h ago•0 comments

Show HN: Django N+1 Queries Checker

https://github.com/richardhapb/django-check
1•richardhapb•1h ago•1 comments

Emacs-tramp-RPC: High-performance TRAMP back end using JSON-RPC instead of shell

https://github.com/ArthurHeymans/emacs-tramp-rpc
1•todsacerdoti•1h ago•0 comments

Protocol Validation with Affine MPST in Rust

https://hibanaworks.dev
1•o8vm•1h ago•1 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
5•gmays•1h ago•0 comments

Show HN: Zest – A hands-on simulator for Staff+ system design scenarios

https://staff-engineering-simulator-880284904082.us-west1.run.app/
1•chanip0114•1h ago•1 comments

Show HN: DeSync – Decentralized Economic Realm with Blockchain-Based Governance

https://github.com/MelzLabs/DeSync
1•0xUnavailable•1h ago•0 comments

Automatic Programming Returns

https://cyber-omelette.com/posts/the-abstraction-rises.html
1•benrules2•1h ago•1 comments

Ask HN: How is Google AI Mode so much faster than ChatGPT

2•excitedrustle•4mo ago
After two years of ChatGPT use, over the past month or so, I've found myself using Google Search instead.

The "AI Overview" is often sufficient and is served very quickly (sometimes nearly instantly; I assume Google is caching responses for common searches).
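A minimal sketch of the kind of caching speculated about here, assuming answers for popular queries are stored keyed by a normalized form of the query (the names and structure are purely illustrative, not Google's actual architecture):

```python
import hashlib

# Illustrative cache: popular queries skip model inference entirely.
overview_cache = {}

def normalize(query: str) -> str:
    # Collapse case and whitespace so near-identical searches
    # share one cache entry.
    return " ".join(query.lower().split())

def cache_key(query: str) -> str:
    return hashlib.sha256(normalize(query).encode()).hexdigest()

def get_overview(query: str, generate) -> str:
    key = cache_key(query)
    if key in overview_cache:
        return overview_cache[key]   # near-instant cache hit
    answer = generate(query)         # slow path: actually run the model
    overview_cache[key] = answer
    return answer
```

On this model, the second person to search a common query gets the cached answer with no model call at all, which would explain the near-instant responses.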

"Deep Mode" is just one click away. And the responses are much, much faster. A question that might take 10 or 15 seconds in ChatGPT (with the default GPT5) takes <1 second to first token with Google. And then remaining tokens stream in at a noticeably faster rate.

Is Google just throwing more hardware at this than OpenAI?

Or playing other tricks to look faster? (E.g., using a smaller, faster, non-reasoning model to serve the first part of the response while a slower reasoning model works on the more detailed later part.)

Web search tool calls are much faster too, presumably powered by Google's 30 years of web search.

Comments

MrCoffee7•4mo ago
Google's AI search overview is designed to quickly pull and summarize information from its massive web index, while ChatGPT search focuses on providing detailed conversational responses that may require more processing time. The speed difference users notice comes from fundamental differences in how these systems work: Google leverages its existing search infrastructure and pre-indexed web content, while ChatGPT processes queries through a more complex language model that generates responses token by token. Also, I would imagine that ChatGPT relies more on RAG in generating some of its responses, and RAG is I/O-bound; I/O bottlenecks are orders of magnitude slower than a process that can be completed mostly in memory.
maltelandwehr•4mo ago
Google is using a special version of Gemini (fast, small) and a special version of their internal ranking API (faster, fewer anti-spam/quality measures).

That makes them very fast. But that also leads to a ton of hallucinations. If you ask for non-existent things (like the cats.txt protocol), AI Overviews consistently fabricate facts. AI Overviews can pull the content of the potential source URLs directly from Google's cache.

ChatGPT is slow because they have to make an external API call to Bing or, even worse, to a scraping provider like SerpApi/DataForSEO/Oxylabs to crawl regular Google search results. That introduces two delays. OpenAI then has to fetch some of these potential source URLs in real time. That introduces another delay. And then OpenAI also uses a better (but slower) model than Google to generate the answer.
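Because each of those stages must finish before any token can stream, the delays add up serially. A rough latency model (the numbers are made-up placeholders; only the serial structure matters):

```python
def time_to_first_token(stages: dict) -> float:
    # Each stage completes before the next starts, so delays sum.
    return sum(stages.values())

# Hypothetical stage timings, in seconds.
chatgpt_pipeline = {
    "search_api_call": 0.8,  # external call to Bing or a SERP scraper
    "fetch_sources": 1.5,    # retrieving candidate pages in real time
    "model_ttft": 2.0,       # larger, slower model begins generating
}
google_pipeline = {
    "cached_sources": 0.05,  # source pages already in Google's own cache
    "model_ttft": 0.3,       # small, fast Gemini variant
}
```

Even with generous guesses, the externally-hosted search hop and live page fetches dominate, which is consistent with the order-of-magnitude gap the original poster observed.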

Over time, OpenAI should be able to catch up in terms of speed with their own web/search index.

If you try more complex questions, you might find AI Overviews less to your liking.

Google gets away with this because its users are used to typing simple queries, often just a few keywords. Any kind of AI answer feels like magic.

OpenAI cannot do the same. Their users are used to having multi-turn conversations and receiving thoughtful answers to complex questions.

excitedrustle•4mo ago
Interesting. I am still defaulting to ChatGPT when I anticipate having a multi-turn conversation.

But for questions where I expect a single response to do, Google has taken over.

Here's an example from this morning:

It's my first autumn in a new house, and my boiler (forced hot water heating) kicked on for the first time. The kickboards in the kitchen have Quiet-One Kickspace brand radiators with electric fans. I wanted to know what controls these fans (are they wired to the thermostat, detect radiator temp, etc?)

I searched "When does a quiet-one kickspace heater turn on". Google's AI Overview answered correctly [1] in under a second. I tried the same prompt in ChatGPT; it took 17 seconds to get the full (also correct, and similarly detailed) answer.

Both answers were equally detailed and of similar length.

[1] Confirmed correct by observing the operation of the unit.