frontpage.

Made with ♥ by @iamnishanth

Open Source @Github


Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
142•theblazehen•2d ago•42 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
668•klaussilveira•14h ago•202 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
949•xnx•19h ago•551 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
122•matheusalmeida•2d ago•33 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
53•videotopia•4d ago•2 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
229•isitcontent•14h ago•25 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
16•kaonwarb•3d ago•19 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
28•jesperordrup•4h ago•16 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
223•dmpetrov•14h ago•117 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
330•vecti•16h ago•143 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
494•todsacerdoti•22h ago•243 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
381•ostacke•20h ago•95 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
359•aktau•20h ago•181 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
288•eljojo•17h ago•169 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
412•lstoll•20h ago•278 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
19•bikenaga•3d ago•4 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
63•kmm•5d ago•6 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
90•quibono•4d ago•21 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
256•i5heu•17h ago•196 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
32•romes•4d ago•3 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
44•helloplanets•4d ago•42 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
12•speckx•3d ago•5 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
59•gfortaine•12h ago•25 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
33•gmays•9h ago•12 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1066•cdrnsf•23h ago•446 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
150•vmatsiiako•19h ago•67 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
288•surprisetalk•3d ago•43 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
149•SerCe•10h ago•138 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
183•limoce•3d ago•98 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
73•phreda4•13h ago•14 comments

How Generative Engine Optimization (GEO) rewrites the rules of search

https://a16z.com/geo-over-seo/
70•eutropheon•8mo ago

Comments

gmuslera•8mo ago
I can't imagine how the feedback loop between LLMs optimizing content so it will be picked up by LLM-based search engines will end. But it won't be good.
drekipus•8mo ago
There would have to be some added element of human interaction for a feedback mechanism.

Unfortunately this will be profit-driven, rather than driven by something like human enjoyment or insight.

kkaatii•8mo ago
Not too different from how SEO ends. Just spammy content written by bots.
rafaepta•8mo ago
ChatGPT ≠ Google-scale. Google: ~14B searches/day; ChatGPT: ~37M (~1:400). Only ~15% of ChatGPT prompts look like classic "search"; most are writing/code tasks. Google's own search volume grew 22% in 2024 and still holds >90% share. An LLM citation is nice for credibility, but it won't move traffic or revenue anytime soon.
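
For scale, a quick arithmetic check of the figures above (a sketch; the 14B, 37M, and 15% numbers are the commenter's estimates rather than independent data):

```python
# Back-of-envelope check of the volume figures cited above.
google_searches_per_day = 14e9   # ~14B/day (commenter's estimate)
chatgpt_prompts_per_day = 37e6   # ~37M/day (commenter's estimate)
search_like_share = 0.15         # ~15% of ChatGPT prompts resemble classic search

ratio = google_searches_per_day / chatgpt_prompts_per_day
print(f"ChatGPT : Google volume ~ 1:{ratio:.0f}")  # ~1:378, i.e. roughly 1:400

search_like = chatgpt_prompts_per_day * search_like_share
print(f"Search-like ChatGPT prompts/day: ~{search_like / 1e6:.1f}M")  # ~5.5M
```
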
zdragnar•8mo ago
To add on top of this, how many Google searches also contain Gemini answers in the search results? I've been seeing more and more, especially for code and general factoid searches.
thierrydamiba•8mo ago
Turtles all the way down.

It is definitely interesting to see how much public opinion has been shifting on Google in AI as of late. I wonder what the main force for that is…

echelon•8mo ago
1. From 2010 - 2023(ish), Google slept on DeepMind and allowed OpenAI to steal the narrative. That led to a boom in AI development outside of the Google labs. This is akin to Microsoft's loss of the internet/web. A half dozen trillion-dollar companies emerged from Microsoft's stumbling, and we're likely to see the same with Google's missteps.

2. After the rise of so many AI startups in the 2020 - 2023 period, the end of Google was being forecast by many. Most of Google's revenue comes through search, and everyone (Nadella, Altman, investors, et al.) was talking about the incremental value of search: Google had to retain 100%, and other players just had to grow.

3. The Google founders, sensing a major blow to their cash cow, took a break from their zeppelin startups and came back. They gave DeepMind more autonomy, took a knife to product culture, and empowered and encouraged everyone to innovate. The whole company has been re-focused and told they must win AI or face extinction.

4. As of 2025, Google has been killing it on their releases. Gemini, Veo, ... you name it, and they've got industry-leading developments that out-perform and undercut the competition. It's beginning to look like not only will Google not die, but that Google could wipe the field with their AI superiority. It looks like they'll be able to dance circles around OpenAI.

The looming threats are (1) DOJ antitrust eroding Google search ingress and (2) other players stealing Search TAM without the new AI markets being able to replace the search / ads revenue.

Any non-Google players would be wise to put antitrust pressure on Google. Even after the current case ends, they should try to strip away defaults on web and mobile. Stop Google from being able to deploy AI through Google Search, Android, and Chrome. Make Google use the same word of mouth marketing that the rest have to.

It'll be an exciting series of battles ahead.

1oooqooq•8mo ago
Splitting these is not a good measure. People who know how to search (a skill later generations seem to have lost) also searched for lowest-common-denominator coding recipes and produced naive code, just like people do with current LLM models. Only sellers of the LLM models make the distinction. It's all search.
echelon•8mo ago
1. (Anecdotal) I'm barely using Google anymore. I'm using ChatGPT for a ton of queries and getting far better results.

2. Antitrust actions might (should) strip Google of their "panes of glass" with which they force Google Search as the default. Most Google Search queries are simply the result of defaults. Once those defaults are gone, those queries will go elsewhere.

maltelandwehr•8mo ago
The 37M/day figure is an estimate from Rand Fishkin that often gets quoted as gospel. It is based on limited external data. OpenAI mentioned 1B/day - and usage has grown significantly overall since then.

Also, 1 search on ChatGPT easily replaces 5-10 searches on Google.

Many B2B SaaS companies already get the same number of leads from ChatGPT that they get from Google, because clicks from ChatGPT are better informed and have a significantly higher conversion rate. I am talking up to +700% CVR vs. traffic from Google for some companies.
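
To see why much lower volume can still produce equal lead counts, here is a rough sketch; all of the traffic and conversion numbers below are hypothetical, chosen only to mirror the "+700% CVR" claim:

```python
# Hypothetical illustration: far less referral traffic, much higher conversion rate.
google_clicks = 8_000
google_cvr = 0.01                 # assumed baseline conversion rate
chatgpt_clicks = 1_000            # assumed: 1/8 of the Google referral traffic
chatgpt_cvr = google_cvr * 8      # "+700% CVR" means 8x the baseline

google_leads = google_clicks * google_cvr      # 80 leads
chatgpt_leads = chatgpt_clicks * chatgpt_cvr   # 80 leads, from 1/8 the clicks
print(google_leads, chatgpt_leads)             # 80.0 80.0
```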

edwin•8mo ago
We looked at the same data that Rand Fishkin used and definitely came to a different conclusion.
bravesoul2•8mo ago
Yet.

And even that ignores that Google runs an LLM on its search too.

the_arun•8mo ago
But users are not clicking on search results in Google. They get a satisfying response from Gemini and stop there. It is a good thing for users but bad for inbound traffic to websites.
kibibu•8mo ago
Also bad for users who get completely bullshitted responses.
meander_water•8mo ago
I came across this recently on Fiverr [0]. I thought it was a joke initially, but the volume of people offering their services implies that there is demand out there somewhere.

[0] https://www.fiverr.com/categories/online-marketing/generativ...

handfuloflight•8mo ago
Absolute review count suggests otherwise.
SoKamil•8mo ago
It might be that there are more shovel sellers than gold to mine.
kurtoid•8mo ago
SEO and its related fields are a net negative for the Internet (and maybe humanity in general).
xnx•8mo ago
The only good thing about SEO (yuck) is it got some people to care about things that are good for humans too: fast pages, well-structured content, descriptive link text, etc.
bravesoul2•8mo ago
If SEO was literally Search Engine Optimisation it'd be fine.

They really mean SEG (search engine gaming)

godelski•8mo ago
I think the concept of SEO is fine. But the problem comes down to metric hacking.

Certainly you want to make your page easier to index by Google and others, but that's not the only thing that matters. You should improve your content and provide a good product to users. That's what Google intends to measure, but such a thing is actually impossible to do so accurately. So the problem comes down to this stupid cat and mouse game where sites happily shill out links that are immediate turnarounds for users.

I think this is larger than just search. We seem to just be optimizing towards whatever metric we've decided should be used. We then fool ourselves into thinking this proxy actually measures the real thing.

edwin•8mo ago
Unlike classic search, which got worse over time due to SEO gaming, AI search might actually improve with scale. If LLMs are trained on real internet discussions (Reddit, forums, reviews), and your product consistently gets called out as bad, the model will eventually reflect that. The pressure shifts from optimizing content to improving the product itself.
mettamage•8mo ago
I looked into GEO a bit. One of the things I've noticed is that you need to optimize as if you're talking to a person, because LLMs semantically understand what topics are about. Search engines typically don't, not at that level at least.
fiachamp•8mo ago
Reddit data is already super corrupted with marketers and bot accounts. That's why I use https://thegigabrain.com to filter through the BS on Reddit.
jameslk•8mo ago
We are in the fleeting era where AI models are not entirely corrupted by marketing and propaganda. Like the early web circa 1990s. It will never be this pure while also being this up-to-date ever again. Enjoy it while it lasts
1oooqooq•8mo ago
You're literally commenting on a press release by the greediest VC firm. Would you care to elaborate on your point?
username223•8mo ago
Yep. It won't be long before the web is flooded with pages full of AI-generated content repeatedly mentioning brands near keywords, and "search engines" have been replaced by "summaries" monetized by prompt-stuffing. That's pretty much the extent of these people's genius.

EDIT: I guess the final step is for an "AI agent" to enter your credit card number based on this bot-chat.

theamk•8mo ago
my "marketing and propaganda", do you mean "other AI model output"?

The internet was pretty bad without AI already, but I can see it heading quickly towards complete nonsense and lack of trust. We are going to have all the same problems we had before AI, but multiplied 100x

AlienRobot•8mo ago
Yep. Google is an ad company that makes most of its money putting ads on its search page, and Gemini is a Google product that lets you get search results from Google without ads. It doesn't make any sense.
bloomca•8mo ago
Yep, I was discussing it with my partner a few months ago. It is just too good right now; in a lot of cases you just slip past all the fluff which is impossible to avoid with traditional search.

The opportunities for AI providers to capitalize on that are too prominent. No idea how long it will take, but imo it's inevitable.

sbrother•8mo ago
I launched a product in this space to beta customers just last week -- https://ellm.co -- and the response has been way more positive than I could have hoped for. Every SMB owner I talk to is thinking about this and looking for ways to be ahead of the curve on it even though the number of commercial AI search queries is still dwarfed by Google. It feels like a race, and we are figuring out the rules as we run it.
maltelandwehr•8mo ago
How is ellm different from the more established tools in the market (Profound and Peec AI)?
sbrother•8mo ago
So the short answer is that this space is so early still that there’s plenty of space for more competition. But currently both of those market leaders are targeted at enterprise and have little to no penetration of the SMB/self-serve market. Ellm has a self serve onboarding flow, a $20/month monitoring only tier, and a $100/mo self-serve optimization tier — none of which require scheduling a call with a sales team.
sync•8mo ago
Definitely a bit buggy but looks promising! Try the onboarding flow yourself as a real user (also on mobile!) - particularly leaving the site to research a competitor that is mentioned and then coming back (I got kicked out and had to start from the beginning, at which point it just paywalled me).

It also says I have canceled my subscription at the bottom of the paywall when I never had one. Still, these are little things and I think the bones are there.

sbrother•8mo ago
Thank you so much for giving it a spin and reporting those issues :)

Also if you're interested, here is what your dashboard looks like (the onboarding flow is sadly the buggiest part of the app right now): https://imgur.com/a/eysQzjT

edwin•8mo ago
A few take-aways from a study we ran (~800 consumer queries, repeated over a few days):

* AI answers shift a lot. In classic search a page-1 spot can linger for weeks; in our runs, the AI result set often changed overnight.

* Google’s new “AI Mode” and ChatGPT gave the same top recommendation only ~47% of the time on identical queries.

* ChatGPT isn’t even consistent with itself. Results differ sharply depending on whether it falls back to live retrieval or sticks to its training data.

* When it does retrieve, ChatGPT leans heavily on publications it has relationships with (NYPost and People.com for product recs) instead of sites like rtings.com

Writeup: https://amplifying.ai/blog/why-ai-product-recommendations-ke...

Data: https://amplifying.ai/research/consumer-products
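
A minimal sketch of how an agreement rate like the ~47% above could be computed, assuming each engine's answer is reduced to a single top recommendation per query (the data shape and names here are invented for illustration):

```python
from collections import defaultdict

# (query, engine, top recommendation) -- toy data; the real study used ~800 queries
runs = [
    ("best budget tv", "ai_mode", "TCL Q6"),
    ("best budget tv", "chatgpt", "Hisense U6"),
    ("quiet dishwasher", "ai_mode", "Bosch 300"),
    ("quiet dishwasher", "chatgpt", "Bosch 300"),
]

top_by_query = defaultdict(dict)
for query, engine, rec in runs:
    top_by_query[query][engine] = rec

# Agreement = share of queries where both engines name the same top pick
both = [q for q, recs in top_by_query.items()
        if "ai_mode" in recs and "chatgpt" in recs]
agree = sum(top_by_query[q]["ai_mode"] == top_by_query[q]["chatgpt"] for q in both)
print(f"agreement: {agree}/{len(both)} = {agree / len(both):.0%}")  # 1/2 = 50% here
```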

bravesoul2•8mo ago
Out the gate with an R.E.M. reference
arnklint•8mo ago
If only you could trust Gemini or GPT to serve truthful answers.

I’m still seeing them conclude the opposite of their own cited sources.

duskwuff•8mo ago
Or wildly misinterpreting a source. A few months ago, I saw an especially egregious example where, when asked for the maximum current capacity of a 22 AWG copper wire, Google's AI responded confidently with "551 amps".

The correct answer is two orders of magnitude lower, around 5-7 A. 551 A is the fusing current of that wire - i.e. the current required to make it instantly melt.
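
A back-of-envelope check of why 551 A is absurd for that gauge (a sketch; the 22 AWG dimensions are standard wire-table values, and the 5-7 A and 551 A figures come from the comment above):

```python
import math

diameter_mm = 0.644                           # 22 AWG solid copper, standard value
area_mm2 = math.pi * (diameter_mm / 2) ** 2   # ~0.33 mm^2 cross-section

for label, amps in [("realistic ampacity (low)", 5),
                    ("realistic ampacity (high)", 7),
                    ("AI's answer", 551)]:
    print(f"{label}: {amps / area_mm2:,.0f} A/mm^2")

# The 551 A figure implies ~1,700 A/mm^2 -- roughly two orders of magnitude
# beyond the ~15-20 A/mm^2 implied by the realistic 5-7 A rating, consistent
# with a fusing (melting) current rather than a safe working current.
```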

nocoder•8mo ago
My experience with LLM bots is that they are really keen to appease the user and often overconfident in their responses. They prioritize user engagement over the factual nature of the response; it's as if their reward function includes the time spent on the bot. This leads to the bot swaying too much in either direction when it comes to debatable information. They essentially learn what the user prefers and so tend to reinforce those ideas. In some sense, they are like social media influencers who are too confident in their opinion because they are trying to get you to like them. I see us going further into echo chambers where, on the same topic, the bots will give different information to different users based on what they think each user prefers.
corentin88•8mo ago
> In a world where AI is the front door to commerce and discovery, the question for marketers is: Will the model remember you?

So the question for marketers is: how do you get into the model? And once you are in, how do you outperform others that are in the model too?

corentin88•8mo ago
Maybe content marketing ins’t dead after all. Just the readers have shifted and now you write for LLMs. They will be trained on your articles, to summarize them for LLM readers (aka humans).
bigbuppo•8mo ago
And it lost me at "Traditional search was built on links. GEO is built on language."

The fuck it was. How in the hell do they think search works outside the web?