
EVs Are a Failed Experiment

https://spectator.org/evs-are-a-failed-experiment/
1•ArtemZ•6m ago•1 comments

MemAlign: Building Better LLM Judges from Human Feedback with Scalable Memory

https://www.databricks.com/blog/memalign-building-better-llm-judges-human-feedback-scalable-memory
1•superchink•7m ago•0 comments

CCC (Claude's C Compiler) on Compiler Explorer

https://godbolt.org/z/asjc13sa6
1•LiamPowell•8m ago•0 comments

Homeland Security Spying on Reddit Users

https://www.kenklippenstein.com/p/homeland-security-spies-on-reddit
2•duxup•11m ago•0 comments

Actors with Tokio (2021)

https://ryhl.io/blog/actors-with-tokio/
1•vinhnx•12m ago•0 comments

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•25m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•27m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
2•savrajsingh•27m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•29m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•33m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•37m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•40m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•46m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
2•rolph•50m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•52m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•57m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•58m ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•1h ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
34•chwtutha•1h ago•5 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
4•osnium123•1h ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•1h ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•1h ago•1 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•1h ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•1h ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comments

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
4•thread_id•1h ago•1 comments

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

Ask HN: Are LLMs just expensive search and scripting tools? Is it that simple?

6•edwin2•2mo ago
Can all of LLMs be summarized as a (currently) really expensive search that allows you to express nuanced queries and script the output of the search? Why or why not?

Take code. In a sense, StackOverflow is about finding a code snippet for an already solved problem. Autocomplete does the same kind of search.

Take generative text. In a sense, that's the equivalent of making a query and then aggregating the many results into one phrase. You could imagine the bot searching 1,000 websites, averaging the answers to the query, and outputting the result.

Does every LLM use case fit the following pattern?…

query -> LLM does its work -> result -> script of result (optional)
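For concreteness, here is a minimal Python sketch of that pattern; call_llm() is a hypothetical placeholder for whatever model API you'd plug in, not a real library call:

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical helper: send the prompt to some LLM and return its text output.
        raise NotImplementedError("wire this up to your model or API of choice")

    def pipeline(query: str) -> dict:
        # 1. query
        prompt = f"Answer concisely as JSON with an 'answer' field:\n{query}"
        # 2. LLM does its work
        raw = call_llm(prompt)
        # 3. result
        result = json.loads(raw)
        # 4. optional "script of result": ordinary deterministic post-processing
        result["answer"] = result["answer"].strip().lower()
        return result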

Comments

minimaxir•2mo ago
You're being reductive to the point of saying "LLMs are an algorithm like autocomplete or a search engine, therefore they're the same."

That's not how it works. They're different approaches to handling the same inputs.

edwin2•2mo ago
i would totally agree that they’re different approaches

i wouldn’t conclude “therefore they’re the same”. they’re clearly not the same

if it’s a different approach to search and scripting, does that not mean it is a kind of search and scripting?

adocomplete•2mo ago
It's even simpler. It's just 1's and 0's.

Sevii•2mo ago
No, LLMs are not 'search'. Search, as in Google or a database query, is deterministic: are there results for query x? If there are, we return them. If there aren't, we return nothing.

LLMs do not work that way. LLMs do not have a conception of facts. Any query you make to an LLM produces an output, and the quality of that output depends on the training data. For high-probability output you might think the LLM is returning the correct 'facts'; for low-probability output you might think it is hallucinating.

LLMs are not search; they are a fundamentally different thing. Most code is 100% deterministic: the program executes exactly in order. LLMs are not.
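A toy contrast of the two behaviours, with a made-up index and made-up token probabilities standing in for a real model:

    import random

    # Deterministic search: the same query always returns the same rows, or nothing at all.
    INDEX = {"capital of france": ["Paris"]}

    def search(query: str) -> list[str]:
        return INDEX.get(query.lower(), [])  # no entry -> empty result, never a guess

    # LLM-style generation: sample the next token from a probability distribution.
    # Every prompt yields some output, whether or not anything "true" backs it.
    NEXT_TOKEN_PROBS = {
        "the capital of france is": {"Paris": 0.92, "Lyon": 0.05, "Berlin": 0.03},
    }

    def generate(prefix: str) -> str:
        probs = NEXT_TOKEN_PROBS.get(prefix.lower(), {"<made-up token>": 1.0})
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print(search("Capital of France"))            # always ["Paris"]
    print(search("Capital of Atlantis"))          # always [] -- nothing to return
    print(generate("The capital of France is"))   # usually "Paris", sometimes not
    print(generate("The capital of Atlantis is")) # still produces an output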

andrei_says_•2mo ago
One way to think of LLM output is that it is all hallucination. Sometimes it happens to coincide with reality.

To an LLM it's all the same, as there's no relationship to reality, only to likelihood.

It’s the difference between “this is something that Peter might say” and “this is something that Peter said”. To LLMs there’s no distinction.
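A toy illustration of that point: a tiny bigram model (standing in for an LLM) scores sentences purely by likelihood over its made-up training text, and nothing in the score says which sentence Peter actually said:

    import math
    from collections import Counter

    # Made-up "training text"; the model only ever sees likelihood, never truth labels.
    corpus = "peter likes coffee . peter likes tea . peter plays chess .".split()
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)

    def log_likelihood(sentence: str) -> float:
        # Add-one-smoothed bigram score; there is no notion of "facts" anywhere here.
        words = sentence.lower().split()
        vocab = len(unigrams)
        return sum(
            math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))
            for prev, word in zip(words, words[1:])
        )

    # "Something Peter said" vs "something Peter might say": both just get a likelihood.
    print(log_likelihood("peter likes coffee"))
    print(log_likelihood("peter likes chess"))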

apothegm•2mo ago
No. LLMs are NLP engines.

LLM-based chatbots can be used as mediocre, hallucination-prone search engines if you so choose. They’re especially bad if the answer you’re looking for is not an average of “collective wisdom” or opinion but a very specific fact that needs to be distinguished from other similar or related but ultimately very different facts. And even worse if the fact you’re looking for is not among the more frequently discussed ones in that cluster of similar or related facts.