frontpage.

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•8m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
1•init0•15m ago•1 comments

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•15m ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
1•fkdk•18m ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
1•ukuina•20m ago•1 comments

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•30m ago•1 comments

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•31m ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
2•endorphine•36m ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•39m ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•41m ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
1•computer23•43m ago•0 comments

Typing for Love or Money: The Hidden Labor Behind Modern Literary Masterpieces

https://publicdomainreview.org/essay/typing-for-love-or-money/
1•prismatic•44m ago•0 comments

Show HN: A longitudinal health record built from fragmented medical data

https://myaether.live
1•takmak007•47m ago•0 comments

CoreWeave's $30B Bet on GPU Market Infrastructure

https://davefriedman.substack.com/p/coreweaves-30-billion-bet-on-gpu
1•gmays•58m ago•0 comments

Creating and Hosting a Static Website on Cloudflare for Free

https://benjaminsmallwood.com/blog/creating-and-hosting-a-static-website-on-cloudflare-for-free/
1•bensmallwood•1h ago•1 comments

"The Stanford scam proves America is becoming a nation of grifters"

https://www.thetimes.com/us/news-today/article/students-stanford-grifters-ivy-league-w2g5z768z
3•cwwc•1h ago•0 comments

Elon Musk on Space GPUs, AI, Optimus, and His Manufacturing Method

https://cheekypint.substack.com/p/elon-musk-on-space-gpus-ai-optimus
2•simonebrunozzi•1h ago•0 comments

X (Twitter) is back with a new X API Pay-Per-Use model

https://developer.x.com/
3•eeko_systems•1h ago•0 comments

Zlob.h 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
3•neogoose•1h ago•1 comments

Show HN: Deterministic signal triangulation using a fixed .72% variance constant

https://github.com/mabrucker85-prog/Project_Lance_Core
2•mav5431•1h ago•1 comments

Scientists Discover Levitating Time Crystals You Can Hold, Defy Newton’s 3rd Law

https://phys.org/news/2026-02-scientists-levitating-crystals.html
3•sizzle•1h ago•0 comments

When Michelangelo Met Titian

https://www.wsj.com/arts-culture/books/michelangelo-titian-review-the-renaissances-odd-couple-e34...
1•keiferski•1h ago•0 comments

Solving NYT Pips with DLX

https://github.com/DonoG/NYTPips4Processing
1•impossiblecode•1h ago•1 comments

Baldur's Gate to be turned into TV series – without the game's developers

https://www.bbc.com/news/articles/c24g457y534o
3•vunderba•1h ago•0 comments

Interview with 'Just use a VPS' bro (OpenClaw version) [video]

https://www.youtube.com/watch?v=40SnEd1RWUU
2•dangtony98•1h ago•0 comments

EchoJEPA: Latent Predictive Foundation Model for Echocardiography

https://github.com/bowang-lab/EchoJEPA
1•euvin•1h ago•0 comments

Disabling Go Telemetry

https://go.dev/doc/telemetry
1•1vuio0pswjnm7•1h ago•0 comments

Effective Nihilism

https://www.effectivenihilism.org/
1•abetusk•1h ago•1 comments

The UK government didn't want you to see this report on ecosystem collapse

https://www.theguardian.com/commentisfree/2026/jan/27/uk-government-report-ecosystem-collapse-foi...
5•pabs3•1h ago•0 comments

No 10 blocks report on impact of rainforest collapse on food prices

https://www.thetimes.com/uk/environment/article/no-10-blocks-report-on-impact-of-rainforest-colla...
3•pabs3•1h ago•0 comments

Ask HN: How does AI understand what I write?

2•deanebarker•2mo ago
I get generative AI. I understand the concept of an LLM and how it generates text. AI output processing seems relatively clear to me.

What I don't understand is how ChatGPT (or whatever) understands what I write. No matter how I phrase it, or how subtle or abstract the point or problem is, AI almost always figures out what I mean. I am mystified and constantly amazed at AI input processing.

What mechanism is at work here? If I want to deep dive on how AI understands meaning, what technology or concept do I need to research?

Comments

FaisalAbid•2mo ago
Good intro here that talks about how embeddings work! https://www.youtube.com/watch?v=wjZofJX0v4M
mouse_•2mo ago
The robot is copying how humans have previously responded to queries similar to yours, and semantically rephrasing to align with OpenAI/Microsoft's desired brand image.
reversemyplan•2mo ago
ChatGPT doesn't truly "understand" language the way humans do, but it models language in a highly sophisticated way by training on vast amounts of data. The key technology behind ChatGPT's ability to grasp meaning is the transformer architecture, specifically its self-attention mechanism. It lets the model weigh and focus on different words in a sentence based on their importance and context. In simpler terms, it looks at how each word relates to every other word in the sentence and beyond, which allows it to pick up context and nuance even in long or abstract sentences.
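
To make the self-attention idea concrete, here is a toy numpy sketch of scaled dot-product attention. This is not ChatGPT's actual code: real models use separate learned query/key/value projections, many attention heads, and thousands of dimensions, whereas here the input vectors serve as query, key, and value, and all the numbers are made up.

    # Toy scaled dot-product self-attention (the core operation in a transformer).
    import numpy as np

    def self_attention(X):
        # X has one row per token and one column per embedding dimension.
        d = X.shape[1]
        scores = X @ X.T / np.sqrt(d)                    # how strongly each token attends to every other token
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)    # softmax: attention weights per token
        return weights @ X                               # each token becomes a weighted mix of all tokens

    # Three invented 4-dimensional vectors standing in for the tokens "the", "river", "bank".
    X = np.array([[0.1, 0.0, 0.2, 0.0],
                  [0.9, 0.1, 0.0, 0.3],
                  [0.8, 0.2, 0.1, 0.4]])
    print(self_attention(X).round(2))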

Furthermore, ChatGPT (and other LLMs) is trained on a massive corpus of text: books, articles, websites, etc. From that training, the model learns patterns in how words, phrases, and sentences relate to one another. It doesn't explicitly understand what a "dog" or "love" means in the human sense, but it learns patterns in how they are expressed and used in language.

Without going into too much detail, it also relies on techniques like probabilistic modeling and semantic representations to produce the responses you see today.

If you wish to dive deeper and do some research, I'd recommend checking out the following:

1. Transformer architecture
2. Self-attention mechanism
3. Pre-trained language models
4. Embeddings and semantic space
5. "Attention Is All You Need", the paper by Vaswani et al.; a very interesting publication and key to understanding the self-attention mechanism and how it powers modern NLP models like GPT
6. Contextual language models

I think those six would cover all your questions and doubts.

kid64•2mo ago
It sounds like you get that LLMs are just "next word" predictors. So the piece you may be missing is simply that behind the scenes, your prompt becomes the start of the text, and generating the response is just a matter of predicting the next word repeatedly from there. It's not necessary for the LLM to "understand" your prompt the way you're imagining; the impression of understanding is an illusion created by extremely good next-word prediction.
beardyw•2mo ago
In my simple mind "Who is the queen of Spain?" becomes "The queen of Spain is ...".
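
A minimal sketch of that completion loop, assuming the Hugging Face transformers library and the small open GPT-2 model (neither mentioned above) as a stand-in for ChatGPT-class models, which do the same thing at far larger scale:

    # Greedy next-token decoding: the prompt is turned into token IDs, and the
    # model repeatedly predicts the single most likely next token.
    # Requires: pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The queen of Spain is", return_tensors="pt").input_ids
    for _ in range(10):                                   # generate ten more tokens
        with torch.no_grad():
            logits = model(ids).logits                    # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()                  # greedily pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))

Real chat systems sample from the probability distribution rather than always taking the argmax, but the loop is the same shape.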
KurSix•2mo ago
LLMs like ChatGPT don't actually understand text the way a person does. They don't have concepts or any life experience. When you type something, the model turns your text into a bunch of numbers (called a "vector"). Every token (like a word or part of a word) is basically a point with coordinates in a massive, high-dimensional space. The distance between these points shows how related their meanings are.

For example, the vectors for "king" and "queen" will end up being really close together, while the vectors for "king" and "table" will be far apart. Then the transformer part kicks in with its self-attention mechanism. That's a fancy way of saying it analyzes how all the words in your text relate to each other and figures out how much "attention" to pay to each one. This is how the model gets the context; it's how it knows that the "bank" in "river bank" is totally different from the "bank" in "open a bank account". Based on all those relationships, it then predicts the next token. But it's not just guessing: it's making a highly probable prediction based on all the context it just looked at.
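
A toy illustration of that distance idea: the vectors below are invented 4-dimensional numbers (real embedding vectors are learned and have hundreds or thousands of dimensions), but they show how cosine similarity makes "king" and "queen" close while "king" and "table" stay far apart.

    # Cosine similarity between made-up "embeddings".
    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    vec = {
        "king":  np.array([0.9, 0.8, 0.1, 0.1]),
        "queen": np.array([0.8, 0.9, 0.1, 0.2]),
        "table": np.array([0.1, 0.0, 0.9, 0.7]),
    }

    print("king vs queen:", round(cosine(vec["king"], vec["queen"]), 2))  # close to 1.0: related meanings
    print("king vs table:", round(cosine(vec["king"], vec["table"]), 2))  # much lower: unrelated meanings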

To put it simply: the model isn't "aware" of what anything means. It's just incredibly good at modeling how meaning is expressed in language.