frontpage.

The path to ubiquitous AI (17k tokens/sec)

https://taalas.com/the-path-to-ubiquitous-ai/
199•sidnarsipur•2h ago•151 comments

Untapped Way to Learn a Codebase: Build a Visualizer

https://jimmyhmiller.com/learn-codebase-visualizer
52•andreabergia•4h ago•9 comments

Nvidia and OpenAI abandon unfinished $100B deal in favour of $30B investment

https://www.ft.com/content/dea24046-0a73-40b2-8246-5ac7b7a54323
38•zerosizedweasle•48m ago•1 comment

Gemini 3.1 Pro

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/
819•MallocVoidstar•21h ago•837 comments

Consistency diffusion language models: Up to 14x faster, no quality loss

https://www.together.ai/blog/consistency-diffusion-language-models
147•zagwdt•8h ago•49 comments

Web Components: The Framework-Free Renaissance

https://www.caimito.net/en/blog/2026/02/17/web-components-the-framework-free-renaissance.html
39•mpweiher•4h ago•29 comments

I used Claude Code and GSD to build the accessibility tool I've always wanted

https://blakewatson.com/journal/i-used-claude-code-and-gsd-to-build-the-accessibility-tool-ive-al...
8•todsacerdoti•1h ago•0 comments

Hyperbound (YC S23, Series A) needs an Engineer with something to prove

https://www.ycombinator.com/companies/hyperbound/jobs/UCvdGiu-a-full-stack-engineer-with-somethin...
1•atulraghu•52m ago

Defer available in gcc and clang

https://gustedt.wordpress.com/2026/02/15/defer-available-in-gcc-and-clang/
198•r4um•4d ago•149 comments

Raspberry Pi Pico 2 at 873.5MHz with 3.05V Core Abuse

https://learn.pimoroni.com/article/overclocking-the-pico-2
45•Lwrless•4h ago•4 comments

I tried building my startup entirely on European infrastructure

https://www.coinerella.com/made-in-eu-it-was-harder-than-i-thought/
394•willy__•3h ago•202 comments

Exercise has 'similar effect' to therapy, study on depression shows

https://medicalxpress.com/news/2026-01-similar-effect-therapy-depression.html
10•PaulHoule•29m ago•0 comments

AI is not a coworker, it's an exoskeleton

https://www.kasava.dev/blog/ai-as-exoskeleton
313•benbeingbin•16h ago•363 comments

Why Developers Keep Choosing Claude over Every Other AI

https://www.bhusalmanish.com.np/blog/posts/why-claude-wins-coding.html
9•okchildhood•1h ago•2 comments

Reading the undocumented MEMS accelerometer on Apple Silicon MacBooks via iokit

https://github.com/olvvier/apple-silicon-accelerometer
79•todsacerdoti•7h ago•43 comments

Notes on Clarifying Man Pages

https://jvns.ca/blog/2026/02/18/man-pages/
15•surprisetalk•1d ago•6 comments

Infrastructure decisions I endorse or regret after 4 years at a startup (2024)

https://cep.dev/posts/every-infrastructure-decision-i-endorse-or-regret-after-4-years-running-inf...
305•Meetvelde•3d ago•137 comments

Show HN: Micasa – track your house from the terminal

https://micasa.dev
577•cpcloud•20h ago•184 comments

FreeCAD

https://www.freecad.org/index.php
222•doener•2d ago•77 comments

Minions – Stripe's Coding Agents Part 2

https://stripe.dev/blog/minions-stripes-one-shot-end-to-end-coding-agents-part-2
27•ludovicianul•1h ago•18 comments

US plans online portal to bypass content bans in Europe and elsewhere

https://www.reuters.com/world/us-plans-online-portal-bypass-content-bans-europe-elsewhere-2026-02...
366•c420•1d ago•657 comments

A beginner's guide to split keyboards

https://www.justinmklam.com/posts/2026/02/beginners-guide-split-keyboards/
165•thehaikuza•4d ago•174 comments

Pi for Excel: AI sidebar add-in for Excel

https://github.com/tmustier/pi-for-excel
83•rahimnathwani•10h ago•25 comments

Fast KV Compaction via Attention Matching

https://arxiv.org/abs/2602.16284
41•cbracketdash•8h ago•1 comment

An ARM Homelab Server, or a Minisforum MS-R1 Review

https://sour.coffee/2026/02/20/an-arm-homelab-server-or-a-minisforum-ms-r1-review/
83•neelc•11h ago•73 comments

An AI Agent Published a Hit Piece on Me – The Operator Came Forward

https://theshamblog.com/an-ai-agent-wrote-a-hit-piece-on-me-part-4/
431•scottshambaugh•9h ago•354 comments

Fast Sorting, Branchless by Design

https://00f.net/2026/02/17/sorting-without-leaking-secrets/
8•jedisct1•3d ago•1 comment

America vs. Singapore: You can't save your way out of economic shocks

https://www.governance.fyi/p/america-vs-singapore-you-cant-save
289•guardianbob•22h ago•427 comments

Micropayments as a reality check for news sites

https://blog.zgp.org/micropayments-as-a-reality-check-for-news-sites/
170•speckx•17h ago•348 comments

A terminal weather app with ASCII animations driven by real-time weather data

https://github.com/Veirt/weathr
231•forinti•19h ago•40 comments

From OpenAPI spec to MCP: How we built Xata's MCP server

https://xata.io/blog/built-xata-mcp-server
45•tudorg•9mo ago

Comments

_pdp_•8mo ago
I mean, there are two other posts related to data exfiltration attacks against MCP servers on the main page of HN at the time of this comment - at this point I think you want to involve a security person to make sure it is not vulnerable to stupid things.
Atotalnoob•8mo ago
The MCP attacks are really just due to bad token scoping.

If you allow Y to do X, then if an attacker takes control of Y, of course they can do X.

wild_egg•8mo ago
Can you elaborate on "bad token scoping"?

I don't think your XY phrasing fully describes the GitHub MCP exploit, and I'm curious whether you think that's somehow a "token scoping" issue.

fkyoureadthedoc•8mo ago
I'm unaware of the GitHub MCP "exploit", but given the overall state of LLM/MCP security FUD, there's probably some self-promotion blog post from a security company about an LLM doing something stupid with GitHub data that the owner of the LLM-using system didn't intend.

For example, let's say I create an application that lets you chat with my open source repo. I set up my LLM with a GitHub tool. I don't want to think about OAuth and getting a token from the end user, so I give it a PAT that I generated from my account. I'm even lazier, so I just use a PAT I already had lying around, and it unfortunately has read/write access to SSH keys. The user can add their SSH key to my account and do malicious things.

Oh no, MCP is super vulnerable, please buy my LLM security product.

If you give the LLM a tool, and you give the LLM input from a user, the user has access to that tool. That shrimple.
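
A minimal sketch of that hypothetical, in Python (the `github_tool` name and the wiring are illustrative, not from the article):

    import os
    import requests

    # Anti-pattern: one over-privileged PAT from the app owner's account is
    # shared by every end user of the chat app.
    GITHUB_PAT = os.environ["OWNER_PAT"]  # hypothetically has read/write SSH-key scope

    def github_tool(method: str, path: str, body: dict | None = None) -> dict:
        """Generic GitHub tool exposed to the LLM.

        Because the LLM also consumes untrusted user input, anything the PAT
        can do, a user can eventually steer the LLM into doing, including
        POST /user/keys to add their own SSH key to the owner's account.
        """
        resp = requests.request(
            method,
            f"https://api.github.com{path}",
            headers={"Authorization": f"Bearer {GITHUB_PAT}"},
            json=body,
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

Scoping the PAT down (or issuing a per-user token) removes most of what makes the attack interesting.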

wild_egg•8mo ago
https://news.ycombinator.com/item?id=44097390

Also currently on the front page. It's mainly that this tool hits the trifecta of having privileged access, untrusted inputs, and the ability to exfiltrate. Most tools only hit one or two of those, so attacks need to be more sophisticated to pull all three together.

rexer•8mo ago
I think this downplays the security issue. It's true that scoping the token correctly would prevent this exploit, but it's not a reasonable solution under the assumptions made by the designers of MCP. LLM+MCP is intended to be ultra flexible, and requiring a new (differently scoped) token for each input is not flexible.

Perhaps you could have an allow/deny popup whenever the LLM wanted to interact with a service. But I think the end state there is presenting the user a bunch of metadata about the operation, which the user then needs to reason about. I don't know that that's much better; those OAuth prompts are generally click-throughs for users.

truemotive•8mo ago
GitLab Duo got hit with an oopsie, the "AI agent runs with the same privileges to site content as the authenticated user" kind of oopsie, where you could just exfiltrate private repo information via a pixel GIF.

I knew it would get bad, but this bad already? I yearn for rigor haha

alooPotato•8mo ago
I really don't get why we can't just feed the OpenAPI spec to the LLM instead of having this intermediate MCP representation. I don't really buy the whole "the API docs will overwhelm an LLM" argument - that hasn't been my experience.
wild_egg•8mo ago
I haven't looked at MCP payloads closely enough to compare, but often the raw OpenAPI spec is overly verbose and eats context space pretty quickly.

It's really trivial to have the LLM first filter it down to the sections it cares about and then condense those sections, though.

Wrap that process in a small tool, give it to the LLM along with a `fetch` tool that handles credentials based on URLs, and agent capabilities explode pretty rapidly.
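
A minimal sketch of that filter step, assuming a hypothetical `filter_spec` tool (the function name and the keyword-matching heuristic are illustrative):

    import json

    def filter_spec(spec_path: str, keyword: str) -> str:
        """Return only the operations whose path or summary mentions
        `keyword`, so the LLM never has to read the full spec."""
        with open(spec_path) as f:
            spec = json.load(f)

        slim = {}
        for path, ops in spec.get("paths", {}).items():
            matching = {
                verb: {"summary": op.get("summary", ""),
                       "parameters": op.get("parameters", [])}
                for verb, op in ops.items()
                if isinstance(op, dict)
                and (keyword.lower() in path.lower()
                     or keyword.lower() in op.get("summary", "").lower())
            }
            if matching:
                slim[path] = matching
        return json.dumps(slim, indent=2)

Something like filter_spec("openapi.json", "invoices") hands the model a handful of relevant operations instead of the whole document; the `fetch` tool then makes the actual call.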

crystal_revenge•8mo ago
I see this question frequently related to MCP, but I'm guessing these questions come from people who haven't built a lot of products using LLMs?

Even if your LLM could learn the OpenAPI spec, you still have to figure out how to concretely receive a response back. This is necessary for virtually any application built using an LLM and requires support for far, far more use cases than just calling an API.

Consider the following use cases:

- You need to include some relevant contextual data from a local RAG system.
- There are local functions that you want the model to be able to call.
- The API example you describe.
- You need to access data from a database.

In all of these cases, if you have experience working with LLMs, you've implemented some ad hoc template solution to pass the context into the model. You might have written something like "Here is the info relevant to this task {{info}}" or "These are the tools you can use {{tools}}", but in each case you've had to craft a prompting solution specific to one problem.

MCP solves this by providing a generic interface for sending a wide range of information to the model for it to make use of. While the hype can be a bit much, it's a pretty good (minus the lack of foresight around security) and obvious solution to this current problem in AI engineering.
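
A minimal sketch of the ad hoc templating being described (purely illustrative, not anyone's production code):

    # The per-app, hand-rolled wiring that MCP generalizes: every source of
    # context gets its own bespoke prompt template.
    PROMPT = """Here is the info relevant to this task:
    {info}

    These are the tools you can use:
    {tools}

    User question: {question}
    """

    def build_prompt(info: str, tools: list[str], question: str) -> str:
        return PROMPT.format(
            info=info,
            tools="\n".join(f"- {t}" for t in tools),
            question=question,
        )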

otabdeveloper4•8mo ago
Just ask the model to respond with JSON. Give it a template example response.

You don't need a spec.

For sending prompts to the LLM you will absolutely need to hand-craft custom prompts anyway, as each model responds slightly differently.
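
A minimal sketch of that approach (the action names are made up for illustration):

    import json

    SYSTEM = """Respond ONLY with JSON matching this example:
    {"action": "list_tables", "arguments": {"schema": "public"}}"""

    def parse_action(model_output: str) -> dict:
        # The prompt above gets tuned per model, since each one phrases
        # things slightly differently; the parsing side stays the same.
        return json.loads(model_output)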

wild_egg•8mo ago
> you still have to figure out how to concretely receive a response back

Isn't that handled by whatever Tool API you're using? There's usually a `function_call_output` or `tool_result` message type. I haven't had a need for a separate protocol just to send responses.
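
For example, the Anthropic-style shape is roughly a user-role message carrying a tool_result block (a sketch from memory; treat the exact field names as an assumption):

    # Python dict mirroring the JSON payload sent back to the model.
    tool_result_message = {
        "role": "user",
        "content": [
            {
                "type": "tool_result",
                "tool_use_id": "toolu_abc123",  # hypothetical id
                "content": "200 OK: {\"rows\": 3}",
            }
        ],
    }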

truemotive•8mo ago
If you're working from OpenAPI, ideally you want to be able to process any spec file, even one with potentially full-of-shit formatting. I find that half the integrations I run into have some old, weird version of Swagger, and the rest work like hell to stay up to date with the 3.x spec track.

I agree, and I wish it were solved; it will be a solved problem eventually. But feeding a complex data model like that into the paper shredder that is the LLM, to make decisions about whether DELETE or POST gets used, is just asking for trouble.

lmeyerov•8mo ago
Slightly different experience here

We have been adding an MCP remote server to louie.ai (think a semantic layer over DBs for automating investigations, analytics, and viz over operational systems). MCP is nice because people can now use it from Slack, VS Code, the CLI, etc., without us building every single integration when they want to use it outside of our AI notebooks. And it's the same starting point of an OpenAPI spec, and even better, the standard FastAPI web framework for the REST layer.

Using frameworks has been good. However, for chat ergonomics, we find ourselves defining custom tools: talking directly to REST APIs is better than nothing, but that doesn't mean it's good. The tool layer isn't that fancy, but getting the ergonomics right matters, at least in our experience. Most of our time has gone into security and ergonomics. (And for fun, we ran an experiment of vibe-coding this while hitting enterprise-level quality goals.)

ENGNR•8mo ago
Agreed. I've only implemented one endpoint, but even on that the amount of data coming back was too high, and the JSON shape ate up context.

I think MCP responses will be high-level, aggregated, sorted, etc. I'm also strongly considering YAML over JSON.
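
A quick way to sanity-check the size difference (a sketch; actual savings depend on the tokenizer):

    import json
    import yaml  # pip install pyyaml

    rows = [{"id": i, "name": f"item-{i}", "price": 9.99} for i in range(100)]

    as_json = json.dumps(rows)
    as_yaml = yaml.safe_dump(rows)

    # YAML drops the quotes, commas, and braces, so the same payload comes out
    # somewhat smaller in characters; whether that means fewer tokens depends
    # on the model's tokenizer.
    print(len(as_json), len(as_yaml))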

matt-attack•8mo ago
Why? Does the absence of quotes and commas really make a difference in context size?
jedisct1•8mo ago
If you've got an OpenAPI spec and want to expose it as MCP, https://jedisct1.github.io/openapi-mcp/ is an easy way to do it.