frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•53s ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•2m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•3m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•16m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•19m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•22m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•29m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•31m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•32m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•33m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•35m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•36m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•41m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•42m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•42m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•43m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•45m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•48m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•51m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•57m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•59m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So whats the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Wrapping my head around AI wrappers

https://www.wreflection.com/p/wrapping-my-head-around-ai-wrappers
52•nowflux•2mo ago

Comments

jgalt212•2mo ago
> But Cursor and other such tools depend almost entirely on accessing Anthropic, OpenAI and Gemini models, until open-source open-weight and in-house models match or exceed frontier models in quality.

I'm not sure I agree with this, because even though Cursor is paying north of 100% of revenues to Anthropic, Anthropic is selling inference at a loss. So if Cursor builds and hosts its own models, it still has the marginal costs > marginal revenues problem.

The way out for Cursor could be a self-hosted, much smaller model that focuses on code rather than the world. That could have inference costs lower than marginal revenues.

bogzz•2mo ago
I suppose Supermaven is doing something to that effect.
xnx•2mo ago
Can you have a useful code model that doesn't understand the world? It seems like such a model would be limited to little more than auto complete.
esafak•2mo ago
I imagine so, through distillation. Start with an all-knowing model, then extract the coding part.
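To make that concrete, here is a minimal sketch of the distillation idea: train a small student to match the big teacher's next-token distribution, but only on code. The teacher, student, optimizer, and code batch are all hypothetical stand-ins, and the loss is the standard soft-label KL objective.

    # Rough sketch only: `teacher`, `student`, `optimizer`, and `batch` are
    # placeholders, not any specific models or dataset.
    import torch
    import torch.nn.functional as F

    def distill_step(teacher, student, optimizer, batch, temperature=2.0):
        # Teacher provides soft targets; no gradients flow through it.
        with torch.no_grad():
            teacher_logits = teacher(batch)        # (batch, seq, vocab)
        student_logits = student(batch)

        # Soften both distributions and minimize KL(teacher || student).
        t_probs = F.softmax(teacher_logits / temperature, dim=-1)
        s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        loss = F.kl_div(s_log_probs, t_probs, reduction="batchmean") * temperature ** 2

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Feeding only code through a loop like this is one plausible way to "extract the coding part" while dropping the rest of the world knowledge.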
vrighter•2mo ago
they already are limited to no more than autocomplete, and they do quite badly at that too
simianwords•2mo ago
what makes you so sure of this?

> Anthropic is selling inference at a loss.

The cost of models has gone down dramatically over time.

jgalt212•2mo ago
True, but they keep using more tokens (Agentic FTW!) and the currently most expensive models.

source: https://www.wheresyoured.at/

simianwords•2mo ago
this is not true. they use more tokens to get more performance - the cost of using the model is going up but the performance is going up with it.
jgalt212•2mo ago
Your observation does not align with recent Cursor price changes.
simianwords•2mo ago
I’m talking about metered billing and not subscription.
vrighter•2mo ago
Say cost goes down by 50% (it didn't), but you fake improvement, absent any real gains, by taking the average of 4 runs: cost just doubled, even though inference technically halved.

Never mind the fact that the current model is always outdated, and a new (bigger) one is always being trained in parallel with the supposedly "cheaper" inference.

mentalgear•2mo ago
> But I think the insight lies between these positions. Even if a new application starts as a wrapper, it can endure if it lives where work is done, writes to proprietary systems of record, builds proprietary data and learns from usage, and/or captures distribution before incumbents bundle the feature.

Basically the same as what MS and social media did: build a proprietary silo around data and amass enough of it that moving away from the first provider becomes too big an inconvenience.

It's good that the EU has laws now to ensure data interoperability, export & ownership.

jrvarela56•2mo ago
I agree with you in spirit, but this harms the potential for these new products to emerge. You’re saying you don’t want them to be able to accrue a data moat. It sounds good for user privacy and optionality later on, but it makes it harder for these services to get started, as they don’t see that model as possible.
swyx•2mo ago
i recently framed this as "agent labs" vs "model labs" - https://latent.space/p/agent-labs - it's definitely far from proven or given that they are a lasting business model, but i think the dynamic is at least more evident now than it was a year ago, and even that is notable as we slowly figure out what the new ai economy looks like
_fat_santa•2mo ago
Currently working on a SaaS app that could be called an "AI Wrapper". One thing I picked up on is that once you start using AI programmatically, you can do far more complex things than you can with ChatGPT or Claude.

One thing we've leaned heavily into is using Langgraph for agentic workflows, and it's really opened the door to cool ways you can use AI. These days the way I tell an AI "wrapper" apart from a "tool" is the underlying paradigm. Most "wrappers" just copy the paradigm of ChatGPT/Claude, where you have a conversation with an agent; the "tools" are where you take the ability to generate content and plug it into a broader workflow.

embedding-shape•2mo ago
> One thing we've leaned heavily into was using Langgraph for agentic workflows

Probably my single biggest mistake so far in developing LLM tooling has been trying to use Langgraph, even after inspecting the codebase, because people I thought were smarter than me hyped it up.

Do yourself a favor and just write the plumbing yourself; it's a lot easier than one might think before digging into it. Tool calling is literally a loop passing tool requests and responses back and forth until the model responds, and having your own abstractions will make it a lot easier to build proper workflows. Plus you get to use whatever language you want and don't have to deal with Python.
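To make that concrete, here is a minimal sketch of such a loop. call_model and the TOOLS registry are hypothetical placeholders, not any particular provider's API, and the message shapes are assumptions; the point is just that the control flow fits in a couple dozen lines.

    import json

    # Toy tool registry: tool name -> function taking parsed arguments (hypothetical).
    TOOLS = {
        "get_time": lambda args: {"time": "2026-01-05T12:00:00Z"},
    }

    def call_model(messages):
        """Placeholder for a real chat call that may return tool-call requests."""
        raise NotImplementedError("wire this up to your provider of choice")

    def run_agent(user_prompt, max_turns=10):
        messages = [{"role": "user", "content": user_prompt}]
        for _ in range(max_turns):
            reply = call_model(messages)            # assistant message (dict)
            messages.append(reply)
            tool_calls = reply.get("tool_calls") or []
            if not tool_calls:                      # no tools requested: final answer
                return reply["content"]
            for call in tool_calls:                 # run each requested tool...
                result = TOOLS[call["name"]](json.loads(call["arguments"]))
                messages.append({                   # ...and feed the result back
                    "role": "tool",
                    "tool_call_id": call["id"],
                    "content": json.dumps(result),
                })
        raise RuntimeError("agent did not finish within max_turns")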

CjHuber•2mo ago
I can recommend looking into DSPy. I haven't felt the need to use any other LLM-based frameworks, though I'm open to suggestions.
embedding-shape•2mo ago
There really isn't a need; all they add is additional code to be responsible for. Building the same abstractions yourself, focused on your use case, will be something like 50-100 lines of code. Hard to beat the simplicity, and the understanding you'll get.
TrackerFF•2mo ago
What do we call companies that spin up open-source ML models, and basically just sell access to said models?

Examples are audio/stem separation and object segmentation.

They're not wrappers, but whatever sits one step deeper down in complexity.

jabroni_salad•2mo ago
I think those are just inference providers unless they are adding something on top of it.
lubujackson•2mo ago
Almost every startup is a wrapper of some sort, and has been for a while. The reason a startup can start up is that it has some baked-in competency from using new and underutilized tools. In the dot-com boom, that was the internet itself.

Now it's AI. Only after doing this for 20+ years do I really appreciate that the arduous process and product winnowing that happens over time is the bulk of the value (and the moat, when none other exists).

amelius•2mo ago
> Almost every startup is a wrapper of some sort, and has been for a while.

The difference is that with AI they will send your data to a third party.

max002•2mo ago
Can't help myself and compare to frameworks, libraries and OOP... can't we build so fast because of them?

I think of a wrapper more as a very thin layer around something, and a thin layer is easy to reproduce. I don't question that a smart collection of wrappers can make a great product. It's all about the idea :)

However, if one's idea is based purely on wrappers, there's really no moat, nothing stopping somebody else from copying it in a moment.

btown•2mo ago
OpenAI is just a wrapper around NVIDIA, which is just a wrapper around TSMC, which is just a wrapper around ASML, which is just a wrapper around Zeiss optics, which is just a wrapper around EUV photons, which are just wrappers around quarks, which are just wrappers around quantum fields...

A Large Language Model is just a Large Hadron Model with better marketing.

random3•2mo ago
The processor is built of transistors, built of silicon. The paper that wraps the box that wraps the processor is simply a mindless container. Yes, there’s nuance when it comes to “wrapper” companies, but in the end they may just be wrappers. Back in the “web 2.0” days these things were called mash-ups, and nobody tried to make them look like companies.
esafak•2mo ago
I would argue that a "wrapper" denotes a product that could be replicated with little effort, not that one relies on the other. None of your examples fit this definition.
M4R5H4LL•2mo ago
I believe the controversy arises from the notion of “little effort”, and from critics who have never independently pushed anything to market. It comes across as dismissive and arrogant, simply someone exuding excessive confidence in a limited set of skills. I can personally attest to the immense demands of building a successful business, and it’s evident that very few individuals possess the capability to achieve that. So while it may be comforting to avoid challenging oneself and to dismiss the totality of others’ work, it ultimately doesn’t benefit anyone and feels more like a self-serving “I could, but I never will” attitude.
chris_pie•2mo ago
The potential customer rarely cares whether a service provider is running their business well. What matters is the product's value added and risks added, as compared to just using the underlying tech directly.
keiferski•2mo ago
Marketing, UI, and brand matter a lot. Especially when all of the products are functionally "the same" to the average consumer, who doesn't care about technical details and benchmarks, etc. It reminds me of this great scene from Mad Men:

This is the greatest advertising opportunity since the invention of cereal. We have six identical companies making six identical products. We can say anything we want.

https://youtu.be/8SsnkXH2mQY?si=SWPOsGBel1yh3kMd&t=198

yawnxyz•2mo ago
right, if you look at the largest food companies, they're all just wrappers around proteins and simple/complex carbs, yet some products can do so well, and others so poorly
darepublic•2mo ago
as others have mentioned -- I think wrapper is a fair term. It was not trivial and took untold man-hours of research and labour to go from Nvidia GPUs to modern LLMs, yet some of the AI products really do feel like minimal engineering around calls to OpenAI (or Claude or what have you)
dworks•2mo ago
Software is 10x more valuable than inference tokens because tokens do nothing for the user, just like a database request by itself does nothing.

Software is what makes inference valuable because it builds a workflow that transforms tokens and data into practical benefits.

Look at the payment plans for Lovable, Figma Make, and Claude Code. None of them charge by token; they charge by obfuscated 'credits'. We don't know the current credit economics, but the credit markup will almost certainly increase and probably eventually reach 10x the token cost. Users will gladly pay for it because, again, tokens do nothing for them. It is the Claude Code and Figma Make products that make them productive.

pempem•2mo ago
The claim is too absolute. Software amplifies value, but inference cost and capability still shape what’s possible. Users aren’t demanding obfuscation; they just want predictable pricing and clear ROI. Does anyone want hidden math in their pricing?

In many markets, transparency wins. Think of Carfax, banking fees, or Airbnb pricing: when regulators or competitors force clarity, buyers benefit and trust grows. In a functioning government that serves the people (regardless of party), we would see this.

People believe they “need” these AI products partly because they’re saturated in both earned and paid media. In '23 there were nearly 400k articles covering AI. I think we can all safely assume it’s more now, and once you include financial reporting, it’s quite inescapable.

kayart_dev•2mo ago
It’s hard to count how many ffmpeg wrappers there are.