frontpage.

Apple M5 chip

https://www.apple.com/newsroom/2025/10/apple-unleashes-m5-the-next-big-leap-in-ai-performance-for...
733•mihau•6h ago•775 comments

Things I've learned in my 7 Years Implementing AI

https://www.jampa.dev/p/llms-and-the-lessons-we-still-havent
48•jampa•1h ago•16 comments

I almost got hacked by a 'job interview'

https://blog.daviddodda.com/how-i-almost-got-hacked-by-a-job-interview
444•DavidDodda•6h ago•220 comments

Pwning the Nix ecosystem

https://ptrpa.ws/nixpkgs-actions-abuse
188•SuperShibe•6h ago•27 comments

Claude Haiku 4.5

https://www.anthropic.com/news/claude-haiku-4-5
231•adocomplete•2h ago•87 comments

Claude Haiku 4.5 System Card [pdf]

https://assets.anthropic.com/m/99128ddd009bdcb/original/Claude-Haiku-4-5-System-Card.pdf
40•vinhnx•1h ago•3 comments

Clone-Wars: 100 open-source clones of popular sites

https://github.com/GorvGoyl/Clone-Wars
23•ulrischa•1h ago•0 comments

US Passport Power Falls to Historic Low

https://www.henleyglobal.com/newsroom/press-releases/henley-global-mobility-report-oct-2025
60•saubeidl•2h ago•61 comments

Show HN: Halloy – Modern IRC client

https://github.com/squidowl/halloy
202•culinary-robot•7h ago•64 comments

F5 says hackers stole undisclosed BIG-IP flaws, source code

https://www.bleepingcomputer.com/news/security/f5-says-hackers-stole-undisclosed-big-ip-flaws-sou...
70•WalterSobchak•6h ago•31 comments

C++26: range support for std::optional

https://www.sandordargo.com/blog/2025/10/08/cpp26-range-support-for-std-optional
47•birdculture•5d ago•25 comments

A kernel stack use-after-free: Exploiting Nvidia's GPU Linux drivers

https://blog.quarkslab.com/./nvidia_gpu_kernel_vmalloc_exploit.html
92•mustache_kimono•5h ago•6 comments

Recreating the Canon Cat document interface

https://lab.alexanderobenauer.com/updates/the-jasper-report
56•tonyg•5h ago•1 comment

Reverse engineering a 27MHz RC toy communication using RTL SDR

https://nitrojacob.wordpress.com/2025/09/03/reverse-engineering-a-27mhz-rc-toy-communication-usin...
53•austinallegro•5h ago•10 comments

Garbage collection for Rust: The finalizer frontier

https://soft-dev.org/pubs/html/hughes_tratt__garbage_collection_for_rust_the_finalizer_frontier/
82•ltratt•7h ago•74 comments

Leaving serverless led to performance improvement and a simplified architecture

https://www.unkey.com/blog/serverless-exit
211•vednig•8h ago•148 comments

M5 MacBook Pro

https://www.apple.com/macbook-pro/
233•tambourine_man•6h ago•285 comments

Breaking "provably correct" Leftpad

https://lukeplant.me.uk/blog/posts/breaking-provably-correct-leftpad/
56•birdculture•1w ago•15 comments

Show HN: Scriber Pro – Offline AI transcription for macOS

https://scriberpro.cc/hn/
106•rezivor•7h ago•98 comments

Americans' love of billiards paved the way for synthetic plastics

https://invention.si.edu/invention-stories/imitation-ivory-and-power-play
30•geox•6d ago•18 comments

Helpcare AI (YC F24) Is Hiring

1•hsial•7h ago

Bots are getting good at mimicking engagement

https://joindatacops.com/resources/how-73-of-your-e-commerce-visitors-could-be-fake
297•simul007•8h ago•223 comments

Recursive Language Models (RLMs)

https://alexzhang13.github.io/blog/2025/rlm/
6•talhof8•1h ago•0 comments

Pixnapping Attack

https://www.pixnapping.com/
263•kevcampb•13h ago•61 comments

iPad Pro with M5 chip

https://www.apple.com/newsroom/2025/10/apple-introduces-the-powerful-new-ipad-pro-with-the-m5-chip/
168•chasingbrains•6h ago•196 comments

FSF announces Librephone project

https://www.fsf.org/news/librephone-project
1322•g-b-r•19h ago•531 comments

Just talk to it – A way of agentic engineering

https://steipete.me/posts/just-talk-to-it
140•freediver•13h ago•79 comments

Show HN: Specific (YC F25) – Build backends with specifications instead of code

https://specific.dev/
9•fabianlindfors•2h ago•0 comments

David Byrne Radio

https://www.davidbyrne.com/radio#filter=all&sortby=date:desc
73•bookofjoe•4h ago•17 comments

Flapping-wing robot achieves self-takeoff by adopting reconfigurable mechanisms

https://www.science.org/doi/10.1126/sciadv.adx0465
69•PaulHoule•6d ago•18 comments

Claude Haiku 4.5

https://www.anthropic.com/news/claude-haiku-4-5
227•adocomplete•2h ago

Comments

minimaxir•2h ago
$1/M input tokens and $5/M output tokens is good compared to Claude Sonnet 4.5, but with the pace at which the industry is developing smaller/faster LLMs for agentic coding, you can now get comparable models priced much lower, which matters at the scale agentic coding demands.

Given that Sonnet is still a popular model for coding despite the much higher cost, I expect Haiku will get traction if the quality is as good as this post claims.

Bolwin•2h ago
With caching that's 10 cents per million in. Most of the cheap open source models (which this claims to beat, except glm 4.6) have limited and not as effective caching.

This could be massive.
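The 10-cent figure above follows from a 90% discount on cached input reads against Haiku's $1/M list price, with a 25% surcharge on cache writes (the rates here are parameters for illustration, not official numbers for every provider). A quick sketch:

```python
# Back-of-envelope blended input cost with prompt caching.
# Assumed rates: 90% discount on cached reads, 25% surcharge on cache
# writes -- figures discussed in this thread, not universal pricing.

def effective_input_cost_per_mtok(base=1.00, cache_discount=0.90,
                                  cache_write_surcharge=0.25,
                                  hit_rate=0.0, write_rate=0.0):
    """Blended $/M input tokens for a given cache hit/write mix."""
    hit = base * (1 - cache_discount)           # cached reads
    write = base * (1 + cache_write_surcharge)  # first-time cache writes
    miss_rate = 1 - hit_rate - write_rate       # uncached tokens
    return hit_rate * hit + write_rate * write + miss_rate * base

# All input served from cache: $1/M drops to roughly the $0.10/M quoted above.
print(effective_input_cost_per_mtok(hit_rate=1.0))
```

In practice an agentic session mixes writes (first turn) and hits (every later turn over the same context), so the blended rate lands between the two extremes.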

logicchains•2h ago
$1/M is hardly a big improvement over GPT-5's $1.25/M (or Gemini Pro's $1.5/M), and given how much worse Haiku is than those at any kind of difficult problem (or problems with a large context size), I can't imagine it being a particularly competitive alternative for coding. Especially for anything math/logic related, I find GPT-5 and Gemini Pro to be significantly better even than Opus (which is reflected in their models having won Olympiad prizes while Anthropic's have not).
HarHarVeryFunny•1h ago
GPT-5 is $10/M for output tokens, twice the cost of Haiku 4.5 at $5/M, despite Haiku apparently being better at some tasks (SWE Bench).

I suppose it depends on how you are using it, but for coding isn't output cost more relevant than input - requirements in, code out ?

criemen•1h ago
> I suppose it depends on how you are using it, but for coding isn't output cost more relevant than input - requirements in, code out ?

Depends on what you're doing, but for modifying an existing project (rather than greenfield), input tokens >> output tokens in my experience.

logicchains•1h ago
Unless you're working on a small greenfield project, you'll usually have tens to hundreds of thousands of words (~tokens) of relevant code in context for every query, versus a few hundred words of changes output per query, because most changes to an existing project are relatively small in scope.
Tiberium•2h ago
The funny thing is that even in this area Anthropic is behind the other 3 labs (Google, OpenAI, xAI). It's the only one of the 4 that requires you to manually set cache breakpoints, and writing to the cache costs 25% more than normal input tokens. The other 3 have fully free implicit caching, although Google also offers paid, explicit caching.

https://docs.claude.com/en/docs/build-with-claude/prompt-cac...

https://ai.google.dev/gemini-api/docs/caching

https://platform.openai.com/docs/guides/prompt-caching

https://docs.x.ai/docs/models#cached-prompt-tokens
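For concreteness, a minimal sketch of what that manual breakpoint looks like in an Anthropic Messages API payload; the `cache_control` block is the documented mechanism, while the model id string and the prompt are stand-ins. No request is sent here, just the payload shape:

```python
# Anthropic requires an explicit cache breakpoint; the other labs cache
# implicitly. Everything up to and including the marked block is what gets
# written to / read from the prompt cache on subsequent requests.

import json

LONG_SYSTEM_PROMPT = "You are a coding assistant. " * 500  # stand-in corpus

payload = {
    "model": "claude-haiku-4-5",  # assumed id, check the models docs
    "max_tokens": 1024,
    "system": [
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            # The manual breakpoint discussed above:
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Refactor utils.py"}],
}

print(json.dumps(payload["system"][0]["cache_control"]))
```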

tempusalaria•1h ago
I vastly prefer the manual caching. There are several aspects of automatic caching that are suboptimal, with only moderately less developer burden. I don’t use Anthropic much but I wish the others had manual cache options
simonw•1h ago
What's sub-optimal about the OpenAI approach, where you get 90% discount on tokens that you've previously sent within X minutes?
criemen•1h ago
I don't understand why we're paying for caching at all (except: model providers can charge for it). It's almost extortion - the provider stores some data for 5min on some disk, and gets to sell their highly limited GPU resources to someone else instead (because you are using the kv cache instead of GPU capacity for a good chunk of your input tokens). They charge you 10% of their GPU-level prices for effectively _not_ using their GPU at all for the tokens that hit the cache.

If I'm missing something about how inference works that explains why there is still a cost for cached tokens, please let me know!

simonw•1h ago
It's not about storing data on disk, it's about keeping data resident in memory.
criemen•1h ago
Fascinating, so I have to think more "pay for RAM/redis" than "pay for SSD"?
nthypes•59m ago
"Pay for data in VRAM", i.e. the GPU's own RAM.
criemen•45m ago
But that doesn't make sense? Why would they keep the cache persistent in the VRAM of the GPU nodes, which are needed for model weights? Shouldn't they be able to swap in/out the kvcache of your prompt when you actually use it?
minimaxir•42m ago
That is slow.
dotancohen•19m ago
They are not caching to save network bandwidth. They are caching to increase interference speed and reduce (their own) costs.
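For a sense of scale, here is a rough KV-cache size estimate for a long prompt. The model dimensions (layer count, KV heads, head size) are made-up but plausible numbers for a mid-sized model, not any vendor's real figures:

```python
# Rough KV-cache sizing: why "just swap it in and out" isn't free.
# Even a hypothetical mid-sized model pins a lot of VRAM per long prompt.

def kv_cache_bytes(seq_len, n_layers=60, n_kv_heads=8, head_dim=128,
                   bytes_per_value=2):  # 2 bytes per value for fp16
    # Factor of 2 is for keys and values, stored per layer, per KV head.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

gib = kv_cache_bytes(seq_len=100_000) / 2**30
print(f"{gib:.1f} GiB for a 100k-token prompt")
```

Moving tens of GiB between host RAM and VRAM per request adds real latency and PCIe bandwidth pressure, which is one plausible reason providers keep hot caches resident and charge for it.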
simonw•2h ago
Yeah, I'm a bit disappointed by the price. Claude 3.5 Haiku was $0.8/$4, 4.5 Haiku is $1/$5.

I was hoping Anthropic would introduce something price-competitive with the cheaper models from OpenAI and Gemini, which get as low as $0.05/$0.40 (GPT-5-Nano) and $0.075/$0.30 (Gemini 2.0 Flash Lite).

diwank•1h ago
I am a bit boggled by the pricing lately, especially since the cost has increased even further. Is this driven by choices in model deployment (unquantized, etc.) or simply by perceived quality (as in "hey, our model is crazy good and we are going to charge for it")?
odie5533•1h ago
There's probably less margin on the low end, so they don't want to focus on capturing it.
dr_dshiv•1h ago
Margin? Hahahahaha
odie5533•16m ago
Inference is profitable.
rudedogg•51m ago
This also means API usage through Claude Code got more expensive (but better if benchmarks are to be believed)
justinbaker84•33m ago
I am a professional developer so I don't care about the costs. I would be willing to pay more for 4.5 Haiku vs 4.5 Sonnet because the speed is so valuable.

I spend way too much time waiting for the cutting-edge models to return a response. 73% on SWE-bench is plenty good enough for me.

aliljet•2h ago
What is the use case for these tiny models? Is it speed? Is it to move on device somewhere? Or is it to provide some relief in pricing somewhere in the API? It seems like most use is through the Claude subscription and therefore the use case here is basically non-existent.
kasey_junk•2h ago
They are great for building more specialized tool calls that the bigger models can call out to in agentic loops.
minimaxir•2h ago
If you look at the OpenRouter rankings for LLMs (generally, the models coders use for vibe/agentic coding), you can see that most of them are in the "small" model class as opposed to something like full GPT-5 or Claude Opus, albeit Gemini 2.5 Pro is higher than expected: https://openrouter.ai/rankings
pacoWebConsult•2h ago
One big use-case is that Claude Code with Sonnet 4.5 will delegate more specific, context-heavy tasks to the cheaper model (configurable), spinning up 1-3 sub-agents to do so. This saves a ton of context window in your primary session while also increasing token throughput by fanning out.
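A minimal sketch of that fan-out shape, with a stub standing in for the actual model call; the function names and delegation policy are illustrative, not Claude Code's real internals:

```python
# Fan-out pattern: a primary (expensive) model delegates scoped subtasks
# to a cheaper model, run concurrently. Each sub-agent gets its own small
# context instead of bloating the primary session's window.

import asyncio

async def run_task(task: str, model: str = "haiku") -> str:
    # Stand-in for a real API call to the cheap model.
    await asyncio.sleep(0)  # yield control, as a network call would
    return f"[{model}] done: {task}"

async def fan_out(tasks):
    # Results come back in input order, ready to summarize upstream.
    return await asyncio.gather(*(run_task(t) for t in tasks))

results = asyncio.run(fan_out(["fix lint errors", "write tests", "update docs"]))
print(results)
```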
dlisboa•2h ago
For me speed is interesting. I sometimes use Claude from the CLI with `claude -p` for quick stuff I forget like how to run some docker image. Latency and low response speed is what almost makes me go to Google and search for it instead.
pietz•2h ago
I think with gpt-5-mini and now Haiku 4.5, I’d phrase the question the other way around: what do you need the big models for anymore?

We use the smaller models for everything that's not an internal high-complexity task like coding. Although they would do a good enough job there as well, we happily pay the upcharge to get something a little better.

Anything user facing or when building workflow functionalities like extracting, converting, translating, merging, evaluating, all of these are mini and nano cases at our company.

JLO64•2h ago
In my product I use gpt-5-nano for image ALT text in addition to generating transcriptions of PDFs. It’s been surprisingly great for these tasks, but for PDFs I have yet to test it on a scanned document.
anuramat•2h ago
For me it's the speed; e.g. Cerebras Qwen Coder gets you a completely different workflow as it's practically instant (3k tps) -- feels less like an agent and more like a natural-language shell. Very helpful for iterating on a plan that you then forward to a bigger model.
85392_school•2h ago
System card: https://assets.anthropic.com/m/99128ddd009bdcb/original/Clau... (edit: discussed here https://news.ycombinator.com/item?id=45596168)

This is Anthropic's first small reasoner as far as I know.

RickHull•2h ago
If I'm close to weekly limits on Claude Code with Anthropic Pro, does that go away or stretch out if I switch to Haiku?
visarga•6m ago
Sonnet 4.5 came out two weeks ago. I never had such issues in the past, but now my weekly quota runs out in 2-3 days. I suspect Sonnet 4.5 consumes more usage points than the old Sonnet 4.

I am afraid the Claude Pro subscription now gets about 3x less usage.

steveklabnik•2h ago
I am really interested in the future of Opus; is it going to be an absolute monster, and continue to be wildly expensive? Or is the leap from 4 -> 4.5 going to be more modest?
criemen•1h ago
Technically, they released Opus 4.1 a few weeks ago, so that alone hints at a smaller leap from 4.1 -> 4.5, compared to the leap from Sonnet 4 -> 4.5. That is, of course, if those version numbers represent anything but marketing, which I don't know.
steveklabnik•1h ago
I had forgotten that, given that Sonnet pretty much blows Opus out of the water these days.

Yeah, given how multi-dimensional this stuff is, I assume it's supposed to indicate broad things, closer to marketing than anything objective. Still quite useful.

dheera•1h ago
I wonder what the next smaller model after Haiku will be called. "Claude Phrase"?
steveklabnik•1h ago
It's interesting to think about various aspects of marketing the models, with ChatGPT going the "internal router" direction to address the complexity of choosing. I'd never considered something smaller than Haiku to be needed, but I also rarely used Haiku in the first place...
ACCount37•26m ago
If you're going smaller than Haiku, you might be at the point of using various cheap open models already. The small model would need some good killer features to justify the margins.
Brendinooo•1h ago
Claude Couplet
u8080•48m ago
Claude Banger
dotancohen•22m ago
If they do come up with a tiny model tuned for generating conversion and code, I think that Claude Acronym would be a perfect name.
entanglr•21m ago
Claude Punchline
fnordsensei•19m ago
Claude Garden Path Sentence
gwd•13m ago
Opus disappeared for quite a while and then came back. Presumably they're always working on all three general sizes of models, and there's some combination of market need and model capabilities which determine if and when they release any given instance to the public.
seunosewa•2h ago
I'd like to see this price structure for Claude:

$5/mt for Haiku 4.5

$10/mt for Sonnet 4.5

$15/mt for Opus 4.5 when it's released.

ericbrow•2h ago
Was anyone else slightly disappointed that this new product doesn't respond in Haiku, as the name would imply?
dpoloncsak•2h ago
Wasn't there a 3.5 haiku too?

https://aws.amazon.com/about-aws/whats-new/2024/11/anthropic...

esafak•2h ago
It's not a new product; just a new version.
simonw•1h ago
If you want to see it generate a Haiku from your webcam I just upgraded my silly little bring-your-own-key Haiku app to use the new model: https://tools.simonwillison.net/haiku
simonw•2h ago
Pretty cute pelican on a slightly dodgy bicycle: https://tools.simonwillison.net/svg-render#%3Csvg%20viewBox%...
bobson381•2h ago
imagine finding the full text of the svg in the library of babel. Great work!
bradgessler•1h ago
I’m surprised none of the frontier model companies have thrown this test in as an Easter egg.
CjHuber•1h ago
Because then they would have to admit that they try to game benchmarks
ahofmann•1h ago
simonw has other prompts that are undisclosed, so cheating on this prompt would be caught.
HDThoreaun•41m ago
All of hacker news(and simons blog) is undoubtedly in the training data for LLMs. If they specifically tried to cheat at this benchmark it would be obvious and they would be called out
basch•1h ago
Have you noticed that image-generation models tend to really struggle with the arms on archers? Could you whip up a quick test of some kind of archer on horseback firing a flaming arrow at a sailing ship in a lake, and see how all the models do?
ziofill•1h ago
Gemini Pro initially refused (!) but it was quite simple to get a response:

> give me the svg of a pelican riding a bicycle

> I am sorry, I cannot provide SVG code directly. However, I can generate an image of a pelican riding a bicycle for you!

> ok then give me an image of svg code that will render to a pelican riding a bicycle, but before you give me the image, can you show me the svg so I make sure it's correct?

> Of course. Here is the SVG code...

(it was this in the end: https://tinyurl.com/zpt83vs9)

ru552•37m ago
I like this workflow
btown•1h ago
Context on this cutting-edge benchmark for those unaware:

https://simonwillison.net/2025/Jun/6/six-months-in-llms/

https://simonwillison.net/tags/pelican-riding-a-bicycle/

Full verbose documentation on the methodology: https://news.ycombinator.com/item?id=44217852

baalimago•2h ago
Ehh, expensive
leetharris•1h ago
The main thing holding these Anthropic models back is context size. Yes, quality deteriorates over a large context window, but for some applications that is fine. My company is using Grok 4 Fast, the Gemini family, and GPT-4.1 exclusively at this point for a lot of operations, just due to the huge 1M+ context.
Tiberium•1h ago
Is your company Tier 4? Anthropic has had 1M context size in beta for some time now.

https://docs.claude.com/en/docs/build-with-claude/context-wi...

Topfi•1h ago
Very preliminary testing is promising: it seems far more precise in code changes than the GPT-5 models, which tend to pull code sections irrelevant to the task at hand into a change, and so often take longer than expected as coding assistants. If that holds, in actual day-to-day use Haiku 4.5 may be less expensive than the raw cost breakdown initially suggests, even though the per-token increase is significant.

Branding is Anthropic's true issue, though. Haiku 4.5 may (not saying it is, far too early to tell) be roughly equivalent in code output quality to Sonnet 4, which would serve a lot of users amazingly well. But given the connotations smaller models carry, alongside recent performance degradations that have made users more suspicious than before, getting them to adopt Haiku 4.5 over even Sonnet 4.5 will be challenging. I'd love to know whether Haiku 3, 3.5, and 4.5 are roughly in the same ballpark in parameter count (and of course, nerdy old me would like that to be public information for all models), but in fairness to the companies, many users would just pick the largest model, thinking it serves every use case best. GPT-5 is still the most impressive to me for its pricing relative to performance, and Haiku may end up similar, though with far less adoption. Everyone believes their task requires no less than Opus, it seems.

For reference:

Haiku 3: I $0.25/M, O $1.25/M

Haiku 4.5: I $1.00/M, O $5.00/M

GPT-5: I $1.25/M, O $10.00/M

GPT-5-mini: I $0.25/M, O $2.00/M

GPT-5-nano: I $0.05/M, O $0.40/M

GLM-4.6: I $0.60/M, O $2.20/M
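Plugging those list prices into a representative request makes the gap concrete. The 50k-input/2k-output shape is an assumed agentic-coding profile (modifying an existing codebase is input-heavy, per the discussion above), not a measured average:

```python
# Per-request cost at the per-million list prices quoted above.
# The 50k in / 2k out request shape is an assumption for illustration.

PRICES = {  # model: (input $/M, output $/M)
    "Haiku 3":    (0.25, 1.25),
    "Haiku 4.5":  (1.00, 5.00),
    "GPT-5":      (1.25, 10.00),
    "GPT-5-mini": (0.25, 2.00),
    "GPT-5-nano": (0.05, 0.40),
    "GLM-4.6":    (0.60, 2.20),
}

def request_cost(model, in_tok=50_000, out_tok=2_000):
    """Dollar cost of one request at list price, no caching."""
    i, o = PRICES[model]
    return (in_tok * i + out_tok * o) / 1_000_000

for model, _ in sorted(PRICES.items(), key=lambda kv: request_cost(kv[0])):
    print(f"{model:10s} ${request_cost(model):.4f}")
```

At this shape a Haiku 4.5 request costs $0.06 versus $0.0825 for GPT-5, so input-heavy workloads narrow the headline output-price gap considerably.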

deadbabe•1h ago
Those numbers don’t mean anything without average token usage stats.
larodi•36m ago
Been waiting for the Haiku update, as I still do a lot of dumb work with the old one, and it is darn cheap for what you get out of it with smart prompting. Very neat they finally released this; updating all my bots... sorry, agents :)
knes•1h ago
At augmentcode.com we've been evaluating Haiku for some time, and it's actually a very good model. We found it's 90% as good as Sonnet and ~34% faster!

Where it doesn't shine as much is on very large coding tasks, but it is a phenomenal model for small coding tasks, and the speed improvement is most welcome.

samuelknight•1h ago
90% as good as Sonnet 4 or 4.5? OpenRouter just started reporting, and it's showing Haiku at 2x the speed (125 tps vs Sonnet's 60 tps) with 2-3x lower latency (1s vs 2-3s).
sim04ful•1h ago
Curious that they don't have any comparison to Grok Code Fast:

Haiku 4.5: I $1.00/M, O $5.00/M

Grok Code: I $0.2/M, O $1.5/M

Squarex•1h ago
wow, grok code fast is really cheap
samuelknight•1h ago
Sonnet 4.5 is an excellent model for my startup's use case. Chatting with Haiku, it looks promising too, and it may be a great drop-in replacement for some inference tasks that have a lot of input tokens but don't require 4.5-level intelligence.
shrisukhani•1h ago
In our (very) early testing at Hyperbrowser, we're seeing Haiku 4.5 do really well on computer use as well. Pretty cool that Haiku is now the cheapest computer-use model from the big labs.
stared•1h ago
While I use cheaper models for summaries (a lot of gemini-2.5-flash), what's the use case for cheaper AI in coding? Getting more errors, or more spaghetti code, never seems worth it.
baq•1h ago
If it’s fast enough it can make and correct mistakes faster, potentially getting to a solution quicker than a slower, more accurate model.
justinbaker84•19m ago
I feel like if I just do a better job of providing context and breaking complex tasks into a series of simple tasks then most of the models are good enough for me to code.
zone411•1h ago
I've benchmarked it on the Extended NYT Connections (https://github.com/lechmazur/nyt-connections/). It scores 20.0 compared to 10.0 for Haiku 3.5, 19.2 for Sonnet 3.7, 26.6 for Sonnet 4.0, and 46.1 for Sonnet 4.5.
senko•1h ago
I've tried it on a test case for generating a simple SaaS web page (design + code).

Usually I'm using GPT-5-mini for that task. Haiku 4.5 runs 3x faster with roughly comparable results (I slightly prefer the GPT-5-mini output but may have just accustomed to it).

justinbaker84•6m ago
I don't understand why more people don't talk about how fast the models are. I see so much obsession with benchmark scores, but speed of response is very important for day-to-day use.

I agree that the models from OpenAI and Google have much slower responses than the models from Anthropic. That makes a lot of them impractical for me.

ilaksh•1h ago
What LLM do you guys use for fast inference for voice/phone agents? I feel like to get really good latency I need to "cheat" with Cerebras, groq or SambaNova.

Haiku 4.5 is very good but still seems to be adding a second of latency.

ashirviskas•1h ago
And I was wondering today why Sonnet 4.5 seemed so freaking slow. Now this explains it: Sonnet 4.5 is the new Opus 4.1, the one Anthropic doesn't really want you to use.
justinbaker84•41m ago
I am very excited about this. I am a freelance developer, and getting responses 3x faster is totally worth the slightly reduced capability.

I expect I will be a lot more productive using this instead of Sonnet 4.5, which has been my daily-driver LLM since it came out.

philipp-gayret•31m ago
Tried it in Claude Code via /config; it makes it feel like I'm running on Cerebras. It's seriously fast; the bottleneck is human review at this point.
singularity2001•7m ago
Do you need Pro?
KaiserPro•15m ago
OK, I use Claude, mostly on default settings, but with extended thinking and per-project prompts.

What's the advantage of using haiku for me?

is it just faster?

singularity2001•8m ago
claude --model Haiku-4.5

doesn't work

mi_lk•5m ago
check the model name here: https://docs.claude.com/en/docs/about-claude/models/overview...