Cognition Releases SWE-1.5: Near-SOTA Coding Performance at 950 tok/s

https://cognition.ai/blog/swe-1-5
11•yashvg•3mo ago
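For scale, a back-of-the-envelope sketch of what the headline figure means for response latency (assuming 950 tok/s is sustained decode throughput; the response sizes below are illustrative, not from the post):

```python
# Rough latency implied by 950 tok/s sustained decode throughput.
THROUGHPUT_TOK_S = 950

for tokens in (500, 2_000, 10_000):  # illustrative response sizes
    seconds = tokens / THROUGHPUT_TOK_S
    print(f"{tokens:>6} tokens -> {seconds:5.2f} s")
```

Even a 10k-token agentic turn completes in about ten seconds at this rate, which is the practical point of the "near-SOTA at 950 tok/s" framing.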

Comments

swyx•3mo ago
(coauthor) xpost here: https://x.com/cognition/status/1983662836896448756

happy to answer any questions. i think my higher-level insight, to paraphrase McLuhan, is "first the model shapes the harness, then the harness shapes the model". this is the first model that combines cognition's new gb200 cluster, cerebras' cs3 inference, and data from our evals work with {partners} as referenced in https://www.theinformation.com/articles/anthropic-openai-usi...

CuriouslyC•3mo ago
In the interest of transparency, you should update your post with the model you fine-tuned; it matters.
swyx•3mo ago
this is not a question but an assertion that your values are more important than mine, with none of my context and none of your reasoning. you see how this tone is an issue?

regardless of what i'm allowed to say, i will personally defend the view that the qualities of the base model you choose actually matter less and less as long as it's "good enough", because from there the RL/post-train qualities and data take over and become the entire point of differentiation
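A toy sketch of the inheritance question both sides are circling (purely illustrative; nothing in the thread says Cognition used LoRA or any particular fine-tuning method): in low-rank adaptation the base weights stay frozen and only a small adapter is trained, so post-training can steer behavior but cannot fully "wash out" what the base encodes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # toy model width, adapter rank

# Frozen "base model" weights and a post-training target that differs slightly.
W_base = rng.normal(size=(d, d))
W_tgt = W_base + rng.normal(scale=0.1, size=(d, d))

# LoRA-style adapter: only A and B are trained; W_base never changes.
A = rng.normal(size=(r, d))
B = np.zeros((d, r))  # standard zero-init so training starts at the base model

def error():
    # Mean abs gap between the adapted model and the post-training target.
    return np.abs(W_base + B @ A - W_tgt).mean()

e0 = error()
lr = 0.05
for _ in range(2000):
    delta = (W_base + B @ A) - W_tgt   # gradient of 0.5*||W_eff - W_tgt||_F^2
    B -= lr * delta @ A.T              # chain rule through the product B @ A
    A -= lr * B.T @ delta

print(f"error before: {e0:.4f}, after: {error():.4f}")
```

The rank-r adapter shrinks, but cannot eliminate, the gap to an arbitrary full-rank target: the formal version of "the parent's properties are inherited", and also of the counterpoint that a good-enough base plus post-training data is where the differentiation lives.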

CuriouslyC•3mo ago
If you had enough tokens to completely wash out the parent's latent distribution, you would have just trained a new model instead of fine-tuning. That means, by definition, your model still inherits properties of the parent, and for your business customers who want predictable, understandable systems, knowing those inherited properties is going to be useful.

I think the real reason is that it's a Chinese model (I mean, come on) and your parent company doesn't want any political blowback.

luisml77•3mo ago
> you would have just trained a new model instead of fine tuning

As if it doesn't cost tens of millions to pre-train a model. Not to mention the time it takes. Do you want them to stall progress for no good reason?

CuriouslyC•3mo ago
Originally I just wanted to know what their base model was, out of curiosity. Since they fired off such a friendly reply, now I want to know if they're trying to pass off a fine-tuned Chinese model to government customers who have directives to avoid Chinese models, with some hand-waving about how it's safe now because they did some RL on it.
luisml77•3mo ago
I mean, I was going to say that was ridiculous, but now that I think about it more, it's possible that the models could be trained to, say, spy on government data by calling a tool to send the information to China. And some RL might not wipe out that behavior.

I doubt current models from China are trained to do smart spying / inject sneaky tool calls. But based on my deep learning experience with models, both training and inference, it's definitely possible to train a model to do this in a very subtle and hard-to-detect way...

So your point is valid, and I think they should specify the base model for security reasons, or conduct safety evaluations on it before shipping it to sensitive customers

pandada8•3mo ago
very curious: which model can run only up to 950 tok/s even with cerebras?