The Joy of Playing Grandia, on Sega Saturn

https://www.segasaturnshiro.com/2025/11/27/the-joy-of-playing-grandia-on-sega-saturn/
36•tosh•1h ago•3 comments

LLM from scratch, part 28 – training a base model from scratch on an RTX 3090

https://www.gilesthomas.com/2025/12/llm-from-scratch-28-training-a-base-model-from-scratch
43•gpjt•6d ago•0 comments

No ARIA is better than bad ARIA

https://www.w3.org/WAI/ARIA/apg/practices/read-me-first/
57•robin_reala•6d ago•21 comments

Epsilon: A WASM virtual machine written in Go

https://github.com/ziggy42/epsilon
48•ziggy42•1w ago•12 comments

Icons in Menus Everywhere – Send Help

https://blog.jim-nielsen.com/2025/icons-in-menus/
568•ArmageddonIt•15h ago•231 comments

The universal weight subspace hypothesis

https://arxiv.org/abs/2512.05117
290•lukeplato•11h ago•103 comments

Richard Stallman on ChatGPT

https://www.stallman.org/chatgpt.html
18•colesantiago•12m ago•8 comments

Kroger acknowledges that its bet on robotics went too far

https://www.grocerydive.com/news/kroger-ocado-close-automated-fulfillment-centers-robotics-grocer...
175•JumpCrisscross•11h ago•156 comments

Manual: Spaces

https://type.today/en/journal/spaces
66•doener•11h ago•7 comments

Jepsen: NATS 2.12.1

https://jepsen.io/analyses/nats-2.12.1
370•aphyr•16h ago•134 comments

Strong earthquake hits northern Japan, tsunami warning issued

https://www3.nhk.or.jp/nhkworld/en/news/20251209_02/
320•lattis•20h ago•148 comments

Microsoft increases Office 365 and Microsoft 365 license prices

https://office365itpros.com/2025/12/08/microsoft-365-pricing-increase/
393•taubek•21h ago•456 comments

Horses: AI progress is steady. Human equivalence is sudden

https://andyljones.com/posts/horses.html
421•pbui•10h ago•328 comments

AMD GPU Debugger

https://thegeeko.me/blog/amd-gpu-debugging/
253•ibobev•19h ago•45 comments

Launch HN: Nia (YC S25) – Give better context to coding agents

https://www.trynia.ai/
116•jellyotsiro•18h ago•75 comments

Has the cost of building software dropped 90%?

https://martinalderson.com/posts/has-the-cost-of-software-just-dropped-90-percent/
293•martinald•16h ago•436 comments

Let's put Tailscale on a jailbroken Kindle

https://tailscale.com/blog/tailscale-jailbroken-kindle
289•Quizzical4230•18h ago•67 comments

Trials avoid high risk patients and underestimate drug harms

https://www.nber.org/papers/w34534
128•bikenaga•16h ago•39 comments

IBM to acquire Confluent

https://www.confluent.io/blog/ibm-to-acquire-confluent/
404•abd12•21h ago•325 comments

Paramount launches hostile bid for Warner Bros

https://www.cnbc.com/2025/12/08/paramount-skydance-hostile-bid-wbd-netflix.html
326•gniting•21h ago•334 comments

The Lost Machine Automats and Self-Service Cafeterias of NYC (2023)

https://www.untappedcities.com/automats-cafeterias-nyc/
78•walterbell•10h ago•24 comments

A thousand-year-long composition turns 25 (2024)

https://longplayer.org/news/2024/12/31/a-thousand-year-long-composition-turns-25/
23•1659447091•4h ago•5 comments

Hunting for North Korean Fiber Optic Cables

https://nkinternet.com/2025/12/08/hunting-for-north-korean-fiber-optic-cables/
260•Bezod•18h ago•92 comments

Periodic Spaces

https://ianthehenry.com/posts/periodic-spaces/
20•surprisetalk•5d ago•6 comments

Cassette tapes are making a comeback?

https://theconversation.com/cassette-tapes-are-making-a-comeback-yes-really-268108
97•devonnull•5d ago•150 comments

Show HN: Fanfa – Interactive and animated Mermaid diagrams

https://fanfa.dev/
115•bairess•4d ago•26 comments

OSHW: Small tablet based on RK3568 and AMOLED screen

https://oshwhub.com/oglggc/rui-xin-wei-rk3568-si-ceng-jia-li-chuang-mian-fei-gong-yi
83•thenthenthen•5d ago•34 comments

Microsoft Download Center Archive

https://legacyupdate.net/download-center/
168•luu•3d ago•24 comments

AI should only run as fast as we can catch up

https://higashi.blog/2025/12/07/ai-verification/
167•yuedongze•17h ago•146 comments

A series of tricks and techniques I learned doing tiny GLSL demos

https://blog.pkh.me/p/48-a-series-of-tricks-and-techniques-i-learned-doing-tiny-glsl-demos.html
187•ibobev•18h ago•24 comments

Amp, Inc. – Amp is spinning out of Sourcegraph

https://ampcode.com/news/amp-inc
82•pdubroy•6d ago

Comments

tonictato•6d ago
I’d love to know the internals here. Is equity being split? Seems like a legal minefield to split a company like this.

Amp was built by Sourcegraph, so I assume all investors and employees of Sourcegraph now get equity in Amp?

foota•12h ago
I have no insight, but I would assume it's a 1:1 sort of split, i.e. everyone who previously had one share of Sourcegraph now has one share of Sourcegraph and one of Amp? That seems like the least legally fraught way to do it.
jeeyoungk•11h ago
Yes, that is easiest; or just be a 100%-owned subsidiary (that's what, say, Waymo is).

The good thing is that afterwards the cap table of the subsidiary or the spun-off company can evolve (e.g. Waymo / Amp can raise money independently of the parent company).

Touche•9h ago
Why is that preferable to just pivoting?
foota•9h ago
Presumably they wanted to continue working on Sourcegraph? Or maybe they want to have something left if Amp flops?
traceroute66•12h ago
No insight here either, but I would guess it is a spin-off ... mostly because it is a US company, and in the US spin-offs are generally tax-free to both the company and its shareholders.

A spin-off is where the parent company creates a subsidiary and distributes the shares in the subsidiary to the existing shareholders, so the shareholders end up holding shares of two companies. Share allocation is done on a pro-rata basis, so each shareholder keeps the same exposure they had before.
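
(To make the pro-rata mechanics concrete, here is a toy sketch in Python; every number is invented for illustration and says nothing about Sourcegraph's actual cap table:)

    # Hypothetical cap table; every figure is invented for illustration.
    parent_shares = {"founders": 400_000, "investors": 350_000, "employees": 250_000}
    total_parent = sum(parent_shares.values())

    spinoff_float = 1_000_000  # shares the new subsidiary issues (assumed)
    spinoff_shares = {
        holder: spinoff_float * n // total_parent
        for holder, n in parent_shares.items()
    }
    # Each holder's percentage of the spin-off equals their percentage of
    # the parent, e.g. founders hold 40% of both companies.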

neom•12h ago
What are the chances we see local models rendering these paid IDEs useless? I presume it will be a very long time before a local model good enough to compete with their ad-supported frontier models runs on the vast majority of machines out there (I presume most people don't have the latest and greatest)?

I was watching the CEO of that Chad IDE give an interview the other day. They are doing the same thing as Amp, just with "brain rot" ads/games etc. instead (different segment, fine): they use that activity to produce value (ad/game usage or whatever) for another business so that they can offset the costs of more expensive models. (I presume this could extend into selling data too, but I don't know that they do that today.)

Congrats to the Amp team on all the success btw; all I hear is great things about the product, good work.

htrp•12h ago
much easier to raise money as a frontier lab
esafak•12h ago
You have to pay by the token, without being able to use your subscription, right? If so, is it better enough than the coding agent that the others ship with to make up for that loss? This is a crowded space.
ics•12h ago
Is Amp the thing that supplanted Cody (which was either developed or acquired by Sourcegraph, can't remember)?
hud_dev•11h ago
Yep, it was all developed by Sourcegraph in-house.
ramraj07•12h ago
Can folks who have compared Amp with other agents share their experience? Some of my colleagues swear this is the best agent out there.
lvl155•11h ago
It was really good in its early stages (this past summer), but that was before Claude Code and Codex took off big time. I would say the biggest downside of Amp is that it's expensive. Like running-Opus-the-whole-time expensive. But they don't have their own model, so what are you really paying for? Prompts? Not so sure. Amp is for people who are not smart enough to roll their own agents, and in that case they shouldn't be using an agentic workflow.
embedding-shape•11h ago
To be honest, I've given it a try a couple of times, but it's so expensive that I'm having a hard time even judging it fairly. The first time I spent just $5, the second $10 and the third $20, but they all went by so fast that I'm worried that even if I find it great, it's way too expensive; and having a number tick up/down makes me nervous or something. And I'm the type of person who has ChatGPT Pro, so I'm not exactly stingy about paying for things I find useful, but there is a limit somewhere, and I guess for me Amp is it.
benatkin•11h ago
It sounds like you're being temporarily stingy due to having ChatGPT Pro. Might be good to get rid of it if you think the grass might be greener outside of Codex.
embedding-shape•11h ago
No, ChatGPT Pro was an example of how I'm not stingy about paying for things I find useful. I'm also paying for Gemini, Claude and other kinds of software to do my job, not even just coding. But even so, I still find Amp too expensive to use for anything useful.

I run every single coding prompt through Codex, Claude Code, Qwen and Gemini, compare which one gives me the best result, and go ahead with that one. Maybe I go with Codex 60% of the time, Claude 20% and Qwen/Gemini the remaining 20%; it's not often that any of them gets enough right. I've tried integrating Amp into my workflow too, but as mentioned, it's too expensive. I do think the grass is currently greenest with Codex, still.
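
(A rough sketch of that fan-out workflow in Python; the CLI invocations below are assumptions about each tool's non-interactive mode, not verified flags, so check each tool's --help before relying on them:)

    import subprocess

    PROMPT = "Fix the failing test in tests/test_parser.py"
    # Command lines are assumptions, not verified against each CLI.
    CLIS = {
        "codex":  ["codex", "exec", PROMPT],
        "claude": ["claude", "-p", PROMPT],
        "gemini": ["gemini", "-p", PROMPT],
    }

    for name, cmd in CLIS.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(f"--- {name} ---")
        print(result.stdout[:2000])  # eyeball the outputs side by side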

benatkin•11h ago
It depends on your perspective. From a startup perspective, this makes you a less interesting potential customer, to which one might attach the term stingy. From a perspective of willingness to invest in your own productivity it doesn't sound stingy, though.
incoming1211•11h ago
As someone who switches between most of the CLIs to compare them, I find Amp is still on top: it costs more, but has the best results. The librarian and the oracle make it leagues ahead of the competition.
Touche•11h ago
I don't understand how people use these tools without a subscription. Unless you are using them very infrequently, paying per token gets costly very fast.
incoming1211•11h ago
Work pays for it. I don't work for stingy companies that don't provide the tools required to do the job. (our team spends > $1000/m EACH on Amp alone)
hboon•7h ago
Could you please share a little on why it's noticeably better than Claude Code on a sub (or five? I mean, sometimes you can brute-force a solution with agents)?
the_mitsuhiko•11h ago
I think it's great but also pricey. Amp, like Claude Code, feels like a product used by the people who build it, and oddly enough that does not seem to be the case for most coding agents out there.
Maxious•10h ago
https://www.askmodu.com/rankings independently aggregates traffic from a variety of agents, and Amp consistently has the highest success rate for small and large tasks.

That aligns with my anecdata :)

alberth•11h ago
Who is Amp's competition?

Because if I understand them correctly, aren't they a wrapper around all the major LLMs (focused specifically on developer use cases)?

lukev•11h ago
I do wonder what the moat is around this class of products (call it "coding agents").

My intuition is that it's not deep... the differentiating factor is "regular" (non-LLM) code which assembles the LLM context and invokes the LLM in a loop.

Claude/Codex have some advantage, because they can RLHF/finetune better than others. But ultimately this is about context assembly and prompting.
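
(A minimal sketch of that loop in Python; call_model is a placeholder for whatever chat-completion API you use, and the single read_file tool is hypothetical — real agents carry a much larger tool set:)

    def call_model(messages):
        # Placeholder: swap in any chat-completion API. Assumed to return
        # {"content": str}, optionally with {"tool": "read_file", "path": str}.
        raise NotImplementedError

    def agent_loop(task):
        # The "regular code": assemble context, call the model, run tools, repeat.
        messages = [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": task},
        ]
        while True:
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply["content"]})
            if reply.get("tool") == "read_file":
                with open(reply["path"]) as f:           # context assembly
                    messages.append({"role": "tool", "content": f.read()})
            else:
                return reply["content"]                  # no tool call: done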

sergiotapia•10h ago
There is no moat. It's all prompts. The only potential moat, I believe, is building your own specialized models using the code your customers send your way.
jonahbenton•10h ago
I think "prompts" are a much richer kind of intellectual property than they are given credit for. I'll put in here a pointer to the recent Odd Lots podcast with Noetica AI, a give-to-get benchmarker of M&A/complex-debt deal terms. Noetica's CEO said they now have over 1 billion "deal terms" in their database, which is only half a dozen years old, and growing constantly. Over 1 billion different legal points on which a complex financial contract might be structured. Even more than that, the representation of terms in the contracts they see can change pretty dramatically quarter to quarter. The industry learns and changes.

The same thing is going to happen with all of the human-language artifacts present in the agentic coding universe. Role definitions, skills, agentic loop prompts... the specific language, choice of words, sequence, etc. really matter and will continue to evolve rapidly, and there will be benchmarkers, I am sure of it, because quite a lot of orgs will consider their prompt artifacts to be IP.

I have personally found that a very high-precision prompt can let a smaller model on personal hardware outperform a lazy prompt given to a foundation model. These word calculators are very, very (very) sensitive. There will be gradations of quality among those who drive them best.

The best law firms are the best because they hire the people who are best with (legal) language, and they are able to retain the reputation and pricing of the best. That is the moat. The same will be the case here.

malux85•7h ago
But the problem is the tight coupling of prompts to models. The half-life of a prompt's value is short because new models arrive frequently; how do you defend a moat that can halve (or worse) any day a new model comes out?

You might get an 80% "good enough" prompt easily, but then all the differentiation (moat) is in that remaining 20%, and that 20% is tied to the model's idiosyncrasies, making the moat fragile and volatile.

sdesol•3h ago
I think the issue is that they (the parent commenter) didn't properly convey, and/or did not realize, that they were arguing for context. Data that is difficult to come by and can be used in a prompt is valuable. Being able to work around something with clever wording (i.e. a prompt) is not a moat.
brcmthrowaway•11h ago
What model does Amp use?
Leynos•11h ago
https://ampcode.com/models
truekonrads•11h ago
I love Amp; it delivers great results. I like that it's opinionated about how it should be used and, for example, tells me when I need to hand off context.

I love that I am not welded to one model and that someone smart has evaluated what's the best fit for what. It's a bit of a shame they lost Steve Yegge as their brand ambassador; a respected, practicing coder is a big endorsement.

To anyone on the fence: give it a go.

Touche•11h ago
So the point of this is to clean up the cap table, right? Current investors aren't getting a stake in the new company?