frontpage.

We Will Not Be Divided

https://notdivided.org
1070•BloondAndDoom•5h ago•407 comments

Statement on the comments from Secretary of War Pete Hegseth

https://www.anthropic.com/news/statement-comments-secretary-war
781•surprisetalk•4h ago•268 comments

Don't use passkeys for encrypting user data

https://blog.timcappalli.me/p/passkeys-prf-warning/
87•zdw•3h ago•37 comments

Croatia declared free of landmines after 31 years

https://glashrvatske.hrt.hr/en/domestic/croatia-declared-free-of-landmines-after-31-years-12593533
90•toomuchtodo•3h ago•8 comments

OpenAI agrees with Dept. of War to deploy models in their classified network

https://twitter.com/sama/status/2027578652477821175
332•eoskx•3h ago•195 comments

Cash Issuing Terminals

https://computer.rip/2026-02-27-ibm-atm.html
7•zdw•54m ago•0 comments

Show HN: I ported Manim to TypeScript (run 3b1B math animations in the browser)

https://github.com/maloyan/manim-web
48•maloyan•2d ago•11 comments

Smallest transformer that can add two 10-digit numbers

https://github.com/anadim/AdderBoard
124•ks2048•1d ago•46 comments

OpenAI raises $110B on $730B pre-money valuation

https://techcrunch.com/2026/02/27/openai-raises-110b-in-one-of-the-largest-private-funding-rounds...
451•zlatkov•15h ago•496 comments

President Trump bans Anthropic from use in government systems

https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
231•pkress2•8h ago•186 comments

A new California law says all operating systems need to have age verification

https://www.pcgamer.com/software/operating-systems/a-new-california-law-says-all-operating-system...
508•WalterSobchak•15h ago•481 comments

Qt45: A small polymerase ribozyme that can synthesize itself

https://www.science.org/doi/10.1126/science.adt2760
68•ppnpm•6h ago•14 comments

OpenAI reaches deal to deploy AI models on U.S. DoW classified network

https://www.reuters.com/business/openai-reaches-deal-deploy-ai-models-us-department-war-classifie...
89•erhuve•2h ago•25 comments

A better streams API is possible for JavaScript

https://blog.cloudflare.com/a-better-web-streams-api/
394•nnx•16h ago•136 comments

A Chinese official’s use of ChatGPT revealed an intimidation operation

https://www.cnn.com/2026/02/25/politics/chatgpt-china-intimidation-operation
186•cwwc•14h ago•116 comments

NASA announces overhaul of Artemis program amid safety concerns, delays

https://www.cbsnews.com/news/nasa-artemis-moon-program-overhaul/
238•voxadam•13h ago•259 comments

Eschewing Zshell for Emacs Shell (2014)

https://www.howardism.org/Technical/Emacs/eshell-fun.html
22•pvdebbe•3d ago•5 comments

Get free Claude max 20x for open-source maintainers

https://claude.com/contact-sales/claude-for-oss
518•zhisme•21h ago•211 comments

Open source calculator firmware DB48X forbids CA/CO use due to age verification

https://github.com/c3d/db48x/commit/7819972b641ac808d46c54d3f5d1df70d706d286
162•iamnothere•14h ago•84 comments

Time-Travel Debugging: Replaying Production Bugs Locally

https://lackofimagination.org/2026/02/time-travel-debugging-replaying-production-bugs-locally/
6•tie-in•2d ago•0 comments

I am directing the Department of War to designate Anthropic a supply-chain risk

https://twitter.com/secwar/status/2027507717469049070
1227•jacobedawson•7h ago•989 comments

Show HN: Claude-File-Recovery, recover files from your ~/.claude sessions

https://github.com/hjtenklooster/claude-file-recovery
68•rikk3rt•13h ago•25 comments

Semantic Syntax Highlighting for Lisp in Emacs

https://github.com/calsys456/lisp-semantic-hl.el
5•oumua_don17•3d ago•0 comments

Inventing the Lisa user interface – Interactions

https://dl.acm.org/doi/10.1145/242388.242405
26•rbanffy•2d ago•2 comments

Bootc and OSTree: Modernizing Linux System Deployment

https://a-cup-of.coffee/blog/ostree-bootc/
9•mrtedbear•3h ago•1 comment

Implementing a Z80 / ZX Spectrum emulator with Claude Code

https://antirez.com/news/160
139•antirez•2d ago•67 comments

Show HN: Unfucked - version all changes (by any tool) - local-first/source avail

https://www.unfudged.io/
81•cyrusradfar•1d ago•41 comments

Kyber (YC W23) Is Hiring an Enterprise Account Executive

https://www.ycombinator.com/companies/kyber/jobs/59yPaCs-enterprise-account-executive-ae
1•asontha•11h ago

Let's discuss sandbox isolation

https://www.shayon.dev/post/2026/52/lets-discuss-sandbox-isolation/
122•shayonj•11h ago•40 comments

Writing a Guide to SDF Fonts

https://www.redblobgames.com/blog/2026-02-26-writing-a-guide-to-sdf-fonts/
87•chunkles•11h ago•6 comments

Just make it scale: An Aurora DSQL story

https://www.allthingsdistributed.com/2025/05/just-make-it-scale-an-aurora-dsql-story.html
134•cebert•9mo ago

Comments

glzone1•9mo ago
Early DSQL had some weird limits, I think - is anyone actually using it in production who can give feedback on the current corners and limits?
Marbling4581•9mo ago
I don't use it, but have been keeping an eye on it.

At launch, they limited the number of affected tuples to 10,000, including tuples in secondary indexes. They recently changed this limit to:

> A transaction cannot modify more than 3,000 rows. The number of secondary indexes does not influence this number. This limit applies to all DML statements (INSERT, UPDATE, DELETE).

There are a lot of other (IMO prohibitive) restrictions listed in their docs.

https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
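
To make that limit concrete, here's a rough sketch of how an application might chunk a bulk load so each transaction stays under the 3,000-row cap. The table, columns, and use of tokio-postgres are illustrative assumptions, nothing DSQL-specific:

    // Hypothetical sketch: keep each transaction under the 3,000-modified-row
    // limit by splitting a bulk load into chunks, one transaction per chunk.
    use tokio_postgres::{Client, Error};

    const MAX_ROWS_PER_TXN: usize = 3_000;

    async fn bulk_insert(client: &mut Client, rows: &[(i64, String)]) -> Result<(), Error> {
        for chunk in rows.chunks(MAX_ROWS_PER_TXN) {
            let txn = client.transaction().await?;
            for (id, payload) in chunk {
                // `events` is a made-up table; every row touched by DML counts toward the limit.
                txn.execute("INSERT INTO events (id, payload) VALUES ($1, $2)", &[id, payload])
                    .await?;
            }
            txn.commit().await?;
        }
        Ok(())
    }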

pzduniak•9mo ago
Who would use Preview products in production? I'm building out some software that would fit perfectly into the constraints set for DSQL, but I realistically can't commit to something with no pricing / guarantees.
EwanToo•9mo ago
This blog post appears to be part of the scheduled launch marketing; it's now generally available:

https://aws.amazon.com/blogs/aws/amazon-aurora-dsql-is-now-g...

mjb•9mo ago
Which features would you like to see the team build first? Which limits would you like to see lifted first?

Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.

avereveard•9mo ago
Indexes! Vector, trigram, and maybe geospatial. (Some may be in by now; I didn't follow the service as closely as others.)

Note: it doesn't have to be pg_vector, pg_trgm, or PostGIS - just the index component, even a clean-room implementation, would make this way more useful.

loginatnine•9mo ago
Views and foreign keys!
mjb•9mo ago
Thanks. The team's working on both. For views, do you need updatable views, or are read-only views sufficient?
loginatnine•9mo ago
For me it's RO views.
tigy32•9mo ago
I believe views were added to the preview a little while ago

edit from the launch: "With today’s launch, we’ve added support for AWS Backup, AWS PrivateLink, AWS CloudFormation, AWS CloudTrail, AWS KMS customer managed keys, and PostgreSQL views."

mjb•9mo ago
Correct: https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
tomComb•9mo ago
The lack of JSONB is what stopped me.
sgarland•9mo ago
Why does it not support TRUNCATE?
jashmatthews•9mo ago
My understanding is that the way Aurora DSQL distributes data widely makes bulk writes extremely slow/expensive, so no COPY, no INSERT with >3k rows, no TRUNCATE, etc.
sgarland•9mo ago
TRUNCATE is DROP TABLE + CREATE TABLE, not a bulk delete; it bypasses the typical write path entirely.
loevborg•9mo ago
Which ones? It seems eminently usable from the outside now, at least for greenfield work. The subset of Postgres it supports is most of good/core/essential Postgres. (But I haven't tried it)
geodel•9mo ago
Good read. I like that writing both the low-level and the high-level components in Rust proved worthwhile.

Maybe in the future one could transform slow code from high-level languages to a low-level language via LLMs. That could be a nice performance boost for those who don't have Amazon's engineers and budgets.

SahAssar•9mo ago
> Maybe one can transform slow code from high level languages to low level language

I think you are describing a compiler?

geodel•9mo ago
I mean, reading this article:

1) Kotlin code --> Java bytecode --> JVM execution (slow)

vs

2) Kotlin code --> Rust/Zig code --> Rust/Zig compiler --> native execution (fast)

A compiler is involved in both cases, but I was thinking of 2), where the slower code in a high-level language is converted to code in another language whose compiler is known to produce fast-running code.

dhosek•9mo ago
You’re describing a transpiler, but the problem is that idioms in a GC language like Kotlin don’t necessarily translate to a non-GC language like Rust or Zig. Add in the fact that Rust doesn’t have OO inheritance, which is essential for a lot of JVM code to work (I don’t know much about Zig), and I’d be very suspicious of code generated by a Kotlin-to-Rust transpiler. (On the other hand, one of the first transpilers I ever encountered, web2c, worked well because the source language, Pascal, could be fairly easily translated into functional C without much, if any, sacrifice of speed or accuracy.)
mjb•9mo ago
> Maybe one can transform slow code from high level languages to low level language via LLMs in future.

This is one of the areas where I'm most excited about LLM developer tooling. Choosing a language, database, or framework is a really expensive up-front decision for a lot of teams, made when they have the least information about what they're building, and very expensive to take back.

If LLM-powered tools could take 10-100x off the cost of these migrations, it would significantly reduce the risk of early decisions, and make it a ton easier to make software more reliable and cheaper to run.

It's very believable to me that, even with today's model capabilities, that 10-100x is achievable.

geodel•9mo ago
I remember that many years back one of the Go language authors wrote a C-to-Go transformer and used it to convert the whole compiler, runtime, GC, etc. into Go.

Nowadays, experts like the ones above could create base transformers from high-level languages and frameworks to low-level languages and frameworks, with all of this exposed via LLM interfaces.

One could ask why do all this instead of generating a fast binary directly from the high-level code. But generating a textual transformation gives developers the opportunity to understand, tweak, and adjust the transformed code, which generating a binary directly would not.

bee_rider•9mo ago
Python -> C -> Assembly

Probably looks a lot like

Pseudocode -> C -> Assembly

Although the first is easier to run tests on and compare the outputs.

Demiurge•9mo ago
Many interesting things here. For instance, I've been hearing a lot about how fast Java is, that it can be as fast as C++, and then I see this:

> But after a few weeks, it compiled and the results surprised us. The code was 10x faster than our carefully tuned Kotlin implementation – despite no attempt to make it faster. To put this in perspective, we had spent years incrementally improving the Kotlin version from 2,000 to 3,000 transactions per second (TPS). The Rust version, written by Java developers who were new to the language, clocked 30,000 TPS.

I feel like there is more to this, like some kind of a bottleneck, memory footprint, some IO overhead?

> Our conclusion was to rewrite our data plane entirely in Rust.

The point is well taken: figuring it out is not worth it if you can just "rewrite" or have greenfield projects.

> These extension points are part of Postgres’ public API, allowing you to modify behavior without changing core code

Also interesting. So PostgreSQL evolved to the point that it has a stable API for extensibility? This is great for the project: maintain a modular design and some stable APIs, and you can let people mix and match and reduce duplication of effort.

anarazel•9mo ago
> So PostgreSQL evolved to the point that it has a stable API for extensibility?

Not across major versions, no. I seriously doubt we will ever make promises around that. It would hamper development way too much.

Demiurge•9mo ago
I see, then they're probably saying they found the internal APIs that are just more naturally stable, perhaps because they are close to the APIs used for extensions.
ramanh•9mo ago
> I feel like there is more to this, like some kind of a bottleneck, memory footprint, some IO overhead?

Blocking/nonblocking IO can explain these numbers.

karl_p•9mo ago
The JVM can relocate memory to avoid fragmentation. Rust can't, at least natively. Are they not worried about this regression?
geodel•9mo ago
Well, Java needs it because it fragments memory a lot. With Rust one has value types and stack allocation, which take care of one of the biggest causes of fragmentation.
kikimora•9mo ago
Writing code that would not fragment memory over time is arguably much harder than writing GC-friendly code.
geodel•9mo ago
Yeah, and cooking food in the kitchen is much harder than having it delivered from a restaurant to your doorstep.

Reasonable people will see if the cost makes it worthwhile.

tigy32•9mo ago
I haven't found that to be the case in my experience. Just as an example, in Java you tend to end up with essentially a lot of `Vec<Box<Thing>>`, which causes a lot of fragmentation. In Rust you tend to end up with `Vec<Thing>`, where the `Thing`s are inlined (and replace Vec with the stack for the common case). I find it more like Java is better at solving a problem it created by making everything an object.
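
A toy sketch of that layout difference (the `Thing` struct and the sizes are illustrative, not from the article):

    // Java-like shape: the Vec stores pointers, and each Thing is its own heap
    // allocation scattered across the heap (more fragmentation-prone).
    // Inlined shape: one contiguous allocation holding the Things themselves.
    struct Thing {
        id: u64,
        value: f64,
    }

    fn main() {
        // One allocation for the Vec buffer plus one Box allocation per element.
        let boxed: Vec<Box<Thing>> = (0..4u64)
            .map(|i| Box::new(Thing { id: i, value: i as f64 }))
            .collect();

        // One allocation total: the Things are stored back to back.
        let inline: Vec<Thing> = (0..4u64)
            .map(|i| Thing { id: i, value: i as f64 })
            .collect();

        // Each slot of `boxed` is an 8-byte pointer on 64-bit targets;
        // each slot of `inline` is the full 16-byte Thing.
        println!("{} vs {} bytes per element",
                 std::mem::size_of::<Box<Thing>>(),
                 std::mem::size_of::<Thing>());
        assert_eq!(boxed.len(), inline.len());
    }
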
mrkeen•9mo ago
With 10x the throughput (TPS) and the lack of GC pauses (which were the cause of the rewrite), how would they measure such a regression, let alone worry about it?
kondro•9mo ago
It would be really great to get more context on what a DPU is for pricing: https://aws.amazon.com/rds/aurora/pricing/

I understand that AWS did one TPC-C 95/5 read/write benchmark and got 700k transactions for 100k DPUs, but that’s not nearly enough context.

There needs to be either a selection of other benchmark-based pricing (especially for a primarily 50/50 read/write load), actual information on how a DPU is calculated, or a way to return the DPU per query executed, not just an aggregate CloudWatch figure.

We were promised DSQL pricing similar to DynamoDB's, and insofar as it's truly serverless (i.e. no committed pricing) they've succeeded, but one of the best parts of DynamoDB is absolute certainty on cost, even if that cost can sometimes be high.
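
As a back-of-envelope illustration of how little that one benchmark figure lets you model (the per-DPU price below is a placeholder, not AWS's actual rate):

    // 700k TPC-C 95/5 transactions for 100k DPUs: the one published data point.
    fn main() {
        let dpus = 100_000.0_f64;
        let transactions = 700_000.0_f64;
        let dpu_per_txn = dpus / transactions; // ~0.14 DPU per transaction

        let price_per_dpu = 0.10_f64; // hypothetical $/DPU, purely for illustration
        let cost_per_million = dpu_per_txn * price_per_dpu * 1_000_000.0;

        println!("{dpu_per_txn:.3} DPU/txn -> ~${cost_per_million:.0} per 1M transactions");
        // This says nothing about a 50/50 read/write load or any individual query,
        // which is exactly the missing context.
    }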

belter•9mo ago
> one of the best parts of DynamoDB is absolute certainty on cost

That depends on whether it's On Demand or Provisioned, even if they recently added On Demand limits.

kondro•9mo ago
You still have absolute certainty. Read or write x amount of data and it will use exactly y R/WCU.

It then just becomes a modeling problem allowing you to determine your costs upfront during design. That’s one of the most powerful features of the truly serverless products in AWS in my opinion.
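
For comparison, a sketch of the kind of up-front modeling DynamoDB allows, using its published rules of 1 WCU per 1 KB written and 1 RCU per 4 KB strongly-consistent read; the item size and request rates are made-up inputs:

    fn wcu_per_write(item_bytes: u64) -> u64 {
        item_bytes.div_ceil(1024) // 1 WCU covers up to 1 KB written
    }

    fn rcu_per_strong_read(item_bytes: u64) -> u64 {
        item_bytes.div_ceil(4 * 1024) // 1 RCU covers up to 4 KB read (strongly consistent)
    }

    fn main() {
        let item_bytes = 2_500; // example item size
        let writes_per_sec = 400;
        let reads_per_sec = 1_200;

        let wcu = wcu_per_write(item_bytes) * writes_per_sec;      // 3 * 400   = 1,200 WCU
        let rcu = rcu_per_strong_read(item_bytes) * reads_per_sec; // 1 * 1,200 = 1,200 RCU

        println!("provision roughly {wcu} WCU and {rcu} RCU");
    }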

ejkra•9mo ago
Absolute certainty is challenging with a cost-based optimizer in the mix. DDB doesn't face this challenge. That said, the cost for some query patterns in DDB would shift into your application layer - so you may not have exactly the cost certainty you imagine?

Would you be willing to pay more for certainty? E.g. rent the full server at peak + 20% and run at 15% utilization some of the time? Provisioned capacity or pre-committed spend seem like reasonable, but perhaps more costly, ways to get certainty.

mrkeen•9mo ago
Where can I go to read about distributed SQL and big JOINs or WHERE IN clauses? I was hoping this article would cover that elephant in the room, rather than Rust being significantly more performant than JVM languages.
louis-paul•9mo ago
Marc Brooker has written and spoken about DSQL quite a bit. It’s still rather high level. I’d expect one or more papers to come out in the next few months, similarly to other Amazon databases.

https://brooker.co.za/blog/2025/04/17/decomposing.html (includes talk)

https://brooker.co.za/blog/2024/12/03/aurora-dsql.html

https://brooker.co.za/blog/2024/12/04/inside-dsql.html

https://brooker.co.za/blog/2024/12/05/inside-dsql-writes.htm...

https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html

https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html

mrkeen•8mo ago
That's a lot of links for 0 info on distributed JOIN or WHERE IN.