frontpage.

You should write an agent

https://fly.io/blog/everyone-write-an-agent/
194•tabletcorry•3h ago•86 comments

Two billion email addresses were exposed

https://www.troyhunt.com/2-billion-email-addresses-were-exposed-and-we-indexed-them-all-in-have-i...
288•esnard•3h ago•206 comments

Kimi K2 Thinking, a SOTA open-source trillion-parameter reasoning model

https://moonshotai.github.io/Kimi-K2/thinking.html
540•nekofneko•9h ago•209 comments

Game design is simple

https://www.raphkoster.com/2025/11/03/game-design-is-simple-actually/
52•vrnvu•1h ago•16 comments

Show HN: I scraped 3B Goodreads reviews to train a better recommendation model

https://book.sv
187•costco•1d ago•78 comments

Universe's expansion 'is now slowing, not speeding up'

https://ras.ac.uk/news-and-press/research-highlights/universes-expansion-now-slowing-not-speeding
71•chrka•3h ago•72 comments

Swift on FreeBSD Preview

https://forums.swift.org/t/swift-on-freebsd-preview/83064
168•glhaynes•6h ago•93 comments

Open Source Implementation of Apple's Private Compute Cloud

https://github.com/openpcc/openpcc
342•adam_gyroscope•1d ago•66 comments

The Learning Loop and LLMs

https://martinfowler.com/articles/llm-learning-loop.html
79•johnwheeler•2h ago•47 comments

LLMs encode how difficult problems are

https://arxiv.org/abs/2510.18147
87•stansApprentice•5h ago•15 comments

The Parallel Search API

https://parallel.ai/blog/introducing-parallel-search
79•lukaslevert•7h ago•33 comments

Hightouch (YC S19) Is Hiring

https://job-boards.greenhouse.io/hightouch/jobs/5542602004
1•joshwget•2h ago

FBI tries to unmask owner of archive.is

https://www.heise.de/en/news/Archive-today-FBI-Demands-Data-from-Provider-Tucows-11066346.html
642•Projectiboga•7h ago•340 comments

ICC ditches Microsoft 365 for openDesk

https://www.binnenlandsbestuur.nl/digitaal/internationaal-strafhof-neemt-afscheid-van-microsoft-365
508•vincvinc•7h ago•155 comments

Eating stinging nettles

https://rachel.blog/2018/04/29/eating-stinging-nettles/
159•rzk•12h ago•161 comments

I analyzed the lineups at the most popular nightclubs

https://dev.karltryggvason.com/how-i-analyzed-the-lineups-at-the-worlds-most-popular-nightclubs/
139•kalli•10h ago•66 comments

Show HN: Auto-Adjust Keyboard and LCD Brightness via Ambient Light Sensor[Linux]

https://github.com/donjajo/als-led-backlight
4•donjajo•4d ago•0 comments

Mathematical exploration and discovery at scale

https://terrytao.wordpress.com/2025/11/05/mathematical-exploration-and-discovery-at-scale/
215•nabla9•14h ago•104 comments

The Geometry of Schemes [pdf]

https://webhomes.maths.ed.ac.uk/~v1ranick/papers/eisenbudharris.pdf
3•measurablefunc•6d ago•0 comments

Show HN: TabPFN-2.5 – SOTA foundation model for tabular data

https://priorlabs.ai/technical-reports/tabpfn-2-5-model-report
57•onasta•5h ago•11 comments

Auraphone: A simple app to collect people's info at events

https://andrewarrow.dev/2025/11/simple-app-collect-peoples-info-at-events/
26•fcpguru•9h ago•14 comments

Show HN: See chords as flags – Visual harmony of top composers on musescore

https://rawl.rocks/
103•vitaly-pavlenko•1d ago•27 comments

I may have found a way to spot U.S. at-sea strikes before they're announced

https://old.reddit.com/r/OSINT/comments/1opjjyv/i_may_have_found_a_way_to_spot_us_atsea_strikes/
281•hentrep•19h ago•403 comments

Supply chain attacks are exploiting our assumptions

https://blog.trailofbits.com/2025/09/24/supply-chain-attacks-are-exploiting-our-assumptions/
48•crescit_eundo•8h ago•36 comments

How often does Python allocate?

https://zackoverflow.dev/writing/how-often-does-python-allocate/
78•ingve•5d ago•54 comments

Show HN: Dynamic code and feedback walkthroughs with your coding Agent in VSCode

https://www.intraview.ai/hn-demo
14•cyrusradfar•7h ago•0 comments

Show HN: qqqa – A fast, stateless LLM-powered assistant for your shell

https://github.com/matisojka/qqqa
120•iagooar•13h ago•79 comments

IKEA launches new smart home range with 21 Matter-compatible products

https://www.ikea.com/global/en/newsroom/retail/the-new-smart-home-from-ikea-matter-compatible-251...
274•lemoine0461•10h ago•202 comments

How I am deeply integrating Emacs

https://joshblais.com/blog/how-i-am-deeply-integrating-emacs/
201•signa11•17h ago•137 comments

Ratatui – App Showcase

https://ratatui.rs/showcase/apps/
699•AbuAssar•21h ago•201 comments

Just make it scale: An Aurora DSQL story

https://www.allthingsdistributed.com/2025/05/just-make-it-scale-an-aurora-dsql-story.html
134•cebert•5mo ago

Comments

glzone1•5mo ago
Early DSQL had some weird limits, I think. Is anyone actually using it in production, with feedback on the current corners and limits?
Marbling4581•5mo ago
I don't use it, but have been keeping an eye on it.

At launch, they limited the number of affected tuples to 10000, including tuples in secondary indexes. They recently changed this limit to:

> A transaction cannot modify more than 3,000 rows. The number of secondary indexes does not influence this number. This limit applies to all DML statements (INSERT, UPDATE, DELETE).

There are a lot of other (IMO prohibitive) restrictions listed in their docs.

https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
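
For what it's worth, working within that limit basically means chunking bulk writes yourself. A rough sketch of what that looks like from a Rust client (using tokio-postgres; the table, column, and helper names here are made up, not anything DSQL-specific):

```rust
use tokio_postgres::{Client, Error};

// DSQL caps a transaction at 3,000 modified rows, so a bulk load has to be
// split across multiple transactions.
const MAX_ROWS_PER_TXN: usize = 3_000;

async fn bulk_insert(client: &mut Client, emails: &[String]) -> Result<(), Error> {
    for chunk in emails.chunks(MAX_ROWS_PER_TXN) {
        // One transaction per chunk keeps each commit under the row limit.
        let txn = client.transaction().await?;
        for email in chunk {
            txn.execute("INSERT INTO subscribers (email) VALUES ($1)", &[email])
                .await?;
        }
        txn.commit().await?;
    }
    Ok(())
}
```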

pzduniak•5mo ago
Who would use Preview products in production? I'm building out some software that would fit perfectly into the constraints set for DSQL, but I realistically can't commit to something with no pricing / guarantees.
EwanToo•5mo ago
This blog post appears to be part of the scheduled launch marketing; it's now generally available:

https://aws.amazon.com/blogs/aws/amazon-aurora-dsql-is-now-g...

mjb•5mo ago
Which features would you like to see the team build first? Which limits would you like to see lifted first?

Most of the limitations you can see in the documentation are things we haven't gotten to building yet, and it's super helpful to know what folks need so we can prioritize the backlog.

avereveard•5mo ago
Indexes! Vector, trigram, and maybe geospatial. (Some may be in by now; I didn't follow the service as closely as others.)

Note: it doesn't have to be pg_vector, pg_trgm, or PostGIS; just the index component, even if it's a clean-room implementation, would make this way more useful.

loginatnine•5mo ago
Views and foreign keys!
mjb•5mo ago
Thanks. The team's working on both. For views, do you need updatable views, or are read-only views sufficient?
loginatnine•5mo ago
For me it's RO views.
tigy32•5mo ago
I believe views were added to the preview a little while ago

edit from the launch: "With today’s launch, we’ve added support for AWS Backup, AWS PrivateLink, AWS CloudFormation, AWS CloudTrail, AWS KMS customer managed keys, and PostgreSQL views."

mjb•5mo ago
Correct: https://docs.aws.amazon.com/aurora-dsql/latest/userguide/wor...
tomComb•5mo ago
The lack of JSONB is what stopped me.
sgarland•5mo ago
Why does it not support TRUNCATE?
jashmatthews•5mo ago
My understanding is that the way Aurora DSQL distributes data widely makes bulk writes extremely slow/expensive. So no COPY, no INSERT with >3k rows, no TRUNCATE, etc.
sgarland•5mo ago
TRUNCATE is DROP TABLE + CREATE TABLE, it’s not a bulk delete. It bypasses the typical path for writes entirely.
loevborg•5mo ago
Which ones? It seems eminently usable from the outside now, at least for greenfield work. The subset of Postgres it supports is most of good/core/essential Postgres. (But I haven't tried it)
geodel•5mo ago
Good read. I like that writing both the low-level and the high-level components in Rust proved worthwhile.

Maybe in the future one can transform slow code from high-level languages into a low-level language via LLMs. That could be a nice performance boost for those who don't have Amazon's engineers and budgets.

SahAssar•5mo ago
> Maybe one can transform slow code from high level languages to low level language

I think you are describing a compiler?

geodel•5mo ago
I mean reading this article:

1) Kotlin code --> Java byte code --> JVM execution (slow)

vs

2) Kotlin code --> Rust/Zig code --> Zig compiler --> native execution (fast)

A compiler is involved in both cases, but I was thinking of 2), where slower code in a high-level language is converted to code in another language whose compiler is known to produce fast-running code.

dhosek•5mo ago
You’re describing a transpiler, but the problem is that idioms in a GC language like Kotlin don’t necessarily translate to a non-GC language like Rust or Zig. Add in the fact that Rust doesn’t have OO inheritance which is essential for a lot of JVM code to work (I don’t know much about Zig) and I’d be very suspicious of code generated by a Kotlin to Rust transpiler. (On the other hand, one of the first transpilers I ever encountered, web2c, worked well because the source language, Pascal, could be fairly easily translated into functional C without much if any sacrifice of speed or accuracy.)
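
To make the idiom mismatch concrete, here is roughly what the "translation" of a small JVM-style class hierarchy tends to look like in Rust: a trait plus composition instead of inheritance, which is a structural rewrite rather than a line-by-line one (type names invented for illustration):

```rust
use std::collections::HashMap;

// JVM-style code would write `class CachedStore extends Store`.
// The usual Rust translation: a trait for the shared interface...
trait Store {
    fn get(&self, key: &str) -> Option<String>;
}

struct DiskStore;

impl Store for DiskStore {
    fn get(&self, _key: &str) -> Option<String> {
        None // stand-in for a real lookup
    }
}

// ...and composition instead of a subclass: the "derived" type owns the
// "base" one and delegates to it.
struct CachedStore<S: Store> {
    inner: S,
    cache: HashMap<String, String>,
}

impl<S: Store> Store for CachedStore<S> {
    fn get(&self, key: &str) -> Option<String> {
        self.cache.get(key).cloned().or_else(|| self.inner.get(key))
    }
}

fn main() {
    let store = CachedStore { inner: DiskStore, cache: HashMap::new() };
    assert_eq!(store.get("missing"), None);
}
```
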
mjb•5mo ago
> Maybe one can transform slow code from high level languages to low level language via LLMs in future.

This is one of the areas I'm most excited for LLM developer tooling. Choosing a language, database, or framework is a really expensive up-front decision for a lot of teams, made when they have the least information about what they're building, and very expensive to take back.

If LLM-powered tools could take 10-100x off the cost of these migrations, it would significantly reduce the risk of early decisions, and make it a ton easier to make software more reliable and cheaper to run.

It's very believable to me that, even with today's model capabilities, this 10-100x is achievable.

geodel•5mo ago
I remember that many years back one of the Go language authors wrote a C-to-Go transformer and used it to convert the compiler, runtime, GC, etc. into Go.

Now, experts like that could create base transformers from high-level languages and frameworks to low-level languages and frameworks, and all of this could be exposed via LLM interfaces.

One could ask why do all this instead of generating a fast binary directly from the high-level code. But a textual transformation gives developers the opportunity to understand, tweak, and adjust the transformed code, which generating a binary directly would not.

bee_rider•5mo ago
Python -> C -> Assembly

Probably looks a lot like

Pseudocode -> C -> Assembly

Although the first is easier to run tests on and compare the outputs.

Demiurge•5mo ago
There are many interesting things here. For instance, I've been hearing a lot about how fast Java is, that it can be as fast as C++, and then I see this:

> But after a few weeks, it compiled and the results surprised us. The code was 10x faster than our carefully tuned Kotlin implementation – despite no attempt to make it faster. To put this in perspective, we had spent years incrementally improving the Kotlin version from 2,000 to 3,000 transactions per second (TPS). The Rust version, written by Java developers who were new to the language, clocked 30,000 TPS.

I feel like there is more to this, like some kind of a bottleneck, memory footprint, some IO overhead?

> Our conclusion was to rewrite our data plane entirely in Rust.

The point is well taken: figuring it out is not worth it if you can just rewrite, or have greenfield projects.

> These extension points are part of Postgres’ public API, allowing you to modify behavior without changing core code

Also interesting. So PostgreSQL has evolved to the point that it has a stable API for extensibility? This is great for the project: maintain a modular design and some stable APIs, and you can let people mix and match and reduce duplication of effort.

anarazel•5mo ago
> So PostgreSQL evolved to the point that it has a stable API for extensibility?

Not across major versions, no. I seriously doubt we will ever make promises around that. It would hamper development way too much.

Demiurge•5mo ago
I see. Then they're probably saying they found internal APIs that are just naturally more stable, perhaps because they are close to the APIs used for extensions.
ramanh•5mo ago
> I feel like there is more to this, like some kind of a bottleneck, memory footprint, some IO overhead?

Blocking vs. non-blocking IO could explain these numbers.

karl_p•5mo ago
The JVM can relocate memory to avoid fragmentation. Rust can't, at least natively. Are they not worried about this regression?
geodel•5mo ago
Well, Java needs it because it fragments memory a lot. With Rust one has value types and stack allocation, which takes care of one of the biggest causes of fragmentation.
kikimora•5mo ago
Writing code that will not fragment memory over time is arguably much harder than writing GC-friendly code.
geodel•5mo ago
Yeah, and cooking food in your kitchen is much harder than having it delivered from a restaurant to your doorstep.

Reasonable people will see if the cost makes it worthwhile.

tigy32•5mo ago
I haven't found that to be the case in my experience. Just for example, in Java you tend to end up with essentially a lot of `Vec<Box<Thing>>`, which causes a lot of fragmentation. In Rust you tend to end up with `Vec<Thing>`, where the `Thing`s are inlined (and replace Vec with the stack for the common case). I find it more like Java is better at solving a problem it created by making everything an object.
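
Roughly, the layout difference looks like this (type names made up):

```rust
struct Thing {
    id: u64,
    score: f64,
}

fn main() {
    // Java-style: every element is its own heap allocation that a GC may later
    // need to relocate; the vector itself only holds pointers.
    let boxed: Vec<Box<Thing>> = (0..4u64)
        .map(|id| Box::new(Thing { id, score: 0.0 }))
        .collect();

    // Rust-style: elements live inline in one contiguous buffer, so there is a
    // single allocation and nothing to compact.
    let inline: Vec<Thing> = (0..4u64).map(|id| Thing { id, score: 0.0 }).collect();

    println!("slot size, boxed:  {} bytes (a pointer)", std::mem::size_of::<Box<Thing>>());
    println!("slot size, inline: {} bytes (the whole Thing)", std::mem::size_of::<Thing>());
    println!("{} vs {} elements", boxed.len(), inline.len());
}
```
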
mrkeen•5mo ago
With 10x the throughput (TPS) and the lack of GC pauses (which were the cause of the rewrite), how would they measure such a regression, let alone worry about it?
kondro•5mo ago
It would be really great to get more context on what a DPU is for pricing: https://aws.amazon.com/rds/aurora/pricing/

I understand that AWS did one TPC-C 95/5 read/write benchmark and got 700k transactions for 100k DPUs, but that’s not nearly enough context.

There either needs to be a selection of other benchmark-based pricing (especially for a primarily 50/50 read/write load), actual information on how a DPU is calculated, or a way to return the DPU per query executed, not just an aggregate CloudWatch figure.

We were promised DSQL pricing similar to DynamoDB and insofar as it’s truly serverless (i.e. no committed pricing) they’ve succeeded, but one of the best parts of DynamoDB is absolute certainty on cost, even if that can sometimes be high.
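
The only modeling the published figure really supports is back-of-the-envelope arithmetic like this (the price per million DPUs below is a placeholder, not the real list price):

```rust
fn main() {
    // Figures from the TPC-C 95/5 benchmark mentioned above.
    let dpus = 100_000.0_f64;
    let transactions = 700_000.0_f64;

    // Placeholder: substitute the actual per-region price from the pricing page.
    let price_per_million_dpus = 10.0_f64;

    let dpu_per_txn = dpus / transactions; // ~0.14 DPU per transaction
    let cost_per_million_txns = dpu_per_txn * price_per_million_dpus;

    println!("{dpu_per_txn:.3} DPU per transaction");
    println!("${cost_per_million_txns:.2} per million transactions at the placeholder price");
}
```

And that tells you nothing about how a different read/write mix, query shape, or index count changes the DPU count, which is the missing context.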

belter•5mo ago
> one of the best parts of DynamoDB is absolute certainty on cost

That depends on whether it's On-Demand or Provisioned, even if they recently added On-Demand limits.

kondro•5mo ago
You still have absolute certainty. Read or write x amount of data and it will use exactly y R/WCU.

It then just becomes a modeling problem allowing you to determine your costs upfront during design. That’s one of the most powerful features of the truly serverless products in AWS in my opinion.
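
The modeling is mechanical because the capacity-unit rules are fixed: a write costs 1 WCU per 1 KB of item size (rounded up), and a strongly consistent read costs 1 RCU per 4 KB (rounded up; eventually consistent reads cost half). A sketch with made-up workload numbers:

```rust
// 1 WCU per 1 KB written, rounded up.
fn wcu_per_item(item_bytes: u64) -> u64 {
    item_bytes.div_ceil(1_024)
}

// 1 RCU per 4 KB read with strong consistency; eventually consistent reads cost half.
fn rcu_per_item(item_bytes: u64, strongly_consistent: bool) -> f64 {
    let units = item_bytes.div_ceil(4_096) as f64;
    if strongly_consistent { units } else { units / 2.0 }
}

fn main() {
    // Made-up workload: 2.5 KB items, 500 writes/s, 1,000 eventually consistent reads/s.
    let item_bytes = 2_560;
    let wcu = wcu_per_item(item_bytes) * 500;            // 3 WCU per item * 500 = 1,500
    let rcu = rcu_per_item(item_bytes, false) * 1_000.0; // 0.5 RCU per item * 1,000 = 500
    println!("provision roughly {wcu} WCU and {rcu} RCU");
}
```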

ejkra•5mo ago
Absolute certainty is challenging with a cost-based optimizer in the mix; DDB doesn't face this challenge. That said, the cost for some query patterns in DDB would shift into your application layer, so you may not have exactly the cost certainty you imagine?

Would you be willing to pay more for certainty? E.g. rent the full server at peak + 20% and run at 15% utilization some of the time? Provisioned capacity or pre-committed spend seem like reasonable, but perhaps more costly, ways to get certainty.

mrkeen•5mo ago
Where can I go to read about distributed SQL and big JOINs or WHERE IN clauses? I was hoping this article would cover that elephant in the room, rather than Rust being significantly more performant than JVM languages.
louis-paul•5mo ago
Marc Brooker has written and spoken about DSQL quite a bit. It’s still rather high level. I’d expect one or more papers to come out in the next few months, similarly to other Amazon databases.

https://brooker.co.za/blog/2025/04/17/decomposing.html (includes talk)

https://brooker.co.za/blog/2024/12/03/aurora-dsql.html

https://brooker.co.za/blog/2024/12/04/inside-dsql.html

https://brooker.co.za/blog/2024/12/05/inside-dsql-writes.htm...

https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html

https://brooker.co.za/blog/2024/12/17/occ-and-isolation.html

mrkeen•5mo ago
That's a lot of links for 0 info on distributed JOIN or WHERE IN.