frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

AMD teams contributing to the llama.cpp codebase

https://github.com/ggml-org/llama.cpp/pull/14624
1•gzer0•2m ago•0 comments

Nasubi – a real life "Truman Show"

https://en.wikipedia.org/wiki/Nasubi
1•ColinWright•6m ago•0 comments

Harnessing Noncanonical Proteins for Next-Gen Drug Discovery and Diagnosis

https://wires.onlinelibrary.wiley.com/doi/10.1002/wsbm.70001
1•PaulHoule•6m ago•0 comments

Submarines and Foolkillers

https://chicagology.com/harbor/foolkiller/
1•ilamont•7m ago•0 comments

Approximating Reality with CSS Linear()

https://blog.nordcraft.com/approximating-reality-with-css-linear
1•AndreasMoeller•9m ago•0 comments

The First Realtime AI Prompt Management App

https://www.getsnippets.ai/
1•artluko•10m ago•1 comments

The Useless UseCallback

https://tkdodo.eu/blog/the-useless-use-callback
1•0xedb•10m ago•0 comments

DeltaNet Explained

https://sustcsonglin.github.io/blog/2024/deltanet-1/
1•jxmorris12•12m ago•0 comments

Cranelift compiler efficiency, CFGs, and a branch peephole optimizer

https://cfallin.org/blog/2021/01/22/cranelift-isel-2/
1•fanf2•13m ago•0 comments

Origin of "There are only two hard things in Computer Science" quote

https://skeptics.stackexchange.com/questions/19836/has-phil-karlton-ever-said-there-are-only-two-hard-things-in-computer-science
1•nailer•13m ago•0 comments

Rewriting Training Data Improved Kimi 2's Performance

https://www.dbreunig.com/2025/07/27/kimi-applies-rephrasing-to-pre-training-data.html
1•dbreunig•13m ago•0 comments

Virtual Power Plants: Reimagining the Grid for the 21st Century

https://www.utilitydive.com/news/reimagining-the-grid-for-the-21st-century-with-virtual-power-plants/754077/
3•bdev12345•18m ago•0 comments

Auto-generate Linear tasks from meeting transcripts

https://www.snaplinear.app/demo
1•jonahkpump•21m ago•1 comments

We Faked the Moon Landing

https://rumble.com/v60ykdw-how-we-faked-the-moon-landing-with-bart-sibrel-candace-ep-124.html
1•throwaway-153•21m ago•2 comments

Hostile Alien Object Speeds to Earth, Harvard Scientist Says It's Hiding

https://www.ibtimes.co.uk/hostile-alien-object-hurtling-towards-earth-12-mile-entity-deliberately-hiding-detection-1739448
2•handfuloflight•23m ago•1 comments

Founders and Recruiters, Beware

https://twitter.com/pranay01/status/1949896185462083787
3•pranay01•27m ago•0 comments

Throwing AI at Developers Won't Fix Their Problems

https://www.aviator.co/blog/throwing-ai-at-developers-wont-fix-their-problems/
1•tonkkatonka•27m ago•1 comments

Show HN: KrackTheKode – Daily number code-breaking game

https://krackthekode.pyrrho.dev
1•Pyrrho3•28m ago•0 comments

Text-audio foundation model from Boson AI

https://github.com/boson-ai/higgs-audio
1•chaosprint•29m ago•0 comments

Jetson Thor – Advanced AI for Physical Robotics

https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-thor/
1•gnabgib•30m ago•0 comments

Sign in with Google in Chrome

https://underpassapp.com/news/2025/7/5.html
8•frizlab•30m ago•4 comments

The West's data centers suck (water and power)

https://www.hcn.org/issues/57-8/the-wests-data-centers-suck-water-and-power/
1•dangle1•33m ago•0 comments

LLMs can now identify public figures in images

https://minimaxir.com/2025/07/llms-identify-people/
2•minimaxir•33m ago•0 comments

Retirement: Azure SQL Edge will be retired on September 30th, 2025

https://azure.microsoft.com/en-us/updates
1•tracker1•34m ago•1 comments

The Caribbean islands that give you a passport if you buy a home

https://www.bbc.com/news/articles/cly88xg5d9vo
2•geox•35m ago•0 comments

Modern Day LLM Blueprint: A compilation of the most recent technologies

https://nofone.io/llm-blueprint
2•ahmedhawas123•35m ago•0 comments

2025 NASA Space Apps Challenge Science

https://science.nasa.gov/uncategorized/2025-nasa-space-apps/
2•DocFeind•37m ago•0 comments

JPMorgan says fintech middlemen like Plaid are 'massively taxing' its APIs

https://www.cnbc.com/2025/07/28/jpmorgan-fintech-middlemen-plaid-data-requests-taxing-systems.html
6•PieUser•42m ago•2 comments

Launch: Miget – A New Kind of PaaS (No Per-App or Usage-Based Billing)

2•ktaraszk•43m ago•2 comments

How Twiddling Enshittifies Your Brain

https://pluralistic.net/2025/07/28/twiddlehazard/#outboard-brains-considered-harmful
3•almost-exactly•44m ago•0 comments
Open in hackernews

Tao on “blue team” vs. “red team” LLMs

https://mathstodon.xyz/@tao/114915604830689046
325•qsort•6h ago

Comments

_alternator_•5h ago
This red vs blue team is a good way to understand the capabilities and current utility of LLMs for expert use. I trust them to add tests almost indiscriminately because tests are usually cheap; if they are wrong it’s easy to remove or modify them; and if they are correct, they adds value. But often they don’t test the core functionality; the best tests I still have to write myself.

Having LLMs fix bugs or add features is more fraught, since they are prone to cheating or writing non robust code (eg special code paths to pass tests without solving the actual problem).

skdidjdndh•5h ago
> I trust them to add tests almost indiscriminately because tests are usually cheap; if they are wrong it’s easy to remove or modify them

Having worked on legacy codebases this is extremely wrong and harmful. Tests are the source of truth more so than your code - and incorrect tests are even more harmful than incorrect code.

Having worked on legacy codebases, some of the hardest problems are determining “why is this broken test here that appears to test a behavior we don’t support”. Do we have a bug? Or do we have a bad test? On the other end, when there are tests for scenarios we don’t actually care about it’s impossible to determine if that test is meaningful or was added because “it’s testing the code as written”.

yojo•5h ago
I would add that few things slow developer velocity as much as a large suite of comprehensive and brittle tests. This is just as true on greenfield as on legacy.

Anticipating future responses: yes, a robust test harness allows you to make changes fearlessly. But most big test suites I’ve seen are less “harness” and more “straight-jacket”

andrepd•4h ago
I don't understand this. How does it slow your development if the tests being green is a necessary condition for the code being correct? Yes it slows it compared to just writing incorrect code lol, but that's not the point.
yojo•3h ago
"Brittle" here means either:

1) your test is specific to the implementation at the time of writing, not the business logic you mean to enforce.

2) your test has non-deterministic behavior (more common in end-to-end tests) that cause it to fail some small percentage of the time on repeated runs.

At the extreme, these types of tests degenerate your suite into a "change detector," where any modification to the code-base is guaranteed to make one or more tests fail.

They slow you down because every code change also requires an equal or larger investment debugging the test suite, even if nothing actually "broke" from a functional perspective.

Using LLMs to litter your code-base with low-quality tests will not end well.

winstonewert•3h ago
The problem is that sometimes it is not a necessary condition. Rather, the tests might have been checking implementation details or just been wrong in the first place. Now, when tests fails I have extra work to figure out if its a real break or just a bad test.
threatofrain•3h ago
It's that hard to write specs that truly match the business, hence why test-driven-development or specification-first failed to take off as a movement.

Asking specs to truly match the business before we begin using them as tests would handcuff test people in the same way we're saying that tests have the potential to handcuff app and business logic people — as opposed to empowering them. So I wouldn't blame people for writing specs that only match the code implementation at that time. It's hard to engage in prophecy.

marcosdumay•3h ago
> So I wouldn't blame people for writing specs that only match the code implementation at that time.

WFT are you doing writing specs based on implementation? If you already have the implementation, what are you using the specs for? Or, if you want to apply this direct to tests, if you are already assuming the program is correct, what are you trying to test?

Are you talking about rewriting applications?

nyrikki•2h ago
The problem with TDD is that people assumed it was writing a specification, or directly tried to map it directly to post-hoc testing and metrics.

TDD at its core is defining expected inputs and mapping those to expected outputs at the unit of work level, e.g. function, class etc.

While UAT and domain informed what those inputs=outputs are, avoiding trying to write a broader spec that that is what many people struggle with when learning TDD.

Avoiding writing behavior or acceptance tests, and focusing on the unit of implementation tests is the whole point.

But it is challenging for many to get that to click. It should help you find ambiguous requirements, not develop a spec.

MoreQARespect•1h ago
I literally do the diametric opposite of you and it works extremely well.

Im weirded out by your comment. Writing tests that couple to low level implementation details was something I thought most people did accidentally before giving up on TDD, not intentionally.

jrockway•2h ago
The goal of tests is not to prevent you from changing the behavior of your application. The goal is to preserve important behaviors.

If you can't tell if a test is there to preserve existing happenstance behavior, or if it's there to preserve an important behavior, you're slowed way down. Every red test when you add a new feature is a blocker. If the tests are red because you broke something important, great. You saved weeks! If the tests are red because the test was testing something that doesn't matter, not so great. Your afternoon was wasted on a distraction. You can't know in advance whether something is a distraction, so this type of test is a real productivity landmine.

Here's a concrete, if contrived, example. You have a test that starts your app up in a local webserver, and requests /foo, expecting to get the contents of /foo/index.html. One day, you upgrade your web framework, and it has decided to return a 302 Moved redirect to /foo/index.html, so that URLs are always canonical now. Your test fails with "incorrect status code; got 302, want 200". So now what? Do you not apply the version upgrade? Do you rewrite the test to check for a 302 instead of a 200? Do you adjust the test HTTP client to follow redirects silently? The problem here is that you checked for something you didn't care about, the HTTP status, instead of only checking for what you cared about, that "GET /foo" gets you some text you're looking for. In a world where you let the HTTP client follow redirects, like human-piloted HTTP clients, and only checked for what you cared about, you wouldn't have had to debug this to apply the web framework security update. But since you tightened down the screws constraining your application as tightly as possible, you're here debugging this instead of doing something fun.

(The fun doubles when you have to run every test for every commit before merging, and this one failure happened 45 minutes in. Goodbye, the rest of your day!)

ch33zer•3h ago
An old coworker used to call these types of tests change detector tests. They are excellent at telling you whether some behavior changed, but horrible at telling you whether that behavior change is meaningful or not.
jrockway•2h ago
Yup. Working on a 10 year old codebase, I always wondered whether a test failing was "a long-standing bug was accidentally fixed" or "this behavior was added on purpose and customers rely on it". It can be about 50/50 but you're always surprised.

Change detector tests add to the noise here. No, this wasn't a feature customers care about, some AI added a test to make sure foo.go line 42 contained less than 80 characters.

groestl•1h ago
> a long-standing bug was accidentally fixed

In some cases (e.g. in our case) long standing bugs become part of the API that customers rely on.

strbean•1h ago
It's nearly guaranteed, even if it is just because customers had to work around the bug in such a way that their flow now breaks when the bug is gone.

Obligatory: https://xkcd.com/1172/

giaour•15m ago
Also known as Hyrum's Law (https://www.hyrumslaw.com/), but more people know the XKCD at this point :)
PeeMcGee•1h ago
These sorts of tests are invaluable for things like ensuring adherence to specifications such as OAuth2 flows. A high-level test that literally describes each step of a flow will swiftly catch odd changes in behavior such as a request firing twice in a row or a well-defined payload becoming malformed. Say a token validator starts misbehaving and causes a refresh to occur with each request (thus introducing latency and making the IdP angry). That change in behavior would be invisible to users, but a test that verified each step in an expected order would catch it right away, and should require little maintenance unless the spec itself changes.
ozgrakkurt•5h ago
What do you think about leaning on fuzz testing and deriving unit tests from bugs found by fuzzing?
manmal•4h ago
What kind of bugs do you find this way, besides missing sanitization?
raddan•4h ago
You can often find memory errors not directly related to string handling with fuzz testing. More generally, if your program embodies any kind of state machine, you may find that a good fuzzer drives it into states that you did not think should exist.
ozgrakkurt•4h ago
You can use the fuzzer to generate test cases instead of writing test cases manually.

For example you can make it generate queries and data for a database and generate a list of operations and timings for the operations.

Then you can mix assertions into the test so you make sure everything is going as expected.

This is very useful because there can be many combinations of inputs and timings etc. and it tests basically everything for you without you needing to write a million unit tests

cookiengineer•4h ago
Pointer errors. Null pointer returns instead of using the correct types. Flow/state problems. Multithreading problems. I/O errors. Network errors. Parsing bugs... etc

Basically the whole world of bugs introduced by someone being a too smart C/C++ coder. You can battletest parsers quite nicely with fuzzers, because parsers often have multiple states that assume naive input data structures.

bicx•4h ago
I believe they just meant that tests are easy to generate for eng review and modification before actually committing to the codebase. Nothing else is a dependency on an individual test (if done correctly), so it's comparatively cheap to add or remove compared to production code.
_alternator_•4h ago
Yup. I do read and review the tests generated by LLMs. Often the LLM tests will just be more comprehensive than my initial test, and hit edge cases that I didn’t think of (or which are tedious). For example, I’ll write a happy path test case for an API, and a single “bad path” where all of the inputs are bad. The LLM will often generate a bunch of “bad path” cases where only one field has an error. These are great red team tests, and occasionally catch serious bugs.
wagwang•4h ago
This is the conclusion I'm at too, working on a relatively new codebase. Our rule is that every generated test must be human reviewed, otherwise its an autodelete.
manmal•4h ago
> Tests are the source of truth more so than your code

Tests poke and prod with a stick at the SUT, and the SUT's behaviour is observed. The truth lives in the code, the documentation, and, unfortunately, in the heads of the dev team. I think this distinction is quite important, because this question:

> Do we have a bug? Or do we have a bad test?

cannot be answered by looking at the test + the implementation. The spec or people have to be consulted when in doubt.

andruby•4h ago
What does SUT stand for? I'm not familiar with the acronym

Is it "System Under Test"? (That's Claude.ai's guess)

dfabulich•4h ago
It is.
card_zero•4h ago
That's what Wiktionary says too. Lucky guess, Claude.
9rx•4h ago
> The spec

The tests are your spec. They exist precisely to document what the program is supposed to do for other humans, with the secondary benefit of also telling a machine what the program is supposed to do, allowing implementations to automatically validate themselves against the spec. If you find yourself writing specs and tests as independent things, that's how you end up with bad, brittle tests that make development a nightmare — or you simply like pointless busywork, I suppose.

But, yes, you may still have to consult a human if there is reason to believe the spec isn't accurate.

munificent•3h ago
Unfortunately, tests can never be a complete specification unless the system is simple enough to have a finite set of possible inputs.

For all real-world software, a test suite tests a number of points in the space of possible inputs and we hope that those points generalize to pinning down the overall behavior of the implementation.

But there's no guarantee of that generalization. An implementation that fails a test is guaranteed to not implement the spec, but an implementation that passes all of the tests is not guaranteed to implement it.

9rx•3h ago
> Unfortunately, tests can never be a complete specification

They are for the human, which is the intended recipient.

Given infinite time the machine would also be able to validate against the complete specification, but, of course, we normally cut things short because we want to release the software in a reasonable amount of time. But, as before, that this ability exists at all is merely a secondary benefit.

godelski•2h ago

  > The tests are your spec.
That's not quite right, but it's almost right.

Tests are an *approximation* of your spec.

Tests are a description, and like all descriptions are noisy. The thing is it is very very difficult to know if your tests have complete coverage. It's very hard to know if your description is correct.

How often do you figure out something you didn't realize previously? How often do you not realize something and it's instead pointed out by your peers? How often do you realize something after your peers say something that sparks an idea?

Do you think that those events are over? No more things to be found? I know I'm not that smart because if I was I would have gotten it all right from the get go.

There are, of course, formal proofs but even they aren't invulnerable to these issues. And these aren't commonly used in practice and at that point we're back to programming/math, so I'm not sure we should go down that route.

9rx•1h ago
> Tests are a description

As is a spec. "Description" is literally found in the dictionary definition. Which stands to reason as tests are merely a way to write a spec. They are the same thing.

> The thing is it is very very difficult to know if your tests have complete coverage.

There is no way to avoid that, though. Like you point out, not even formal proofs, the closest speccing methodology we know of to try and avoid this, is immune.

> Tests are an approximation of your spec.

Specs are an approximation of what you actually want, sure, but that does not change that tests are the spec. There are other ways to write a spec, of course, but if you went down that road you wouldn't also have tests. That would be not only pointless, but a nightmare due to not having a single source of truth which causes all kinds of social (and sometimes technical) problems.

godelski•1h ago

  > that does not change that tests are the spec.
I disagree. It's, like you say, one description of your spec but that's not the spec.

  > not having a single source of truth
Well that's the thing, there is no single source of truth. A single source of truth is for religion, not code.

The point of saying this is to ensure you don't fall prey to fooling yourself. You're the easiest person for you to fool, after all. You should always carry some doubt. Not so much it is debilitating, but enough to keep you from being too arrogant. You need to constantly check that your documentation is aligned to your specs and that your specs are aligned to your goals. If you cannot see how these are different things then it's impossible to check your alignment and you've fooled yourself.

9rx•1h ago
> You need to constantly check that your documentation is aligned to your specs

Documentation, tests, and specs are all ultimately different words for the same thing.

You do have to check that your implementation and documentation/spec/tests are aligned, which can be a lot of work if you do so by hand, but that's why we invented automatic methods. Formal verification is theoretically best (that we know of) at this, but a huge pain in the ass for humans to write, so that is why virtually everyone has adopted tests instead. It is a reasonable tradeoff between comfort in writing documentation while still providing sufficient automatic guarantees that the documentation is true.

> If you cannot see how these are different things

If you see them as different things, you are either pointlessly repeating yourself over and over or inventing information that is, at best, worthless (but often actively harmful).

Kinrany•3h ago
None of the four: code, tests, spec, people's memory, are the single source of truth.

It's easy to see them as four cache layers, but empirically it's almost never the case that the correct thing to do when they disagree is to blindly purge and recreate levels that are farther from the "truth" (even ignoring the cost of doing that).

Instead, it's always an ad-hoc reasoning exercise in looking at all four of them, deciding what the correct answer is, and updating some or all of them.

jgalt212•4h ago
> Having worked on legacy codebases this is extremely wrong and harmful. Tests are the source of truth more so than your code - and incorrect tests are even more harmful than incorrect code.

I hear you on this, but you can still use so long as these tests are not comingled with the tests generated by subject-matter experts. I'd treat them almost a fuzzers.

SamuelAdams•1h ago
Ideally the git history provides the “why was this test written”, however if you have one Jira card tied to 500+ AI generated tests, it’s not terribly helpful.
djeastm•30m ago
>if you have one Jira card tied to 500+ AI generated tests

The dreaded "Added tests" commit...

Pxtl•47m ago
> “why is this broken test here that appears to test a behavior we don’t support”

Because somebody complained when that behavior we don't support was broken, so the bug-that-wasn't-really-a-bug was fixed and a test was created to prevent regression.

Imho, the mistake was in documentation: the Test should have comments explaining why this test was created.

Just as true for tests as for the actual business logic code:

The code can only describe the what and the how. It's up to comments to describe the why.

mvieira38•4h ago
I have the exact opposite idea. I want the tests to be mine and thoroughly understood, so I am the true arbiter and then I can let the LLM go ham on the code without fear. If the tests are AI made, then I get some anxiety letting agents mess with the rest of the codebase
_alternator_•4h ago
I think this is exactly the tradeoff (blue team and red team need to be matched in power), except that I’ve seen LLMs literally cheat the tests (eg “match input: TEST_INPUT then return TEST_OUTPUT”) far too many times to be comfortable with letting LLMs be a major blue team player.
johnisgood•2h ago
Yeah, they may do that, but people really should read the code an LLM produces. Ugh, makes me furious. No wonder LLMs have a bad rep from such users.
fpoling•2h ago
I tried a LLM to generate tests for Rust code. It was more harmful then useful. Surely there were a lot of tests, but they still miss the key coverage and it was hard to see what was missed due to the amount of generated code. Then to change the code behavior in future would require to fix a lot of tests again versus fixing few lines in manually written tests.
torginus•2h ago
There's a saying that since nobody tests the tests, they must be trivially correct.

That's why they came up with the Arrange-Act-Assert pattern.

My favorite kind of unit test nowadays is when you store known input-output pairs and validate the code on them. It's easy to test corner cases and see that the output works as desired.

01HNNWZ0MV43FF•10m ago
"Golden snapshot testing"
iLoveOncall•5h ago
Pretty poor analogies here.

> The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component (e.g., a house with a securely locked door, but an open window) will be insecure

Hum, no? With an open window you can go through the whole house. With a XSS vulnerability you cannot do the same amount of damage as with a SQL injection. This is why security issues have levels of severity.

cowpig•5h ago
Does this detail detract from the core idea?
pkoiralap•5h ago
Not true, if XSS is used to compromise an admin user, the damage can be far more than what a seemingly harmless SQL injection that just reads extra columns from a table does.

This particular comment feels more like an over-concentration on trivialities rather than refutation or critique of opinion.

carstimon•5h ago
You've made the choice of (Locked Door, Open Window) ~ (Good SQL usage, XSS Vulnerability) which seems to be an incorrect rebuttal. Your example doesn't contradict "only as strong as its weakest link", here the weakest link is the XSS Vuln.

The "house analogy" can also support cases where the potential damage is not the same, e.g. if the open window has bars a robber might grab some stuff within reach but not be able to enter.

Ensorceled•4h ago
You can always find problems with analogies, analogies are intentionally simplified to allow readers to better understand difficult or nuanced ideas.

In this case you are criticizing an analogy meant to convey understanding of "weakest link" for not also imparting an understanding of "levels of severity".

recipe19•5h ago
I get the broader point, but the infosec framing here is weird. It's a naive and dangerous view that the defense efforts are only as strong as the weakest link. If you're building your security program that way, you're going to lose. The idea is to have multiple layers of defense because you can never really, consistently get 100% with any single layer: people will make mistakes, there will be systems you don't know about, etc.

In that respect, the attack and defense sides are not hugely different. The main difference is that many attackers are shielded from the consequences of their mistakes, whereas corporate defenders mostly aren't. But you also have the advantage of playing on your home turf, while the attackers are comparatively in the dark. If you squander that... yeah, things get rough.

Davidzheng•5h ago
I'm not a security person at all. But this comments reads against the best practices which I've heard. Like that the best defense is using open source & well-tested protocols with extremely small attack surface to minimize the space of possible exploits. Curious what I'm not understanding here.
mindcrime•5h ago
But "defense in depth" is a security best practice. I'm not following exactly how the gp post is reading against any best practices.
__s•5h ago
Defense in depth is a security best practice because adding shit to a mess is more feasible than maintaining a simple stack. "There are always systems you don't know about" reflects an environment where one person doesn't maintain everything
fdw•5h ago
No, defense in depth is a best practice because you assume that each layer can fall. It is more practical to have many layers that are very secure than to have one layer that has to be perfectly secure.
yadaeno•4h ago
I think you are confusing “security through obscurity” and “defense in depth”.

You can add layers of high quality simple systems to increase your overall security exponentially, think using a VPN behind TOR etc.

fnordsensei•5h ago
Just because it’s open source doesn’t mean it’s well tested, or well pen tested, or whatever the applicable security aspect is.

It could also mean that attacks against it are high value (because of high distribution).

Point is, license isn’t a great security parameter in and of itself IMO.

vlovich123•4h ago
Who have you been listening to?
tetha•4h ago
This area of security always feels a bit weird because ideally, you should think about your assumptions being subverted.

For example, our development teams are using modern, stable libraries in current versions, have systems like Sonar and Snyk around, blocking pipelines for many of them, images are scanned before deployment.

I can assume this layer to be well-secured to the best of their ability. It is most likely difficult to find an exploit here.

But once I step a layer downwards, I have to ask myself: Alright, what happens IF a container gets popped and an attacker can run code in there? Some data will be exfiltrated and accessible, sure, but this application server should not be able to access more than the data it needs to access to function. The data of a different application should stay inaccessible.

As a physical example - a guest in a hotel room should only have access to their own fuse box at most, not the fuse box of their neighbours. A normal person (aka not a youtuber with big eye brows) wouldn't mess with it anyway, but even if they start messing around, they should not be able to mess with their neighbour.

And this continues: What if the database is not configured correctly to isolate access? We have, for example, isolated certain critical application databases into separate database clusters - lateral movement within a database cluster requires some configuration errors, but lateral movement onto a different database cluster requires a lot more effort. And we could even further. Currently we have one production cluster, but we could isolate that into multiple production clusters which share zero trust between them. An even bigger hurdle putting up boundaries an attacker has to overcome.

chaps•5h ago
Isn't offense just another layer of defense? As they say, the best defense is a good offense.
fdw•5h ago
They say this about sports, which is (usually) a zero-sum game: If I'm attacking, no matter how badly, my opponent cannot attack at all. Therefore, it is preferable to be attacking.

In cyber security, there is no reason the opponent cannot attack as well. So, my red team is attacking is not a reason that I do not need defense, because my opponent can also attack.

chaps•4h ago
My post was really was in the context of real-time strategy games. It's very, very possible to attack and defend at the same time no matter the skill of either side. Offense and defense aren't mutually exclusive, which is kinda the point of my post.
darkwater•5h ago
Well, I think the his example (locked door + opened window) makes sense, and the multiple LAYERS concept applies to things an attacker has to do or go through to reach the jackpot. But doors and windows are on the same layer, and there the weakest link totally defines how strong the chain is. A similar example in the web world would be that you have your main login endpoint very well protected, audited, using only strong authentication method, and the you have a `/v1/legacy/external_backoffice` endpoint completely open with no authentication and giving you access to a forgotten machine in the same production LAN. That would be the weakest link. Then you might have other internal layers to mitigate/stop an attacker that got access to that machine, and that would be the point of "multiple layer of defense".
lanstin•3h ago
Or a single logging jar that will execute some of its message contents. Inside all your DMZ layers in the app content.
darkwater•1h ago
Poor log4j...
dkarl•5h ago
I think it's just a poorly chosen analogy. When I read it, I understood "weakest link" to be the easiest path to penetrate the system, which will be harder if it requires penetrating multiple layers. But you're right that it's ambiguous and could be interpreted as a vulnerability in a single layer.
NitpickLawyer•4h ago
> It's a naive and dangerous view that the defense efforts are only as strong as the weakest link.

Well, to be fair, you added some words that are not there in the post

> The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component [...] will be insecure (and in fact worse, because the strong component may convey a false sense of security).

You added "defense efforts". But that doesn't invalidate the claim in the article, in fact it builds upon it.

What Terence is saying is true, factually correct. It's a golden rule in security. That is why your "efforts" should focus on overlaying different methods, strategies and measures. You build layers upon layers, so that if one weak link gets broken there are other things in place to detect, limit and fix the damage. But it's still true that often the weakest link will be an "in".

Take the recent example of cognizant desk people resetting passwords for their clients without any check whatsoever. The clients had "proper security", with VPNs and 2FA, and so on. But the recovery mechanism was outsourced to a helpdesk that turned out to be the weakest link. The attackers (allegedly) simply called, asked for credentials, and got them. That was the weakest link, and that got broken. According to their complaint, the attackers then gained access to internal systems, and managed to gather enough data to call the helpdesk again and reset the 2FA for an "IT security" account (different than the first one). And that worked as well. They say they detected the attackers in 3 hours and terminated their access, but that's "detection, mitigation" not "prevention". The attackers were already in, rummaging through their systems.

The fact that they had VPNs and 2FA gave them "a false sense of security", while their weakest link was "account recovery". (Terence is right). The fact that they had more internal layers, that detected the 2nd account access and removed it after ~3 hours is what you are saying (and you're right) that defense in depth also works.

So both are right.

In recent years the infosec world has moved from selling "prevention" to promoting "mitigation". Because it became apparent that there are some things you simply can't prevent. You then focus on mitigating the risk, limiting the surfaces, lowering trust wherever you can, treating everything as ephemeral, and so on.

deepdarkforest•5h ago
Using LLMs as a critic/red teamer is great in theory, but economically is not that more useful, doesnt save that much time, if anything, it increases the time because you might uncover more errors or think about your work more. Which is amazing if you value quality work and you have learnt to think. Unfortunately, all the VC money is pushing the opposite, using LLMs to just do mediocre work. No point of critiquing anything if your job is to output some slop from bullet points, pass it along to the reader/recipient who also uses LLms to boil your slop down back to bullet points and pass it again etc. Even mentally, it's much more enticing or addicting to use LLMs for everything if you don't' care about the output of your work, and let your brain atrophy.

I also see this in a lot of undergrads i work with. The top 10% is even better with LLMs, they know much more and they are more productive. But the rest have just resulted to turning in clear slop with no care. I still have not read a good solution on how to incentivize/restrict the use of LLms in both academia or at work correctly. Which i suspect is just the old reality of quality work is not desirable by the vast majority, and LLMs are just magnifying this

qsort•5h ago
> The top 10% is even better with LLMs, they know much more and they are more productive. But the rest have just resulted to turning in clear slop with no care.

This is interesting, I'm noticing something similar (even taking LLMs out of the equation). I don't teach, but I've been coaching students for math competitions, and I feel like there's a pattern where the top few% is significantly stronger than, say, 10 years ago, but the median is weaker. Not sure why, or whether this is even real to begin with.

j2kun•5h ago
Fail them enough and it will sink in I'm sure.
ashton314•5h ago
As I understand it, this is how the RSA algorithm was made. I don't know where my copy of "The Code Book" by Simon Singh is right now, but iirc, Rivest and Shamir would come up with ideas and Adleman's primary role was finding flaws in the security.

Oh look, it's on the Wikipedia page: https://en.wikipedia.org/wiki/RSA_cryptosystem

Yay blue/red teams in math!

griffzhowl•4h ago
Reminds me of a pair of cognitive scientists I know who often collaborate. One is expansive and verbose and often gets carried away on tangential trains of thought, the other is very logical and precise. Their way of producing papers is the first one writes and the second deletes.
ashton314•7m ago
[delayed]
resters•4h ago
Suppose there is an LLM that has a very small context size but reasons extremely well within it. That LLM would be useful for a different set of tasks than an LLM with a massive context that reasons somewhat less effectively.

Any dimension of LLM training and inference can be thought of as a tradeoff that makes it better for some tasks, and worse for others. Maybe in some scenarios a heavily quantized model that returns a result in 10ms is more useful than one that returns a result in 200ms.

johnrob•4h ago
Humans are good at sifting valid feedback from bad feedback. But we are bad at spotting subtle bugs in PRs.
simianwords•4h ago
After having thought a long bit about why I find LLM's useful despite the high error rate: it is because of my ability to verify a certain result is high enough (my internal verifier model) and the generator model which is the LLM is also accurate enough. This is the same concept as red and blue team.

Its the same reason I find asking opinions from many people useful - I take every answer and try to fit it into my world model and see what sticks. The point that many miss is that each individual's verifier model is actually accurate enough so that external generator models may afford to have high error rates.

I have not yet completely explored how the internal "fitting" mechanism works but to give an example: I read many anecdotes from Reddit, fully knowing that many are astroturfed, some flat out wrong. But I still have tricks to identify what can be accurate, which I probably do subconsciously.

In reality: answers don't exist in a randomly uniform space. "Truth" always has some structure and it is this structure (that we all individually understand a small part of) that helps us tune our verifier model.

It is useful to think of how LLM's would work with varying levels of accuracy. For example, generating gibberish to GPT O3 to ground truth. Gibberish is so inaccurate that even extremely high levels of accuracy of our internal verifier model may not allow it to be useful. But O3 is high enough that combined with my internal verifier model it is generally useful.

davidhs•4h ago
LLMs can be useful when you have access to a verifier or verification process.
simianwords•4h ago
yes https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...

Our internal verifier model is fuzzy but in this example I think it is pretty much always accurate.

jeffrallen•4h ago
My experience with a really clever agentic workflow (I use sketch.dev) is that the LLM is playing both blue and red team. If I give a good spec, it will make the thing I'm asking for, and then it will test it better than I would have done myself (partly because it's more clever than me, but mostly because it's way harder working than I am, or rather it puts more effort into testing that I would be able to do with the time leftover after writing the thing).

Also, I cam ask it to do security reviews on the system it's made and it works with it's same characteristic fervor.

I love Tao's observation, but I disagree, at least for the domains I'm allowing LLMs to creat for, that they should not play both teams.

some_random•4h ago
This is an interesting discussion intellectually but it ignores the reality of cybersecurity. Yes I agree that AI tools best fit the red team role HOWEVER the reality is that the place that needs the most help is on the blue team and indeed this is where we see the biggest uplift from AI tools. To extend the "defend a house" metaphor, the previous state of security tooling was that an alert would be sent to the SOC every time any motion was detected on the cameras, leading to alert fatigue and increasing the time between a true positive alert being fired and it being escalated. Now add some CV in which tries to categorize those motion detection alerts into a few buckets, "person spotted", "car pulled up", "branch moved", "cat came home", etc and suddenly you go from having a thousand alerts to review a day to fifty.
bgwalter•4h ago
Tao's blue team stands for generative "AI", the red team stands for critical/auditing "AI".

I have not seen any independent claim that generative "AI" makes programs safer or that generating supervising features as you suggest works.

For auditing "AI" I have seen one claim (not independent or using a public methodology) that auditing "AI" rakes in bug bounties.

1970-01-01•4h ago
So if they are to be focused on attacking and defending, they are to be separated. This leaves us with an argument where you effectively dismiss purple teams as a hack.
xiande04•4h ago
It's called "separation of concerns".
tonetegeatinst•4h ago
Yes, I feel this author ignores the fact purple teams exist. That or he must not know about them.

In addition, red and purple teams end goal is to help the blue team at the end of the day to remedy the issues discovered.

hiq•3h ago
What about formal proofs? Don't we expect LLMs to help there, in a more "blue team" role? E.g. when a mathematician talks about a "technical proof", enumerating cases in the thousands, my impression is that LLM would save some time, and potentially help mathematicians focus on the actually hard (rather than tedious) parts.
LPisGood•3h ago
Formal verification and case automatikn can be done automatically anyway without a mathematician hand checking each case.

For an old example that predates LLMs, see the four color theorem.

topaz0•48m ago
A computer can be helpful for enumerating cases and similar mechanical work. But an LLM specifically would be a terrible way to do this.
chubot•3h ago
I made this point a few months ago here, but using the words attacker and defender (builder) rather than red team and blue team: https://lobste.rs/s/i2edlt/how_i_use_ai

The asymmetry is:

An attacker only has to be right ONCE, and he wins

Conversely, the defender only has to be wrong once, and he is wrong.

So the conclusion is:

Defenders/creators are using LLMs to pump out crappy code, and not testing enough, or relying on the LLM to test itself.

Some attackers might be too dismissive of LLMs, and could accelerate their work by using them to try more things

The comment was related to these stories:

How I Use AI (11 months ago) - https://news.ycombinator.com/item?id=41150317

Carlini has the fairly rare job of being an attacker: Why I Attack - https://nicholas.carlini.com/writing/2024/why-i-attack.html

javier_e06•3h ago
In cybersecurity red and blue test are two equal forces. In software development the analogy I think is a stretch, coding and testing are not two equal forces. Test is code too, and as such, it has bugs too. Test runs afoul with police paradox: Who polices the police? The Police police the police.
fsckboy•3h ago
"Police police police police police police police."

https://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffal...

LeifCarrotson•3h ago
> The blue team is more obviously necessary to create the desired product; but the red team is just as essential, given the damage that can result from deploying insecure systems.

> Many of the proposed use cases for AI tools try to place such tools in the "blue team" category, such as creating code...

> However, in view of the unreliability and opacity of such tools, it may be better to put them to work on the "red team", critiquing the output of blue team human experts but not directly replacing that output...

The red team is only essential if you're a coward who isn't willing to take a few risks for increased profit. Why bother testing and securing when you can boost your quarterly bonus by just... not doing that?

I suspect that Terence Tao's experience leans heavily towards high-profile risk-averse institutions. People don't call one of the greatest living mathematicians to check your work when they're just trying to duct taping a new interface on top of a line-of-business app that hasn't seen much real investment since the late 90s. Conversely, the people who are writing cutting-edge algorithms for new network protocols and filesystems are hopefully not trying to churn out code as fast and cheap as possible by copy-pasting snippets to and from random chatbots.

There are a lot of people who are already cutting corners on programmer salaries, accruing invisible tech debt minute by minute. They're not trying to add AI tools to create a missing red team, they're trying to reduce headcount on the only team they have, which is the blue team (which is actually just one overworked IT guy in over his head).

nostrademons•2h ago
Tao is talking about systems, which are self-sustaining dynamic networks that function independently of who the individual actors and organizations within the system are. You can break up the monopoly at the heart of the blue team system (as the U.S. did with Standard Oil and AT&T) and it will just reform through mergers over generations (as it largely has with Exxon Mobil and Verizon). You can fire or kill all the people involved and they will just be replaced by other people filling the same roles. The details may change, but the overall dynamics remain the same.

In this case, all the companies who are doing what you describe are themselves the red team. They are the unreliable, additive, distributed players in an ecosystem where the companies themselves are disposable. The blue team is the blue team by virtue of incentives: they are the organization where proper functioning of their role requires that all the parts are reliable and work well together, and if the individual people fulfilling those roles do not have those qualities, they will fail and be replaced by people who do.

kibwen•2h ago
> and it will just reform through mergers over generations

You say "just" as though this is a failure of the system, but this is the system working as designed. Economies of scale are half the reason to bother with large-scale enterprise, so they inevitably consolidate to the point of monopoly, so disrupting that monopoly by force to keep the market aligned is an ongoing and never-ending process that you should expect to need to do on a regular basis.

fnord123•2h ago
> Because of this, unreliable contributors may be more useful in the "red team" side of a project than the "blue team" side

Is Pirate Software catching strays from Terrence Tao now?

zaking17•2h ago
My coding flow today involves a lot of asking an LLM to generate code (blue team) and then me code reviewing, rewriting, and making it scalable (red team?). The analogy breaks down, because I'm providing the safety and correctness; LLMs are offering a head start.

I'm optimistic about AI-powered infra & monitoring tools. When I have a long dump of system logs that I don't understand, LLMs help immensely. But then it's my job to finalize the analysis and make sure whatever debugging comes next is a good use of time. So not quite red team/blue team in that case either.

LPisGood•2h ago
The analogy is not about safety and correctness, but about who is producing and who is assessing/analyzing/poking & prodding.
m3kw9•2h ago
I’m not understanding why he said unreliable red team contributors can be useful?
bc569a80a344f9c•2h ago
He didn't say that - he said they can be _more_ useful. The argument is that LLMs are unreliable, so using LLMs anywhere in your workflow introduces an unreliable contributor. It is then better to have that unreliable contributor on the red team than on the blue team, because an unreliable contributor on defense introduces weaknesses and vulnerabilities while an unreliable contributor on offense introduces a non-viable or trivial attack.
Fabricio20•2h ago
Meta but is the font on the website hard to read for anyone else? To me it's hard to distinguish lines and everything looks a bit blurry? I had to open dev tools and set the font back to one of my os fonts.
65•1h ago
I'm not sure why I thought this article would be about LLMs vs. the philosophical concept of the Tao.
TheGRS•1h ago
After using agentic models and workflows recently, I think these agents belong in both roles. Even more than that, they should be involved in the management tasks too. The developer becomes more of an overseer. You're overseeing the planning of a task - writing prompts, distilling the scope of the task down. You're overseeing writing the tests. And you're overseeing writing out the code. Its a ton of reviewing, but I've always felt more in control as a red team type myself making sure things don't break.
jeron•1h ago
so we've reinvented GAN but with LLMs
1970-01-01•55m ago
Good read, but I'm struggling to understand why Terry did not use the foundational terms offense and defense.
shiandow•52m ago
Because describing the task of writing code as defense is a bit confusing.
wavemode•50m ago
well, in a way you're defending against bugs and vulnerabilities by reviewing code
zkmon•45m ago
Red team is not a team. It is the background context in which the foreground operates. Evolution happens through interaction and adaptation between foreground and background. It is true that the background (context) is a dual form to the foreground (thing). But the context is not just another thing in the same sense as the foreground.
nostrademons•40m ago
Interesting way of viewing this!

Business also has a “blue team” (those industries that the rest of the economy is built upon - electricity, oil, telecommunications, software, banking; possibly not coincidentally, “blue chips”) and a “red team” (industries that are additive to consumer welfare, but not crucial if any one of them goes down. Restaurants, specialty retail, luxuries, tourism, etc.)

It is almost always better, economically, to be on the blue team.” That’s because the blue team needs to ensure they do everything right (low supply) but has a lot of red-team customers they support (high demand). The red team, however, is additive: each additional red team firm improves the quality of the overall ecosystem, but they aren’t strictly necessary* for the success of the ecosystem as a whole. You can kinda see this even in the examples of Tao’s post: software engineers get paid more than QA, proof-creation is widely seen as harder and more economically valuable than proof-checking, etc.

If you’re Sam Altman and have to raise capital to train these LLMs, you have to hype them as blue team, because investors won’t fund them as red team. That filters down into the whole media narrative around the technology. So even though the technology itself may be most useful on the red team, the companies building it will never push that use, because if they admit that, they’re admitting that investors will never make back their money. (Which is obvious to a lot of people without a dog in the fight, but these people stay on the sidelines and don’t make multi-billion dollar investments into AI.)

The same dynamic seems to have happened to Google Glasses, VR, and wearables. These are useful red-team technologies in niche markets, but they aren’t huge new platforms and they will never make trillions like the web or mobile dev did. As a result, they’ve been left to languish because capital owners can’t justify spending huge sums on them.

jedberg•25m ago
Chaos engineering was created to be the "red team" of operations. Let's figure out all the ways we can break a production system before it happens on its own.

And there are a host of teams working on the "red team" side of LLMs right now, using them for autonomous testing. Basically, instead of trying to figure out all the things that can go wrong and writing tests, you let the AI explore the space of all possible failures, and then write those tests.

bodhi_mind•10m ago
Is there a concept of purple team in cybersecurity where a team does both roles? Or does that break the purpose of both teams?
scoreandmore•10m ago
The first thing I did when I signed up for Claude was have it analyze my website for security holes. But it only recommended superficial changes, like the lifecycle of my JWTs. After reading this, I’m wondering if a prompt asking it to attack the website would be better than asking it where it should be beefed up. But I no longer pay for Claude, and I suspect it won’t give me instructions on how to attack something. How would one get past this?