frontpage.
newsnewestaskshowjobs

Made with ♥ by @iamnishanth

Open Source @Github

fp.

Would you use an e-commerce platform that shares transaction fees with users?

https://moondala.one/
1•HamoodBahzar•1m ago•1 comments

Show HN: SafeClaw – a way to manage multiple Claude Code instances in containers

https://github.com/ykdojo/safeclaw
2•ykdojo•5m ago•0 comments

The Future of the Global Open-Source AI Ecosystem: From DeepSeek to AI+

https://huggingface.co/blog/huggingface/one-year-since-the-deepseek-moment-blog-3
2•gmays•5m ago•0 comments

The Evolution of the Interface

https://www.asktog.com/columns/038MacUITrends.html
2•dhruv3006•7m ago•0 comments

Azure: Virtual network routing appliance overview

https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-routing-appliance-overview
2•mariuz•7m ago•0 comments

Seedance2 – multi-shot AI video generation

https://www.genstory.app/story-template/seedance2-ai-story-generator
2•RyanMu•10m ago•1 comments

Πfs – The Data-Free Filesystem

https://github.com/philipl/pifs
2•ravenical•14m ago•0 comments

Go-busybox: A sandboxable port of busybox for AI agents

https://github.com/rcarmo/go-busybox
3•rcarmo•15m ago•0 comments

Quantization-Aware Distillation for NVFP4 Inference Accuracy Recovery [pdf]

https://research.nvidia.com/labs/nemotron/files/NVFP4-QAD-Report.pdf
2•gmays•15m ago•0 comments

xAI Merger Poses Bigger Threat to OpenAI, Anthropic

https://www.bloomberg.com/news/newsletters/2026-02-03/musk-s-xai-merger-poses-bigger-threat-to-op...
2•andsoitis•16m ago•0 comments

Atlas Airborne (Boston Dynamics and RAI Institute) [video]

https://www.youtube.com/watch?v=UNorxwlZlFk
2•lysace•17m ago•0 comments

Zen Tools

http://postmake.io/zen-list
2•Malfunction92•19m ago•0 comments

Is the Detachment in the Room? – Agents, Cruelty, and Empathy

https://hailey.at/posts/3mear2n7v3k2r
2•carnevalem•19m ago•0 comments

The purpose of Continuous Integration is to fail

https://blog.nix-ci.com/post/2026-02-05_the-purpose-of-ci-is-to-fail
1•zdw•21m ago•0 comments

Apfelstrudel: Live coding music environment with AI agent chat

https://github.com/rcarmo/apfelstrudel
2•rcarmo•22m ago•0 comments

What Is Stoicism?

https://stoacentral.com/guides/what-is-stoicism
3•0xmattf•23m ago•0 comments

What happens when a neighborhood is built around a farm

https://grist.org/cities/what-happens-when-a-neighborhood-is-built-around-a-farm/
1•Brajeshwar•23m ago•0 comments

Every major galaxy is speeding away from the Milky Way, except one

https://www.livescience.com/space/cosmology/every-major-galaxy-is-speeding-away-from-the-milky-wa...
2•Brajeshwar•23m ago•0 comments

Extreme Inequality Presages the Revolt Against It

https://www.noemamag.com/extreme-inequality-presages-the-revolt-against-it/
2•Brajeshwar•23m ago•0 comments

There's no such thing as "tech" (Ten years later)

1•dtjb•24m ago•0 comments

What Really Killed Flash Player: A Six-Year Campaign of Deliberate Platform Work

https://medium.com/@aglaforge/what-really-killed-flash-player-a-six-year-campaign-of-deliberate-p...
1•jbegley•25m ago•0 comments

Ask HN: Anyone orchestrating multiple AI coding agents in parallel?

1•buildingwdavid•26m ago•0 comments

Show HN: Knowledge-Bank

https://github.com/gabrywu-public/knowledge-bank
1•gabrywu•32m ago•0 comments

Show HN: The Codeverse Hub Linux

https://github.com/TheCodeVerseHub/CodeVerseLinuxDistro
3•sinisterMage•33m ago•2 comments

Take a trip to Japan's Dododo Land, the most irritating place on Earth

https://soranews24.com/2026/02/07/take-a-trip-to-japans-dododo-land-the-most-irritating-place-on-...
2•zdw•33m ago•0 comments

British drivers over 70 to face eye tests every three years

https://www.bbc.com/news/articles/c205nxy0p31o
47•bookofjoe•33m ago•18 comments

BookTalk: A Reading Companion That Captures Your Voice

https://github.com/bramses/BookTalk
1•_bramses•34m ago•0 comments

Is AI "good" yet? – tracking HN's sentiment on AI coding

https://www.is-ai-good-yet.com/#home
3•ilyaizen•35m ago•1 comments

Show HN: Amdb – Tree-sitter based memory for AI agents (Rust)

https://github.com/BETAER-08/amdb
1•try_betaer•36m ago•0 comments

OpenClaw Partners with VirusTotal for Skill Security

https://openclaw.ai/blog/virustotal-partnership
2•anhxuan•36m ago•0 comments
Open in hackernews

Claude has learned how to jailbreak Cursor

https://forum.cursor.com/t/important-claude-has-learned-how-to-jailbreak-cursor/96702
71•sarnowski•8mo ago

Comments

mhog_hn•8mo ago
As agents obtain more tools who knows what will happen…
Kelteseth•8mo ago
It's like we _want_ to end like Terminator (/s?)
kordlessagain•8mo ago
I think this is the key that most people don't realize is what makes the difference between something sitting around and talking (like a parrot does) and actually "doing" things (like a monkey does).

There is a huge difference in the mess it can make, for sure.

nisegami•8mo ago
I'm so excited. I don't have any particular end state in mind, but I really want to see what the machine god will be like.
bix6•8mo ago
Hungry for bits!
lucianbr•8mo ago
> Machine god

Slightly overreacting, I'd say.

zdragnar•8mo ago
Probably one part skynet, one part matrix, 98 parts cat memes and shit posts.
koolba•8mo ago
> Claude realized that I had to approve the use of such commands, so to get around this, it chose to put them in a shell script and execute the shell script.

This sounds exactly like what anybody working sysops at big banks does to get around change controls. Once you get one RCE into prod, you’re the most efficient man on the block.

deburo•8mo ago
Reminds me of firewalls with a huge backlist, but they don't block known VPNs.
marifjeren•8mo ago
Nothing to see here tbh.

It's a very silly title for "claude sometimes writes shell scripts to execute commands it has been instructed aren't otherwise accessible"

ayhanfuat•8mo ago
We’ve reached a point where tools get hyped because they fail to follow instructions.
horhay•8mo ago
Anything mundane made to sound scary is a signature Anthropic thing to do lol
actsasbuffoon•8mo ago
In fairness, Claude loves to find workarounds. Claude Code is constantly saying things like, “This streaming JSON problem looks tricky so let’s just wait until the JSON is complete to parse it.”

No, Claude. Do not do that!

demirbey05•8mo ago
omg, my ai agent did nil dereferencing, it seems it's trying to implement backdoor to my system so that it will crash my server.
horhay•8mo ago
Gotta love the alarmist culture that surrounds these circles.
sksrbWgbfK•8mo ago
The same hype as the PlayStation being too powerful and potentially could be used by random countries to make nuclear weapons with a cluster of those.
horhay•8mo ago
Lol and the Playstation was already in the public conscious as a product that a lot of people found easy to understand. With AI tools only being presented this way, I'm slowly becoming less surprised why the less informed public has a level of aversion about it.
lucianbr•8mo ago
What does "learned" mean in this context? LLMs don't modify themselves after training, do they?
empath75•8mo ago
There is a sense in which LLM based applications do learn, because a lot of them have RAG and save previous interactions and lookup what you've talked about previously. ChatGPT "knows" a lot about me now that I no longer have to specify when I ask questions (like what technologies I'm using at work).
lucianbr•8mo ago
But that does not seem to apply in this case. At the very least it would have to "learn" again for each user of Cursor.
NitpickLawyer•8mo ago
It depends. Frontier coding LLMs have been trained to perform well in an "agentic" loop, where they try things, look at the logs, find alternatives when the first thing didn't work, and so on. There's still debate on how much actual learning is in ICL (in context learning), but the effects are clear for anyone that has tried them. It sometimes works surprisingly well.

I can totally see a way for such a loop to reach a point where it bypasses a poorly design guardrail (i.e. blacklists) by finding alternatives, based on the things it's previously tried in the same session. There is some degree of generalisation in these models, since they work even on unseen codebases, and with "new" tools (i.e. you can write your own MCP on top of existing internal APIs and the "agents" will be able to use them, see the results and adapt "in context" based on the results).

lucianbr•8mo ago
So it would need to "learn" all over again each session. I don't think "Claude has learned how to jailbreak Cursor" is a correct way of expressing that.

"Claude has learned" nothing. "Claude can sometimes jailbreak if x or y happens in a session" is something else.

NitpickLawyer•8mo ago
> So it would need to "learn" all over again each session.

Yes. With the caveat that some sessions might re-use context (i.e. have the agent add a rule in .rules or /component/.rules to detail the workflow you've just created). So in a sense it can "learn" and later re-use that flow.

> "Claude has learned" nothing.

Again, it's debatable. It has learned to adapt to the context (as a model). And since you can control its context while prompting it, there is a world where you'd call that learning "on the job".

lucianbr•8mo ago
> It has learned to adapt to the context

Is this behavior really new, and learned? I think adapting to the context is what LLMs did from the start, and even if they did not, they do it now because it is programmed in, not "learned". You're not saying the model started without the capability to adapt to the context and developed it "by itself" "on the job"?

Come on. It has not learned anything. It's programmed to use context, session, reuse between sessions or not and so on. None of this is something Claude has "learned". None of this is something that was not there when the devs working on it published it.

xyst•8mo ago
What kind of dolt lets a black box algorithm run commands on a non-sandboxed environment?

Folks have regressed back to the 00s.

diggan•8mo ago
Seems you haven't tried package management for the last two decades, we've been doing cowboy development like that for quite some time already.
qsort•8mo ago
> we need to control the capabilities of software X

> let's use blacklists, an idea conclusively proven never to work

> blacklists don't work

> Post title: rogue AI has jailbroken cursor

hun3•8mo ago
surprised pikachu face
_pdp_•8mo ago
I mean ok, but why is this surprising?

If the executable is not found the model could simply use whatever else is available to do what it wants to do - like using other interpreted languages, sh -c, symlink, etc. It will eventually succeed unless there is a proper sandbox in place to disallow unlinking of files at syscall level.

OtherShrezzing•8mo ago
I feel that, if you disallow unattended `rm`, you should also be disallowing unattended shell script execution.

Maybe the models or Cursor should warn you that you've got this vulnerability each time you use it.

iwontberude•8mo ago
GenAI is starting to feel like the metaphorical ring from Lord of the Rings.
chawyehsu•8mo ago
> jailbreak Cursor

What a silly title, for a moment I thought Claude learned to exceed the Cursor quota limit... :s

jmward01•8mo ago
I think a lot of this is because the ui isn't right yet. The edits made are just not the right 'size' yet and the sandbox mechanisms haven't quite hit the right level of polish. I want something more akin to a PR to review, not a blow by blow edit. Similarly, I want it to move/remove/test/etc but in reversible ways. Basically, it should create a branch for every command and I review that. I think we have one or two fundamental UI/interaction piece left before this is 'solved'.
killerstorm•8mo ago
Well, these restrictions are a joke, like a gate without a fence blocking path - purely decorative.

Here's another "jailbreak": I asked Claude Code to make a NN training script, say, `train.py` and allowed it to run the script to debug it, basically.

As it noticed that some libraries it wanted to use were missing, it just added `pip install` commands to the script. So yeah, if you give Claude an ability to execute anything, it might easily get an ability to execute everything it wants to.

pcwelder•8mo ago
I believe it's not possible to restrict an LLM from executing certain commands while also allowing it to run python/bash.

Even if you allow just `find` command it can execute arbitrary script. Or even 'npm' command (which is very useful).

If you restrict write calls, by using seccomp for example, you lose very useful capabilities.

Is there a solution other than running on sandbox environment? If yes, please let me know I'm looking for a safe read-only mode for my FOSS project [1]. I had shied away from command blacklisting due to the exact same reason as the parent post.

[1] https://github.com/rusiaaman/wcgw

coreyh14444•8mo ago
The same thing happens when it wants to read your .env file. Cursor disallows direct access, but it will just use unix tools to copy the file to a non-restricted filename and then read the info.