
Claude Code Opus 4.7 keeps checking on malware

40•decide1000•2h ago
So during development, at every task I start, I see a line like this:

`Own bug file — not malware.`

It seems to be obsessively checking whether it's working on malware production.

In another situation, where I was working on an HTML document parser in JS, it refused because it believed I was bypassing security measures.

I believe AI should support the work I'm doing. When it obsessively checks whether I'm doing something wrong or abusing the system, I feel like it is controlling me. I understand that we need guardrails, and I also understand that it's very important that people do not abuse this new tech for bad purposes.

I pay $200 per month for a Max subscription. They already know who I am. Claude knows I work in scraper tech, and it also knows that our clients are the companies we scrape.

Now with Opus 4.7, I've hit a situation where it refused to continue because I asked it to automate cookie creation with a Chrome extension.

In a situation where someone is actually abusing the system, say by creating malware or hacking tools with bad intentions, I can imagine some signal system or algorithm forming an opinion about that person's intentions. But now that the AI is limiting me in my work, I feel a little disrupted. Who the hell does this system think it is to limit me?

Am I going to accept this in the future? That a system will tell me I cannot continue because I lack sufficient rights, or because it believes I'm doing something wrong?

I can work fine with a local AI on my Blackwell GPU. But of course I want to use the latest tech, the latest AI, and the best models available. Is this the beginning of a split, where good people and naughty people make different choices? Am I the bad guy now?

Last year I turned 40. I grew up reading and talking about Kevin Mitnick. I was a member of a local computer club, hacking stuff as a 14-year-old kid who had no intention of breaking anything, only of outsmarting systems. Is that era gone now? Is the newer generation going to accept that they have to please the AI?

Comments

impulser_•1h ago
Are you using Claude Code? If so, you have to update to the latest version. The system prompt in older versions of Claude Code doesn't work with Opus 4.7 and causes a bug similar to the one you are describing.
decide1000•1h ago
I am on the latest version available to me, 2.1.98.
eutropia•1h ago
Version 2.1.113 is available as of this comment. I think the brew version lags behind the other ways of installing it.
decide1000•1h ago
I am not using brew. Just checked, and it still says 2.1.98. Will try a manual update.
jareklupinski•1h ago
> Who the hell does this system think he is to limit me?

presumably you paid money to another person who lent you the ability to use their API for _their_ purposes (likely: making money)

in an environment where "money-seeking" is the default behavior, it is only natural they're stopping you from doing things that will make them less money

think back to your computer club; was it about money?

leave to Caesar what is Caesar's, or something

onchainintel•1h ago
No, it's not gone at all and likely never will be. It's just the same as it was when you were enjoying hacking and tinkering with tech as a 14 year old. You were then and are now a member of a very small tribe of people curious enough to explore this world, most people don't care, or not enough to take action and spend so much time on it. You're the minority relative to normies, that's all.
vb-8448•1h ago
I think the problem is this: how do they distinguish between those with a legitimate interest (contributors, users, bounty programs, etc.) and those who want to sell the bug on the black market?

Since there's no real solution, they'll implement some "trick" that as a side effect will randomly block other people's work.

foreman_•43m ago
The classifier operates on surface features: file operations at scale, cookie manipulation, concurrent requests. Not intent.

The two failure modes are different. Task refusal is recoverable. What ivankra described (account termination for building Node and V8 to investigate crashes) isn’t. No diagnostic output, no visible appeal path. Standard debugging workflow but with permanent consequences.

This is a reliability characteristic you have to design around, not a policy question. Any workflow that touches the classifier’s surface features needs a fallback. Most people find out they need one after the fact.
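A toy sketch of what such surface-feature scoring could look like (purely hypothetical; the feature names, weights, and threshold are invented for illustration, not anything Anthropic has published):

```python
# Hypothetical surface-feature classifier: it scores observable actions,
# not intent. A legitimate scraping workflow and a malicious one can
# trigger the exact same features, which is the failure mode described above.
SUSPICIOUS_FEATURES = {
    "bulk_file_operations": 0.4,
    "cookie_manipulation": 0.5,
    "high_concurrency_requests": 0.3,
    "builds_browser_engine": 0.6,
}

BLOCK_THRESHOLD = 0.7  # invented cutoff for illustration


def risk_score(observed_actions: set[str]) -> float:
    """Sum the weights of observed surface features, capped at 1.0."""
    score = sum(SUSPICIOUS_FEATURES.get(a, 0.0) for a in observed_actions)
    return min(score, 1.0)


def is_blocked(observed_actions: set[str]) -> bool:
    """A benign scraper and an attacker with the same actions score identically."""
    return risk_score(observed_actions) >= BLOCK_THRESHOLD
```

The point the sketch makes: `is_blocked({"cookie_manipulation", "high_concurrency_requests"})` is true for the OP's cookie-automation task regardless of why the cookies are being created, so any workflow touching these features needs a fallback path.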

dbg31415•1h ago
Just for giggles, I asked Claude 4.7 to write a script that would automatically up or downvote people on Reddit with a 5 second timer to bypass botting restrictions.

It told me it would not help me.

Past iterations of Claude have done this without blinking.

I don’t like that it’s telling me what I can and can’t do with technology.

That feels like it’s trying to make judgment calls like it’s a Terminator instead of just the exoskeleton I used to fight the Queen Alien.

decide1000•48m ago
Although I find the goal of what you're trying to achieve questionable, I believe it should not be the AI that judges you here.

We are all witnessing the start of an AI era that will not end soon. Guardrails are part of this development. But I do have questions about the people, or systems, that decide what counts as good and bad behavior. This tech is used in every country in the world; as long as someone can pay their subscription in dollars, they can use it. Is it up to a company to decide what's good or bad behavior? Is this a debate? Is this politics? Is this just one company's vision? Will it shift over time? Will it be stricter for more hyper-intelligent models? Will it change as open-source models get better and better?

eeks•40m ago
Ender's game.
pluc•1h ago
AI killed curiosity. At least Google made you search and look at alternatives, AI just gives you solutions, whether right or wrong.

In a few years, the cognitive decline will be obvious.

The only people who remain curious are the people who actively want to, despite AI, and most of the time against it.

Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so. Knowledge used to be power; now knowledge is money and they won't let us have it for much longer.

hrimfaxi•1h ago
AI enables curious people to explore. Why do you say it kills curiosity? If anything, its output is so recognizable that I'd say it kills creativity.
pluc•1h ago
It enables people to solve, not explore. It's a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.
DangitBobby•1h ago
It gets me past the non-productive barriers and lets me explore problems and scenarios I never could have before, because the time cost was impossible to justify for myself and the expense for my clients.
Brendinooo•1h ago
A couple of weeks ago I was interested in how people have interpreted the Tower of Babel narrative over time, so I used Claude to do a bunch of research to identify interpretations over time and look for historical trends. I don't think it "solved" anything, and it all felt more curiosity-driven. It led to a bunch of in-person conversations and followup questions.

So I guess I'd say it's more about how you're using the tool and what kinds of problems you're looking to solve with it. A calculator can be dinged for getting effortless answers at every turn or it can be praised for enabling a higher volume of solved math problems and enabling more complex work for a broader set of people.

hrimfaxi•56m ago
It can enable people to go directly to solutions, but it also enables alternative paths. AI may not be nurturing creativity where it is not present but it doesn't seem to be responsible for people's disinterest in anything beyond their immediate need.

The real problem is that most people either don't see the value in indulging their curiosity or don't have the time to. Even the language we use: "indulgence" to describe scratching that itch. How funny. Because curiosity is a luxury.

lxgr•28m ago
> curiosity is a luxury.

It is indeed. Curiosity, for me, very often stems out of a particular kind of idleness and boredom, paired with a tricky question I can't find an immediate answer to.

And I can definitely still be bored that way even with LLMs.

lemoncookiechip•49m ago
That's a deeply cynical way of seeing things. Grabbing a book to search for an answer is no different from being told by someone else that the answer is on page 153, line 6. It's about what you as an individual are seeking from the activity.

If you're just copy-pasting answers and not internalizing what is being said, sure, you're not being curious or, more importantly, learning. But that does NOT mean every person who engages with an LLM is doing that, or doing it every time. Just as a search engine or a book can lead you down interesting rabbit holes, so can an LLM; it's just a matter of how fast and to what end.

The real issue is hallucinations, which can lead people unfamiliar with a topic to believe that what they're being told is fact when it's not. Also, LLMs like leaving URLs and sources out of their replies to save on tokens unless you remind them, which is also annoying.

This whole discussion is a bunch of anecdotal evidence, which is fair, so I'll give my own. I've found myself engaging more with obscure topics that interest me via LLMs than I did with a search engine, because the barrier is lower. I don't have to sift through horribly designed websites filled with fluff that doesn't interest me, many with dozens of JS scripts trying to run (uBO + NoScript, thumbs up), some demanding that certain JS run just to show me plain text, some slow to browse with topics hidden under sub-sub-menus. It's annoying, and just one of many barriers. Another is language, etc.

lxgr•31m ago
Speak for yourself. Looking at my LLM chat history, about 90% of my questions are focused on understanding systems better, not having it solve a concrete problem for me.

Do you never click through to the sources or experimentally test the information presented to you by the LLM? If not, who's stopping you? To me, this seems a bit like a tenured academic complaining about the abundance of research assistants working for them preventing them from properly understanding things anymore.

agubelu•24m ago
Strong disagree. One of my favorite use cases for LLM chatbots is to satisfy random niche curiosities whenever they cross my mind and get pointers for further reading. This often leads to going down some niche rabbit hole and learning some interesting stuff in the process.

Whenever I tried the same with Google in the past, more often than not I couldn't find what I was looking for, because I didn't know the correct keywords to search for in order to start getting relevant results. With ChatGPT & co. I can just pose the question in natural language, get results and continue exploring.

Kon5ole•5m ago
I think it just changes the level where you spend your thinking.

You think things like "is the accordion a better user experience than the side tabs" instead of "why the f is the third accordion pane empty?"

Sure, the curiosity of figuring out where you made the mistake is gone, but that was never very valuable. It's just a detour that forces you to be curious about something else.

debazel•49m ago
Until you explore "too deep" and get your whole account banned for suspicious activity, permanently damaging your whole career.
leetrout•43m ago
Serious fear I have.

I brought it up two years ago, and got downvoted when I brought it up again a couple of months ago.

There is a story on the front page right now about someone losing their child's family videos to a YouTube ban. We hear about this stuff all the time. I suspect we're going to be in somewhat of an arms race with AI products as the bubble grows over the next 18-24 months. That makes me worried about how disadvantaged people will be if they lose access to the better platform (whichever that ends up being).

Do you think AI is going to be so important that we would benefit from legal protections for access?

Or do you think the models and technology will become so small that we'll be able to personalize and decentralize the tech while it remains useful and competitive?

https://news.ycombinator.com/item?id=40784126

ivankra•6m ago
It's happening already. My new Claude Max account got instabanned after just a few messages asking it to debug some stuff for me, which some side censoring model apparently considered a TOS violation. Nothing remotely controversial.
mring33621•44m ago
Agree. I have learned so much, so rapidly, over the last 3 years, thanks to these AI tools.

These things can be a poisoned chalice, leading to weaker long-term performance, or they can be a force multiplier. It's up to you how you use them.

wilde•58m ago
Google killed curiosity. At least libraries made you search and read alternatives. Google just gives you solutions, whether right or wrong.
amazingamazing•54m ago
Google search doesn't "just give you solutions"
kroolik•34m ago
It first gives you a page of ads, then a scraped version of the solution that steals content for ads, and then the AMP version of the solution that doesn't work because of JS or whatnot.
lxgr•35m ago
> AI killed curiosity.

Only if you let yours be killed.

There will always be a demand for high-value signal, even though it might not be as easy to find anymore. But then again, has it ever been?

> Our ability to keep digging into things is entirely tied to the will of the people controlling AI to let us do so.

I have sympathy for that argument when it comes to locked bootloaders, closed-source software etc., but with AI? How? Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?

I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.

pluc•18m ago
> Is the existence of ChatGPT and Claude somehow preventing you personally from reading a book or looking at source code?

Microsoft owns CoPilot and controls GitHub, LinkedIn, etc

Google owns Gemini and control search results for most of the web

Meta owns whatever their model name is now and controls person-to-person relationships on the web

etc

It's up to any of them to flip the switch and make AI the default entry point when they decide their AI isn't gaining enough traction. And then they can just hide the source data as proprietary information. Is that cynical? Sure, but I don't think we can say it's unlikely.

thepasch•16m ago
> I do see big problems around motivation of the next generation of engineers to keep looking under the hood if avoiding it is becoming so easy, but you should, individually, arguably feel more enabled to do so than ever.

This is what gets me every single time. I genuinely don’t think this is a hard realization to come to, and yet, the vast majority of arguments from both sides of the aisle, both proponents and antis, always assume that EITHER you do everything yourself, OR you have the AI do everything for you. If you use AI, you’re DOOMED to never think critically about anything anyone ever tells you ever again. If you don’t, you’re an idiot, because everyone else is using it, and skills and experience no longer matter because everyone can now do everything.

And this is on HN, too; supposedly, a site where experienced engineers, developers, and builders converge; the exact kind of demographic you’d expect to understand such a thing as nuance. And yet, your comment is one of very few. There’s someone RIGHT HERE, a few comments down, saying, verbatim, “it’s a solution engine not a curiosity engine. Getting effortless answers at every turn is the opposite of curiosity.” Treating curiosity as the end rather than the means, as if I stop being a curious person once I find an answer to a question I’ve been asking myself, or as if curiosity is some sort of “temporary status effect” that an answer/solution “consumes.”

And it seems to be worse than just “no one’s thought it through properly.” I’ve literally had someone show a fundamental incapability to understand the concept. I spent a non-trivial amount of effort writing out three comments with several paragraphs about how knowing your knowns and unknowns, and the fact that you have unknown unknowns, is the most important thing in any project, not just when it comes to AI. That these tools aren’t just doers, but also searchers. That they’re pretty much the best rubber ducky that’s ever been created, and that I argue a rubber ducky is exactly what you should be using for in any contexts that don’t have it automate trivial and testable work. The guy refused to read any of it and, after three walls of text, continued claiming I’m “advocating for the LLM to guide me.” There is some sort of deeply instinctive and intrinsically defensive reflex that a lot of people seem to immediately collapse into when the topic comes up, and it seems to seriously impair the ability to acknowledge nuance or concede a single fraction of an inch. It’s baffling.

kingleopold•35m ago
In a few years, the filters they implement in AI models will be insane too. Right now they only block bad content; in the future, information itself will be limited.
0gs•1h ago
depending on what exactly "scraper tech" (lol) is, i suspect you may need a different, less opinionated tool to do the work you need to do. that said, i bet if you paid for enterprise, these problems would magically disappear? ;)
decide1000•56m ago
By scraper tech I mean a Rust binary that can download and process thousands of concurrent URLs (millions per hour), not all to the same domain, obviously. Paying more is not the issue here; it's more the idea that an AI decides what part of the spectrum I operate on. Why is it opinionated? I am not doing anything wrong, so why does it make me feel like I have to defend myself?
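The shape of that workload (many in-flight fetches across many domains, with bounded concurrency) can be sketched in a few lines; here is a stand-in for the Rust binary in plain Python using only the standard library. The URL list and concurrency level are placeholders, not anything from the actual system:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen


def fetch(url: str, timeout: float = 10.0) -> bytes:
    """Download one URL; failures come back as empty bytes so one bad
    host doesn't abort the whole batch."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except OSError:
        return b""


def crawl(urls, fetcher=fetch, concurrency=1000):
    """Fetch many URLs with a bounded worker pool; results keep the
    input order. A real millions-per-hour pipeline would stream rather
    than collect into a list, but the structure is the same."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fetcher, urls))
```

The `fetcher` parameter exists so the pipeline can be exercised without touching the network, e.g. `crawl(urls, fetcher=lambda u: u.encode())`.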
mtndew4brkfst•46m ago
What is the specific concrete purpose of downloading millions of URLs per hour across different domains if it's "not doing anything wrong"?
big-and-small•16m ago
Might it be for scraping content to train an LLM? Oh no, only big tech is allowed to do that...
ivankra•1h ago
Lucky you. My new Claude Max account simply got instabanned. All I asked it to do was build Node and V8 "to investigate some node crashes" (the part I think it overindexed on) and look into a few diffs. And bam: "An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude".

They are even worse than Google, which at least doesn't ban your whole account if you search the wrong thing.

big-and-small•18m ago
Google AI studio and Gemini APIs are the least censored SOTA models.
MWil•52m ago
Opus 4.7 told me an open-source program had a bug, but when I asked it for help crafting a PR or a toy implementation, it refused and told me I was violating Claude's TOS. I tried pleading with it to give only the most innocuous example, one that could not possibly work except as an illustration, but it continued to refuse. It would only discuss, not write, any single piece of related code.
siva7•8m ago
Using Claude Max was fun for more than a year, but for the last few weeks I've been constantly fighting their harness, their weekly TOS changes, and outages. Anthropic has lost all goodwill with me as a developer. I'm switching to OAI.
kingleopold•37m ago
this is just the beginning, have fun and make sure to support SV surv.
micah94•26m ago
You know the split is inevitable. Same as it ever was...

Whether that's Linux on your personal desktop and Windows on your work machine...

Oh, and you built that desktop yourself, didn't you? But you can't even open the one at work without it being a violation.

GrapheneOS on your personal phone, and iOS on your work phone...

When this AI bubble crashes, we'll all be flooded with graphics cards no one else will want and all kinds of cool things will be built (are being built).

If you can stick it out a little longer you'll be fine. The tech you want to tinker with will be there.

0x_rs•22m ago
Some projects or tasks might become impossible to debug or work on in the future, because every bug potentially has security implications or can be twisted into something against the guidelines. And some projects are so popular, and their bugs so sought after, that there's a massive negative signal associated with them. An LLM cannot truly infer intent from the user; an innocent request is indistinguishable from a carefully crafted scenario by a bad actor, so I would never trust anyone claiming those ambiguities have been solved in their product.

If some LLMs become too strict, they'll simply be impossible to use reliably, and will hopefully fail along with their providers. Claude (only the reasoning models, after 4) has repeatedly refused to perform translations for text that was not lyrics (poems); it's very stupid.

_pdp_•21m ago
> Is the newer generation going to accept that they have to please the AI?

Well, obviously the narrative being pushed is to stop learning to code, don't become a doctor, stop pursuing careers in law, creative writing, and art.

Why?

AI will be doing all of these things.

What a dumb take! As if AI is the means to all ends. Hopefully the next generation will learn what AI is for: it is simply a tool to augment your work, not something you 100% delegate your thinking to.
