frontpage.

I wrote to Flock's privacy contact to opt out of their domestic spying program

https://honeypot.net/2026/04/14/i-wrote-to-flocks-privacy.html
339•speckx•2h ago•139 comments

YouTube now world's largest media company, topping Disney

https://www.hollywoodreporter.com/business/digital/youtube-worlds-largest-media-company-2025-tops...
141•bookofjoe•5d ago•113 comments

Rare concert recordings are landing on the Internet Archive

https://techcrunch.com/2026/04/13/thousands-of-rare-concert-recordings-are-landing-on-the-interne...
391•jrm-veris•6h ago•115 comments

Spain to expand internet blocks to tennis, golf, movies broadcasting times

https://bandaancha.eu/articulos/telefonica-consigue-bloqueos-ips-11731
351•akyuu•3h ago•305 comments

Claude Code Routines

https://code.claude.com/docs/en/routines
200•matthieu_bl•3h ago•127 comments

5NF and Database Design

https://kb.databasedesignbook.com/posts/5nf/
87•petalmind•4h ago•41 comments

Turn your best AI prompts into one-click tools in Chrome

https://blog.google/products-and-platforms/products/chrome/skills-in-chrome/
42•xnx•3h ago•18 comments

California ghost-gun bill wants 3D printers to play cop, EFF says

https://www.theregister.com/2026/04/14/eff_california_3dprinted_firearms/
87•Bender•1h ago•59 comments

Let's Talk Space Toilets

https://mceglowski.substack.com/p/lets-talk-space-toilets
78•zdw•21h ago•21 comments

guide.world: A compendium of travel guides

https://guide.world/
30•firloop•5d ago•5 comments

The Orange Pi 6 Plus

https://taoofmac.com/space/reviews/2026/04/11/1900
18•rcarmo•3d ago•5 comments

OpenSSL 4.0.0

https://github.com/openssl/openssl/releases/tag/openssl-4.0.0
107•petecooper•2h ago•25 comments

ClawRun – Deploy and manage AI agents in seconds

https://github.com/clawrun-sh/clawrun
12•afshinmeh•1h ago•0 comments

Show HN: Plain – The full-stack Python framework designed for humans and agents

https://github.com/dropseed/plain
26•focom•2h ago•9 comments

Show HN: LangAlpha – what if Claude Code was built for Wall Street?

https://github.com/ginlix-ai/langalpha
70•zc2610•5h ago•24 comments

Gas Town: From Clown Show to v1.0

https://steve-yegge.medium.com/gas-town-from-clown-show-to-v1-0-c239d9a407ec
25•martythemaniak•1h ago•9 comments

Backblaze has stopped backing up OneDrive and Dropbox folders and maybe others

https://rareese.com/posts/backblaze/
826•rrreese•12h ago•514 comments

Show HN: A memory database that forgets, consolidates, and detects contradiction

https://github.com/yantrikos/yantrikdb-server
28•pranabsarkar•4h ago•18 comments

jj – the CLI for Jujutsu

https://steveklabnik.github.io/jujutsu-tutorial/introduction/what-is-jj-and-why-should-i-care.html
437•tigerlily•10h ago•373 comments

Introspective Diffusion Language Models

https://introspective-diffusion.github.io/
205•zagwdt•12h ago•39 comments

The Mouse Programming Language on CP/M

https://techtinkering.com/articles/the-mouse-programming-language-on-cpm/
34•PaulHoule•3d ago•3 comments

Carol's Causal Conundrum: a zine intro to causally ordered message delivery

https://decomposition.al/zines/
31•evakhoury•4d ago•2 comments

Nucleus Nouns

https://ben-mini.com/2026/nucleus-nouns
46•bewal416•4d ago•11 comments

Show HN: Kontext CLI – Credential broker for AI coding agents in Go

https://github.com/kontext-dev/kontext-cli
55•mc-serious•7h ago•24 comments

DaVinci Resolve – Photo

https://www.blackmagicdesign.com/products/davinciresolve/photo
999•thebiblelover7•18h ago•255 comments

A new spam policy for “back button hijacking”

https://developers.google.com/search/blog/2026/04/back-button-hijacking
779•zdw•17h ago•449 comments

The acyclic e-graph: Cranelift's mid-end optimizer

https://cfallin.org/blog/2026/04/09/aegraph/
60•tekknolagi•4d ago•16 comments

Lean proved this program correct; then I found a bug

https://kirancodes.me/posts/log-who-watches-the-watchers.html
367•bumbledraven•20h ago•164 comments

The M×N problem of tool calling and open-source models

https://www.thetypicalset.com/blog/grammar-parser-maintenance-contract
107•remilouf•5d ago•37 comments

40% of lost calories globally are from beef, needing 33 cal of feed per 1 cal

https://iopscience.iop.org/article/10.1088/2976-601X/ae4f6b
114•randycupertino•2h ago•165 comments

Claude Code Routines

https://code.claude.com/docs/en/routines
197•matthieu_bl•3h ago

Comments

minimaxir•3h ago
Given the alleged recent extreme reduction in Claude Code usage limits (https://news.ycombinator.com/item?id=47739260), how do these more autonomous tools work within that constraint? Are they effectively only usable with a 20x Max plan?

EDIT: This comment is apparently [dead] and idk why.

breakingcups•2h ago
You seem to be vouched for now, no longer dead for me.
minimaxir•2h ago
Hmm, I can't edit the original comment to retract that edit either. Either my account is flagged for something or HN is being weird.
TacticalCoder•2h ago
Everything looks good to me: you don't look like you have a flagged account (but then I don't work for HN).
giancarlostoro•1h ago
I've been talking to friends about this extensively, and I've read all sorts of social media posts on X where people did deep dives (I'm at work so I don't have any links handy, though I did submit one on HN; grain of salt, unsure how valid it is, but it was interesting: https://news.ycombinator.com/item?id=47752049 ).

I think the real issue stems from the 1 million token context window change. They did not anticipate the amount of load it would create. For the first few days after they released the new token window, I was making amazing things in one single session, from nothing to something (a new .NET-based programming language inspired by Python, and a Virtual Actor framework in Rust). Since then I think they've been trying to tweak too many things at once, irritating their users in the process.

They even added a new "Max" thinking mode, and made "High" the old medium, which is ridiculous because you think you're using "High" but really you're not. There's a hidden config file to change their terrible defaults to let Claude be smarter still, and apparently you can toggle off the 1M tokens.

I think the real fix, and I'm surprised nobody there has done this yet, is to let the user trim down their context window.

Think about it: you used to have what, 350k tokens or so? Now Claude will keep sending your completely irrelevant prompt from 30 minutes ago to the back-end, whereas 3 months ago it would have been compacted by now.

Others have noted that similar prompting for some ungodly reason adds tens of thousands of extra garbage tokens (not sure why).

Edit: looks like someone figured out that if you downgrade your version of Claude Code and change one single setting, it un-ruins Claude:

https://news.ycombinator.com/item?id=47769879

dacox•1h ago
Yeah, I have been seeing lots of comments, tweets, etc., but given everything I have learned about these models, I do not think the change to 1M was innocuous. I'm not sure what they've claimed publicly, but I'm fairly certain they must be doing additional quantization, or at minimum additional quantization of the KV cache. Plus, sequence length can change things even when not fully utilized. I had to manually re-enable the "clear context and continue" feature as well.
giancarlostoro•46m ago
I used the heck out of it when it was announced, and it felt like one of the best models I've ever used. But so did all of their other customers, and I don't think they accounted for such heavy load; or maybe follow-up changes goofed something up, not sure. Like I said, for the first few days the 1M tokens let me bust out some interesting projects in one session, from nothing to "oh my" in no time.

I'm thinking they should go back to all their old settings, cap each user at the old token limit, and ask whether you want to compact at your "soft" limit or burst a little longer to finish a task.

imhoguy•1h ago
AI race to the bottom is a debt game now. Once the party is over somebody will have to pay the bill.
matthieu_bl•3h ago
Blog post https://claude.com/blog/introducing-routines-in-claude-code
bpodgursky•3h ago
OpenClawd had about a two-week moat...

Feature delivery rate by Anthropic is basically a fast takeoff in miniature. Pushing out multiple features each week that used to take enterprises quarters to deliver.

nightpool•3h ago
Do you mean a 3 months moat? Moltbot started going viral in January. That seems to be about a quarter to deliver to me : )
whalesalad•2h ago
Hard to wanna go all-in on the Anthropic ecosystem with how inconsistent model output from their top-tier has been recently. I pay $$$ for api-level opus 4.6 to avoid any low-tier binning or throttling or subversive "its peak rn so we're gonna serve up sonnet in place of opus for the next few hours" but I still find that the quality has been really hit or miss lately.

The bell curve up and then back down has been so jarring that I am pivoting to fully diversifying my use of all models to ensure that no one org has me by the horns.

bpodgursky•2h ago
yeah i mean nobody uses Claude anymore, the utilization is too high
chrisweekly•2h ago
right, like the bar nobody goes to anymore bc it's always too crowded
slopinthebag•2h ago
You're delusional if you think these features would take competent programmers quarters to deliver.
unshavedyak•2h ago
Maybe they were accounting for huge layers of red tape in large orgs. God knows those are far slower than "competent programmers" lol
buster•2h ago
He said "enterprises" not "competent programmers".
dbbk•2h ago
And yet none of them work properly and are unstable.
renticulous•2h ago
Anthropic is trying to be the AI version of AWS.
twoodfin•1h ago
That is a really tough business if you can't match AWS' efficiency & reliability at scale. Presumably AWS also wants to be the AI version of AWS.

(Amazon + Anthropic does seem like a much more compelling enterprise collaboration / acquisition than Microsoft + OpenAI ever did.)

jcims•1h ago
>Feature delivery rate by Anthropic is basically a fast takeoff in miniature.

I like to just check the release notes from time to time:

https://github.com/anthropics/claude-code/releases

and the equally frenetic openclaw:

https://github.com/openclaw/openclaw/releases

GPT-4.1 was released a year ago today. Sonnet 4 is ~11 months old. The claude-code cli was released last Feb. Gas Town is 3 months old.

This is a chart that simply counts the bullet points in the release notes of claude code since inception:

https://imgur.com/a/tky9Pkz

This is as bad and as slow as it's going to be.
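That bullet-point tally could be reproduced with a short script. A minimal sketch (the GitHub releases endpoint in the comment is real, but the counting heuristic is my own assumption about how the chart was built):

```python
import re

def count_bullets(notes: str) -> int:
    """Count markdown bullet points ("- " or "* " lines) in a release-notes body."""
    return sum(1 for line in notes.splitlines()
               if re.match(r"\s*[-*]\s+\S", line))

# To tally a whole repo, you could page through
# https://api.github.com/repos/anthropics/claude-code/releases
# and sum count_bullets(release["body"]) for each release (not done here).

sample = """## 1.2.3
- Added routines
- Fixed a crash
* Misc cleanup
Not a bullet line.
"""
print(count_bullets(sample))
```

Plotting the per-release counts over time would then give a chart like the one linked above.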

irthomasthomas•1h ago
The velocity of shipping is wild. Though I cannot recall a novel feature they shipped first. Can you?
summarity•3h ago
If you’re trying this for automating things on GitHub, also take a look at Agentic Workflows: https://github.github.com/gh-aw/

They support many of the same triggers and come with many additional security controls out of the box.

gavinray•2h ago
Why have I not heard of this? I was looking for a way to integrate LLM CLIs to do automated feature development + PR submission triggered by GitHub issues; seems like this would solve it.
deadfall23•46m ago
Why not https://github.com/anthropics/claude-code-action?
eranation•28m ago
+1 for that. That said, GH agentic workflows require a bit more handholding and testing to work (and have way more guardrails, which is great, but limiting), and lack some basic connectors (for example, last time I tried it there was no easy Slack connector; I had to build my own). That's why I'm moving some of the less critical gh-aw workflows (all the read-only ones) to Claude Routines.
ctoth•3h ago
You'd think that if they were compute-limited and trying to get people to use it less, the rational thing to do would be to not ship features that automatically use more compute? Or does this count as extra usage?
whicks•2h ago
I would imagine that this sort of scheduling allows them to have more predictable loads, and they may be hoping that people will schedule some of their tasks in “off hours” to reduce daytime load.
ctoth•2h ago
I thought about that, but I'm pretty sure that if the backlog is automatically cleaned and I don't need to run my skill for it when I start up in the morning, that just means I can do the next task I would have done, which will probably use Claude Code.

Your own, personal, Jevons.

andai•2h ago
It also beats OC's heartbeat where it auto-runs every 30 minutes and runs a bunch of prompts to see if it actually needed to run or not.
pkulak•2h ago
Man, this just bit me too. I started playing with OC over the weekend (in a VM), and the spend was INSANE even though I wasn't doing anything. I don't see this as very useful as an "assistant" that wanders around and anticipates my needs. But I do like the job system, and the ability to make skills, then run them on a schedule or in response to events. But when I looked into what it was doing behind my back, 48 times a day it was packaging up 20K tokens of silly context ("Be a good agent, be helpful, etc, for 30 paragraphs"), shipping it off to the model, and then responding with a single HEARTBEAT_OK.

Luckily you can turn it off pretty easily, but I don't know why it's on by default to begin with. I guess it's a holdover from when people used it with a $20 subscription and didn't care.
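A cheap fix for that pattern is to check locally whether anything actually happened before waking the model at all. A sketch, where the inbox directory and the event format are made up for illustration and the `claude -p` call is left commented out:

```python
from pathlib import Path

def pending_events(inbox: Path) -> list[str]:
    """Return event files queued since the last heartbeat (hypothetical inbox dir)."""
    if not inbox.is_dir():
        return []
    return sorted(p.name for p in inbox.glob("*.json"))

def heartbeat(inbox: Path) -> str:
    events = pending_events(inbox)
    if not events:
        # Nothing to do: skip the model call entirely instead of shipping
        # ~20K tokens of context just to get a HEARTBEAT_OK back.
        return "HEARTBEAT_OK"
    # Only now is inference worth paying for (command is illustrative):
    # subprocess.run(["claude", "-p", f"Handle events: {events}"])
    return f"dispatched {len(events)} events"

print(heartbeat(Path("/tmp/agent-inbox")))
```

The point is that the "did anything happen?" check is a local filesystem stat, not a round trip through the model.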

pletnes•2h ago
Also you can schedule it a bit off. Every hour? Delay it a few seconds. Can’t do that with a chat message. Also, batch up a bunch of them, maybe save some compute that way? Latency is not an issue.
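That off-by-a-few-seconds idea is easy to sketch: add random jitter before each scheduled run so identical hourly schedules don't all hit the API at the same instant. The jitter window and the commented-out command are placeholders:

```python
import random

JITTER_SECONDS = 90  # spread hourly jobs across a 90-second window (arbitrary choice)

def jittered_delay(max_jitter: float = JITTER_SECONDS) -> float:
    """Pick a random delay so identical cron schedules don't stampede the API."""
    return random.uniform(0, max_jitter)

delay = jittered_delay()
# time.sleep(delay)                          # wait out the jitter...
# subprocess.run(["claude", "-p", "..."])    # ...then run the real job (placeholder)
print(f"sleeping {delay:.1f}s before this run")
```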
iBelieve•2h ago
Max accounts get 15 daily runs included, any runs above that will get billed as extra usage.
dockerd•2h ago
It's how they can lock more users into their eco-system.
AlexCoventry•1h ago
I don't think "usage" is exactly the metric they're going for, more like "usage in line with our developmental strategy." Transcripts of people using Claude to write code are probably far more valuable to them than transcripts of OpenClaw trying to set up a calendar invite.
fgkramer•1h ago
I mean, they don’t train on your data unless you have the setting enabled. Do you really think they are reading your prompts at all? Free inference providers sure, but Anthropic?
dpark•34m ago
They are more worried about building a moat than anything else. They want people building integrations that are difficult to undo, so that they're locked into the platform.
vessenes•2h ago
This is one of the best features of OpenClaw; it makes sense to swipe it into Claude Code directly. I wonder if Anthropic wants to make Claude a full stand-in replacement for OpenClaw, or just chip away at what they think the best features are, now that oAI has acquired it.
mkw5053•2h ago
What are some of the best use cases you've found? I have some gh actions set up to call claude code, but those have already been possible.
ale•2h ago
So MCP servers all over again? I mean at the end of the day this is yet another way of injecting data into a prompt that’s fed to a model and returned back to you.
airstrike•2h ago
Still no moat.

The reason someone would use this vs. third-party alternatives is still the fact that the $200/mo subscription is markedly cheaper than per-token API billing.

Not sure how this works out in the long term when switching costs are virtually zero.

petesergeant•2h ago
I think at this point the aim is less about moat, and more about getting an advantage that self-sustains: https://www.rand.org/pubs/research_reports/RRA4444-1.html
TacticalCoder•2h ago
> Not sure how this works out in the long term when switching costs are virtually zero.

All these not-really-helpful but vendor-specific "bonuses" sound like a way to try to lock people in, to try to raise the switching cost.

I'm using, on purpose, a simple process so that at any time I can switch AI provider.

netdur•2h ago
Didn't we have several antitrust cases where a vendor used its monopoly to disadvantage rivals? Didn't Anthropic block OpenClaw?
andai•2h ago
It's not blocked, you just can't use the Claude-only subscription endpoint with unauthorized 3rd party software. (You can use it via the regular API (7x more expensive) and pay per token just fine.)

...Except now you sorta-kinda can: now they auto-detect 3rd party stuff and bill you per-token for it?

If I'm reading it right:

https://news.ycombinator.com/item?id=47633568

dmix•2h ago
How is Anthropic a monopoly? The market is barely even fully developed and has multiple large and small competitors
Someone1234•1h ago
They did not.

You can still use OpenClaw on their API pricing tier as much as you want. What they did is not allow subscriptions to be used to power automated third-party workloads, including OpenClaw.

Now, is their messaging around this confusing? Absolutely. The whole thing has been handled shambolically. Everyone knows that they lack the compute to keep up, and likely have lower margins on subscriptions than API; but they cannot just say that because investors may be skittish.

nico•2h ago
Nice, could this enable n8n-style workflows that run fully automatically then?
outofpaper•2h ago
Yes but much less efficiently. Having LLMs handle automation is like using a steam engine to heat your bath water. It will work most of the time but it's super inefficient and not really designed for that use and it can go horribly wrong from time to time.
meetingthrower•2h ago
Correct. But the LLM can also program the exact automation you want for you! Much more efficient than GUI madness with n8n. And if you want observability, just program that too!
meetingthrower•2h ago
Already very possible and super easy if you do a little vibecoding. Although it will hit the api. Have a stack polling my email every five minutes, classifying email, taking action based on the types. 30 minute coding session.
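A skeleton of that poll-classify-act loop might look like the following. The labels, the actions, and the classifier are all assumptions for illustration; a real version would replace `classify` with a `claude -p` call (and the IMAP polling, omitted here, with `imaplib`):

```python
def classify(subject: str) -> str:
    """Stand-in classifier. A real version would call the model, e.g.
    subprocess.run(["claude", "-p", f"Classify this email: {subject}"]).
    A trivial keyword rule is used so the sketch runs offline."""
    s = subject.lower()
    if "invoice" in s:
        return "billing"
    if "urgent" in s:
        return "escalate"
    return "archive"

# Hypothetical label -> action table; real actions would forward, page, etc.
ACTIONS = {
    "billing": lambda s: f"forwarded to accounting: {s}",
    "escalate": lambda s: f"pinged on-call: {s}",
    "archive": lambda s: f"archived: {s}",
}

def handle(subject: str) -> str:
    """Classify one message and dispatch the matching action."""
    return ACTIONS[classify(subject)](subject)

# A real loop would poll IMAP every five minutes and call handle()
# for each new message (omitted here).
print(handle("URGENT: prod is down"))
```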
andai•2h ago
I'm a little confused on the ToS here. From what I gathered, running `claude -p <prompt>` on cron is fine, but putting it in my Telegram bot is a ToS violation (unless I use per-token billing) because it's a 3rd party harness, right? (`claude -p` being a trivial workaround for the "no 3rd party stuff on the subscription" rule)

This Routines feature notably works with the subscription, and it also has API callbacks. So if my Telegram bot calls that API... do I get my Anthropic account nuked or not?

unshavedyak•2h ago
Wait we can't use claude -p around other tools? What is the point of the JSON SDK then? Anthropic is confusing here, ugh.

edit: And specifically I'm making an IDE, and trying to get Claude Code into it. I frankly have no clue when Claude usage is simply part of an IDE and "okay", and when it becomes a third-party harness...

grafmax•2h ago
They’re shooting themselves in the foot with these dumb restrictions.
taytus•2h ago
They are not dumb restrictions. They just don't have the compute. That is the dumb part. Dario did not secure the compute they need so now they are obviously struggling.
dgellow•2h ago
Their growth over the past months has been more than insane. It's completely expected that they don't have the compute. You don't have infinite data centers lying around.
taytus•1h ago
Like it or not, OpenAI isn't having the same compute strain, meaning this was predictable.
joshstrange•2h ago
The restrictions are dumb not because they're lower than any of us want them to be, but because they're unclear. Every time Claude comes up on Hacker News, someone asks this question. And every time, people chime in to agree that the rules are unclear, or someone weighs in to say no, it's totally clear, while proceeding not to point at any official resource and/or to "explain" the rules in a way that is incompatible with the official documentation.

Example: https://news.ycombinator.com/item?id=47737924

taytus•1h ago
You are arguing something different. My point is that they must apply these restrictions. Do I think they could have calculated their growth a little better? Yes, of course, but hindsight is 20/20.
joshstrange•31m ago
We might be talking past each other, I promise I'm not just trying to argue.

> My point is that they must apply these restrictions.

I fully understand and respect that they need restrictions on how you can use your subscription (or any of their offerings). My issue is not that there _are_ restrictions but that the restrictions themselves are unclear, which leads to people being unsure where the line is (that they are trying not to cross).

Put simply: At what point is `claude -p` usage not allowed on a subscription:

- Running `claude -p` from the CLI?

- Running `claude -p` on a Cron?

- Running `claude -p` as a response to some external event? (GH action, webhook, etc?)

- Running `claude -p` when I receive a Telegram/Discord/etc message (from myself)?

Different people will draw the line in different places and Anthropic is not forthcoming about what is or is not allowed. Essentially, there is a spectrum between "Running claude by hand on the command line" and "OpenClaw" [0] and we don't know where they draw the line. Because of that, and because the banning process is draconian and final with no appeals, it leads to a lot of frustration.

[0] I do not use OpenClaw nor am I arguing it should be allowed on the subscription. It would be nice if it was but I'm not saying it should be. I'm just saying that OpenClaw clearly is _not_ allowed but `claude -p` wouldn't be usable at all with a subscription if it was completely banned so what can it (safely) be used for?

hmokiguess•2h ago
Wouldn't ACP be better for an IDE? https://agentclientprotocol.com/get-started/introduction
unshavedyak•2h ago
Possibly, though at first i was entirely focusing (and still am) on Claude Code usage. Given that CC had an API, i figured its own SDK would update faster/better/etc to new Claude features that Anthropic introduces. I'm sure ACP is a flexible protocol, but nonetheless i was just aiming for direct Claude integration.. and you know, it's an official SDK, seemed quite logical to me.

It would be absurd to me if the same application is somehow allowed via ACP but not via official SDK. Though perhaps the official SDK offers data/features that they don't want you to use for certain scenarios? If that were they case though it would be nice if they actually published a per-SDK-API restrictions list.

That we're having to guess at this feels painful.

edit: Hah, hilariously you're still using the SDK even if you use ACP, since Claude doesn't have ACP support i believe? https://github.com/agentclientprotocol/claude-agent-acp

cortesoft•2h ago
I was pretty sure that claude -p would always be fine, but I looked at the TOS and it is a bit unclear.

It says in the prohibited use section:

> Except when you are accessing our Services via an Anthropic API Key or where we otherwise explicitly permit it, to access the Services through automated or non-human means, whether through a bot, script, or otherwise.

So it seems like using a harness or your own tools to call claude -p is fine, AS LONG AS A HUMAN TRIGGERS IT. They don’t want you using the subscription to automate things calling claude -p… unless you do it through their automation tools I guess? But what if you use their automation tool to call your harness that calls claude -p? I don’t actually know. Does it matter if your tool loops to call claude -p? Or if your automation just makes repeated calls to a routine that uses your harness to make one claude -p call?

It is not nearly as clear as I thought 10 minutes ago.

Edit: Well, I was just checking my usage page and noticed the new 'Daily included routine runs' section, where it says you get 15 free routine runs with your subscription (at least with my max one), and then it switches to extra usage after that. So I guess that answers some of the questions... by using their routine functionality they are able to limit your automation potential (at least somewhat) in terms of maxing out your subscription usage.

joshstrange•2h ago
Anthropic deserves to have this as the top comment on every HN post. It's absurd that they don't clarify this better and so many people are running around online saying the exact opposite from what their, confusing, docs say.

The Chilling Effect of this is real and it gets more and more frustrating that they can't or won't clarify.

throwup238•1h ago
It’s also absurd that they’re doing their communication on a bunch of separate platforms like HN, Reddit, and Github with no coherent strategy or consistency as far as I can tell. Can’t I just get policy clarifications in my email like a normal business?

I downgraded my $200/mo sub to $20 this past week and I’m going to try out Codex’s Pro plans. Between the cache TTL (does it even affect me? No idea), changes in the rate limit, 429 rate limit HTTP status code during business hours, adaptive thinking (literally the worst decision they’ve ever made, as far as my line of work is concerned), dumb agent behavior silently creating batshit insane fallthroughs, clearly vibe coded harness/infrastructure, and their total lack of transparency, I think I’m done. It was fun while it lasted but I’m tired of paying for their mistakes in capacity planning and I feel like the big rug pull (from all three SOTA providers) is coming like a freight train.

sidrag22•43m ago
I was "Claude only" for well over a year. Kinda crazy how they seem to be gaining a LOT of public attention the last few months, yet i see this type of sentiment from other devs/myself. for me it started with their opencode drama, and openai's decision to embrace opencode in response.

I didn't even know what opencode was prior to that drama, yet now here i am using opencode and a ton of crafted openai agents in my projects. Would love to have some claude agents in that mix, but i guess im stuck in Claude Code if i wanna even touch their models... I'd love to go back to just claude as i "trust" them more in a sorta less evil vibe manner, but if they are gonna prevent subscription usage to something people use to allow themselves more freedom, they gotta then close that gap with their own tools rather than pumping out stuff like this which scares me off given the past couple months.

I totally understand why they are cutting off 3pa access to stuff like openclaw, where the avg user is just a power user in comparison to avg claude user or whatever. I haven't kept up a ton with their opencode issues, but I just know i can't get behind a company actively trying to make my potential usage of tokens less optimized to keep me locked into their ecosystem.

Really just kinda hoping local models kill it all for devs after a few years, I'm not interested in perma relying on data centers for my workflow.

stephbook•51m ago
The ambiguity is intentional. Like Microsoft not banning volume licenses. They want to scare you, so you don't max out your subscription – which they sell at a loss.

Another comparison would be "unlimited storage", where "unlimited" means some people will abuse it and the company will soon limit the "unlimited."

pixel_popping•22m ago
Literally, yeah. The ambiguity is just so they can crack down anytime they want. People underestimate Anthropic too much: obviously they have an insane number of scrapers and bots; no comment is made online without their awareness, and it's analyzed by a bunch of agents that then do prediction, and surely much more. They know exactly what they are doing.
causal•36m ago
Yeah in the span of a month or so we had:

- SDK that allows you to use OAuth authentication!

- Docs updated to say DO NOT USE OAUTH authentication unless authorized! [0]

- Anthropic employee Tweeting "That's not what we meant! It's fine for personal use!" [1]

- An email sent out to everyone saying it's NOT fine do NOT use it [2]

Sigh.

[0] https://code.claude.com/docs/en/agent-sdk/overview#get-start...

[1] https://www.reddit.com/r/ClaudeAI/comments/1r8et0d/update_fr...

[2] https://news.ycombinator.com/item?id=47633396

mellosouls•2h ago
Put Claude Code on autopilot. Define routines that run on a schedule, trigger on API calls, or react to GitHub events...

We ought to come up with a term for this new discipline, eg "software engineering" or "programming"

avaer•2h ago
Setting up your agent. This part doesn't deserve a name; there is no programming or engineering or really much thinking involved.
baq•2h ago
Does ‘vibe coding’ work?
jnpnj•2h ago
gramming
realo•1h ago
Ah! Totally... We have:

airgramming, plusgramming, programming, maxgramming, studiogramming

and recently the brand new way of working: Neogramming!

Personally I stick for now with the "Programming" tier. Maybe I'll upgrade to "Maxgramming" later this year...

raincole•2h ago
Sounds more like openclawing.
oxag3n•32m ago
It's "promptramming".
watermelon0•2h ago
Seems like it only supports x86_64. It would be nice if they offered a way to bring your own compute, to be able to work on projects targeting arm64.
crooked-v•2h ago
The obvious functionality that seems to be missing here is any way to organize and control these at an organization rather than individual level.
varispeed•2h ago
Why would you use it if you don't know whether the model will be nerfed at that run?
desireco42•2h ago
I think they are using Claude to come up with these, and they will be bringing out one every second day... In fact, this is probably a routine they set up.
consumer451•2h ago
meta:

Sorry, but I just have to ask. Why is u/minimaxir's comment dead? Is this somehow an error, an attack, or what?

This is a respected user, with a sane question, no?

I vouched, but not enough.

edit: His comment has arisen now. Leaving this up for reference.

irthomasthomas•1h ago
We live in strange times!
theodorewiles•2h ago
How does this deal with stop hooks? Can it run https://github.com/anthropics/claude-code/blob/main/plugins/...
Eldodi•2h ago
Anthropic is really good at releasing features that are almost the same but not exactly the same as other features they released the week before
dymk•2h ago
7 days is long enough for work to leave the context window, hence…
tclancy•2h ago
And/or things I've spent a bunch of time building already. And naming them the same. I should have trademarked "dispatch"!
dbish•1h ago
you're telling me dispatchagents.ai :) (open to new names if anyone has cool ones, didn't expect anthropic to start using dispatch with their agents, naming is way too hard)
spelunker•2h ago
> In the Desktop app, click New task and choose New remote task; choosing New local task instead creates a local Desktop scheduled task, which runs on your machine and is not a routine.

Oh uh... ok then.

eranation•2h ago
I've been using it for a while (it was just called "Scheduled", so I assume this is an attempt to rebrand it?)

It was a bit buggy, but it seems to work better now. Some use cases that worked for me:

1. Go over a slack channel used for feedback for an internal tool, triage, open issues, fix obvious ones, reply with the PR link. Some devs liked it, some freaked out. I kept it.

2. Surprisingly non-code-related: give me a daily rundown (GitHub activity, Slack messages, emails). I tried it with non-Claude-Code scheduled tasks (CoWork); not as good, as it seems the GitHub connector only works in Claude Code. Really good correlation between threads that start on Slack and related email (Outlook), or even my personal Gmail.

I can share the markdowns if anyone is interested, but it's pretty basic.

Very useful (when it works).

joshstrange•2h ago
LLMs and LLM providers are massive black boxes. I get a lot of value from them and so I can put up with that to a certain extent, but these new "products"/features that Anthropic are shipping are very unappealing to me. Not because I can't see a use-case for them, but because I have 0 trust in them:

- No trust that they won't nerf the tool/model behind the feature

- No trust they won't sunset the feature (the graveyard of LLM-features is vast and growing quickly while they throw stuff at the wall to see what sticks)

- No trust in the company long-term. Both in them being around at all and them not rug-pulling. I don't want to build on their "platform". I'll use their harness and their models but I don't want more lock-in than that.

If Anthropic goes "bad" I want to pick up and move to another harness and/or model with minimal fuss. Buying in to things like this would make that much harder.

I'm not going to build my business or my development flows on things I can't replicate myself. Also, I imagine debugging any of this would be maddening. The value add is just not there IMHO.

EDIT: Put another way, LLM companies are trying to climb the ladder to being a platform. I have zero interest in that; I want a "dumb pipe", I want a commodity, I want a provider, not a platform. Claude Code is as far into the dragon's lair as I want to venture, and I'm only okay with that because I know I can jump to OpenCode/Codex/etc. if/when Anthropic "goes bad".

chinathrow•1h ago
Yeah, so better to convert tokens into software that does the job at close to zero cost, running on your own systems.
verdverm•1h ago
I fully endorse building a custom stack: (1) because you will learn a lot, and (2) for full control, so Big AI doesn't define our UX/DX for this technology. Let's learn from history this time around?
gritspants•56m ago
Here's the problem I keep running into with AI and 'history': we all know where this is going. We'll pick our winners and losers in the interim, but so far this is a technology that mostly impacts tech practitioners. Most people don't care. In the sense that: say you're a taxi driver. Perhaps you have a manual transmission and the odd person comments on your prowess with it. No one cares. I see a bunch of boys making fools out of themselves otherwise.
palata•1h ago
> - No trust that they won't nerf the tool/model behind the feature

I actually trust that they will.

gardenhedge•1h ago
Yeah, I build my workflows with two things in mind:

1) that AI will be more advanced in the future

2) that the AI I am using will be worse in the future

dvfjsdhgfv•49m ago
I believe the current game everybody plays is:

* make sure the model maxes out all benchmarks

* release it

* after some time, nerf it

* repeat the same with the next model

However, the net sum is positive: in general, models from 2026 are better than those from 2024.

snek_case•36m ago
I guess there's a pretty clear incentive to nerf the current model right before the next model is about to come out.
chinathrow•14m ago
Wouldn't that amount to fraud?
_blk•11m ago
Yup. After the token increase in CC two weeks ago, I'm now consistently filling the 1M context window that never went above 30-40% a few days ago. Did they turn it off? I used to see "Co-Authored-By: Opus 4.6 (1M Context Window)" in git commits; now that advert line is gone. I never turned it on or off. Maybe the defaults changed, but /model doesn't show two different context sizes for Opus 4.6.

I never asked for a 1M context window, then I got it and it was nice, and now it's as if it's gone again. No biggie, but if they had advertised it as a free trial (which is what it feels like), I wouldn't have opted in.

Anyway, it seems I'm just ranting. I still like Claude, but it nonetheless feels like the game you described above.

cush•1h ago
You could so easily build your own /schedule. This is hardly a feature driving lock-in
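For what it's worth, a minimal DIY version of this is roughly a cron entry plus the CLI's non-interactive mode. A hedged sketch (it assumes the `claude` CLI's `-p` headless flag; the repo path and prompt are made up, and it's shown as a dry run):

```shell
#!/bin/sh
# Hypothetical DIY "/schedule": assemble the headless Claude Code command that
# cron would run nightly. Dry run only; remove the final `echo` to execute it.
REPO="${REPO:-$HOME/src/myproject}"                  # assumed repo location
PROMPT="Clean up the docs and open a PR with the changes."
CMD="cd '$REPO' && claude -p \"$PROMPT\""            # assumes `claude -p` headless mode
# Example crontab entry (via `crontab -e`) to run it at 02:00 every night:
#   0 2 * * * sh -c "cd '$HOME/src/myproject' && claude -p '...'" >> "$HOME/claude-cron.log" 2>&1
echo "$CMD"
```

No webhooks or managed triggers, but for "run this prompt every night" it covers the same ground.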
mikepurvis•1h ago
> I want to pick up and move to another harness and/or model with minimal fuss. Buying in to things like this would make that much harder.

Yes, I expect that is very much the point here. A bunch of product guys got at a whiteboard and said: okay, the thing is in wide use, but our main moat is that our competitors are even more distrusted in the market than we are; other than that it's completely undifferentiated and can be swapped out in a heartbeat for multiple other offerings. How do we persuade our investors we have a locked-in customer base that won't just up stakes in favour of other options, or just run open-source models themselves?

throwup238•18m ago
I think they really kneecapped themselves when they released the Claude for GitHub integration, which allows anyone to use their Claude subscription to run Claude Code in GitHub Actions for code reviews and arbitrary prompts. Now they're trying to backtrack on that with a cloud solution.
sunnybeetroot•48m ago
Isn’t that what LangChain/LangGraph is meant to solve? Write workflows/graphs and host them anywhere?
tiku•32m ago
I believe it doesn't matter; other companies will copy or improve it. The same happened with clawdbot: the number of clones within a month was insane.
ahmadyan•19m ago
> I'm not going to build my business or my development flows on things I can't replicate myself.

But you can replicate these yourself! I'm happy that Anthropic/OpenAI are experimenting to find PMF for "LLMs for dev tools". After they figure out the proper stickiness (or if they go away, nerf things, raise prices, etc.), you can always take the off-ramp and implement your own LLM/agent using the existing open-source models. The cost of building dev tools is near zero; it's not like codegen, where you need frontier performance.

JohnMakin•7m ago
This is similar to a sentiment I heard early on in the cloud-adoption fever: many companies hedged by being "multi-cloud", which ended up mostly abandoned due to hostile patterns from cloud providers and a lot of cost. Ultimately it didn't really end up mattering, and the most dire predictions of vendor lock-in abuse didn't happen as feared (I know people will disagree with this, but speaking specifically about AWS, there is a massive gap between the predictions and what actually happened; note that I have never used and will never use Azure, so I could be wrong on that particular one).

I see people drawing similar conclusions about various LLM providers. I suspect in the end it'll shake out about the same way: the providers will become practically non-interoperable with each other, whether due to inconvenience, cost, or whatever. So I've not wasted much of my time thinking about it.

verdverm•1h ago
One gripe I have with Claude Code is that the CLI, the Desktop app, and apparently the web app have a Venn diagram of features. Plugins (sets of skills and more) are supported in the Code CLI, maybe in Cowork (custom ones fail to import), but not in Code Desktop. Now this?

The report that they are 90% AI-code-generated seems more likely the more I attempt to use their products.

bottlepalm•24m ago
Their source code leak showed how badly vibe coded Claude Code is, despite it being one of the best AI assistants.

But yeah, there's some annoying overlap here with Cowork, which also has scheduled tasks. In Cowork the tasks can use your desktop, browser, and accounts, which is pretty useful, and a big difference from these Claude Code Routines.

dispencer•1h ago
This is massive. Arguably it will be the start of the move to openclaw-style AI.

I bet anthropic wants to be there already but doesn't have the compute to support it yet.

dpark•26m ago
What’s massive about cron jobs and webhooks? I feel like I’m missing something. This is useful functionality but also seems very straightforward.
jcims•1h ago
Is there a consensus on whether or not we've reached Zawinski's Law?
senko•1h ago
I've had an AI assistant send me email digests with local news, and another watching a cron job, analyzing the logs and sending me reports if there's any problem.

I'd say that counts as yes.

(For clarity: neither are powered by Claude Code Routines. Rather, Claude Code coded them and they're simple cron jobs themselves.)
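The log-watching flavor of this is a few lines of shell under cron. A sketch, with everything assumed (the log path, the `claude -p` headless flag, and `mail` for delivery are all placeholders for whatever you actually run):

```shell
#!/bin/sh
# Hypothetical cron-driven log watcher in the spirit described above: only
# involve the model (and your inbox) when error lines actually show up.
LOG="${LOG:-/tmp/myapp.$$.log}"              # assumed log path
ERRORS=$(grep -c 'ERROR' "$LOG" 2>/dev/null || true)
ERRORS=${ERRORS:-0}                          # treat a missing log as zero errors
if [ "$ERRORS" -gt 0 ]; then
  # Summarize the recent tail of the log and mail the report
  # (`claude` and `mail` are assumed to be installed and configured).
  tail -n 200 "$LOG" \
    | claude -p "Summarize these errors and suggest likely causes" \
    | mail -s "myapp: $ERRORS error lines" me@example.com
else
  echo "no errors in $LOG"
fi
```

Run it from crontab every few minutes and the model only gets invoked on real problems, which also keeps the token bill down.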

verdverm•1h ago
TIL email is what I'm missing in my personal development (swiss army) tool
dispencer•1h ago
This is wild; it's one of the pieces I was lacking for a very openclaw-esque future. Now I think I have all the MCP tools I need (GitHub, Linear, Slack, Gmail, querybear), all the skills I need, and I can run these on a loop.

Am I needed anymore?

brcmthrowaway•1h ago
No
srid•1h ago
I just used this to summarize HN posts from the last 24 hours, including AI summaries.

This PR was created by the Claude Code Routine:

https://github.com/srid/claude-dump/pull/5

The original prompt: https://i.imgur.com/mWmkw5e.png

egamirorrim•1h ago
I wish they'd release more stuff that didn't rely on me routing all my data through their cloud to work. Obviously the LLM is cloud based but I don't want any more lock-in than that. Plus not everyone has their repositories in GitHub.
taw1285•1h ago
I have a small team of 4 engineers; each of us is on the personal Max subscription plan, and we prefer to stay that way to save cost. Does anyone know how I can overcome the challenge of setting up Routines or Scheduled Tasks on Anthropic infra in a collaborative manner, i.e. so all teammates can contribute to these nightly jobs of cleaning up the docs and cleaning up vibe-coding slop?
hallway_monitor•1h ago
My team was doing this until recently but I think in February, Anthropic made team accounts available for subscription instead of API billing. Assuming that is the cost you mentioned.
teucris•1h ago
My only real disappointment with Claude is its flakiness with scheduling tasks. I have several Slack-related tasks that I’ve pretty much given up trying to automate; I’ve tried Cowork and Claude Code remote agents, only to hit various bugs when working with plugins and connectors. I guess I’ll give this a try, but I don’t have high hopes.
sminchev•1h ago
Everything is a big race! Each company is trying to do as much as possible, to provide as many tools as possible, to catch the wave and beat the competition. I remember when Anthropic and OpenAI made releases just 10-15 minutes apart, trying to compete and gain momentum.

And because they use AI heavily, they produce a new product every week. So fast that I have no time to check whether it's worth it or not.

This one looks interesting. I have some custom commands that I execute manually every week for monitoring, audits, summaries, and reports. If it can send reports by email, or generate something I can read in the morning with my coffee (or after I finish with it ;)), it might be a good tool.

The question is, do I really want to be that much more productive? I already perform much better with AI compared to the 'old school' way...

Everything is just getting too much for me.

tills13•1h ago
> react to GitHub events from Anthropic-managed cloud infrastructure

Oh cool! Vendor lock-in.

comboy•1h ago
Unrelated, but Claude was performing so tragically the last few days (maybe weeks, but mostly days) that I had to reluctantly switch. Reluctantly, because I enjoy it. Even the most basic stuff: most Python scripts it has to rerun because of some syntax error.

The new reality of coding took away one of the best things for me: that the computer always just does what it is told to do. If the results are wrong, it means I'm wrong; I made a bug and I can debug it. Here... I'm not a hater, it's a powerful tool, but... it's different.

pacha3000•41m ago
I'm usually the first to be tired of everyone saying, for every model, "uuuh, it became dumber", because I didn't believe them

... until this week! Opus has been struggling worse than Sonnet these last two weeks.

comboy•13m ago
Pretty reassuring to hear that. I was skeptical too; there are a lot of variables, like some crap added to memory, a specific skill, or custom instructions interfering with the workflow, and whatnot. But now it was like a toddler that consumes money as it talks.
oxag3n•37m ago
Are they going to mirror every tool software engineers have used for decades, but in a mangled/proprietary form?

I think to become really efficient they'll have to invent a new programming language to eliminate all the ambiguity and non-determinism. Call it a "prompt language", with ai-subroutines, ai-labels, and ai-goto.

causal•28m ago
Haven't Github-triggered LLMs already been the source of multiple prompt injection attacks? Seems bad.