Software 3.1? – AI Functions

https://blog.mikegchambers.com/posts/software-31-ai-functions/
37•aspittel•2h ago

Comments

aspittel•2h ago
AWS just shipped an experimental library, AI Functions, through strands-labs. It executes LLM-generated code at runtime and returns native Python objects, using automated post-conditions to continuously verify outputs. Unlike generate-and-verify approaches, the AI-generated code runs directly in your application.
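The mechanics can be sketched roughly like this; note that `ai_function` and `fake_llm_generate` are hypothetical names with the LLM call stubbed out, not the actual strands-labs API:

```python
# Sketch of the "AI function" pattern: a natural-language spec plus a
# post-condition, with the code-generation step stubbed for illustration.
from typing import Callable

def fake_llm_generate(spec: str) -> str:
    # Stand-in for the LLM call: returns Python source satisfying the spec.
    return "def impl(items):\n    return sorted(set(items))"

def ai_function(spec: str, post: Callable) -> Callable:
    def wrapper(*args, **kwargs):
        namespace = {}
        exec(fake_llm_generate(spec), namespace)  # run the generated code
        result = namespace["impl"](*args, **kwargs)
        assert post(result), "post-condition failed"  # verify the output
        return result
    return wrapper

dedupe_sorted = ai_function(
    spec="Return the unique items in ascending order.",
    post=lambda out: out == sorted(set(out)),
)
print(dedupe_sorted([3, 1, 2, 3]))  # → [1, 2, 3]
```

The key difference from generate-and-verify workflows is that `exec` runs at call time inside the host process, and the post-condition is an ordinary Python predicate the developer controls.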
xiphias2•1h ago
This looks like Symbolica, except the great thing about what they're doing is that they're setting new ARC-AGI records.

https://www.symbolica.ai/blog/arcgentica

waynesonfire•1h ago
Obviously you have never built software. English is a terrible programming language; you cannot have ambiguity in defining your computation.
squeefers•1h ago
> you cannot have ambiguity in defining your computation

Nobody except maybe NASA would make software in this scenario.

PhunkyPhil•1h ago
Product owners and business people request code in vague English all the time. It's our job to parse it to code using our own judgement.
bilekas•1h ago
> Now consider a different arrangement. The LLM generates code that actually runs inside your application – at call time, every time the function is invoked.

I'm sure there's a lot of effort put into this, god knows why, but I pray I never have to have this in a production environment I'm on.

moffers•1h ago
Could you do this with Erlang's term_to_binary functionality?
Stromgren•1h ago
I use Tidewave as my coding agent and it’s able to execute code in the runtime. I believe it’s using Code.eval_string/3, but you should be able to check the implementation. It’s the project_eval tool.

In my experience it’s a huge leap in terms of the agent being able to test and debug functionality. It’ll often write small code snippets to test that individual functions work as expected.

exfalso•1h ago
This is a terrible idea
furyofantares•1h ago
People did this 3 or more years ago. It's funny, but no less dumb now than it was then.
re-thc•1h ago
It's in the title. Software 3.1 (years ago).
leoedin•1h ago
I can't even imagine how many joules would be used per function call!

As an experiment, it's kind of cool. I'm kind of at a loss to what useful software you'd build with it though. Surely once you've run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?

Can anyone think of any uses for this?

re-thc•1h ago
> run the AI function once it would be much simpler to cache the resulting code than repeatedly re-generate it?

Surely, you'll run a function that does an AI call to cache the resulting code.

ryancoleman•1h ago
The initial version on GitHub does not implement caching or memoization, but that's possible and likely where the project will head. (FYI I'm on the Strands Agents team.)
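Such caching might be keyed on a hash of the spec, so the LLM only runs on a cache miss; a minimal sketch (function names hypothetical, generator stubbed, not the library's actual design):

```python
# Sketch of memoizing generated code by spec hash: the "LLM" is invoked
# only once per distinct spec, and subsequent calls reuse the cached source.
import hashlib

_code_cache: dict[str, str] = {}
calls = {"llm": 0}

def generate_code(spec: str) -> str:
    calls["llm"] += 1  # count how often the stubbed "LLM" actually runs
    return "def impl(x):\n    return x * 2"

def get_impl(spec: str):
    key = hashlib.sha256(spec.encode()).hexdigest()
    if key not in _code_cache:
        _code_cache[key] = generate_code(spec)  # cache miss: generate once
    ns = {}
    exec(_code_cache[key], ns)
    return ns["impl"]

double = get_impl("Double the input.")
double_again = get_impl("Double the input.")  # cache hit: no regeneration
print(double(21), calls["llm"])  # → 42 1
```

Hashing the spec (or an AST of it) also gives a natural cache-invalidation rule: edit the natural-language description and the key changes.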
amelius•1h ago
You just tell the AI: use as little energy as possible, by whatever means necessary!
pphysch•1h ago
Anthropic announces deal to buy 100% of Idaho's potato crop, in return for options, in new energy efficiency push
ryancoleman•1h ago
They're handy for situations where it would be impractical to anticipate the ways your input might vary. Say you want to accept invoices or receipts in a variety of file formats where the data structure varies, but you can rely on the LLM to parse and organize. AI Functions lets you describe how that logic should be generated on-demand for the input received, with post-conditions (another Python function the dev writes) which define what successful outcomes look like. Morgan wrote about the receipt parser scenario here: https://dev.to/morganwilliscloud/the-python-function-that-im... (FYI I'm on the Strands Agents team)
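A hypothetical post-condition for that receipt scenario might look like this; the field names and tolerance are illustrative assumptions, not from the linked post:

```python
# Hypothetical post-condition for receipt parsing: the developer writes a
# plain Python check that defines what a successful parse looks like.
def receipt_postcondition(result: dict) -> bool:
    # A valid parse must name a vendor and a date, and its line items
    # must sum (within rounding) to the stated total.
    if not result.get("vendor") or not result.get("date"):
        return False
    items = result.get("line_items", [])
    total = sum(item["amount"] for item in items)
    return abs(total - result.get("total", -1)) < 0.01

ok = receipt_postcondition({
    "vendor": "Acme", "date": "2026-02-23",
    "line_items": [{"amount": 9.99}, {"amount": 5.00}],
    "total": 14.99,
})
print(ok)  # → True
```

The check is deterministic even though the parsing code is generated, which is what makes it usable as an acceptance gate.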
simsla•1h ago
I've used stuff like this for a hobby project where "effort to write it" vs "times I'm going to use it" is heavily skewed [0]. For production use cases, I can only see it being worth it for things that require using an ML model anyway, like "summarize this document".

[0] e.g. something like the below which I expect to use maybe a dozen times total.

Main routine: In folder X are a bunch of ROM files (iso, bin, etc) and a JSON file with game metadata for each. Look for missing entries, and call [subroutine] once per file (can be called in parallel). When done, summarise the results (successes/failures) based on the now updated metadata.

Subroutine: (...) update XYZ, use metacritic to find metadata, fall back to Google.

zeckalpha•1h ago
There were people doing this sort of thing 2-3 years ago. What are they doing now?
blibble•1h ago
apparently still writing blog posts on it and posting them to HN
throwup238•1h ago
Haven’t we been seeing libraries that implement this pattern going on two years now? Take the docstring and monkey patch the function with llm generated code, with optional caching against an AST hash key.

The reason it hasn't taken off is that it's a supremely bad and unmaintainable idea. It also just doesn't work very well, because the LLM doesn't have access to the rest of the codebase without an agentic loop to ground it.

kingstnap•1h ago
The real reason it's bad is that it's not really more productive to do this:

> You write a Python function with a natural language specification instead of implementation code. You attach post-conditions – plain Python assertions that define what correct output looks like.

Vs

> You write a Python function with ~~a natural language specification instead of~~ implementation code.

In many cases.

Kuinox•1h ago
It may seem like a terrible idea, but I think it's good for running quick scripts. It means you can delegate the uninteresting parts the AI is likely to succeed at.

For example, connecting to endpoints, etc... then the logic of your script can run.

nglander•1h ago
Apparently we have blogging-3.0 as well, since the article is littered with AI-isms.

These attempts at generating code that adheres to whatever spec, in Python of all languages, are futile and just please investors.

There is a reason that really proving adherence to a spec or making arguments that the spec is reasonable in the first place is hard.

But hey, thinking is hard, let's go AI shopping.

stackghost•1h ago
I'm normally pessimistic about LLMs but I'll be the contrarian here and suggest there's actually a potential use case for what TFA proposes and it's programmatic/procedural generation for large game worlds.
renegade-otter•1h ago
There is a use for everything. The problem is, people will try to use this to create CRUD apps for no goddamned reason.
stackghost•1h ago
>There is a use for everything.

Eventually, perhaps. I've yet to see a use case for blockchains that isn't merely a worse facsimile of something already existing.

But the electron was useless when it was discovered, so maybe one day

bpavuk•1h ago
so, this idea looks like follows: expose programmatic access to your program, which potentially operates in destructive manner (no Undo button) on potentially sensitive data; give a sloppy LLM (sloppy - due to its sheer unpredictability and ability to fuck up things a sober human with common sense never ever would) a Python interpreter; then let it run away with it and hope that your boundaries are enough to stop it at the edges YET don't limit the user too much?

nah, I'm skipping this update.

manofmanysmiles•1h ago
I'd like to see this with a proper local "instruction cache."

It might even be fun if the first call generates Python (or another language), and subsequent calls go through it. This "optimized" or "compiled" natural language is "LLM-JITted" into Python. With interesting tooling, you could then click on the implementation and see the generated code, a bit like looking at generated assembly. Usually you'd just write in some hybrid of Python + natural language, but have the ability to look deeper.

I can also imagine some additional tooling that keeps track of good implementations of ideas that have been validated. This could extend to the community. Package manager. Throw in TRL + web of trust and... this could be wild.

Really tricky functions that the LLM can't solve could be delegated back for human implementation.
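The generate-once-then-run-native idea above can be sketched in a few lines; `llm_jit` and the stubbed generator are hypothetical names, and a real version would cache the compiled artifact to disk:

```python
# Sketch of the "LLM JIT" idea: the first call generates and compiles an
# implementation; every later call bypasses generation entirely.
def llm_jit(spec: str):
    def fake_generate():
        # Stand-in for the LLM: source code satisfying the spec.
        return compile("result = sum(args)", "<llm>", "exec")

    compiled = None
    def fn(*args):
        nonlocal compiled
        if compiled is None:          # "JIT": generate once, on first call
            compiled = fake_generate()
        ns = {"args": args}
        exec(compiled, ns)            # later calls reuse the code object
        return ns["result"]
    return fn

total = llm_jit("Add all the arguments together.")
print(total(1, 2, 3))  # → 6
```

Exposing `compiled` through tooling is what would give the "click through to the generated code" experience described above.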

snowhale•1h ago
the jit angle is actually the most principled framing here -- generate once, cache the compiled artifact, treat it like any other build output. the problem with the naive "call LLM every time" version isn't just cost, it's that you lose referential transparency. same function signature, different behavior on tuesday vs wednesday when the model updates. at least a jit'd artifact is reproducible within a build.
mtw14•30m ago
I'm wondering if the post-condition checks change the perspective on this at all. Yes, the code is nondeterministic and may execute differently each time; that is the problem this is trying to solve. You define deterministic post-condition checks, and the system retries until validation passes (up to a max retry count). So even if the model changes and its behavior drifts, the post-condition checks should theoretically catch that drift and correct the behavior until it fits the required output.
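That retry loop is simple to state in code; here the "regeneration" step is stubbed with a sequence that fails once and then succeeds, just to show the mechanism (names hypothetical):

```python
# Sketch of retry-until-postcondition: regenerate the implementation
# until the deterministic check passes or a retry cap is hit.
attempts = iter([
    lambda x: x,          # first "generation": wrong (doesn't sort)
    lambda x: sorted(x),  # second "generation": correct
])

def run_with_postcondition(arg, post, max_retries=3):
    for _ in range(max_retries):
        impl = next(attempts)     # stand-in for regenerating the code
        result = impl(arg)
        if post(result):          # deterministic check catches the drift
            return result
    raise RuntimeError("post-condition never satisfied")

out = run_with_postcondition([3, 1, 2], post=lambda r: r == sorted(r))
print(out)  # → [1, 2, 3]
```

The guarantee is only as strong as the predicate: any output the post-condition can't distinguish from a correct one will slip through.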
falcor84•57m ago
Nice! I can almost see your vision. In terms of tooling, I think this could be integrated with deep instrumentation (a-la datadog) and used to create self-improving systems.
amelius•1h ago
Why even return Python data structures? You might as well return things like "a list that contains, in order, 1 ... 10, except the number 5".
chaboud•1h ago
Why stop there? Just call the LLM with the data and function description and get it to return the result!

(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)

ryancoleman•57m ago
That's basically how it works! (with human authored functions that validate the result, automatically providing feedback to the LLM if needed)
falcor84•50m ago
Because you often need the result not as a standalone artifact but as a piece of a rigid process, with well-defined business logic and control flow, which you can't yet trust AI with.
mtw14•28m ago
What was the gap you discovered that made it not shippable? This is an experimental project, so I'm curious to know what sorts of problems you ran into when you tried a similar approach.
fd-codier•1h ago
Is there at least a single benefit to using this?
kaspermarstal•1h ago
I’m quite sure that’s the end state of software, except without the software around it. There will only be an AI and an interface. For now, though, while tokens cost a non-trivial amount of energy, I think you can do something more useful if you have the LLM modify the program at runtime, because that's many orders of magnitude cheaper. E.g., use the BEAM, its actor model, hot code reloading, and REPL introspection, and you can build a program that an LLM can change, e.g. the user says “become a calculator” or “become a pdf to html converter”.

I’m not just making this stuff up of course; I got the idea yesterday after reading Karpathy’s tweet about Nanoclaw's contribution model (don’t submit PRs with features, submit PRs that tell an LLM how to modify the program). Now I can’t concentrate on my day job. Can’t stop thinking about my little Elixir BEAM project.

renegade-otter•1h ago
This has big "let's do this because we can" energy.

What is the BENEFIT of all this?

Let's use Blockchain instead of a database - because we can.

Let's create a maze of microservices - because we can.

Let's make every function a lambda function - because we can.

Let's make AI write code, run it, verify it, fix it, then run it again - because we can.

Let's burn untold amounts of energy to do simple things - because we can.

marginalia_nu•1h ago
Because we can? More like because I have equity in a company that sells this stuff.
gdulli•1h ago
Discretion will be the better part of the tech industry, if we ever reach that maturity level.
1-6•49m ago
To you, what's the point of spending countless billions on space exploration?
renegade-otter•43m ago
You can make that argument about every single thing that is wasteful but can be justified as "research".

Sure, every bit of f--ing around is research, but ROI is far from constant.

gdulli•27m ago
Good comp. Working with expensive materials and stuff that can explode while people are inside by necessity forces a greater scrutiny of good vs. bad ideas. You don't get that ideal balance between experimentation and wisdom when anyone can type anything into an editor at no cost.
otikik•1h ago
Why would I want to do that?
khalic•1h ago
Is this satire?
vjerancrnjak•59m ago
Funny how pydantic is used to parse but not validate, and then there are post-conditions after parsing, which you should really fold into the parse itself, or which could be enforced with a JSON schema and properly implemented constrained sampling on the LLM side.
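The "fold the check into the parse" point can be shown with stdlib-only code (a pydantic version would use a `BaseModel` with validators instead; the field names here are illustrative):

```python
# Sketch of "parse, don't validate": the parse step either yields a
# well-typed value or raises, so no separate post-condition is needed.
from dataclasses import dataclass

@dataclass(frozen=True)
class LineItem:
    description: str
    amount: float

def parse_line_item(raw: dict) -> LineItem:
    # Enforce the invariants while parsing, so downstream code never
    # re-checks them: non-empty description, non-negative numeric amount.
    desc = raw["description"]
    amount = float(raw["amount"])
    if not desc or amount < 0:
        raise ValueError(f"bad line item: {raw!r}")
    return LineItem(desc, amount)

item = parse_line_item({"description": "coffee", "amount": "3.50"})
print(item.amount)  # → 3.5
```

Once a `LineItem` exists, its invariants hold by construction; a trailing assertion on the same dict would be redundant.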
kkukshtel•59m ago
I wrote about something along these lines 3 years ago, but used the name "Heisenfunctions," which I think is better :)

https://kylekukshtel.com/incremental-determinism-heisenfunct...

A lot of this was also inspired by Ian Bicking's work here:

https://ianbicking.org/blog/2023/01/infinite-ai-array.html

bwestergard•58m ago
The "Grace" language is based on the same idea, but lets you get the full benefit of specifying static types.

https://github.com/Gabriella439/grace

It's still probably not a great idea.

alecco•57m ago
This is why RAM is 5x.
bilater•27m ago
Had a similar idea a couple of years ago but I think this is still tied to the old way of doing things. More like software 2.9 rather than 3.1.
