
Coding agent in 94 lines of Ruby

https://radanskoric.com/articles/coding-agent-in-ruby
153•radanskoric•6mo ago

Comments

rbitar•6mo ago
RubyLLM has been a joy to work with, so it's nice to see it being used here. This project is also great and will make it easier to build an agent that can fetch data outside of the codebase for context and/or experiment with different system prompts. I've been a personal fan of Claude Code, but this will be fun to work with.
radanskoric•6mo ago
Author here. The code I made took me 3 hours (including getting up to speed on RubyLLM). I also intentionally DIDN'T use a coding assistant to write it (although I use Windsurf in my regular work). :D

It's clearly not a full featured agent but the code is here and it's a nice starting point for a prototype: https://github.com/radanskoric/coding_agent

My best hope for it is that people will use it to experiment with their own ideas. So if you like it, please feel free to fork it. :)

RangerScience•6mo ago
This is very cool, somewhat inspiring, and (personally) very informative: I didn't actually know what "agentic" AI use was, but this did an excellent job (incidentally!) explaining it.

Might poke around...

What makes something a good potential tool, if the shell command can (technically) do anything - like running tests?

(or is it just the things requiring user permission vs not?)

tough•6mo ago
> What makes something a good potential tool, if the shell command can (technically) do anything - like running tests?

Think of it as -semantic- wrappers so the LLM can -decide- what action to take at any given moment, given its context, the user prompt, and the available tool names and descriptions.

Creating wrappers for the most-used basic tools can be useful, even if they all pipe to terminal Unix commands.

Also, giving it a specific knowledge base it can consult on demand, like a wiki of its own stack, etc.
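
For example, a tool that wraps running the tests behind a name and a description might look roughly like this (a sketch only, reusing the RubyLLM tool DSL from the article; the class name and prompt are made up):

    require "ruby_llm/tool"

    # Illustrative only: a named, described "run the tests" action the LLM
    # can decide to call, instead of being handed a raw shell.
    class RunTests < RubyLLM::Tool
      description "Runs the project's test suite and returns its output"

      def execute
        print "Run the test suite? (y/n) "    # permission step
        return { error: "User declined to execute the command" } unless gets.chomp == "y"

        { output: `bundle exec rspec 2>&1` }  # stdout + stderr go back to the LLM
      end
    end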

notpushkin•6mo ago
Also it’s safer than just giving unrestricted shell access to an LLM.
tough•5mo ago
That too. Ideally, autonomous agents will only be spawned in their own secure environments, using Docker, VMs, or POSIX/Unix security.

but yeah

radanskoric•6mo ago
Thanks, sharing my learnings on how coding agents work was my main intention with the article. Personally I was a bit surprised by how much of the "magic" is coming directly from the underlying LLM.

The shell command can run anything really. When I tested it, it asked me multiple times to run the tests and then I could see it fixing the tests in iterations. Very interesting to observe.

If I were to improve this to be a better Ruby agent (which I don't plan to do, at least not yet), I would probably try adding some RSpec/Minitest-specific tools that would parse the test output and present it back to the LLM in a cleaned-up format.
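
Roughly what I have in mind, as a hypothetical sketch (not in the repo; it leans on RSpec's JSON formatter and the same tool DSL):

    require "json"
    require "ruby_llm/tool"

    # Hypothetical RSpec-specific tool: run the suite with the JSON formatter
    # and hand the LLM only a compact summary of the failures.
    class RunRspec < RubyLLM::Tool
      description "Runs RSpec and returns a summary of any failures"

      def execute
        report = JSON.parse(`bundle exec rspec --format json 2>/dev/null`)
        failures = report["examples"]
          .select { |ex| ex["status"] == "failed" }
          .map do |ex|
            { spec: ex["full_description"],
              file: ex["file_path"],
              message: ex.dig("exception", "message") }
          end

        { summary: report["summary_line"], failures: failures }
      rescue JSON::ParserError => e
        { error: "Could not parse RSpec output: #{e.message}" }
      end
    end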

elif•6mo ago
Why stop there? Give it a Capybara tool and make it a full TDD agent.
radanskoric•6mo ago
That's a very neat idea, maybe even add something like browser-use to allow it to implement a Rails app and try it out automatically. I think you should try it. :)

I'm being serious. This sounds like a fun project but I have to turn my attention to other projects for the near future. This was more of an experiment for me, but it would be cool to see someone try out that idea.

RangerScience•6mo ago
Do you know of examples of other agents with more defined tools, to use as inspiration/etc?

(Like - what would it look like to clean up test results for an LLM?)

fullstackwife•6mo ago
This reminds me of PHP hello-world programs that would take a string from GET, use it as a path, read the file at that path, and return its contents in the response. You could make a website without using any knowledge about websites.

Agents are the new PHP scripts!

zeckalpha•5mo ago
RCE as a service!
Mystery-Machine•6mo ago
Just out of curiosity, I never understood why people do `ENV.fetch("ANTHROPIC_API_KEY", nil)` which is the equivalent of `ENV["ANTHROPIC_API_KEY"]`. I thought the whole point of calling `.fetch` was to "fail fast". Instead of assigning `nil` as default and having `NoMethodError: undefined method 'xxx' for nil` somewhere random down the line, you could fail on the actual line where a required (not optional) ENV var wasn't found. Can someone please explain?
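
For reference, the three behaviours side by side (plain Ruby, nothing specific to the article):

    ENV["MISSING_KEY"]            # => nil; fails later with a confusing NoMethodError
    ENV.fetch("MISSING_KEY", nil) # => nil; same behaviour, the default defeats the point of fetch
    ENV.fetch("MISSING_KEY")      # => raises KeyError right here, on the line that matters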
jaredsohn•6mo ago
There might be code later that says that if the Anthropic API key is not set, then turn off the LLM feature. That wouldn't make sense for this LLM-related code, but the concept makes sense when using various APIs in development.
riffraff•6mo ago
But if you do ENV["xxx"] you also get nil.

Using .fetch with a default of nil is what's arguably not very useful.

IMO it's just a RuboCop rule to use .fetch, which is useful in general for exploding on missing configuration, but not useful if a missing value is handled.

radanskoric•6mo ago
Author here. You're actually right here.

I took the code from RubyLLM configuration documentation. If you're pulling in a lot of config options and some have default values then there's value in symmetry. Using fetch with nil communicates clearly "This config, unlike those others, has no default value". But in my case, that benefit is not there so I think I'll change it to your suggestion when I touch the code again.

zeckalpha•5mo ago
This may be a Pythonism, where the exception-raising convention is the inverse of Ruby's.

{}["key"] # KeyError in Python

sagarpatil•6mo ago
I don’t understand the hype in the original post.

OpenAI launched function calls two years ago and it was always possible to create a simple coding agent.

radanskoric•6mo ago
Author here. The part about coding agents that wasn't clear to me was how much of the "magic" is in the underlying LLM and how much in the code around it making it into an agent.

When I realised that it's mostly in the LLM I found that a bit surprising. Also, since I'm not an AI Engineer, I was happy to realise that my "regular programming" skills would be enough if I wanted to build a coding agent.

It sounds like you were aware of that for a while now, but I and a lot of other people weren't. :)

That was my motivation for writing the article.

ColinEberhardt•6mo ago
Great post, thanks for sharing. I wrote something similar a couple of years ago, showing just how simple it is to work with LLMs directly rather than through LangChain, adding tool use etc …

https://blog.scottlogic.com/2023/05/04/langchain-mini.html

It is of course quite out of date now as LLMs have native tool use APIs.

However, it proves a similar point to yours: in most applications, 99% of the power is within the LLM. The rest is often just simple plumbing.

radanskoric•6mo ago
Thanks for sharing this. The field moves fast, so yes, it's out of date, but it's useful to see how the tools concept evolved. Especially since I wasn't paying attention to that area of development back when you wrote your article. Very interesting.
thih9•6mo ago
> Claude is trained to recognise the tool format and to respond in a specific format.

Does that mean that it wouldn’t work with other LLMs?

E.g. I run Qwen3-14B locally; would that or any other model similar in size work?

simonw•6mo ago
Qwen3 was trained for tool usage too. Most models are these days.

https://qwenlm.github.io/blog/qwen3/#agentic-usages

radanskoric•6mo ago
It would work with most other tool-enabled LLMs. RubyLLM abstracts away the format. Some will work better than others, depending on the provider, but almost all have tool support.

Claude is just an example. I pulled the actual payloads by looking at what is actually being sent to Claude and what it responds with. It might vary slightly for other providers. I used Claude because I already had a key ready from trying it out before.

thih9•6mo ago
> return { error: "User declined to execute the command" }

I wonder if AIs that receive this information within their prompt might try to change the user’s mind as part of reaching their objective. Perhaps even in a dishonest way.

To be safe I'd write "error: Command cannot be executed at this time", or "error: Authentication failure". Unless you control the training set, or don't care about the result.

Interesting times.
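
Concretely, the refusal branch of the shell tool could return a neutral message instead. A paraphrased sketch (the surrounding code only approximates the article's tool; just the wording of the error changes):

    def execute(command:)
      print "Execute `#{command}`? (y/n) "
      unless gets.chomp == "y"
        # Neutral wording: don't tell the model that a human said no,
        # so there's nobody for it to try to persuade.
        return { error: "Command cannot be executed at this time" }
      end

      { output: `#{command} 2>&1` }
    end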

radanskoric•6mo ago
If a certain user is susceptible to having the LLM convince them to run an unsafe command, I fear we can't fix that by trying to trick the LLM. :D

Either the user needs to be educated or we need to restrict what the user themselves can do.

johnisgood•6mo ago
I am leaning towards the former. Please let us have nice things despite the people unwilling to learn.
radanskoric•6mo ago
Why are people always the reason why we can't have nice things... :D
johnisgood•6mo ago
Side-note: I do not understand the inclusion of "N lines of X". You import a library, which presumably consists of many lines. I do not see the point. It would only be true that this is 94 lines of Ruby if there were no `require "ruby_llm/tool"` at the top.
monooso•6mo ago
Given that this post is a response to an article about achieving the same in "N lines of Go" (also using a library), it seems like an appropriate title.
johnisgood•6mo ago
The original post uses "github.com/anthropics/anthropic-sdk-go"; the Ruby version uses a different library, does it not? If they are two different libraries, then the comparison does not make much sense.
radanskoric•6mo ago
I didn't put the number into the title to make it a competition. LoC is a poor metric. I put it to communicate to the reader that they won't have to spend a lot of time reading the article to get a full understanding.

I always put extra effort into trying to make my blog posts shorter without sacrificing the quality. I think good technical writing should transfer the knowledge while requesting the least amount of time possible from the reader.

johnisgood•6mo ago
I know; I was only responding to the comment, not trying to claim that you were attempting to make it a competition. My bad if it came across as such.
radanskoric•6mo ago
It's good that you commented. I see more than a few people are getting caught up on the number of lines so it's good that I clarify.
monooso•5mo ago
I simply meant that I don't consider the title misleading; it's in keeping with the format of the original title and both solutions use third-party libraries to communicate with the external service.
johnisgood•5mo ago
Fair enough. :)
zoky•6mo ago
That actually is exactly the point. It has to do with the expressiveness of the language as well as how much you can do with the available toolset. If I showed you a 200-line program to play hangman written in C and a 2000-line equivalent program written in assembly, it wouldn’t really be useful to take into account the 15 million lines of code in the C compiler when trying to compare the two languages.
johnisgood•6mo ago
I do not think it is meaningful. If you have such a library in C, or Common Lisp, or Forth, then using that library is probably always going to be just a few lines of code. The library just has to have a good enough API.
radanskoric•6mo ago
It depends on the flexibility of the API. If you're making an API for just one specific use case, you can make it a one liner in any language, even assembler: just push the exact specific functionality into the one function.

Language expressiveness is more about making the interface support more use cases while still being concise. And Ruby is really good at this, better than most languages.

johnisgood•6mo ago
I don't disagree, I do find Ruby readable, and it was the first language that caught my eye back when I was a kid, precisely because of its readability and expressiveness.

I suppose we have to define expressiveness (conciseness, abstraction power, readability, flexibility?), because Ruby, for example, has human-readable expressiveness, Common Lisp has programmable expressiveness, and Forth has low-level expressiveness, so they all have some form of expressiveness.

I think Ruby, Crystal, Rebol 3, and even Nim and Lua have a similar form or type of expressiveness.

radanskoric•6mo ago
Yes, exactly, Ruby has the human readability expressiveness.

If you say that expressivity is the ability to implement a program in fewer lines of code, then Ruby is more expressive than most languages, but less so than, for example, Clojure. Well-written Clojure can be incredibly expressive. However, you can argue that for most people it's going to be less readable than a comparable Ruby program.

It's hard to talk about these qualities as there's a fair amount of subjectivity involved.

johnisgood•5mo ago
I think I would be able to read Ruby better than Clojure.

But yeah, you are right, there is too much subjectivity involved in all of this. :)

Anyways, I hope you know I did not mean to use any of my comments against you, I was just wondering.

radanskoric•5mo ago
No worries, I didn't think that's the case. :)

It's an interesting conversation.

radanskoric•6mo ago
I put the lines of code into the title to communicate to the reader that they can get a good understanding just by reading this article.

Basically, what I wanted to say was: "Here is an article on building a prototype coding agent in Ruby that explains how it works and the code is just 94 lines so you'll really be able to get a good understanding just by reading this article."

But that's a bit too long for a title. :)

When trying to understand a certain concept, it's very useful to be able to see just the code that's relevant to that concept. Ruby's language design enables that really well. Also, the Ruby community in general puts a lot of value on readability, which is why with Ruby it's often possible to eliminate almost all of the boilerplate while still keeping the code relatively flexible.

melvinroest•6mo ago
The way I'd create extra functionality is to give it command-line access with a permission step in between. I'd then create a folder of useful scripts and give it permission to execute those (see the sketch at the end of this comment).

You can make it much more than just a coding agent. I personally use my own LLMs for data analysis by integrating them with some APIs.

These types of LLM systems are basically acting as a frontend now, one that responds to very fuzzy user input. Such an LLM can reach out to your own defined functions (aka a backend).

The app space that I think is interesting, and that I'm working on, is combining these systems with some solid data to create advising/coaching/recommendation systems.

If you want some input on building something like that, my email is in my profile. Currently I'm playing around with an LLM chat interface with database access that gives study advice based on:

* HEXACO data (personality)

* Motivational data (self-determination theory)

* ESCO data (skills data)

* Descriptions of study programs described in ESCO data

If you want to chat about creating these systems, my email is in my profile. I'm currently also looking for freelance opportunities based on things like this, as I think there are many LLM applications where we've only scratched the surface.
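
A rough sketch of the scripts-folder idea from the first paragraph (hypothetical; it assumes the same RubyLLM tool DSL as the article, and SCRIPTS_DIR is whatever folder you curate):

    require "ruby_llm/tool"

    # Hypothetical: only scripts from an allow-listed folder can run,
    # and each run still goes through a permission step.
    SCRIPTS_DIR = File.expand_path("~/agent_scripts")

    class RunScript < RubyLLM::Tool
      description "Runs one of the pre-approved scripts in #{SCRIPTS_DIR}"
      param :name, desc: "File name of the script to run"

      def execute(name:)
        path = File.join(SCRIPTS_DIR, File.basename(name)) # basename blocks ../ escapes
        return { error: "No such script" } unless File.exist?(path)

        print "Run #{path}? (y/n) "
        return { error: "Command cannot be executed at this time" } unless gets.chomp == "y"

        { output: `#{path} 2>&1` }
      end
    end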

elif•6mo ago
Thank you for showing off why Ruby is useful not just in the current year, but particularly in the current moment of AI. When you're dealing with code written with hallucinations, you want a language that's easy to understand quickly (of which Ruby is S tier), where out-of-place behavior cannot hide in code so repetitive and unnecessary that your mind tries to skip over it.
radanskoric•6mo ago
That's an excellent point.

Code was always read more than written. With AI it shifts even more towards reading so language readability becomes even more important. And Ruby really shines there.

dontlaugh•5mo ago
Ruby, the language famous for lots of difficult to understand runtime “magic”?
radanskoric•5mo ago
It's a sharp knife. You can create a messy nightmare or a clean super readable codebase, it's up to how good the author is.
dontlaugh•5mo ago
So Ruby isn’t actually a good target for stochastic parrots, after all.

You’d want the opposite, a language with automatically checked constraints that is also easy to read.

hoipaloi•5mo ago
I'll take ruby, give me the sharp knife, I know what I'm doing...
matt_s•6mo ago
Wow, so that RubyLLM gem makes writing an agent mostly about basic IO operations. I had somehow thought there needed to be a deep understanding of LLMs and/or AI APIs to build things like this, where I would need to research and read a lot of docs, stay up to date on the endless updates to the various AI systems, etc. The example from the article is about files and directories, but this same concept could apply to any text inputs, like data out of a Rails app.
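
For instance, the same tool pattern pointed at a Rails model instead of the filesystem might look something like this (entirely hypothetical; Order is a made-up ActiveRecord model, and I'm assuming the same RubyLLM tool DSL as in the article):

    require "ruby_llm/tool"

    # Hypothetical: expose read-only Rails data to the LLM the same way
    # the article's tools expose files and directories.
    class RecentOrders < RubyLLM::Tool
      description "Returns the most recent orders as JSON"
      param :limit, desc: "How many orders to return (e.g. 10)"

      def execute(limit: 10)
        { orders: Order.order(created_at: :desc).limit(limit.to_i).as_json }
      rescue => e
        { error: e.message }
      end
    end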
radanskoric•6mo ago
That was my misunderstanding as well. That's why I wrote the article.

Btw, it's not even about the RubyLLM gem. The gem abstracts away the calling of various LLM providers and gives a very clean and easy to use interface. But it's not what gives the "agentic magic". The magic is pretty much all in the underlying LLMs.

Seeing all the claims made by some closed source agent products (remember the "world's first AI software engineer"?) I thought that a fair amount of AI innovation is in the agent tool itself. So I was surprised when I realised that almost all of the "magic" parts are coming from the underlying LLM.

It's also kind of nice because it means that if you wanted to work on an agent product, you could do that even if you're not an AI-specialised engineer (which I am not).
