Might poke around...
What makes something a good potential tool, if the shell command can (technically) do anything - like running tests?
(or it is just the things requiring user permission vs not?)
Think of it as -semantic- wrappers so the LLM can -decide- what action to take at any given moment given its context, the user prompt, and the available tool names and descriptions.
creating wrappers for the most-used basic tools can be useful (see the sketch below), even if they all just pipe to unix commands in the terminal.
also giving it a specific knowledge base it can consult on demand, like a wiki of its own stack, etc.
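Roughly, such a wrapper could look like this. Only the Tool/param/execute shape comes from the article code quoted further down in this thread; the class name, the description call, and the command it runs are my own guesses, so treat it as a sketch rather than the article's actual code:

```ruby
require "ruby_llm"  # the gem the article is built on

# Illustrative shell-wrapper tool: a semantic name + description the LLM can
# pick from, even though it ultimately just shells out to a unix command.
class RunShellCommand < RubyLLM::Tool
  description "Runs a shell command and returns its output"
  param :command, desc: "The command to execute"

  def execute(command:)
    `#{command}` # pipes straight through to the underlying unix command
  end
end
```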
Agents are the new PHP scripts!
also the article:
> it uses RubyLLM
(which itself adds a lot more code on top of the 400 lines claimed by the author).
> Ruby leads to programmer happiness...
Am I the only one who finds Ruby code cryptic and less clear than JS, Python, and Rust?
As a person who's dabbled a bit in Rust and Ruby, I'm surprised people find Ruby intuitive and simple. For example:
> def execute(path:)
why the "incomplete" colon?
> module Tools
> class ListFiles < RubyLLM::Tool
what's with the "<"?
> param :command, desc: "The command to execute"
again, the colons are confusing...
Some of the complaints here are just about Ruby syntax, which: shrug.
> why the "incomplete" colon?
Just ruby's syntax to denote 'keyword arguments', i.e. when calling this method, you must explicitly use the parameter name, like so: execute(path: "/some/path")
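A quick sketch (the method name is just borrowed from the article's snippet):

```ruby
# Keyword argument: the caller has to spell out the parameter name.
def execute(path:)
  puts "listing #{path}"
end

execute(path: "/some/path")   # ok
# execute("/some/path")       # ArgumentError: missing keyword: :path
```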
> class ListFiles < RubyLLM::Tool
> what's with the "<"?
Just ruby's way of denoting inheritance i.e. class A < B means class A inherits from B
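For example, with toy classes (not from the article):

```ruby
class Tool
  def describe
    "I am a #{self.class}"    # any subclass gets this method for free
  end
end

class ListFiles < Tool        # "<" means ListFiles inherits from Tool
end

ListFiles.new.describe        # => "I am a ListFiles"
```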
> param :command, desc: "The command to execute"
> again, the colons are confusing...
The colons can be confusing, but you get used to them. The first one in :command is denoting a symbol. You could think of it as just a string. Any method that expects a symbol could be rewritten to use a string instead, but symbols are a tad more elegant (one less keypress). The second one, desc:, is a keyword argument being passed to the param method. You could rewrite it as param(:command, desc: "The command to execute").
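Something like this, if you imagine what `param` might look like on the receiving end (the body is made up; only the calling syntax mirrors the article):

```ruby
# Hypothetical receiver for the quoted call; shows the symbol + keyword argument.
def param(name, desc: nil)
  puts "declared param #{name.inspect} - #{desc}"
end

param :command, desc: "The command to execute"    # name arrives as a Symbol
param "command", desc: "The command to execute"   # a String would work just as well here
```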
I sometimes wonder what ruby would be like without symbols, and part of me thinks the tradeoff would be worth it (less elegant, since there'd be "strings" everywhere instead of :symbols, but it would be a bit easier to learn/read).
1. They refer to the same object in memory, so multiple usages of a symbol by the same name (e.g. `:param`) do not add additional memory overhead.
2. Symbols are also immutable in that they cannot be mutated in any way once they are referenced/created.
These properties can be useful in some contexts, but in practice they're effectively used as immutable constant values.
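In irb that looks roughly like:

```ruby
:param.equal?(:param)   # => true  (same object every time, no new allocation per usage)
:param.frozen?          # => true  (symbols are always frozen)
# :param << "x"         # NoMethodError: Symbol has no mutating methods

# which is why they end up being used as constant-ish values:
state = :pending
state == :pending       # => true
```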
IMO it's because they mostly just don't think about the chaos they're creating. You encounter a function call that ends in a hash. Does the called function use it as a hash, or does it explode the hash to populate some extra trailing parameters? You have no way to know without going and looking. To me that's super dumb. To them it's just another day in unexpected behavior land.
Pretty hard to grow and learn new concepts if you immediately label anything you don’t understand yet as “super dumb”
I assume this is referring to passing hashes in as parameters to methods. Ruby 3 made this more explicit; you must use ** to convert a hash into keyword arguments, and you must use {} around your key/values if the first argument is a hash.
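Roughly (method names made up to show the Ruby 3 behavior):

```ruby
def with_keywords(path:, verbose: false)
  [path, verbose]
end

def with_hash(options)
  options
end

opts = { path: "/tmp", verbose: true }

with_keywords(**opts)         # ok: ** explicitly splats the hash into keyword arguments
# with_keywords(opts)         # ArgumentError in Ruby 3: hashes are no longer auto-converted

with_hash({ path: "/tmp" })   # braces make it unambiguous that a single Hash is being passed
```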
The stated optimization from Matz (who created Ruby) is “developer happiness”.
The important optimization for me is “fidelity to business logic”, e.g. less cruft and Ruby's syntactic sugar mean you could sit at your computer, read your business rules (in code) out loud in real time, and be understood by a non-dev.
You say you "dabbled" a bit in Ruby and then proceed to demonstrate your lack of understanding of basic Ruby syntax.
Ruby symbols (and atoms in Lisp / Erlang / Elixir) are a performance hack, they are basically "interned" strings. They're immutable and are mostly used for commonly-repeated values, e.g. names of keyword arguments, enum values etc. Unlike strings, they're supposed to be treated as a single, opaque value, with no other properties beyond the name they represent. Most languages don't give you APIs to uppercase or slice symbols, though you can usually convert to strings and back if you really need to.
The performance advantage of symbols/atoms is that they're represented as pointers or offsets into a global pool, which contains their names. This means repeated instances of the same symbol take up much less memory than they would as ordinary strings, as each instance is just a single pointer, and the actual name it represents is only stored once, in the global pool. Comparing symbols is also much faster than strings, as two symbols representing the same name will always have the same offset, turning an O(n) character-by-character comparison into a simple O(1) comparison of pointers.
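Concretely, in CRuby (without frozen string literals enabled):

```ruby
:path.equal?(:path)                    # => true  (one interned entry; identity comparison)
"path".equal?("path")                  # => false (two separate String allocations)

:path.object_id == :path.object_id     # => true
"path".object_id == "path".object_id   # => false

"path" == "path"                       # => true, but compared character by character
```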
It's really strange to me that so many high-level languages, which pride themselves on "developer happiness" and "mental compression", force you to think about strings in this way, of all things.
I think there's something to be said for having a single character that can be used to represent identifiers as strings. Giving your internal string type an interned representation (with automatic interning of literals and a method to intern arbitrary strings) is also an interesting idea, though I know far too little about interpreter design to have an opinion on how much performance difference it would make. Conflating those two ideas just seems like a really bad deal to me.
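For what it's worth, Ruby does already expose opt-in interning for ordinary strings; this is my understanding of CRuby 2.5+ behavior, so treat it as a sketch:

```ruby
a = -"some identifier"   # String#-@ returns a frozen, deduplicated copy
b = -"some identifier"
a.equal?(b)              # => true: both references point at the same interned string

# Literals can also be frozen (and, in CRuby, deduplicated) file-wide with the magic comment:
#   # frozen_string_literal: true
```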
Useless metric.
As CTO at Redo do you demand everything written in assembly? Does your entire company’s code run in one file or do you use abstraction to simplify?
I’m not quite clear on how expressing a really complex and paradigm-shifting approach like agents in a more concise way is a bad thing.