I think it's a good example of the kind of internal tools the article is talking about. I would not have spent the time to build this without Claude making it much faster to build stand-alone projects, and I would not have been able to build the agent that does the English -> policy output without LLMs.
This has been my main use case for AI. I have lots of ideas for little tools to take some of the drudgery out of regular work tasks. I'm a developer and could build them, but I don't have the time. However, they're simple enough that I can throw them together in basic script form really quickly with Cursor. Recently I built a tool to analyse some files, pull out data, and give it to me in the format I needed: a relatively simple Python script (a rough sketch of the idea is below). Then I used Cursor to package it with a simple file-input UI in an Electron app so I could easily share it with colleagues. Like I say, I've been a developer for a long time but had never written Python or packaged an Electron app, and this made it so easy. The whole thing took less than 20 minutes, and it was quick enough that I could do it as part of the task I was doing anyway rather than as additional work I needed to find time for.
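For a sense of scale, the script half of that kind of tool is often not much more than this. The file names, fields, and output format here are hypothetical stand-ins, not the actual tool:

```python
# Minimal sketch of a "pull data out of files, emit it in the format I need" script.
# Everything here (directory name, column names, output file) is illustrative.
import csv
import json
from pathlib import Path


def extract_records(input_dir: str) -> list[dict]:
    """Pull the fields we care about out of every CSV in a directory."""
    records = []
    for path in Path(input_dir).glob("*.csv"):
        with path.open(newline="") as f:
            for row in csv.DictReader(f):
                records.append({
                    "source": path.name,
                    "id": row.get("id"),
                    "value": row.get("value"),
                })
    return records


def write_report(records: list[dict], out_path: str) -> None:
    """Write the extracted data in whatever format the downstream task needs."""
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)


if __name__ == "__main__":
    write_report(extract_records("./input_files"), "report.json")
```

Wrapping something like this in an Electron file-picker is mostly boilerplate, which is exactly the part an assistant is good at generating.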
I'm not disagreeing with the overall post, but from closely observing end users of LLM-backed products for a while now, I think this needs nuance.
The average Joe, be it a developer, a random business type, a school teacher, or your mum, is very bad at telling an LLM what it should do.
- In general, people are bad at expressing their thoughts and desires clearly. Frontier LLMs are still mostly sycophantic, so in the absence of clear instructions they will make things up. People are prone to treating the LLM as a mind reader, without critically assessing whether their prompts are self-contained and sufficiently detailed.
- People are pretty bad at estimating what kind of data an LLM understands well. In general, data literacy and basic data-manipulation skills are beneficial when the use case requires operating on data beyond natural-language prompts. This is not a given across user bases.
- Very few people have a sensible working model of what goes on in an autoregressive black box, so they have no intuition for managing context.
User education still has a long way to go, and IMO it is a big determining factor in whether people get any use at all from the shiny new AI stuff that gets slathered onto every single software product these days.
Free-form chat is pretty terrible. People just want the thing to (smartly) take actions. One or two buttons that do the thing, no prompting involved, is much less complicated.
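To make the "one button, no prompting" idea concrete, here is a rough sketch of what such an action looks like in code. The prompt text and the call_llm() helper are hypothetical stand-ins, not any particular product's API:

```python
# Sketch: the UI exposes a single action, and the prompt lives in code,
# invisible to the user. Prompt wording and call_llm() are illustrative only.

SUMMARIZE_PROMPT = (
    "Summarize the following support ticket in three bullet points, "
    "then suggest one next action:\n\n{ticket_text}"
)


def call_llm(prompt: str) -> str:
    # Placeholder for whatever client the product actually uses
    # (OpenAI, Anthropic, a local model, ...).
    return "<model response for: " + prompt[:40] + "...>"


def on_summarize_clicked(ticket_text: str) -> str:
    # The user never writes or sees a prompt; they just click "Summarize".
    return call_llm(SUMMARIZE_PROMPT.format(ticket_text=ticket_text))


if __name__ == "__main__":
    print(on_summarize_clicked("Customer reports login fails after password reset."))
```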
There are thousands of wrappers around LLMs masquerading as AI apps for specialized use cases, but the real performance of these apps is bottlenecked only by the LLM's performance, and their UIs generally just get in the way of the direct LLM access/feedback loop.
To work with LLMs effectively you need to understand how to craft good prompts, and how to read/debug the responses.
And everyone is being sold on this tech being pure magic, but some questions have an irreducible complexity that you have to deal with, and that still takes effort.
So this is nice
> productionizing their proof-of-concept code and turning it into something people could actually use.
because it's so easy to glamorize research while ignoring what actually turns ideas into products.
This is also the problem. It's a looking-back perspective, and it's so easy to miss the forest for the trees when you're down in the weeds. I'm speaking from experience, and it's the feeling I get when reading the post.
In the grand scheme of things our current "AI" will probably look like a weird detour.
Note that a lot of these perspectives are presented (and formed) without a timeline in mind. We're actually witnessing timelines getting compressed. It's easy to see the effects of one track while missing the general trend.
This take is looking at (arguably over-focusing on) the LLM timeline, while missing everything else that is happening.
It’s easy now to get something good enough for use by you, friends, colleagues etc.
As it’s always been, developing an actual product is at least one order of magnitude more work. Maybe two.
But both internal tools and full products are made one order of magnitude easier by AI. Whole products can be made by tiny teams. And that’s amazing for the world.
No. Not at all. Some things may have gotten easier, but a lot of things got orders of magnitude harder: maintaining bug bounty programs, for example, or checking the authenticity and validity of written content on blogs.
Calling LLMs a huge win for humanity is incredibly naive, given that we don't know the long-term effects these tools are having on creativity in online spaces, the authenticity of user bases, etc.
> AI tools like KNNs are very limited but still valuable today.
I've seen discussions calling even feed-forward CNNs, Monte Carlo methods, or GANs "antiquated" because transformers and diffusion have surpassed their performance in many domains. There is a hyper-fixation on large transformers and a sentiment that they somehow replace everything that came before, in every domain.
It's a tool that unlocks things we could not do before. But it doesn't do everything better. It does plenty of things worse (at least once power and compute are taken into account). Even if it can do algebra now (as is so proudly proclaimed in the benchmarks), Wolfram Alpha remains, and will continue to remain, far better suited to the task. Even if it can write code, it does NOT replace programming languages, as I've seen people claim in very recent posts here on HN.
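To ground the KNN point above, here is roughly how little it takes to put a useful classical model to work. This is a toy sketch assuming scikit-learn is available, and the iris dataset is just a stand-in for a real task:

```python
# A k-nearest-neighbours classifier on a toy dataset: no GPU, no transformer,
# trained and evaluated in milliseconds. Illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(f"test accuracy: {knn.score(X_test, y_test):.2f}")
```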
nh23423fefe•2h ago
this list isn't learnings.
jampa•1h ago
EDIT: Removed the video because a bug in Substack causes the space bar to play the video instead of scrolling down. Sorry for the unintentional jumpscare.