Analyzing TypeScript Errors
I'm currently focused on the actions/user.ts file, where I've identified a few issues. Specifically, I need to address a "Cannot find name" error related to UpdateUserDetailsFormState, and confirm that the intended target is UpdateUserDetailsFormInput. There might be some other discrepancies in the file that I need to resolve.
Debugging the Import
I've been trying to pinpoint the Cannot find name 'UpdateUserDetailsFormState' error. The type is definitely defined and imported, but it's not being recognized in the updateUserDetails function's arguments. I'm suspecting a scoping issue with the import statement within actions/user.ts. I also need to verify that UpdateUserDetailsFormState is correctly defined with the fieldValues property as optional as per the schema.
Definitely a downgrade over the old version, though really it’s just Google deciding to offer less for free.
As an experiment I just asked it to "recreate the early RPG game Pedit5 (https://en.wikipedia.org/wiki/Pedit5), but make it better, with a 1970s terminal aesthetic and use Imagen to dynamically generate relevant game artwork" and it did in fact make a playable, rogue-type RPG, but it has been stuck on "loading art" for the past minute as I try to do battle with a giant bat.
This kind of thing is going to be interesting for teaching. It will be a whole new category of assignment - "design a playable, interactive simulation of the 17th century spice trade, and explain your design choices in detail. Cite 6 relevant secondary sources" and that sort of thing. Ethan Mollick has been doing these types of experiments with LLMs for some time now and I think it's an underrated aspect of what they can be used for. I.e., no one is going to want to actually pay for or play a production version of my Gemini-made copy of Pedit5, but it opens up a new modality for student assignments, prototyping, and learning.
Doesn't do anything for the problem of AI-assisted cheating, which is still kind of a disaster for educators, but the possibilities for genuinely new types of assignments are at least now starting to come into focus.
The unsolved issue is scale. 5-10 minute Q&As work well, but are not really doable in a 120 student class like the one I'll be teaching in the fall, let alone the 300-400 student classes some colleagues have.
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
- AI Studio: "the fastest place to start building with the Gemini API"
- Firebase Studio: "Prototype, build, deploy, and run full-stack, AI apps quickly"
"this is brilliant! I'll assign multiple teams to the same project. Let the best team win! And then the other teams get PIP'd"
…if you do it before publicly releasing and spending marketing budget on both products, giving them a full software lifecycle and a dedicated user-base that no longer trusts you to keep things running.
Honestly, even in that case it sucks to be a developer there knowing there’s a 50% chance that the work you did meant nothing.
Does it have to mean nothing? If there is a review at the end of the exercise, good parts from each of the teams can be explored for integration to build the best final product. Of course all these things are probably as much political as technical so it is always complicated.
Canvas: "the fastest place to start building with the Gemini APP"
Also, did you hear about Jules?
Why does Google suck so much at product management?
AI Studio: Mostly a playground for building mini-apps that integrate with the Gemini APIs. A big sell seems to be that you don't need an API key, instead you just build your app for testing, and the access is injected somehow. The UI is more stripped down than an IDE and I assume you'd only use it to prototype basic things. I don't know why there are "deployment" options in the UI, frankly.
Firebase Studio: Mostly a sales funnel for Firebase, I assume, but this is a traditional prototyping/development tool that uses AI to build a product. It supports front-end and backend code. This also has a chat bot, but it's more of a web IDE than a chat-first interface.
Gemini Canvas: This is gemini-the-chatbot writing mini-web-apps in a side-panel. The use case seems to be visualization and super basic prototyping. I've used it to make super simple bespoke tools like a visualizer for structured JSON objects for debugging, or an API tester. The HTML is served statically from a google domain, and you can "remix" versions created by others with your own prompts.
Jules - Experimental tool that writes code in existing codebases by handling full "tickets" or tasks in one go. Never used it, so I don't know the interface. I think it's similar to Codex though.
Gemini Code Assist - their version of a copilot. I think it's also integrated or cross-branded with
Vertex AI API, Gemini API - these are just APIs for models.
"Te harsh jolt of the cryopod cycling down rips you"
"ou carefully swing your legs out"
I find it really interesting that it's like 99% there, and the thing runs and executes, yet the copy has typos.
I don't need it inserting console.logs and alert popups with holocaust denials and splash screens with fake videos of white genocide in my apps.
Some Indian twitterers found a way to get it to utter Hindi profane words, that's probably the most controversial thing I know about it.
https://www.theguardian.com/technology/2025/may/14/elon-musk...
Running LLMs costs a stupid amount of money, beyond just the stupid amount of money to train them. They have to recoup that money somewhere.
But be sure to connect Studio to Google Drive, or else you will lose all your progress.
All the copying and pasting is killing me.
What kind of smooth brain hears, “they train AI on your ideas and code, humans read your ideas and code, and you agree not to compete back against this thing a multi trillion dollar company just said can do everything, which competes with you,” and says yes? Oh, the smooth brain who doesn’t even realize, because it’s all buried in “additional” legal CYA documents.
ChatGPT still dominates the reach test since I can at least opt out of model training without losing chat logs, even though I have to agree not to compete with the thing that competes with me. Google is like a corporate version of a gross nerd you tolerate because they’re smart, even though they stalk you weirdly.
What a disgrace, we all ought to be sick about creating a legalese-infused black mirror dystopia, racing to replace our own minds with the latest slop-o-matic 9000 and benefit the overlords for the brief instant they count their money while the whole ecosphere is replaced by data centers
It’s like somehow the most magical tech in history (LLMs) comes along and gets completely shafted by elite grifter-tier scumbaggery. Chat bot speech is more protected than human speech these days, Google doesn’t give a crap about privacy, it’s all about the slimy hidden-opt-out of getting everything fed into the slop machine, and break everything if you do.
“Gee, should the app break if the user doesn’t want us to read their shit?” “Yeah, that sounds good, ain’t nobody got time to categorize data into two whole buckets, ship it!”
“How about we make a free better app that doesn’t even have the option of us not reading their shit?” “Oh, yeah, that’ll really help the KPIs!”
Sorry guys, yes, Claude is the best model, but your lack of support for structured responses left me no choice.
I had been using Claude in my SaaS, but the API was so unreliable I'd frequently get overloaded responses.
So then I put in fallbacks to other providers. Gemini Flash was pretty good for my needs (and significantly cheaper), but it failed to follow the XML schema in the prompt that Claude could follow. Instead, I could just give it a pydantic schema to constrain its output.
The trouble is the Anthropic APIs just don't support that. I tried using litellm to paper over the cracks but no joy. However, OpenAI does support pydantic.
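For the curious, this is roughly what the pydantic route looks like with Gemini and OpenAI. A minimal sketch: the schema, model names, and exact config shape here are my own assumptions, so check the current SDK docs before copying.

    from pydantic import BaseModel
    from google import genai          # google-genai SDK
    from openai import OpenAI

    class Extraction(BaseModel):      # hypothetical schema, purely for illustration
        title: str
        summary: str

    # Gemini: constrain the response with the pydantic model directly
    gemini = genai.Client()           # assumes GEMINI_API_KEY is set in the environment
    g = gemini.models.generate_content(
        model="gemini-2.0-flash",
        contents="Summarise this article: ...",
        config={"response_mime_type": "application/json", "response_schema": Extraction},
    )
    print(g.parsed)                   # parsed Extraction instance

    # OpenAI fallback: the same pydantic model via structured outputs
    oai = OpenAI()
    o = oai.beta.chat.completions.parse(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarise this article: ..."}],
        response_format=Extraction,
    )
    print(o.choices[0].message.parsed)

One schema object drives both providers, which is exactly what I couldn't get out of Anthropic.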
So I was left with literally needing twice as many prompts to support Gemini and Anthropic, or dropping Anthropic and using Gemini with OpenAI as a fallback.
It's a no-brainer.
So you guys need to pull your fingers out and get with the programme. Claude being good but also more expensive, and not compatible with structured outputs the way other APIs are, is costing you customers.
Shame, but so long for now...
pjmlp•1mo ago
This is exactly what I see coming: between the marketing and the reality of what the tooling can actually deliver, we will eventually reach the next stage of compiler evolution, going directly from AI tools to applications.
We are living through a development jump like when Assembly developers got to witness the adoption of FORTRAN.
Language flamewars are going to be a thing of the past, replaced by model wars.
It might take a few cycles, but it will come nonetheless.
xnx•1mo ago
I'm hoping this will allow domain experts to more easily create valuable tools instead of having to go through technicians with arcane knowledge of languages and deployment stacks.
cjbgkagh•1mo ago
That said it seems like both domain expertise and the ability to create expert systems will be commoditized at roughly the same time. While domain experts may be happy that they don’t need devs they’ll find themselves competing against other domain experts who don’t need devs either.
glitchc•1mo ago
Obligatory video (sound familiar?): https://www.youtube.com/watch?v=oLHc7rlac2s
cjbgkagh•1mo ago
I wouldn’t call expert systems AI even though the early use of AI referred to symbolic reasoners used in expert systems.
If you are capturing domain knowledge from an expert and creating a system around it, what would you call that? I think modern AI will help deliver on the promise of expert systems, and I don’t think modern AI obviates the utility of such systems. Instead of a decision support system for human users it’s a decision support system for an AI agent. The same AI agent can interface with human users with a more familiar chat interface - hence acting as a bridge.
Most users will not be able to write Multidimensional Expressions or SPARQL queries and with an AI intermediary they won’t need to.
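As a rough sketch of what that bridge can look like in code (the prompt, model name, and endpoint here are just illustrative, and a real system would validate the generated query before running it):

    from openai import OpenAI
    from SPARQLWrapper import SPARQLWrapper, JSON

    client = OpenAI()
    question = "Which programming languages were influenced by Java?"

    # 1. The user asks in plain English; the LLM drafts the SPARQL for them.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            "Write a SPARQL query against DBpedia that answers: " + question +
            " Return only the query, with no explanation."}],
    )
    query = resp.choices[0].message.content

    # 2. The query runs against an endpoint the user never has to learn.
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(query)
    sparql.setReturnFormat(JSON)
    print(sparql.query().convert())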
glitchc•1mo ago
tl;dr: expert systems as a concept were hot! As an actual implementation, they were a colossal failure. The new AI hype train has always contained echoes of expert systems, and I was giddy with excitement to see someone complete the loop. The serpent eats its own tail, after all. Which just goes to show that folks would be considerably more enlightened if they just read a bit more history.
cjbgkagh•1mo ago
It was in general a library science thing, like search engines were. Even today I wish I could disambiguate my search queries to specify which word overloading I'm referring to, be it Java the language, the island, or the coffee.
I’ve spent a decent amount of time trying to introduce these concepts to regular people and have long considered it generally hopeless. I went on to work in ML and I’ve long thought it would be easier to teach a computer to use these systems than regular people. At least AI is at a point now where it can act as a bridge.
mark_l_watson•1mo ago
suddenlybananas•1mo ago
Sounds like an absolute nightmare for freedom and autonomy.
Keyframe•1mo ago
bdangubic•1mo ago
pjmlp•1mo ago
delfinom•1mo ago
I have never seen an entire profession race to make itself entirely unemployable and celebrate it.
Too many people are hoping they'll be one of the lucky ones still employed and doing little work while talking to a LLM ;)
Anon1096•1mo ago
pjmlp•1mo ago
Cloud is only the rebranding of timesharing, granted with different technology stacks; however, the approach to development is exactly the same as working in a UNIX shop from 1975 through the 1990s.
nightski•1mo ago
hooverd•1mo ago
neom•1mo ago
matt3D•1mo ago
neom•1mo ago
They're developing some super interesting ways for the OS to develop itself as you use the device, apps building themselves, stuff like that. Super early days, but I have a really, really good feeling about them (I know, everyone else doesn't, and I'm sure they think I'm nuts for saying this).
nwienert•1mo ago
com2kid•1mo ago
Directly driving a user's device (or a device hooked up to a user's account at least) means an AI can do any task that a user can do, tearing down walled gardens. No more "my car doesn't allow programmatic access so I can't heat it up in the morning without opening the app."
Suddenly, telling an agent "if it is below 50 outside, preheat my car so it is warm when I leave at 8am" becomes a simple-to-solve problem.
hooverd•1mo ago
NewsaHackO•1mo ago
com2kid•1mo ago
Complete AI control over a personal phone. Anything a user can do, the AI can figure out how to do.
That is the end game for everyone right now - a new class of ambient, AI-powered personal computing.
johanbcn•1mo ago
com2kid•1mo ago
The idea is a fully personal AI that can control one's devices to accomplish complex tasks. Rabbit is working on this through their rabbitOS project, and lots of other players are doing the same thing. OpenAI is trying, as are lots of open source projects. Even HomeKit has initial support for LLM integration.
IMHO controlling a phone directly is the best path forward. Google and Apple are best situated to exploit this, but they may be unable to do so due to structural issues within the companies.
matt_heimer•1mo ago
I feel that the lower level you go, the more you want knowledgeable human experts in the loop. There is so much nuance in OS development that I think it'll be a while before I trust AI to have free rein over my devices.
But at the current speed of AI innovation I won't be that surprised if that day comes faster than I expect.
aquova•1mo ago
neom•1mo ago
j_w•1mo ago
odo1242•1mo ago
Plus, why a separate device and not a mobile app?
neom•1mo ago
mrheosuper•1mo ago
63stack•1mo ago
"No, not the scam part"
MrDarcy•1mo ago
neom•1mo ago
magicalist•1mo ago
Is this different from other recent models trained e.g. for tool calling? Sounds like they fine-tuned on their SDK. Maybe someday, but it's still going to be limited in what it can zero-shot without you needing to edit the code.
> Language flamewars are going to be a thing of the past, replaced by model wars.
This does seem funny coming from you. I feel like you'll still find a way :P
bgwalter•1mo ago
candiddevmike•1mo ago
simonw•1mo ago
You could define your rules in Prolog if you wanted - that's just as effective a way to communicate them to an LLM as English.
Or briefly describe some made-up DSL and then use that.
For coding LLMs the goal is to have the LLM represent the logic clearly in whatever programming language it's using. You can communicate with it however you want.
I've dropped in screenshots of things related to what I'm building before, that works too.
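To make the made-up DSL idea concrete, here's roughly what that can look like. The DSL syntax, rules, and model name below are invented purely for illustration:

    from openai import OpenAI

    # A made-up rules DSL -- the LLM only needs this prose-ish description to follow it
    RULES = """
    rule loyal_customer_discount:
        when order.total > 100 and customer.years >= 2
        then apply_discount(0.10)

    rule bulk_freight:
        when order.item_count >= 50
        then set_shipping("freight")
    """

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
            "These business rules are written in a small ad-hoc DSL:\n" + RULES +
            "\nImplement them as a plain Python function apply_rules(order, customer), "
            "and point out anything ambiguous."}],
    )
    print(resp.choices[0].message.content)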
geraneum•1mo ago
Ironically, for something like the parent suggested, i.e. a rules engine, this is the main work.
stickfigure•1mo ago
Context windows are still tiny by "real world app" standards, and this doesn't seem to be changing significantly.
CuriouslyC•1mo ago
sepositus•1mo ago
jacob019•1mo ago
stickfigure•1mo ago
vineyardmike•1mo ago
stickfigure•1mo ago
Is a messy 1.5M lines of tightly coupled code best practice? Of course not. But it evolved over about 20 years and processes tens of billions of dollars of financial transactions. In my experience, it is archetypical of real-world software for a large successful company.
I use LLMs where I can and they're incredibly useful. But their limits are severe compared to a good human software developer and the shortcomings mostly revolve around their tiny context. Human neuroplasticity is still champion.
yyhhsj0521•1mo ago
ignoramous•1mo ago
I work regularly with AOSP code (~3M lines). While LLMs (Copilot w/ Claude Sonnet 3.7, in my case) cannot gobble all of it up, they have no trouble answering my queries, for the most part, from submodules. If nothing else, using LLMs has dramatically reduced the time it takes to understand a new code file / submodule / a range of commits.
stickfigure•1mo ago
anthonypasq•1mo ago
mirsadm•1mo ago
NicoJuicy•1mo ago