Analyzing TypeScript Errors
I'm currently focused on the actions/user.ts file, where I've identified a few issues. Specifically, I need to address a "Cannot find name" error related to UpdateUserDetailsFormState, and confirm that the intended target is UpdateUserDetailsFormInput. There might be some other discrepancies in the file that I need to resolve.
Debugging the Import
I've been trying to pinpoint the Cannot find name 'UpdateUserDetailsFormState' error. The type is definitely defined and imported, but it's not being recognized in the updateUserDetails function's arguments. I'm suspecting a scoping issue with the import statement within actions/user.ts. I also need to verify that UpdateUserDetailsFormState is correctly defined with the fieldValues property as optional as per the schema.
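For readers following along, the shapes the trace is talking about look roughly like the sketch below. This is illustrative only; the schema, field names, and helper names are assumptions, not the actual project code in actions/user.ts:

    // Hypothetical reconstruction of the types the trace mentions -- not the real project code.
    import { z } from "zod";

    // Form schema; the fields here are made up for illustration.
    export const updateUserDetailsSchema = z.object({
      name: z.string().min(1),
      email: z.string().email(),
    });

    export type UpdateUserDetailsFormInput = z.infer<typeof updateUserDetailsSchema>;

    // State returned by the server action; fieldValues is optional, matching the trace.
    export type UpdateUserDetailsFormState = {
      success: boolean;
      message: string;
      fieldValues?: UpdateUserDetailsFormInput;
    };

    // If this type lived in another module, a missing or value-only import in
    // actions/user.ts would produce exactly the "Cannot find name" error described above.
    export async function updateUserDetails(
      prevState: UpdateUserDetailsFormState,
      formData: FormData
    ): Promise<UpdateUserDetailsFormState> {
      const parsed = updateUserDetailsSchema.safeParse(Object.fromEntries(formData));
      if (!parsed.success) {
        return { success: false, message: "Validation failed" };
      }
      // ...persist the update here...
      return { success: true, message: "Updated", fieldValues: parsed.data };
    }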
Definitely a downgrade from the old version, though really it’s just Google deciding to offer less for free.
As an experiment I just asked it to "recreate the early RPG game Pedit5 (https://en.wikipedia.org/wiki/Pedit5), but make it better, with a 1970s terminal aesthetic and use Imagen to dynamically generate relevant game artwork" and it did in fact make a playable, rogue-type RPG, but it has been stuck on "loading art" for the past minute as I try to do battle with a giant bat.
This kind of thing is going to be interesting for teaching. It will be a whole new category of assignment - "design a playable, interactive simulation of the 17th century spice trade, and explain your design choices in detail. Cite 6 relevant secondary sources" and that sort of thing. Ethan Mollick has been doing these types of experiments with LLMs for some time now and I think it's an underrated aspect of what they can be used for. I.e., no one is going to want to actually pay for or play a production version of my Gemini-made copy of Pedit5, but it opens up a new modality for student assignments, prototyping, and learning.
Doesn't do anything for the problem of AI-assisted cheating, which is still kind of a disaster for educators, but the possibilities for genuinely new types of assignments are at least now starting to come into focus.
The unsolved issue is scale. 5-10 minute Q&As work well, but they are not really doable in a 120-student class like the one I'll be teaching in the fall, let alone the 300-400-student classes some colleagues have.
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
- AI Studio: "the fastest place to start building with the Gemini API"
- Firebase Studio: "Prototype, build, deploy, and run full-stack, AI apps quickly"
"this is brilliant! I'll assign multiple teams to the same project. Let the best team win! And then the other teams get PIP'd"
…if you do it before publicly releasing and spending marketing budget on both products, giving them a full software lifecycle and a dedicated user-base that no longer trusts you to keep things running.
Honestly, even in that case it sucks to be a developer there knowing there’s a 50% chance that the work you did meant nothing.
Does it have to mean nothing? If there is a review at the end of the exercise, the good parts from each team can be explored for integration to build the best final product. Of course, all these things are probably as much political as technical, so it is always complicated.
Canvas: "the fastest place to start building with the Gemini APP"
Also, did you hear about Jules?
"Te harsh jolt of the cryopod cycling down rips you"
"ou carefully swing your legs out"
I find it really interesting that it's like 99% there, and the thing runs and executes, yet the copy has typos.
I don't need it inserting console.logs and alert popups with holocaust denials and splash screens with fake videos of white genocide in my apps.
But be sure to connect Studio to Google Drive, or else you will lose all your progress.
All the copying and pasting is killing me.
What kind of smooth brain hears, “they train AI on your ideas and code, humans read your ideas and code, and you agree not to compete back against this thing a multi trillion dollar company just said can do everything, which competes with you,” and says yes? Oh, the smooth brain who doesn’t even realize, because it’s all buried in “additional” legal CYA documents.
ChatGPT still dominates the reach test since I can at least opt out of model training without losing chat logs, even though I have to agree not to compete with the thing that competes with me. Google is like a corporate version of a gross nerd you tolerate because they’re smart, even though they stalk you weirdly.
What a disgrace, we all ought to be sick about creating a legalese-infused black mirror dystopia, racing to replace our own minds with the latest slop-o-matic 9000 and benefit the overlords for the brief instant they count their money while the whole ecosphere is replaced by data centers
It’s like somehow the most magical tech in history (LLMs) comes along and gets completely shafted by elite grifter-tier scumbaggery. Chat bot speech is more protected than human speech these days, Google doesn’t give a crap about privacy, it’s all about the slimy hidden opt-out of getting everything fed into the slop machine, and breaking everything if you do opt out.
“Gee, should the app break if the user doesn’t want us to read their shit?” “Yeah, that sounds good, ain’t nobody got time to categorize data into two whole buckets, ship it!”
“How about we make a free better app that doesn’t even have the option of us not reading their shit?” “Oh, yeah, that’ll really help the KPIs!”
pjmlp•7h ago
This is exactly what I see coming. Between the marketing and the reality of what the tool is actually able to deliver, we will eventually reach the next stage of compiler evolution: directly from AI tools into applications.
We are living through a development jump like when Assembly developers got to witness the adoption of FORTRAN.
Language flamewars are going to be a thing of the past, replaced by model wars.
It might take a few cycles, but it will come nonetheless.
xnx•7h ago
I'm hoping this will allow domain experts to more easily create valuable tools instead of having to go through technicians with arcane knowledge of languages and deployment stacks.
cjbgkagh•7h ago
That said, it seems like both domain expertise and the ability to create expert systems will be commoditized at roughly the same time. While domain experts may be happy that they don’t need devs, they’ll find themselves competing against other domain experts who don’t need devs either.
glitchc•38m ago
Obligatory video (sound familiar?): https://www.youtube.com/watch?v=oLHc7rlac2s
suddenlybananas•6h ago
Sounds like an absolute nightmare for freedom and autonomy.
neom•5h ago
They're developing some super interesting ways of the os developing itself as you use the device, apps building themselves, stuff like that. Super early days, but I have a really really good feeling about them (I know, everyone else doesn't and I'm sure thinks I'm nuts saying this).
com2kid•4h ago
Directly driving a user's device (or a device hooked up to a user's account at least) means an AI can do any task that a user can do, tearing down walled gardens. No more "my car doesn't allow programmatic access so I can't heat it up in the morning without opening the app."
Suddenly, telling an agent "if it is below 50 outside, preheat my car so it is warm when I leave at 8am" becomes a simple problem to solve.
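As a rough sketch of what that could look like in code (everything here is hypothetical; the weather source and the device-driving agent are stand-ins, not real APIs):

    // Hypothetical sketch only: WeatherSource and DeviceAgent stand in for whatever
    // services actually exist; no real API is being named here.
    interface WeatherSource {
      currentTemperatureF(location: string): Promise<number>;
    }

    interface DeviceAgent {
      // Drives the phone UI the way a person would, so no official car API is needed.
      run(task: string): Promise<void>;
    }

    const PREHEAT_THRESHOLD_F = 50;

    // Schedule this to run shortly before the 8am departure.
    export async function maybePreheatCar(weather: WeatherSource, agent: DeviceAgent): Promise<void> {
      const tempF = await weather.currentTemperatureF("home");
      if (tempF < PREHEAT_THRESHOLD_F) {
        await agent.run("Open the car app and start preheating the cabin before 8am.");
      }
    }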
odo1242•4h ago
Plus, why a separate device and not a mobile app?
DonHopkins•1h ago
$30,000,000 AI Is Hiding a Scam:
https://www.youtube.com/watch?v=NPOHf20slZg
Rabbit Gaslit Me, So I Dug Deeper:
https://www.youtube.com/watch?v=zLvFc_24vSM
"I uncover scams, fraudsters and fake gurus that are preying on desperate people with deceptive advertising. If you have to ask... it’s probably too good to be true." -Coffeezilla
https://en.wikipedia.org/wiki/Coffeezilla
>In October 2024 Andrew Tate was sent a series of questions by Coffeezilla about his meme coin DADDY. In response, Tate doxxed Coffeezilla by leaking his email address and encouraged his supporters to email abusive content to Coffeezilla, specifically requesting that they call him "gay."
Anyone who Andrew Tate doxes and tells his abusive incel followers to harass and call gay is OK in my book. Do you own a lot of DADDY meme coins too?
So what exactly is your motive for such blatant and fraudulent hyperbolic shilling? Are you trying to recoup your bad investments in NFTs by publicly debasing yourself by astroturfing for Rabbit? Care to prove what you're saying by posting a video of yourself using it, and it actually doing what it claims it can do, that it wouldn't do for Coffeezilla?
After going down that "rabbit hole", I have to ask you personally: how gullible and shameless can you possibly be to shill for Rabbit like you do?
DonHopkins•1h ago
I already told you who Coffeezilla is, and gave you quotes and citations and links. So read those before demanding I explain to you who he is after I just did exactly that, and trying to dismiss him just because you claim to have never heard of him and refuse to learn, watch his videos, or read the wikipedia page about him and his track record that I already linked you to. That's just intellectually dishonest cultivated ignorance. Because I know if I give you any more evidence, you'll just ignore it just like you did the evidence I already gave you.
So explain why you're shamelessly shilling and bullshitting for somebody who ran a huge multi-million dollar fraudulent NFT scam, and then blatantly lied through his teeth about it?
Yes, you're a shill, and I don't appreciate listening to you shill, then deny the obvious well documented facts, while refusing to look at the evidence. That's called gaslighting and fraud, and it destroys your reputation in this community.
magicalist•5h ago
Is this different from other recent models trained, e.g., for tool calling? Sounds like they fine-tuned on their SDK. Maybe someday, but it's still going to be limited in what it can zero-shot without you needing to edit the code.
> Language flamewars are going to be a thing of the past, replaced by model wars.
This does seem funny coming from you. I feel like you'll still find a way :P
simonw•3h ago
You could define your rules in Prolog if you wanted - that's just as effective a way to communicate them to an LLM as English.
Or briefly describe some made-up DSL and then use that.
For coding LLMs the goal is to have the LLM represent the logic clearly in whatever programming language it's using. You can communicate with it however you want.
I've dropped in screenshots of things related to what I'm building before, that works too.
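For the rules-engine point above, a minimal sketch of the kind of output you'd want the model to produce, with the logic spelled out directly in the target language (the rules and fields are made up for illustration):

    // Illustrative only: made-up order fields and discount rules.
    type Order = { total: number; country: string; customerSince: Date };

    type Rule = {
      name: string;
      applies: (order: Order) => boolean;
      discount: number;
    };

    const rules: Rule[] = [
      { name: "bulk order", applies: (o) => o.total > 1000, discount: 0.1 },
      { name: "long-time customer", applies: (o) => o.customerSince.getTime() < new Date("2020-01-01").getTime(), discount: 0.05 },
      { name: "domestic shipping", applies: (o) => o.country === "US", discount: 0.02 },
    ];

    // First matching rule wins; the point is that each rule is plain code you can read and review.
    export function discountFor(order: Order): number {
      return rules.find((r) => r.applies(order))?.discount ?? 0;
    }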
geraneum•2h ago
Ironically, for something like the parent suggested, i.e. a rules engine, this is the main work.
stickfigure•4h ago
Context windows are still tiny by "real world app" standards, and this doesn't seem to be changing significantly.