These kids hammer H100s for 30+ hours a week but will revolt at ads or the idea of paying money.
C.ai probably only exists at its current size because Noam had cheap access to TPUs and to people who could scale inference on them at the earliest stages of their growth (and obviously because he could raise with his pedigree, but compare that to what others have to deal with).
Eventually if the unit economics start to work they can always roll this back, but I think people are underestimating how much of a positive this is for them
Something tells me that this ain't gonna work. Kids and teens are more inventive than they probably realize.
In any case, there is a general problem on the internet regarding how to allow people to remain reasonably anonymous while keeping children out of certain services.
The additional context here is that Google acquired them to get back Noam Shazeer and took some other members of their technical staff.
So the current company is pretty much shambling along after having served its purpose to all stakeholders, and Google probably doesn't really care more than avoiding any sort of liability.
Although it'd still be lame if it's so easy to break that someone can share the key, and then the kids don't need to learn anything.
If a kid thinks like an adult, behaves like an adult, and can't be distinguished from an adult by their online presence, let them use the chatbot. On the flip side, I wish they'd flag immature adults as kids as well.
18 is an arbitrary number, and if we have more appropriate ways to judge if someone is ready or not (assuming their check is worth its salt), it should be fine to defer to that.
It's not like they're going to a bar to do tequila shots or scam retirees for insurance money.
In both cases they went nuclear in a way that implies they actually don't care whether the current product survives, as long as C.ai (read: Google) isn't exposed to the ongoing risk.
https://news.ycombinator.com/item?id=45733618
On a similar note, I was completing my application for YC Startup School / the Co-Founder matching program, and when listing possible startup ideas I explicitly stated that I'm not interested in pursuing AI ideas at the moment. AI features are fine, but not as the main pitch.
It feels like, at least for me, the bubble has popped. I've also talked recently about how the bubble might pop due to a legal-liability collapse in the courts. https://news.ycombinator.com/item?id=45727060
Add to this the fact that AI was always a vague folk category of software (it's used for robotics, NLP, and fake images), and I just don't think it's a real taxon.
Similar to the crypto buzz from the last season, the reputable parties will exit and stop associating, while the grifters and free-associating mercenaries will remain.
Even if you are completely selfish, it's not hugely beneficial to be in the "AI" space, at least in my experience: customers come in with huge expectations and non-huge budgets. Even if you sell your soul to implement a chatbot that will replace 911 operators, at this point the major actors have already done so, or decided not to, and you are left with small companies that want to be able to fire 5 employees and will pay you 3 months of one employee's salary if you can get it done by vibe-code-completing their vibe-coded prototype within a 2-3 deadline.
My personal favorite story is when I told my youngest aunt about a videogame my cousin wanted. She said no, absolutely not. Then she proceeded to buy the game for her 10 year old. A game she was carded for. A game whose box says it's M-rated and has adult themes and whatnot. She called me later in horror about how inappropriate this game I had told her about 2 weeks earlier was. How could they make games like that for children, she says, about the game she was carded for because it's only for adults.
I use her as an example, but that situation describes a lot of parents. I personally think it's not the government's place to say how much exposure I want to give my child to the internet, but I have rules and boundaries around that with my kid. Many of her friends have free access and have always had it since they were toddlers. People say it's parents not being savvy, but honestly it's parents not caring. Parental controls have been around for over 30 years and they have always been dead simple. But they do increase the whining in your life from your kids, and that means that if parents can allow it, a high quantity will. I have no faith that a law will stop significantly more kids than no law. I know too many parents who allow their kids to do things they know are harmful because "I don't want them to feel left out" or because they don't want to deal with whining.
Your kid is on the fucking computer all day building an unhealthy relationship with essentially a computer game character. Step the fuck in. These companies absolutely have to make liability clear here. It's an 18+ product, watch your kids.
You're more optimistic than I am. Their announcement is virtue signaling at best. Nothing will come from this. Kids will figure out a way around their so-called detection mechanisms, because if there were any false positives for adults they would lose adult customers _and_ kid customers.
The universe of "I didn't know the kids would make deepfakes of their classmates" is yet to come. Some parents are going straight to fucking jail. Talk to your kids; things are moving at a dangerous pace.
> [Dr. Nina Vasan] said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users.
I hope there’s a more gradual transition here for those users. AI companions are often far more available than other people, so it’s easy to talk more and get more attached to them. This restriction may end up being a net negative to affected users.
https://news.ycombinator.com/item?id=44723418
It is also highly compatible with the internet both in terms of technical/performance scalability and utility scalability (you can use it for just about any information verification need in any kind of application).
Every time I hear about some dumb approach to age verification (conversation analysis... really?) or a romance scam story because of a fraudster somewhere in Malaysia, I have the urge to scream: THERE IS A CORRECT SOLUTION.
My proposal is here: https://news.ycombinator.com/item?id=45141744
1. Backups and account recovery: we're working with humans here. They will lose their keys in great numbers, sometimes into the hands of malicious actors. How do users then recover their credentials in a quick and reliable manner?
2. Fragmentation: let's be optimistic and say digital credentials for driver's licenses are issued by _only_ 50 entities (one per state). Assuming we don't have a single federal format for them (read: a politically infeasible national ID), how does Facebook, let alone some rando startup, handle parsing and authenticating all these different credential formats? Oh, and they can change at any time, due to some rando political issue in the given state.
OP, you clearly know all this, so I’m just reminding you as someone down in the identity trenches.
2. The data format issue is (or was) indeed a concern, though it was never insurmountable. A data dictionary would have been the most straightforward approach to address it: https://cipheredtrust.com/doc/#data-processing
I say data format discernment *was* a concern because, as fate would have it, we now have the perfect tech to address it: LLMs. You can shove any data format into an LLM and it will spit out a transformation into what you're looking for, without needing to know the source format. Browsers are integrating LLM features as APIs, so this type of use would be feasible for both front-end and back-end tasks.
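To make the idea concrete, here is a minimal sketch of LLM-based credential normalization. The model call is stubbed out as a plain callable (in practice you'd wrap whatever chat-completion API you use), and the target fields, function names, and schema are all made up for illustration, not taken from any real credential standard:

```python
import json

# Illustrative target schema: the one shape our app understands,
# regardless of which state's credential format came in.
TARGET_FIELDS = {"full_name", "date_of_birth", "issuer"}

def build_prompt(raw_credential: str) -> str:
    # Ask the model to map an arbitrary source format onto our schema.
    return (
        "Extract the following fields from the credential below and "
        f"return JSON with exactly these keys: {sorted(TARGET_FIELDS)}.\n\n"
        f"Credential:\n{raw_credential}"
    )

def normalize(raw_credential: str, llm) -> dict:
    """llm: callable(prompt) -> str, e.g. a wrapper around any chat API."""
    reply = llm(build_prompt(raw_credential))
    data = json.loads(reply)
    # Validate the model's output before trusting it downstream.
    if set(data) != TARGET_FIELDS:
        raise ValueError(f"unexpected keys: {sorted(data)}")
    return data
```

The point is that the relying party never writes a parser per state format; it writes one prompt plus one validation step, and the validation is what keeps the LLM's output honest.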
I literally circumvented website blocking using a VPN as a kid; no one can stop anyone from going "online" in 2025.
What if a child is at school where there are Chromebooks and teachers aren’t as tech savvy as the majority of hacker news?
What if a child is at a library that has Chromebooks you can take out and use for homework?
What if a child is at an older cousin's place?
What if a child is at a park with their friends and uses a device?
Should parents be next to their child, helicopter parenting 24/7?
Is that how you remember your childhood? Is that how you currently parent, giving zero autonomy to children?
Blaming parents is ridiculous. Lots of parents aren't tech savvy and are too dumb to be tech savvy and stay on top of the latest tech thing.
Plus you're just setting kids up for failure by keeping them from understanding the adult world. You can't keep them ignorant and then drop them in the deep end when they turn 18, and expect good outcomes.
And these days internet integration in school is far stronger, my 6 year old's daily homework is entirely online.
I'd like to believe that most actual people want to protect kids.
It's easy to write off corporations and forget that they are founded by real people and employ real people... some with kids of their own or with nieces or nephews etc, and some of them probably do really care.
Not saying character.ai is driven by that but I imagine the times they've been in the news were genuinely hard times to be working there...
ChrisArchitect•14h ago
Teen in love with chatbot killed himself – can the chatbot be held responsible?
https://news.ycombinator.com/item?id=45726556
sosodev•10h ago
From what I understand, some user inputs trigger a tool call that searches the memory database, and other times it searches for a memory at random.
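A toy sketch of the kind of memory layer being described, with all names hypothetical and a naive keyword match standing in for a real vector search (this is not C.ai's actual implementation):

```python
import random

class MemoryStore:
    """Hypothetical chatbot memory: keyword search plus random recall."""

    def __init__(self):
        self.memories = []

    def add(self, text):
        self.memories.append(text)

    def search(self, query):
        # Naive word-overlap match standing in for embedding search.
        words = set(query.lower().split())
        return [m for m in self.memories if words & set(m.lower().split())]

    def random_recall(self):
        return random.choice(self.memories) if self.memories else None

def retrieve(store, user_input, recall_prob=0.2):
    # A user message that matches stored memories triggers the "tool call"
    # path; otherwise the bot occasionally surfaces a random memory,
    # mimicking the spontaneous-reminiscence behavior described above.
    hits = store.search(user_input)
    if hits:
        return hits[0]
    if random.random() < recall_prob:
        return store.random_recall()
    return None
```

Either path returns a snippet that gets stuffed into the model's context, which is what makes the bot feel like it "remembers" you.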
With that said, I think people started falling in love with LLMs before memory systems and they probably even fell in love with chatbots before LLMs.
I believe the simple, unfortunate truth is that love is not at all a rational process, and your body doesn't need much input to produce those feelings.
PlunderBunny•8h ago
0. https://www.theguardian.com/technology/2023/jul/25/joseph-we...