Not even close! At best it's a small subset of the internet + published books. The vast majority of human knowledge isn't even in the training sets yet.
I would question the use of a model fed everything, though.
It's fuzzy and plastic and complex, but the brain has functional areas, there is intelligence more local to specific sensors, pipelines where fusion happens, governors and supervisors, specific numeric limits to certain tasks, etc.
This is a bit akin to your "listing every possible item", in the sense that there are definitely finite structures tuned to the application of being human.
This interplay between our supposed "AGI" and what is "cached" in our hardware (which is itself not static, but evolving) is really one of the most fascinating aspects of biology.
One good bet, based on Waymo's decision to expand, is that the amount of supervision each robotaxi needs keeps going down, so supervision is not tightly coupled to fleet size.
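A toy back-of-the-envelope version of that bet (every number below is invented): if interventions per vehicle-hour fall faster than the fleet grows, the remote-ops headcount grows far more slowly than the fleet.

    # Toy model, all numbers invented for illustration only.
    def supervisors_needed(fleet_size, interventions_per_vehicle_hour,
                           minutes_per_intervention=5, operator_utilization=0.8):
        """Rough headcount: total intervention minutes demanded per hour,
        divided by the productive minutes one remote operator has per hour."""
        demand_minutes = fleet_size * interventions_per_vehicle_hour * minutes_per_intervention
        return demand_minutes / (60 * operator_utilization)

    # Hypothetical trajectory: fleet grows 20x while the intervention rate falls 10x.
    for year, fleet, rate in [(1, 1_000, 0.10), (2, 5_000, 0.03), (3, 20_000, 0.01)]:
        print(f"year {year}: fleet {fleet:>6} -> ~{supervisors_needed(fleet, rate):.0f} operators")

In that made-up trajectory the fleet grows 20x while the operator count only roughly doubles, which is the kind of curve that would make aggressive expansion rational.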
While there is plenty of classical robotics code in our planner, I wouldn't want people to assume that we don't use neural networks for planning.
Just because we don't deploy end-to-end models (e.g., sensors to controls) but instead have separate perception and planning components doesn't mean there isn't ML in each part. Having the components separate means we can train and update each individually, test them individually, inject overrides as needed, and so on. On the flip side, it's true that because it's not learned end-to-end today, there might exist a vastly simpler or higher-quality system.
So we do a lot of research in this area, like EMMA (https://waymo.com/research/emma/), but don't assume that our planning isn't heavily ML-based. A lot of our progress in the last couple of years has been driven by increasing the amount of ML used for planning, especially for behavior prediction (e.g., https://waymo.com/research/wayformer/).
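For anyone unfamiliar with what "separate components, each with ML inside, plus the ability to inject overrides" looks like, here is a hypothetical, drastically simplified sketch; the names and structure are illustrative and are not Waymo's actual code:

    # Hypothetical sketch of a modular driving stack: perception, behavior
    # prediction, and planning are separate stages, each of which can contain
    # learned models. The typed interfaces between them are where you can
    # train, test, and override each piece independently.
    from dataclasses import dataclass, field

    @dataclass
    class Track:
        object_id: int
        kind: str                      # e.g. "vehicle", "pedestrian", "cyclist"
        position: tuple                # (x, y) in some map frame
        predicted_path: list = field(default_factory=list)

    class Perception:
        def __call__(self, sensor_frame) -> list:
            ...  # detection/tracking networks would run here

    class BehaviorPrediction:
        def __call__(self, tracks: list) -> list:
            ...  # a learned model attaches predicted_path to each track

    class Planner:
        def __call__(self, tracks: list, route):
            ...  # mix of learned components and classical trajectory optimization

    def drive_step(sensor_frame, route, override=None):
        tracks = Perception()(sensor_frame)
        if override is not None:       # e.g. inject a synthetic obstacle in a test
            tracks = override(tracks)
        tracks = BehaviorPrediction()(tracks)
        return Planner()(tracks, route)

Each stage here can be a trained network; the point is that "modular" and "ML-based" aren't opposites, and the seams are what make per-component training, testing, and overrides possible.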
Removed that "manually" word, so now it describes exactly what you would have to do to train an end-to-end neural network.
NNs don't get information from nothing; you would have to subject them to the exact same obstacles, geometries, and behaviors you coded in the manual version.
Big edge cases have little edge cases that require their own code / and those edge cases have smaller edge cases with yet more code.
My shorthand is "the real world is a fractal of edge cases".
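To make the fractal concrete with a toy example: whichever way you build it (explicit code paths or training data for a net), the space to cover is roughly a cross-product of conditions, and even tiny, made-up axes multiply out fast.

    from itertools import product

    # Toy illustration: each axis below is invented and far from complete, yet
    # the cross-product is already 256 scenarios. Every cell needs either an
    # explicit code path or enough training examples for a network to learn it.
    actors     = ["cyclist", "jaywalker", "double-parked van", "emergency vehicle"]
    layouts    = ["4-way stop", "roundabout", "unprotected left", "construction zone"]
    conditions = ["clear", "rain", "low sun glare", "night"]
    behaviors  = ["yields", "hesitates", "darts out", "ignores signal"]

    scenarios = list(product(actors, layouts, conditions, behaviors))
    print(len(scenarios))  # 256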
Maybe all this setup means that completing surgical tasks doesn't count as dexterity.
Even if biomimicry turns out to be a useful strategy in designing general purpose robots, I would bet against humans being the right shape to mimic. And that's assuming general purpose robots will ever be more useful than robots designed or configured for specific tasks.
Handling heavy boxes? Baking a cake? Operating a circular saw? Assembling a PC? Performing surgery? Loading a ream of paper into a printer? Playing a violin? Opening a door? You can do it all with two five-fingered hands.
levocardia•13h ago
It seems most likely that this sort of boring domain randomization will be what works, or works well enough, for solving contact in this generation of robotics. It would be much more exciting, though, if someone figures out a better way to learn contact models (or a latent representation of them) in real time.
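For anyone who hasn't seen it, "boring domain randomization" here roughly means something like the following (a minimal sketch with invented parameter names and ranges; real setups do this inside a physics simulator such as MuJoCo or Isaac):

    import random

    # Minimal sketch of domain randomization over contact parameters: each
    # training episode samples a different plausible contact model, so the
    # policy can't overfit to any single simulator's contact behavior.
    # Parameter names and ranges are invented for illustration.
    def sample_contact_params():
        return {
            "friction":          random.uniform(0.4, 1.2),
            "restitution":       random.uniform(0.0, 0.3),
            "contact_stiffness": 10 ** random.uniform(3.0, 5.0),
            "object_mass_scale": random.uniform(0.8, 1.2),
            "sensor_latency_ms": random.uniform(0.0, 30.0),
        }

    def train(policy, make_env, episodes=10_000):
        for _ in range(episodes):
            env = make_env(**sample_contact_params())  # fresh physics every episode
            rollout = env.run(policy)                  # collect experience
            policy.update(rollout)                     # any RL / imitation update

The "better way" I'm wishing for would instead estimate those parameters (or a latent embedding of them) online from force/torque and tactile signals during execution, rather than averaging over them at training time.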
sho_hn•12h ago
Figuring out physical interaction with the environment and traversal is truly one of the most stunning early achievements of life.
hahaxdxd123•12h ago
Basically the solve rate was much lower without the use of a Bluetooth sensor, and they did a bunch of other things that made the result less impressive. Still a long way to go here.
rapjr9•12h ago
More generally, continuous learning in real-time is something current models don't do well. Retraining an entire LLM every time something new is encountered is not scalable. Temporary learning does not easily transfer to long term knowledge. Continuous learning still seems in its infancy.
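The usual workaround today (and this is me guessing at the standard recipe, not describing any particular product) is to keep the model frozen and bolt new information on from the outside with retrieval, adapters, or replay buffers. A toy retrieval-style sketch, with naive keyword overlap standing in for real embedding search and frozen_model as a placeholder:

    # Toy sketch of "attach memory instead of retraining": new facts are
    # appended to an external store and retrieved at inference time; the
    # frozen base model only reads the retrieved context.
    class ExternalMemory:
        def __init__(self):
            self.facts = []

        def add(self, fact: str) -> None:
            # "Learning" here is just appending text; no weights change.
            self.facts.append(fact)

        def retrieve(self, query: str, k: int = 3) -> list:
            words = set(query.lower().split())
            ranked = sorted(self.facts,
                            key=lambda f: len(words & set(f.lower().split())),
                            reverse=True)
            return ranked[:k]

    def answer(query: str, memory: ExternalMemory, frozen_model) -> str:
        context = "\n".join(memory.retrieve(query))
        return frozen_model(f"Context:\n{context}\n\nQuestion: {query}")

That's really caching rather than learning: nothing retrieved this way ever migrates into the model's long-term knowledge, which is exactly the transfer problem described above.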
kulahan•7h ago
Imagine if the tip of your finger could just bend back. It would be way harder to know what you’re touching!
beau_g•11h ago