That's not to say that the ball doesn't have spin. I play with a lot of spin, and a recent level-up in my game was realizing that when I put spin on the ball and my opponent returns it without slicing, I've got to account for the spin I put on it, because it's still there.
You can get pretty dang far with a primarily defensive style, though. I was recently at an informal local tournament where the 2nd-place guy had basically one serve and an insane ability to return shots. A lot of the points he scored in the final game came when his opponent hit a shot with such insane speed and spin that, when he returned it, the opponent couldn't defend against the residual speed/spin of their own shot.
> A key point of difference is that our agent learns the control policies and perception system, whereas the Forpheus agent uses a model-based approach. More specifically, Forpheus leverages rebound and aerodynamics models in order to identify the optimal configuration of the robot so as to return the ball to a target position. The Omron system represents a highly engineered system that cannot easily be customized to new players, environments, or paddles.
But I think they're stretching a bit to claim that a model-based design can't be easily customized. Many of us would consider it much easier to plug in a new air viscosity or coefficient of restitution value into a model than to re-train a physical robot.
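To make that concrete, here's a minimal sketch of the kind of ball-flight model a system like Forpheus might use - the structure and constants are my own illustration, not Omron's. Adapting it to a new venue or paddle really is just swapping a constant:

    import numpy as np

    # Toy ball-flight model with gravity, quadratic drag, and a table bounce.
    # Every physical assumption lives in a named constant, so "customizing to
    # a new environment or paddle" means editing a number, not retraining.
    AIR_DENSITY = 1.2      # kg/m^3 - lower at altitude
    DRAG_COEFF = 0.40      # dimensionless - depends on the ball
    BALL_MASS = 0.0027     # kg, standard 40 mm ball
    BALL_RADIUS = 0.020    # m
    RESTITUTION = 0.89     # bounce energy retention - table/paddle dependent
    GRAVITY = np.array([0.0, 0.0, -9.81])

    def step(pos, vel, dt=0.001):
        """Advance the ball state by dt seconds."""
        area = np.pi * BALL_RADIUS ** 2
        speed = np.linalg.norm(vel)
        drag = -0.5 * AIR_DENSITY * DRAG_COEFF * area * speed * vel / BALL_MASS
        vel = vel + (GRAVITY + drag) * dt
        pos = pos + vel * dt
        if pos[2] <= 0.0 and vel[2] < 0.0:   # hit the table plane
            vel[2] = -RESTITUTION * vel[2]   # flip and damp the normal component
        return pos, vel

Switching to a grippier paddle or thinner air is a one-line change here; the learned equivalent means collecting new rollouts and retraining.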
Where we're going, the robot retrains itself.
Had this been some high school kids I'd be impressed. But Google? C'mon.
Edit: yep, here's a better-performing robot from 6 years ago based on ballistics: https://m.youtube.com/watch?v=u3L8vGMDYD8&pp=0gcJCYQJAYcqIYz...
If DeepMind wanted to emphasize the learning aspect of it (and they should) then it should be in the title. E.g. "Novel learning algorithm leads to competitive robotic table tennis"
> To date, the Omron Forpheus robot [73], [62] has the closest capabilities to the agent presented in this work, demonstrating sustained rallies on a variety of stroke styles with skilled human players. A key point of difference is that our agent learns the control policies and perception system, whereas the Forpheus agent uses a model-based approach. More specifically, Forpheus leverages rebound and aerodynamics models in order to identify the optimal configuration of the robot so as to return the ball to a target position. The Omron system represents a highly engineered system that cannot easily be customized to new players, environments, or paddles.
> While there have been many demonstrations of robots playing table tennis against human players in the past, we believe this research is one of the first human-robot interaction studies to be conducted with full competitive matches against such a wide range of player skill levels.
Are people not good enough?
As a little side project I'm working on a table tennis AI for VR which works by imitating real players, which is a much simpler problem since you're allowed to "fake" a lot of things in a game. I think VR holds more promise in the short to medium term for practicing TT than robotics.
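As a rough illustration of the imitation approach (a toy version of what I'm doing; all the names here are mine), you can get surprisingly believable opponents by replaying recorded swings keyed on the incoming ball state:

    import numpy as np

    # Toy nearest-neighbour imitation: given an incoming ball state, replay the
    # recorded human paddle trajectory whose ball state was most similar. In VR
    # the replayed swing doesn't have to be physically achievable by a robot arm.
    class ImitationOpponent:
        def __init__(self, ball_states, paddle_trajectories):
            # ball_states: (N, D) array of position/velocity/spin features
            # paddle_trajectories: N recorded swings captured from real players
            self.ball_states = np.asarray(ball_states)
            self.paddle_trajectories = paddle_trajectories

        def respond(self, incoming_state):
            dists = np.linalg.norm(self.ball_states - incoming_state, axis=1)
            return self.paddle_trajectories[int(np.argmin(dists))]

Blending the k nearest recorded swings instead of taking the single best one smooths out the jumps.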
A world-class player could beat this thing left-handed with one eye closed.
There are a significant number of players worse than the robot, and a significant number of players better. That's a fact, and that's all "intermediate" means.
"Human level competitive", "solidly amateur-level human performance", "beat 100% of beginners and 55% of intermediate players". That robot would definitely win some games in your local club league, except that it doesn't serve, and unless it's cheating in ways the announcement glosses over like extra cameras - DeepMind have some history here so I reserve the right to be skeptical.
The only thing I'd take issue with in the abstract is "Table tennis... requires human players to undergo years of training to achieve an advanced level of proficiency." While that sentence is true, it's irrelevant here, since this robot only plays at intermediate proficiency, a level reachable by a moderately athletic human with some practice.
By contrast, the AlphaGo [0], AlphaZero [1], and AlphaStar [2] papers claim "mastery", "superhuman", "world champion level", "Grandmaster-level", "human professional" ability - all defensible claims given their performance and match conditions in the respective games.
[0] https://www.researchgate.net/publication/292074166_Mastering...
Definitely not. If you go beyond the cherry-picked videos where some longer sequences happened, the longer match videos reveal how bad the robot is. It makes really bad mistakes and loses most points against players who wouldn't even count as intermediate in any local club.
There are pretty much two distinct classes of players. Those that occasionally play for fun, typically at the stone tables found in parks and outdoor pools.
And then there are those that play and train at least once a week in indoor halls with wooden tables, often trying to learn proper stroke technique and participating in leagues.
The robot is a pretty good fit for the first category, and that's already a pretty impressive achievement.
In the second category, it'd lose to anybody who's been playing for more than a year or two, so it would be on par with the lowest-tier players there.
The robot looks like it would be competitive with most of those players. Maybe my club is uniquely weak.
What history of cheating is there? I haven't seen anything sneaky, but I don't follow everything. Do share.
The original AlphaStar announcement was also based on having serious advantages over its human opponents: it got a feed of the whole map, where humans could only view a section at a time, and the ability to perform an unrealistic number of actions per minute.
The equivalent in table tennis? Maybe having an additional high-speed camera on the other side of the table, or a sensor in the opponent's bat. Actually, why is the opponent playing with a non-standard bat with two black rubbers? Presumably that's an optimization where the robot's computer vision has been tuned only for a black bat. But if that's so, it means none of the opponents got to use their own equipment; they used a bat that was unfamiliar and perhaps chosen to be easy for the robot to play against.
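For what it's worth, here's what "tuned only for a black bat" could look like in the simplest case - this is speculation about their pipeline, not anything from the paper. A brightness threshold that finds black rubber is blind to a regulation red side:

    import cv2

    # Illustrative bat detector: keep only dark pixels in HSV space and take
    # the largest dark blob. A red rubber passes straight through this filter.
    def find_black_bat(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (0, 0, 0), (180, 255, 60))  # any hue, low value
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None  # no dark blob, e.g. the opponent brought a red bat
        return max(contours, key=cv2.contourArea)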
I've skimmed through the match highlights video, and it struck me that there wasn't really any engagement with spin.
You can hear from the relatively high pitch of the ball contact that these were rubbers with very thin sponges or no sponge at all, and likely not enough grip to cause any real rotation on the ball.
Even in the lowest leagues (here in Germany, at least), people use backspin to prevent attacks, and varying the spin is half the game.
Also, human players often have different rubbers on the forehand and backhand sides, which is why rules demand that both rubbers have very distinct colors. Using black rubbers on both sides kinda demonstrates their non-engagement with spin.
That said, kudos for making a robot that people enjoy playing with. That's not a given. Players anticipate the trajectory of the ball based on the opponent's body movement, which a robot could totally subvert.
Physical limitations also play a real role in table tennis; for example, the forehand flick is much harder to play than the backhand flick due to the way our wrist joint works. A robot wouldn't be subject to this particular limitation.
i-Sim2Real: Reinforcement Learning of Robotic Policies in Tight Human-Robot Interaction Loops
https://sites.google.com/view/is2r
The sim-to-real gap is a real obstacle to adopting RL for robotics, and anything that pushes the envelope is worth the trouble. On the other hand, I can't tell how well this approach will work outside of table tennis.
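The standard mitigation is domain randomization - not necessarily what this paper does, and `make_sim` and its parameters below are hypothetical - where you resample physics parameters every episode so the policy can't overfit to any single (wrong) simulator setting:

    import numpy as np

    # Generic domain randomization loop. make_sim is a hypothetical factory
    # for a simulator whose physics constants can be set per episode.
    def randomized_episode(make_sim, policy):
        sim = make_sim(
            restitution=np.random.uniform(0.85, 0.93),  # bounce variation
            drag_coeff=np.random.uniform(0.35, 0.45),   # ball/air variation
            latency_s=np.random.uniform(0.005, 0.025),  # sensing/actuation delay
        )
        obs = sim.reset()
        done, total_reward = False, 0.0
        while not done:
            obs, reward, done = sim.step(policy(obs))
            total_reward += reward
        return total_reward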
Note that on top of the RL work there is, AFAICT, a metric shit ton of good, old-fashioned engineering to support the decision-making ability of the learned policies. E.g. the "High Level Controller" (HLC) that selects "Low Level Controllers" (LLCs) using good, old-fashioned AI tools (tree search with heuristics) seems to me to be hand-crafted rather than learned, and involves a load of expert knowledge driving information gathering. So far, no bitter lessons to taste here.
Oh and of course the HLC is a symbolic component while the LLCs are learned by a neural net. Once more DeepMind sneakily introduces a neuro-symbolic approach but keeps quiet about it. They've done that since the days of AlphaGo. No idea why they're so ashamed of it, since it really looks like it's working very well for them.
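In pseudocode, the architecture as I read it looks roughly like this (skill names and the scoring heuristic are my invention, not the paper's):

    from dataclasses import dataclass
    from typing import Callable

    # Cartoon of the HLC/LLC split: a hand-crafted symbolic controller scores
    # a library of learned skills and dispatches one; the net only executes.
    @dataclass
    class Skill:
        name: str                 # e.g. "forehand_topspin" (invented name)
        applicability: Callable   # hand-written heuristic: ball_state -> score
        policy: Callable          # learned network: observation -> joint command

    def hlc_select(skills, ball_state):
        """Symbolic high-level controller: pick the best-scoring skill."""
        return max(skills, key=lambda s: s.applicability(ball_state))

    def control_step(skills, ball_state, observation):
        skill = hlc_select(skills, ball_state)  # symbolic decision
        return skill.policy(observation)        # neural execution

The interesting part is exactly that division of labor: the part that decides is symbolic and hand-tuned, the part that moves is learned.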
How are we defining amateur here? The presented video shows the human intentionally volleying with the robot, barely putting any force at all behind the returns. But it says the robot won 55% of matches against intermediate players? That requires being able to return much harder shots than shown.