This is the thread where we make fun of the robot, and someone digs this up within a few years to make fun of us!
There is more to it, though. Is it okay to go on a tangent?
Machines have always excelled at brute force. It's been a long time since anyone was last able to challenge a computer at chess (calculating power) and in physical terms, well, should I mention weightlifting?
But what happens when we can't program the thing to do any better, without investing enormous resources for marginal gains?
I assume most people are familiar with
AlphaGo Zero's groundbreaking exploits, in a field where "in 20 years' time" sounded like a conservative estimate for human-instructed software to catch up with the best players. So much for the "human" element...
Even when it comes to well-controlled environments - still using chess as an example - if any doubt remained as to what a couple of decades spent fine-tuning algorithms to absurd levels is worth against AI,
this probably settles it (I won't get into the whole hardware debate regarding that match; it makes no difference, at least in the longer term).
So my not-so-concealed point is this: I assume we have the tools to make this robot fast enough to reach anything thrown at it. Sensors will improve, and so will all things hardware. But I can't imagine how, in a million years, we could instruct the thing to read spin, among other things, by feeding it lines of code.
... Can we get two of these to play against each other? By that I mean both robots having access to every little piece of data / input / output for either of them. Nicely controlled. Robot-1 hits the ball from position x, at speed y and in direction z, with a 90° bat angle. Robot-2 receives in a similar controlled way, and observes the result. Robot-1 hits the ball from position x, at speed y and in direction z, with an 89° bat angle. Robot-2 receives in the same way as the first time, and observes the result. Rinse and repeat till the end of time. Of course this dumb process can be optimised, which is also what humans do; but you get the point. Not so long ago I would have suggested a hybrid using what that fella in the posted video already knows, but I'm not convinced anymore.
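For what it's worth, here's a toy sketch of that sweep-and-observe loop in Python. Everything in it is made up for illustration: the "physics" is a stand-in formula, whereas a real robot would get outcomes from sensors, not an equation. It just shows the dumb version: vary one stroke parameter, log what happens, pick whatever worked best.

```python
# Toy sketch of the two-robot data-collection loop described above.
# All functions and numbers here are hypothetical placeholders.

def ball_landing_x(bat_angle_deg, speed):
    """Stand-in simulator: where the return lands for given stroke params.
    A real setup would observe this, not compute it."""
    return speed * 0.01 * (bat_angle_deg - 90) + speed * 0.5

def sweep(speed, angles):
    """Robot-1 varies the bat angle one step at a time;
    Robot-2 'observes' and logs each outcome."""
    return {a: ball_landing_x(a, speed) for a in angles}

def best_angle(observations, target_x):
    """Pick the stroke whose logged outcome landed closest to the target."""
    return min(observations, key=lambda a: abs(observations[a] - target_x))

log = sweep(speed=10.0, angles=range(80, 101))
angle = best_angle(log, target_x=5.0)
```

Rinse and repeat over every parameter, and yes, in practice you'd replace the brute-force sweep with something smarter (which is the "optimised" part above).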
Soon enough the thing will have to adjust to the non-linear behaviour of the ball, when compared to the stroke; it may pick up on its unexpected behaviour upon impact on the table; it may even learn to differentiate and adapt to new equipment, using a pre-determined set of strokes; or none of the above. I don't really have a clue, and that's beside the point. It's going to be clumsy, but what it learns, it never forgets and never fails to reproduce. Not to mention how that entire knowledge can then be replicated in the blink of an eye.
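On the "replicated in the blink of an eye" point, a trivial, hypothetical illustration: once one robot has built up a stroke table (the dictionary below is entirely made up), handing that knowledge to a fresh robot is a copy operation, not years of retraining.

```python
import copy

# Hypothetical stroke table mapping (incoming spin, incoming speed)
# to the bat angle that worked best in training. Entries are made up.
learned_policy = {("topspin", 10.0): 92, ("backspin", 8.0): 75}

# A brand-new robot starts from an exact, independent copy.
robot_b_policy = copy.deepcopy(learned_policy)
```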
Maybe it's unrealistic just now, and it's absurd if we only consider table tennis as a purpose. But the day machine learning convincingly applies to the "real", physical world is when a whole lot of futuristic / borderline-fantasy stuff, from the invention of the steam engine to the T-800, will come into shape. To me, it feels like the hardest part has already been done!