World's Best Table Tennis Robot vs TableTennisDaily's Dan!

I'd like to disagree. Moving the racket fast enough is the easiest part; it can move so much quicker than a human. The hardest part imho in this setup is to adjust the racket angle correctly to counter the spin and speed. We do it by adjusting our distance to the table, the force of our stroke, and by counterspinning the ball with a full-swing topspin. The robot is more or less blocking, so the angle is crucial, and a subtle miscalculation can result in the ball flying off the table...

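That angle-compensation idea can be written as a toy model. This is entirely my own illustration: the linear law and the gain are made-up assumptions, not anything Omron has published.

```python
def blocking_angle(incoming_speed_ms, incoming_topspin_rps):
    """Toy model of the blocker's problem: start from a vertical racket face
    (0 degrees) and close it in proportion to the spin-to-speed ratio, since
    heavier topspin kicks the ball up and long off the rubber.
    The linear relationship and the gain are assumptions for illustration."""
    if incoming_speed_ms <= 0:
        raise ValueError("speed must be positive")
    GAIN_DEG = 8.0  # degrees of face closure per (rps per m/s) of spin ratio
    return -GAIN_DEG * incoming_topspin_rps / incoming_speed_ms  # negative = closed

# A heavy loop demands a far more closed face than a flattish drive, so a
# small error in the spin estimate produces a large error in the angle
flat_drive = blocking_angle(15.0, 10.0)
heavy_loop = blocking_angle(15.0, 120.0)
```

The point the post makes falls out of the model: the required angle is very sensitive to the estimated spin, so a subtle miscalculation sends the ball off the table.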

No. To beat a world champion, the easiest and cheapest part is actually probably the racket angle.

The motors required to change the racket angle are extremely simple, small, fast and accurate, and they are relatively cheap. The calculation to determine the racket angle is also relatively simple and quick to perform. In fact, the calculations for any of the tasks required to beat a world champion would run fast enough on any reasonable consumer-grade GPU from the current or previous architecture.

To hit back a shot from the optimal position for a sequence of wide balls to different parts of the table (or just fast balls from one point in space to another over a long distance), the entire racket must move from one exact position to another exact position metres away in three-dimensional space within a very short period of time. Accelerating and decelerating at great rates, reliably, across multiple motors gets exponentially more difficult and expensive as the distance increases. If they keep the current setup (which I don't think they will), the frame needs to grow massively in all three dimensions. The issue is that the lever lengths also increase, so the accuracy of each motor must increase too, and then calibration and tolerances become the problem. Some motors from the exact same production run simply won't be able to work with each other due to tolerances alone.
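The move-time point can be made concrete with the standard bang-bang motion profile from textbook kinematics (accelerate at the limit, cruise if top speed is reached, brake at the limit). The figures used here are illustrative, not Omron's specs.

```python
import math

def min_traverse_time(distance_m, max_accel_ms2, max_speed_ms):
    """Minimum time to move a racket assembly from rest to rest over
    `distance_m` under an idealised bang-bang profile."""
    # distance consumed by a full accelerate-then-decelerate triangle
    d_triangle = max_speed_ms ** 2 / max_accel_ms2
    if distance_m <= d_triangle:
        # top speed never reached: accelerate half way, brake the rest
        return 2.0 * math.sqrt(distance_m / max_accel_ms2)
    t_ramps = 2.0 * max_speed_ms / max_accel_ms2   # speeding up + braking
    t_cruise = (distance_m - d_triangle) / max_speed_ms
    return t_ramps + t_cruise

# In the acceleration-limited regime time grows with sqrt(distance): covering
# 4x the distance still costs 2x the time, and halving the time for a fixed
# distance demands 4x the acceleration from every motor in the chain
short_move = min_traverse_time(0.5, 40.0, 20.0)
long_move = min_traverse_time(2.0, 40.0, 20.0)
```

This is why the wide, fast ball is the expensive case: the cost lands on peak acceleration, which scales with motor size and price.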

The motor setup required to do this is large and expensive, and as far as I'm aware nothing exists right now that can do something like that at a reasonable price. But it is Omron, so what would I know.

Also note that Dan is aware of the robot's current deficiency, moving to wide angles quickly, which is why he never hits it wide and fast when possible, except maybe once.

Also, I'm surprised the robot actually needs sensors on the racket. Since CNN-based video recognition is good and fast enough to determine velocity, path and angle, I would say this must be down to time constraints on the project, since they're focusing mainly on the mechatronics side rather than the machine learning side of things. Well, I hope so.
 
When Dan fed it serves before the match, it returned them like one of those wily old penholders who puts the racket at the correct angle and just nudges it forward to use the spin already on the ball. :) hahaha.

I wonder what style the machine would choose, if given the opportunity to test different setups ... and what blades or rubbers ... ; ) Would it start EJing ... ? ha ha ; )
 
I wonder what style the machine would choose, if given the opportunity to test different setups ... and what blades or rubbers ... ; )

This is a reasonable question, and an answer definitely exists and is possible to find. Off the top of my head, without too much thought: each machine would have an optimal equipment setup based on its own capabilities, the opponent, and the opponent's equipment.
 
I doubt a robot like that will ever be affordable to anyone who isn't rich. A national team could probably afford one but probably won't buy one, because people will be better. Why?

People are more flexible and faster.
I cannot see a TT robot replacing a good chopper.
The machine will require lots of maintenance.
People will be cheaper.

The robot is fast, but Dan was not really trying to hit the ball deceptively or very fast.
Dan was putting on a good show.

There are some comments above that are very close to seeing the future.
Right now the goal is to make a robot that can be the best.
It doesn't need to be the best to have a huge effect on your life.
What if robots become better than 50% of people at a particular task? Then what? What do those 50% of people do? Making a robot that performs better than 90% of people is not hard.

There are two industries I am very familiar with. Both involve scanning raw material and then removing defects or cutting it to get the most out of the raw material. This has been happening since the 1980s. I have seen lots of people replaced by machines that are much faster and more accurate than people. There is no way for people to compete.
 
This is a reasonable question, and an answer definitely exists and is possible to find. Off the top of my head, without too much thought: each machine would have an optimal equipment setup based on its own capabilities, the opponent, and the opponent's equipment.

Well, they said the robot is learning, so they probably incorporated some modern neural network. Since the network can learn, it will basically 'try' to figure out the best approach. The only problem I see is that, e.g., AlphaZero (the chess AI) was able to 'teach' itself: it basically played against itself at incredible speed, achieving great expertise in the end. With table tennis they need a human player to learn from. The learning process might be much slower, I guess ...
 
1. Video recognition of different players. If the tournaments are recorded with suitable equipment for the next year, it is more than enough data.
2. And simulation.

It does not need to hit against itself to learn. In fact, if it did need to hit against itself physically, it would take an eternity to optimise. AlphaZero learnt against itself by running many games in parallel at warp speed.
 
1. Video recognition of different players. If the tournaments are recorded with suitable equipment for the next year, it is more than enough data.
2. And simulation.

Dan's 'partner' was probably using several different cameras to extract body mechanics; I'm not sure this could be done from 2D videos with good enough accuracy ... and then you would have to build a model that tells the AI what spin/speed and ball trajectory a specific body movement produces. Theoretically possible ... but I'm still not sure how productive such a learning process would be.

It does not need to hit against itself to learn. In fact, if it did need to hit against itself physically, it would take an eternity to optimise. AlphaZero learnt against itself by running many games in parallel at warp speed.

Playing against itself would be counterproductive imho, because its purpose was to figure out how human body movement 'affects' the ball ...
 
No. The body is not taken into consideration. The racket has sensors on it and there are multiple cameras. Additionally, the trajectory, spin and velocity of the ball would be calculated within 2 mm of the ball leaving the rubber; the machine would have estimated them from the racket sensors and the parameters of the incoming (to the human) ball alone.
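A crude version of that kind of trajectory calculation is a 2D integrator with gravity, drag and a Magnus term. All coefficients here are rough, assumed values chosen for illustration, not measured ones, and this is my own sketch, not Omron's method.

```python
import math

def predict_trajectory(speed, launch_angle_rad, topspin_rps, dt=0.0005, t_max=2.0):
    """Toy 2D ball-flight integrator: gravity, quadratic drag, Magnus lift.
    Returns (x, y) when the ball reaches table height. All constants assumed."""
    m = 0.0027            # 40 mm ball mass, kg
    k_drag = 0.0005       # lumped quadratic drag constant (assumed)
    k_magnus = 0.00001    # lumped Magnus constant (assumed)
    g = 9.81
    x, y = 0.0, 0.2       # launch 20 cm above the table surface
    vx = speed * math.cos(launch_angle_rad)
    vy = speed * math.sin(launch_angle_rad)
    omega = 2.0 * math.pi * topspin_rps   # rad/s, positive = topspin
    for _ in range(int(t_max / dt)):
        v = math.hypot(vx, vy)
        # drag opposes motion; topspin's Magnus force pushes the ball down
        ax = (-k_drag * v * vx + k_magnus * omega * vy) / m
        ay = (-k_drag * v * vy - k_magnus * omega * vx) / m - g
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:      # reached table height
            break
    return x, y

# Heavy topspin makes the ball dip and land shorter than a flat hit
x_spin, _ = predict_trajectory(15.0, 0.05, 100.0)
x_flat, _ = predict_trajectory(15.0, 0.05, 0.0)
```

Given initial speed, direction and spin off the racket, this is the whole remaining physics problem, which is why the racket-contact estimate is the part that matters.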

If it is learning to develop winning strategies, then playing against itself (in a simulated 'software' environment) is exactly how it would learn; it's really the only way. I suspect the way this machine works is not based on machine learning, though. I think the movements are logically hardcoded; having sensors on the bat makes that obvious. It's more like Deep Blue than AlphaZero. I believe it has a small training component with the ability to optimise in certain circumstances; whether or not it does that on the fly is anyone's guess. And I doubt that component is any more powerful than what you'd find in your smartphone.
 
No. The body is not taken into consideration. The racket has sensors on it and there are multiple cameras. Additionally, the trajectory, spin and velocity of the ball would be calculated within 2 mm of the ball leaving the rubber; the machine would have estimated them from the racket sensors and the parameters of the incoming (to the human) ball alone. [...]

Ok, yeah, I agree the sensors are for calculating the impact the racket has on the ball. You are right, the body movement might not be taken into account when calculating the trajectory.

If it is learning to develop winning strategies, then playing against itself (in a simulated 'software' environment) is exactly how it would learn; it's really the only way. I suspect the way this machine works is not based on machine learning, though. I think the movements are logically hardcoded; having sensors on the bat makes that obvious. It's more like Deep Blue than AlphaZero. I believe it has a small training component with the ability to optimise in certain circumstances; whether or not it does that on the fly is anyone's guess.

The machine's movements are probably hardcoded too, although I could imagine a learning process where movements are not modelled but 'taught'. The learning is most likely used to predict the ball's trajectory and placement after the human hits it, and to respond in the most efficient way ...
 
although I could imagine a learning process where movements are not modelled but 'taught'. The learning is most likely used to predict the ball's trajectory and placement after the human hits it, and to respond in the most efficient way ...

I'm not saying it's impossible, but in general we need millions of data points and training iterations to actually reduce the error and optimise the network. That's why receiving one feedback occurrence and optimising based on it is unrealistic. Plus it requires time to actually do so, and that particular data point is also going to affect other mechanics of the robot. That's if the model is based on learning, which I'm almost certain it isn't.

There are ways to do it, like constraining the parameters to very specific events: if it misses the return on a serve, it would only adjust its movement when a very similar serve is made from a very similar spot. But this can be viewed as a band-aid, not what we strive for in machine learning. Band-aids add complexity, and complexity adds calculation time.
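The 'constrained to very specific events' idea could look something like this hypothetical sketch. The bucketing scheme, the gains, and all the names are invented for illustration; this is not Omron's implementation.

```python
def serve_key(spin_rps, speed_ms, origin_x_m):
    """Bucket a serve into a coarse category (hypothetical discretisation)."""
    return (round(spin_rps / 10.0), round(speed_ms / 2.0), round(origin_x_m / 0.3))

class BandAidAdjuster:
    """Sketch of the constrained-correction idea: remember a small racket-angle
    offset only for serves very similar to one the robot already missed."""

    def __init__(self, step_deg=1.5):
        self.offsets = {}     # serve bucket -> accumulated angle offset, degrees
        self.step = step_deg

    def angle_offset(self, spin_rps, speed_ms, origin_x_m):
        return self.offsets.get(serve_key(spin_rps, speed_ms, origin_x_m), 0.0)

    def record_miss(self, spin_rps, speed_ms, origin_x_m, ball_went_long):
        # close the face a little if the return went long, open it if netted
        key = serve_key(spin_rps, speed_ms, origin_x_m)
        delta = -self.step if ball_went_long else self.step
        self.offsets[key] = self.offsets.get(key, 0.0) + delta

# After missing one serve long, only near-identical serves get the correction
adj = BandAidAdjuster()
adj.record_miss(50.0, 8.0, 0.1, ball_went_long=True)
```

The cost is exactly the one the post names: every new failure mode adds another special-case entry rather than improving a general model.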

Like I mentioned before, the more I think about it, the more obvious it becomes that their focus is predominantly hardcoded mechatronics, not machine learning. It's really a Deep Blue, not an AlphaZero.
 
I doubt a robot like that will ever be affordable to anyone who isn't rich. A national team could probably afford one but probably won't buy one, because people will be better. Why?

People are more flexible and faster.
I cannot see a TT robot replacing a good chopper.
The machine will require lots of maintenance.
People will be cheaper.

The robot is fast, but Dan was not really trying to hit the ball deceptively or very fast.
Dan was putting on a good show.

There are some comments above that are very close to seeing the future.
Right now the goal is to make a robot that can be the best.
It doesn't need to be the best to have a huge effect on your life.
What if robots become better than 50% of people at a particular task? Then what? What do those 50% of people do? Making a robot that performs better than 90% of people is not hard.

There are two industries I am very familiar with. Both involve scanning raw material and then removing defects or cutting it to get the most out of the raw material. This has been happening since the 1980s. I have seen lots of people replaced by machines that are much faster and more accurate than people. There is no way for people to compete.

You are 100% correct. Machines are faster, more accurate and better than humans; machine learning is smarter, faster and more accurate than humans. We, as humans, should embrace it and use it to make our world a better place.

Right now the machines for this project are more expensive than humans, but not for long. Don't rule out a world-champion-beating robot in the near future, and an 'affordable' perfect training partner shortly after that.
 
I'm not saying it's impossible, but in general we need millions of data points and training iterations to actually reduce the error and optimise the network. That's why receiving one feedback occurrence and optimising based on it is unrealistic. Plus it requires time to actually do so, and that particular data point is also going to affect other mechanics of the robot. That's if the model is based on learning, which I'm almost certain it isn't.

There are ways to do it, like constraining the parameters to very specific events: if it misses the return on a serve, it would only adjust its movement when a very similar serve is made from a very similar spot. But this can be viewed as a band-aid, not what we strive for in machine learning. Band-aids add complexity, and complexity adds calculation time.

Like I mentioned before, the more I think about it, the more obvious it becomes that their focus is predominantly hardcoded mechatronics, not machine learning. It's really a Deep Blue, not an AlphaZero.

'Arm' mechanics are hard-coded for sure; I was just imagining. So where do you think the learning they mentioned comes into play?
 
'Arm' mechanics are hard-coded for sure; I was just imagining. So where do you think the learning they mentioned comes into play?

I think it's the band-aid stuff I mentioned. I can't think of any other way it could do it on the fly. But band-aids come at the cost of computational efficiency. It's like having a different set of rules for every situation, and that number of rules is 'exactly' infinity.

The problem is that whenever a mistake happens, the model adjusts slightly, converging towards the minimum possible error, but the machine won't actually know whether the model has moved in the right direction (towards minimum error) until it sees the same example again. And it won't just need to see it again; it will need to see it thousands more times to 'learn', i.e. to converge to the minimum achievable error. It is unlikely that the machine will make an error and simply correct it straight away. Learning that way is quite difficult with machine learning techniques, because if it makes such a large change in one go it will most likely overcompensate. Their robot is far from this level of autonomy, though.
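The overcompensation point is easy to demonstrate with gradient descent on a one-dimensional toy problem. This is my own example, unrelated to the robot's actual software.

```python
def gradient_descent(lr, start=4.0, steps=20):
    """Minimise f(x) = x**2 starting from x = start, taking `steps` updates
    of size lr * f'(x). A toy stand-in for 'adjusting after each mistake'."""
    x = start
    for _ in range(steps):
        x -= lr * 2.0 * x   # f'(x) = 2x
    return x

# Many small corrections converge towards the minimum at x = 0; steps that
# are too large overshoot to the far side and end up worse off each time
careful = gradient_descent(lr=0.1)
reckless = gradient_descent(lr=1.1)
```

With the small step size the error shrinks every iteration; with the over-large one, each "correction" lands further from the minimum than the point it started from, which is exactly the single-big-fix failure described above.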

A simpler explanation would be this: the term 'AI' is applied very broadly by the Omron guy. It is possible that when a mistake is made, it is assessed via the sensors (incoming and outgoing spin, velocity, trajectory etc.), and if the machine can calculate why the error occurred, it makes suitable adjustments for those situations. The thing is, some would no longer consider this 'machine learning'.

I would also think that, since it is basically hardcoded, any mistake is logged and reviewed by their research group to see whether it can be corrected within the mechanical limitations. It also gives them an idea of where their calculations are going wrong.
 
I'm not saying it's impossible, but in general we need millions of data points and training iterations to actually reduce the error and optimise the network.
It won't take millions. A few hundred should be plenty.

That's why receiving one feedback occurrence and optimising based on it is unrealistic.
One case is not enough.
1,000,000 balls with the same trajectory and spin will not teach the computer much.
100 different balls will teach the computer a lot. Much depends on the input.
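The input-diversity point can be shown with a toy least-squares fit: many copies of one observation pin down nothing, while a few varied observations determine the line exactly. This is my own illustration.

```python
def fit_line(points):
    """Ordinary least squares for y = a*x + b in plain Python.
    Returns (a, b), or None when the slope is undetermined."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    if sxx == 0.0:          # every x identical: the slope cannot be estimated
        return None
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx
    return a, my - a * mx

# A thousand copies of the same ball constrain nothing about the slope...
same_ball = fit_line([(5.0, 2.0)] * 1000)
# ...while three different balls determine the relationship completely
varied_balls = fit_line([(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)])
```

The same principle carries over to any model the robot might fit: what matters is how much of the input space the examples cover, not the raw count.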

Plus it requires time to actually do so, and that particular data point is also going to affect other mechanics of the robot. That's if the model is based on learning, which I'm almost certain it isn't.
There is plenty of time between rallies to update rules.

There are ways to do it, like constraining the parameters to very specific events: if it misses the return on a serve, it would only adjust its movement when a very similar serve is made from a very similar spot. But this can be viewed as a band-aid, not what we strive for in machine learning. Band-aids add complexity, and complexity adds calculation time.
Yes, a more general case is best.

Like I mentioned before, the more I think about it, the more obvious it becomes that their focus is predominantly hardcoded mechatronics, not machine learning. It's really a Deep Blue, not an AlphaZero.
Agree. I think the learning is very simple: if the ball goes too high or long, adjust the impact speed or the angle of the paddle a little. That isn't what I call AI.
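That simple rule, written out as a proportional correction (the gains and the function are made up for illustration):

```python
def adjust_stroke(face_angle_deg, stroke_speed_ms, landing_error_m):
    """The rule above as a proportional correction: positive error means the
    ball landed long/high, so close the face and ease off the stroke a bit.
    Both gains are invented, illustrative values."""
    K_ANGLE = 5.0   # degrees of face closure per metre of overshoot (assumed)
    K_SPEED = 0.8   # m/s of stroke speed shed per metre of overshoot (assumed)
    new_angle = face_angle_deg - K_ANGLE * landing_error_m
    new_speed = stroke_speed_ms - K_SPEED * landing_error_m
    return new_angle, new_speed

# Ball landed 25 cm past the target: close the face and slow down slightly
angle, speed = adjust_stroke(80.0, 10.0, 0.25)
```

A couple of fixed gains and a measured landing error is ordinary feedback control, which is why calling it "AI" is a stretch.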

@purpletiesto, I thought you might like this info.
I recently bought a Dell i9 with a GTX 1080, which has 2560 CUDA cores.
CUDA cores are simple processors that run in parallel. They are used in video processing but can be used for any parallel workload. I downloaded Stockfish. Wow, it is fast. I got crushed.
I am not surprised that the Omron TT robot can do very fast video processing.
I have talked to people who use many Nvidia cards just to take advantage of the CUDA cores for parallel processing.

I have yet to download Leela Chess Zero. Lc0 is an open-source take on AlphaZero; it doesn't use a brute-force tree search like Stockfish or Deep Blue.
There is a YouTube channel, Kingscrusher, that evaluates games between the top chess programs.

I also know someone at Omron who is an expert in motion control. We are actually competitors. I will ask him if he knows anything about the Omron TT robot; I think he would share some knowledge if he does.
 
No need for machine learning to solve physics equations. They're probably using it to model the input from imperfect sensors and to optimise the output within the mechanical limitations of the robot. Completely different problem domains from games like chess or Go.
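One standard way to "model the input from imperfect sensors" is a Kalman filter. Here is a minimal 1D sketch; the noise parameters and the scenario are illustrative assumptions of mine, not anything known about the Omron robot.

```python
def kalman_1d(measurements, process_var=1e-4, meas_var=0.25):
    """Minimal 1D Kalman filter for a nearly constant quantity, e.g. the spin
    estimate for one incoming ball fused from several noisy sensor readings.
    Noise variances are illustrative assumptions."""
    est = measurements[0]
    est_var = 1.0                 # start out uncertain about the estimate
    for z in measurements[1:]:
        est_var += process_var    # predict: uncertainty creeps up over time
        gain = est_var / (est_var + meas_var)
        est += gain * (z - est)   # update: blend in the new reading
        est_var *= 1.0 - gain     # updating shrinks the uncertainty
    return est

# Noisy readings around a true spin of 100 rps settle close to 100
readings = [101.0, 98.5, 100.7, 99.2, 100.4, 99.8, 100.1, 99.6]
smoothed = kalman_1d(readings)
```

Nothing here is "learning" in the neural-network sense: it is classical estimation, which fits the post's point that the hard sensing problems don't require chess-style AI.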
 
Thanks for the info. I have a PhD in deep learning.

I would think that for video processing 8 GB of VRAM wouldn't be enough; I guess you have around 128 GB of RAM, though. My previous servers each had 4x 1080s, but now we're using 4x 1080 Ti servers for the additional 3 GB of VRAM per GPU. I'm not spanning networks over multiple cards, though, just using them to train multiple networks at once. Usually, anyway.
 
No need for machine learning to solve physics equations. They're probably using it to model the input from imperfect sensors and to optimise the output within the mechanical limitations of the robot. Completely different problem domains from games like chess or Go.

Honestly I think he is just using the term "AI" very liberally. I'd be surprised if any machine learning is being used here.
 
Spinny serves are probably the easiest thing for it to handle, because calculating the rotation on the ball is quite easy with the front-on sensors alone, not even counting the ones on the bat.

The most difficult thing is to hit back anything fast and wide, due to the limit on how quickly the arm can move into position.

To beat a world champion... depends on how much they're willing to invest. I think we have the technology to do this already, actually. The frame and movement capability need to be much bigger and the electric motors much faster; the machine learning capabilities to develop and implement actual winning strategies already exist, though. But having a machine accurately move large distances at that speed is probably the most difficult and expensive part. Developing an incredibly good, bordering-on-impossible-to-return serve should be one of the easier parts, but would most likely need specific motor capabilities just for serving.
By your reasoning, it should be much easier to develop a tennis robot, since in tennis the robot would have much more time to move and react (and I believe it could move much faster than a human). Actually, I am curious whether a tennis robot already exists and how well it performs.
 
I'd like to disagree. Moving the racket fast enough is the easiest part; it can move so much quicker than a human. The hardest part imho in this setup is to adjust the racket angle correctly to counter the spin and speed. We do it by adjusting our distance to the table, the force of our stroke, and by counterspinning the ball with a full-swing topspin. The robot is more or less blocking, so the angle is crucial, and a subtle miscalculation can result in the ball flying off the table...
I think you are right. A robot could move much faster than a human. The most difficult part would be learning to calculate the spin and the landing point from the arc of the ball and the motion of its opponent. This feels like very difficult work, since there is so little time in table tennis for a human (or robot) to react. I think it would be much easier to develop a tennis robot. And I believe developing a snooker robot that could beat a human world champion would not be a challenge.
 