This is not exactly robotics, but I thought this project might interest a few folks on the forum (especially during the Pi Day Sale):
This project results from a challenge my son gave me while I was teaching him the basics of computer programming by making a simple text-based Rock-Paper-Scissors game in Python. At the time, I was starting to experiment with computer vision using a Raspberry Pi and an old USB webcam, so my son naively asked me:
“Could you make a Rock-Paper-Scissors game that uses the camera to detect hand gestures?”
I accepted the challenge, and about a year and a lot of learning later, I had a functional game.
Overview of the game
The game uses a Raspberry Pi computer and a Raspberry Pi camera mounted on a 3D-printed support with LED strips to achieve consistent lighting.
The pictures taken by the camera are processed and fed to an image classifier that determines whether the hand gesture corresponds to “Rock”, “Paper”, or “Scissors”.
The image classifier uses a Support Vector Machine (SVM), a class of machine learning algorithm. The classifier was first “trained” on a bank of labeled images of the “Rock”, “Paper”, and “Scissors” gestures captured with the Raspberry Pi camera.
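To give an idea of what that training step looks like, here is a minimal sketch using scikit-learn's SVM classifier. The source doesn't specify the library, feature representation, or image size, so the flattened 32x32 grayscale features and the random stand-in data below are assumptions for illustration only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for the bank of preprocessed gesture images:
# 60 samples of 32x32 grayscale pixels, flattened to 1024 features each.
# (In the real project these would come from the Raspberry Pi camera.)
X = rng.random((60, 32 * 32))
y = np.repeat(["rock", "paper", "scissors"], 20)  # one label per image

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = SVC(kernel="linear")   # the Support Vector Machine classifier
clf.fit(X_train, y_train)    # "training" on the labeled image bank

# After training, the classifier can label a new image:
print(clf.predict(X_test[:1]))
```

With real images, the random arrays would be replaced by preprocessed camera captures, but the fit/predict structure stays the same.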
How it works
The image below shows the processing pipeline for training the image classifier (top portion) and for predicting the gesture in new images captured by the camera during gameplay (bottom portion).
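The prediction side of that pipeline can be sketched as a capture-preprocess-classify chain. The specific preprocessing steps below (grayscale, naive downsampling to 32x32, flatten) and the stand-in camera frame and classifier are assumptions, not the project's exact code:

```python
import numpy as np

def preprocess(frame):
    """Reduce a captured RGB frame to the feature vector the classifier expects."""
    gray = frame.mean(axis=2)  # crude grayscale conversion
    # Naive downsample to 32x32 by striding (real code would resize properly).
    small = gray[::frame.shape[0] // 32, ::frame.shape[1] // 32][:32, :32]
    return small.flatten() / 255.0  # flatten and normalize to [0, 1]

def predict_gesture(frame, classifier):
    """Run one camera frame through the prediction pipeline."""
    features = preprocess(frame).reshape(1, -1)  # one sample, many features
    return classifier.predict(features)[0]       # "rock", "paper", or "scissors"

# Example with a fake 480x640 camera frame and a trivial stand-in classifier.
class AlwaysRock:
    def predict(self, X):
        return ["rock"] * len(X)

frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(predict_gesture(frame, AlwaysRock()))  # prints: rock
```

During the game, the same `preprocess` step used at training time must be applied to each new frame so the classifier sees features in the format it was trained on.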