Carebot is a BrickPi-based robot that can do all kinds of interesting and useful things. It is intended for a hospital environment. The initial goal was to follow a line with high contrast against the floor, but we figured that wouldn't be easy to deploy in an actual hospital, so we challenged ourselves to come up with other ways to solve the navigation problem:
- Computer vision
- Person follower
- Manual control
- Line follower with obstacle avoidance
All of this has to run on a Raspberry Pi 3; we knew computer vision would be slow and CPU-heavy, but we gave it a try anyway. We also wanted to integrate a voice assistant, since that's very useful in a hospital environment: switching lights has never been easier. The option we picked was the Google Assistant service. As if that wasn't enough, we also wanted facial recognition to raise an alarm when an intruder is detected.
The GitHub repository can be found here.
Computer vision
We used OpenCV to process the images taken by a cheap €2 camera with an integrated microphone. Faces are detected using a Haar cascade, but this turned out to be so slow that we took it out of the main program.
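As a rough illustration, face detection with one of OpenCV's bundled Haar cascades looks something like this; the camera index and downscale factor are our guesses here, not the project's exact settings:

```python
import cv2

# OpenCV ships pretrained Haar cascades; this one detects frontal faces
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # the cheap USB camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Downscaling first helps a lot on a Pi 3, but detection is still slow
    small = cv2.resize(gray, (0, 0), fx=0.5, fy=0.5)
    faces = face_cascade.detectMultiScale(small, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        print("face detected")
```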
The person follower assumes that the person being followed wears a bright red LED at shoe height. The image is filtered for red, and the position of the LED determines how fast each motor should spin.
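A minimal sketch of that idea, assuming an HSV red mask and a simple proportional mapping (the thresholds, gains and helper names are hypothetical, not the project's values):

```python
import cv2

def led_offset(frame):
    """Return the LED's horizontal offset from centre in [-1, 1], or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Bright red wraps around hue 0, so two hue ranges are combined
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:           # no red blob in view
        return None
    cx = m["m10"] / m["m00"]    # centroid x of the red pixels
    return (cx - frame.shape[1] / 2) / (frame.shape[1] / 2)

def motor_speeds(offset, base=40, gain=30):
    # Steer toward the LED: speed up one wheel, slow down the other
    return base + gain * offset, base - gain * offset   # (left, right)
```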
Obstacle avoidance is done by detecting color differences in the image. Since the camera is tilted slightly downward, this works well: feet, legs and walls are all detected reliably.
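One way to implement that (a sketch under our own assumptions, not the project's exact code; the band sizes and thresholds are made up) is to sample the floor colour from the bottom of the frame and flag image columns that deviate from it:

```python
import numpy as np

def obstacle_columns(frame, threshold=40):
    """Flag blocked columns in a BGR frame from OpenCV.

    The bottom rows (closest floor) serve as the colour reference; columns
    in the region ahead that differ too much are marked as obstacles.
    """
    floor = frame[-20:].reshape(-1, 3).mean(axis=0)   # reference floor colour
    band = frame[frame.shape[0] // 2 : -20]           # region ahead of the robot
    diff = np.linalg.norm(band - floor, axis=2)       # per-pixel colour distance
    return (diff > threshold).mean(axis=0) > 0.3      # True where a column is blocked
```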
Line follower
The line is followed using a single color sensor and a tuned PID loop. Crossings are detected with one extra light sensor; when a crossing is detected and Bluetooth is enabled, the user can choose which direction to go.
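The core of such a loop looks roughly like this; `read_color()` and `set_motors()` are hypothetical stand-ins for the BrickPi sensor and motor calls, and the gains are illustrative, not our tuned values:

```python
# read_color() / set_motors(left, right) are hypothetical BrickPi stand-ins
KP, KI, KD = 1.2, 0.01, 4.0   # illustrative gains, not the tuned ones
TARGET = 50                   # reflectance value on the line's edge
BASE_SPEED = 40

integral = last_error = 0.0
while True:
    error = read_color() - TARGET        # how far off the line's edge we are
    integral += error
    derivative = error - last_error
    turn = KP * error + KI * integral + KD * derivative
    set_motors(BASE_SPEED + turn, BASE_SPEED - turn)  # (left, right)
    last_error = error
```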
When an object is detected, either with the camera or the distance sensor, the robot drives around it and continues its navigation.
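An avoidance manoeuvre can be as simple as a timed swerve; this sketch reuses the hypothetical helpers from the PID loop above, and the speeds and timings are made up:

```python
import time

def avoid_obstacle():
    set_motors(60, -60); time.sleep(0.4)   # turn away from the obstacle
    set_motors(50, 50);  time.sleep(1.0)   # drive alongside it
    set_motors(-60, 60); time.sleep(0.4)   # turn back toward the line
    set_motors(30, 30)                     # creep forward...
    while read_color() > TARGET:           # ...until the line is seen again
        pass
    # back on the line: hand control back to the PID loop
```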
Manual control
This is done with an ESP32 with a TFT screen, a joystick and a 433 MHz transmitter. All possible operation modes are shown on the display and can be selected with the joystick. When manual control is selected, the robot is driven with the joystick and the inputs are shown on screen. The camera also actively scans the environment and reports where objects are blocking the way; these are shown to the user on the TFT screen as red stripes.
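On the robot side, the received joystick values just have to be mixed into two motor speeds. A typical differential-drive mix looks like the sketch below; the helper name and value ranges are our own, and the actual packet handling lives in the repository:

```python
def mix(x, y, max_speed=100):
    """Map a joystick position (x, y, both in -1..1) to (left, right) speeds."""
    left = max(-max_speed, min(max_speed, (y + x) * max_speed))
    right = max(-max_speed, min(max_speed, (y - x) * max_speed))
    return left, right

# mix(0, 1) -> (100, 100)    full speed ahead
# mix(1, 0) -> (100, -100)   spin clockwise on the spot
```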
Google Assistant
The robot also has a microphone and a speaker connected to it, which are used by the Google Assistant library. This makes it possible to control the 433 MHz light by voice, and to start automatic navigation, follow-me mode and manual control mode.
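With the (since-deprecated) google-assistant-library for Python, hooking a voice command to an action looks roughly like this; the device model id, credentials path, device-action command and the `toggle_433mhz_light` helper are placeholders of ours:

```python
import json
import google.oauth2.credentials
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

with open("credentials.json") as f:   # OAuth tokens from the device setup flow
    credentials = google.oauth2.credentials.Credentials(token=None, **json.load(f))

with Assistant(credentials, "carebot-model-id") as assistant:
    for event in assistant.start():   # blocks, yielding assistant events
        if event.type == EventType.ON_DEVICE_ACTION:
            for command, params in event.actions:
                if command == "action.devices.commands.OnOff":
                    toggle_433mhz_light(params.get("on"))   # hypothetical helper
```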