Self-Driving PiCar: A Deep-Learning Robotic Car Project
I developed a self-driving R/C car that integrates TensorFlow, Google Coral, and Python on a Raspberry Pi, employing machine learning and sensor fusion for autonomous navigation.
The self-driving car uses a color transformation of the RGB camera feed to pick out the lane by contrast, sets its direction by determining the lane angle, and recognizes speed signs and objects in its path.
Features:
Real-time image analysis at 7 frames/second
Full self-steering capability
Lane Following
Object Recognition
Upcoming Plan:
Solar Power Integration
Throttle Control
Automated stop sign recognition
Objective
My passion for combining different engineering disciplines (system integration, mechanical, and AI) drove me to start the "Self-Driving PiCar" project. The project is a blend of AI programming, comprehensive system integration, and hands-on mechanical engineering. The goal is to build a car that can drive itself, integrating the camera, servos, motors, and software that set the car in motion. The results are quite surprising and amazing to me, as the car can stay inside a lane and steer itself!
Detailed Project Breakdown
Hardware Assembly
For the Self-Driving PiCar, I selected the following key components for their reliability and functionality:
Raspberry Pi 3 Model B+: Chosen for its powerful processing capabilities, this model serves as the brain of the car. It features a 1.4GHz 64-bit quad-core processor, essential for handling the computing demands of autonomous navigation.
SunFounder PiCar-V Kit: This kit forms the main body of the car. It includes motors, wheels, and a frame, providing a solid foundation for the project. The kit was selected for its compatibility with the Raspberry Pi and the ease of assembly.
Google Edge TPU USB Accelerator: An essential addition for enhancing the Pi's capability to run deep learning models. This accelerator allows for real-time inference, crucial for tasks like object detection and decision-making (a short inference sketch follows this component list).
64 GB Micro SD Card: Used for storing the Raspberry Pi’s operating system and all software. The choice of 64 GB offers ample space for software, data, and video recordings for analysis.
18650 Batteries and Charger: These rechargeable batteries power the PiCar, ensuring sufficient energy for prolonged operations.
USB Camera: A critical component for the car's vision, it captures live footage for processing and decision-making.
Miscellaneous Accessories: Including wires, sensors, and connectors necessary for assembling the various components into a cohesive unit.
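To give a sense of how the Edge TPU accelerator is driven from Python, the sketch below runs a compiled detection model with Google's PyCoral library. The model path, image file, and score threshold are placeholder assumptions, not the project's actual inference code.

# Minimal object-detection sketch for the Edge TPU using PyCoral.
# The model and image file names below are placeholders.
import cv2
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

interpreter = make_interpreter('model_edgetpu.tflite')   # hypothetical Edge TPU-compiled model
interpreter.allocate_tensors()

frame = cv2.imread('frame.jpg')                           # one captured camera frame (placeholder)
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)              # models typically expect RGB input
resized = cv2.resize(rgb, common.input_size(interpreter)) # match the model's input resolution
common.set_input(interpreter, resized)

interpreter.invoke()
for obj in detect.get_objects(interpreter, score_threshold=0.4):
    print(obj.id, obj.score, obj.bbox)                    # class id, confidence, bounding box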
Hardware Challenges:
Sensor Calibration: Fine-tuning the USB camera and distance sensors was critical. Initially, the camera's field of view wasn't aligning with the car's trajectory. Through iterative testing and adjustments, I calibrated the camera for optimal alignment and focus, ensuring accurate image capture for processing.
Power Management: The initial setup faced power inconsistencies, leading to system reboots during operation. I resolved this by optimizing the power distribution and employing 18650 batteries, which provided a more stable and enduring power source.
Motor Control Precision: Achieving precise control over the motor speeds was challenging. I utilized the PWM (Pulse Width Modulation) control feature of the Raspberry Pi, enabling finer control over the motors for smooth and responsive navigation (a minimal PWM sketch follows this list).
(More challenges were encountered; this page is still a work in progress.)
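As a rough illustration of the PWM approach mentioned above, here is a minimal sketch using the RPi.GPIO library. The pin number and frequency are placeholder values and do not reflect the PiCar-V's actual wiring or motor driver setup.

# Minimal PWM motor-speed sketch with RPi.GPIO; pin and frequency are illustrative only.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18                    # placeholder BCM pin, depends on the actual wiring
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

pwm = GPIO.PWM(MOTOR_PIN, 1000)   # 1 kHz carrier frequency (illustrative)
pwm.start(0)                      # start with the motor stopped

# Ramp the duty cycle up gradually for smooth acceleration.
for duty in range(0, 101, 10):
    pwm.ChangeDutyCycle(duty)
    time.sleep(0.2)

pwm.stop()
GPIO.cleanup()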
Software Setup
In the Self-Driving PiCar project, the software plays a crucial role in integrating the hardware components and enabling the car's autonomous capabilities. Here's an overview of the software tools I used, the challenges encountered, and how I solved those problems:
Python: The primary programming language used for writing the car's software, given its extensive support for machine learning and image processing.
OpenCV: A powerful open-source computer vision library, crucial for processing and analyzing the images captured by the car's camera.
TensorFlow: Google's open-source platform for machine learning, used for developing and running the deep learning models necessary for autonomous navigation.
Raspberry Pi OS: The operating system running on the Raspberry Pi, providing the necessary infrastructure for running the software.
Challenges Encountered:
1. Django Application Error: Encountered a TypeError: a bytes-like object is required, not 'str' in a Django web application running on the Raspberry Pi.
Solution: The error was traced to a line in the views.py file of the Django application that was attempting to perform string operations on a bytes-like object. Decoding the bytes-like object to a string before performing the string operations resolved the issue.
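The pattern behind the fix is simply decoding bytes to str before applying string methods. A minimal illustration follows; the function and variable names are placeholders, not the actual views.py code.

# Illustrative bytes-vs-str fix; names are placeholders, not the project's actual code.
def parse_command(raw_bytes):
    # raw_bytes arrives as a bytes object (e.g. a request body);
    # decode it to str before applying string operations.
    text = raw_bytes.decode('utf-8')
    return text.strip().lower()

print(parse_command(b'  STOP \n'))  # -> 'stop'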
2. Color Detection Adjustment in OpenCV: Needed to adjust the color detection range in the OpenCV code to detect the color purple instead of blue.
Solution: Converted the HSV values for purple to OpenCV's scale and updated the lower and upper bounds in the code accordingly. This adjustment allowed the software to accurately detect purple within the target range.
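As a rough sketch of that adjustment, the snippet below thresholds a frame for a purple hue range with OpenCV; the bounds shown are placeholder values and would need tuning to the actual lane color and lighting.

# Illustrative purple color thresholding in OpenCV; bounds are placeholders.
import cv2
import numpy as np

frame = cv2.imread('frame.jpg')                      # one captured camera frame (placeholder)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)         # OpenCV hue range is 0-179

# Approximate purple range on OpenCV's HSV scale (tune to the track and lighting).
lower_purple = np.array([125, 50, 50])
upper_purple = np.array([155, 255, 255])

mask = cv2.inRange(hsv, lower_purple, upper_purple)  # white where the frame is purple
cv2.imshow('purple mask', mask)
cv2.waitKey(0)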
3. Integration of Line Segmentation in OpenCV: Encountered a challenge in integrating the detect_line_segments function into the existing detect_edges.py script to display line segments in a new window.
Solution: The detect_line_segments function was integrated into the detect_edges.py script by modifying the main loop to call the function and visualize its results. This required creating a new image to draw the line segments on and updating the display logic to show the detected segments in a separate window.
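One common way to implement such a detect_line_segments function is OpenCV's probabilistic Hough transform. The minimal sketch below illustrates the idea; the parameters are placeholder values and not necessarily the ones used in detect_edges.py.

# Illustrative line-segment detection via cv2.HoughLinesP; parameters are placeholders.
import cv2
import numpy as np

def detect_line_segments(edges):
    # Probabilistic Hough transform on a Canny edge image.
    return cv2.HoughLinesP(edges,
                           rho=1,                # distance resolution in pixels
                           theta=np.pi / 180,    # angular resolution in radians
                           threshold=30,         # minimum votes to accept a line
                           minLineLength=20,
                           maxLineGap=10)

frame = cv2.imread('frame.jpg')                  # placeholder frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)
segments = detect_line_segments(edges)
print(0 if segments is None else len(segments), 'line segments found')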
4. Superimposing Line Segments onto Original Frame: Required a method to overlay the detected line segments onto the original video frame, but faced difficulties in implementation.
Solution: Modified the script to create a copy of the original video frame and then drew the detected line segments onto this copy. This allowed for displaying the original video feed with the line segments superimposed, providing a clear visualization of the detection results.
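A compact sketch of that overlay step, assuming segments in the (x1, y1, x2, y2) format returned by cv2.HoughLinesP; the color, line width, and dummy frame are illustrative only.

# Illustrative overlay of detected segments onto a copy of the original frame.
import cv2
import numpy as np

def overlay_line_segments(frame, segments):
    # Draw on a copy so the original frame stays untouched.
    annotated = frame.copy()
    if segments is not None:
        for segment in segments:
            x1, y1, x2, y2 = segment[0]
            cv2.line(annotated, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    return annotated

# Example usage with a dummy frame and one hand-made segment.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
segments = np.array([[[10, 200, 300, 40]]])
cv2.imshow('lane segments', overlay_line_segments(frame, segments))
cv2.waitKey(0)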
5. Network Connectivity Issue: Encountered difficulties in establishing a connection to the Raspberry Pi over the network. Attempts to ping the Raspberry Pi's IP address resulted in 'Request timed out' and 'Destination host unreachable' errors, indicating possible network configuration issues.
Solution: Verified the Raspberry Pi's IP address, checked network configurations, and ensured both devices were on the same network. Adjustments to router settings and a system restart were necessary to establish a stable connection.
Each of these challenges required a pragmatic approach to troubleshooting and problem-solving, demonstrating adaptability and technical proficiency in handling diverse software and networking issues.
Project Impact and Future Enhancements
The Self-Driving PiCar project has been a significant milestone in my journey as a rail system engineer. It has not only reinforced my technical skills but also provided valuable insights into the integration of mechanical systems and AI. This project is a testament to my potential in contributing to advanced engineering projects in the future.
Future enhancements include:
Solar Power Integration: Experiment with solar panels as a sustainable energy source to extend the PiCar's operational autonomy and reduce dependency on conventional power sources.
Throttle Control Optimization: Develop a more refined throttle control system for smoother acceleration and deceleration, improving the overall driving experience and safety.
Automated Stop Sign Recognition: Integrate an AI-based system to recognize stop signs, enabling the PiCar to autonomously halt at intersections, thereby increasing its adherence to traffic rules.