Self-Driving LEGO® Car Concept

AI/ML & GPS Navigation Powered by Your Smartphone

Siamak (Ash) Ashrafi
9 min read · May 1, 2024
Concept image of LEGO® blocks with a smartphone.

Build a Mini Autonomous Vehicle

This project explores the exciting concept of creating a self-driving LEGO® car with your smartphone acting as the brains of the operation. The car itself is constructed from LEGO® bricks, and the system is built around three ideas:

  • Smartphone as the AI/ML Engine: Leveraging the processing power of your phone, you’ll be able to run AI and machine learning algorithms to enable the car’s autonomous driving capabilities.
  • Unique Leveraging of Mobile Processing: By utilizing your phone’s front and back cameras, AI/ML acceleration, LiDAR, accelerometer, and GPS, you eliminate the need for the expensive, dedicated hardware typically found in self-driving car projects.
  • Rust as the Control Center: Rust, a powerful and memory-safe programming language, will handle communication between the phone and the car’s hardware components, translating decisions into actions.
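
To make the division of labor concrete, here is a minimal sketch of the sense → infer → act loop the phone would run. The article proposes Rust for the real control layer; this Python version only illustrates the loop structure, and every function name in it is a hypothetical placeholder.

```python
import time

def read_camera_frame():
    # Placeholder for grabbing a frame from the phone's camera API.
    return None

def predict_steering(frame):
    # Placeholder for the on-device ML model's steering prediction.
    return 0.0

def send_motor_command(steering, throttle):
    # Placeholder for the link (e.g. Bluetooth or USB serial) to the car's motors.
    print(f"steer={steering:+.2f} throttle={throttle:.2f}")

while True:
    frame = read_camera_frame()            # sense
    steering = predict_steering(frame)     # infer
    send_motor_command(steering, 0.3)      # act
    time.sleep(0.05)                       # roughly a 20 Hz control loop
```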

Affordable Construction:

LEGO® bricks are a perfect fit for building the car’s body and chassis, thanks to their versatility and accessibility. The estimated retail cost of the entire project falls comfortably under $250, making it ideal for families and classrooms.

Unleashing the Future of Play and Learning!

Please see the end of the article for a revolutionary learning experience that empowers children to build, code, and drive the future!

Resources

Background Learning

Learn the autonomous systems engineering skills you need to start or advance a career building self-driving cars and trucks, including Python, C++, ROS, Kalman filters, and more.

Udacity Self Driving Course

An overview of building a self-driving car in 30 minutes.

Pixelopolis

The self-driving LEGO® car concept project is based on the work from Google Pixelopolis.

Pixelopolis is an interactive installation that showcases self-driving miniature cars powered by TensorFlow Lite. Each car is outfitted with its own Pixel phone, which uses its camera to detect and understand signals from the world around it. In order to sense lanes, avoid collisions and read traffic signs, the phone uses machine learning running on the Pixel Neural Core, which contains a version of an Edge TPU.
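
As a rough sketch of what each phone does on every frame, here is the TensorFlow Lite Python interpreter loading a lane-following model and producing a steering output. The model file name and tensor shapes are assumptions; on the phone itself this runs through the Android TFLite runtime and the Neural Core rather than Python.

```python
import numpy as np
import tensorflow as tf

# Load a (hypothetical) lane-following TFLite model.
interpreter = tf.lite.Interpreter(model_path="lane_follower.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# Stand-in for a camera frame with whatever shape the model expects.
frame = np.zeros(input_info["shape"], dtype=input_info["dtype"])
interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()

steering = interpreter.get_tensor(output_info["index"])
print("predicted steering command:", steering)
```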

Pixelopolis in Action

Detailed Slides: https://docs.google.com/presentation/d/13_QPVxFPOVQp0Hk8_ldFVhmS1qv5FWSPtU0HBmPsZ0g/edit?usp=sharing

But we replace the 3D printed parts

with a LEGO® car …

And the Pixelopolis City …

Pixelopolis, but with LEGO® bricks …

with a LEGO® city …

~~~

OpenBot

An excellent open-source project, and a perfect fit for ours.

OpenBot is an open-source framework that turns a smartphone into the brain of a robot. It leverages the processing power, sensors, and communication capabilities of smartphones to control robots. With OpenBot, users can control robots remotely via Bluetooth controllers, smartphones, or even computers, and the data collected can be used to train the robot to navigate autonomously. OpenBot is open-source and relies on a community to propose new features and designs.

Our initiative at the OpenBot Foundation transcends the boundaries of technology, focusing on expanding outreach and making advanced robotics education accessible to all. By repurposing a modest $50 wheeled robot with the ubiquitous smartphone as its brain, we are not just creating affordable robotics; we are opening doors to innovation, learning, and discovery for individuals across the globe.

Amazing video of the unbelievable features of OpenBot!

Training the ML model

Diagram of Learning System

Training data contains single images sampled from the video, paired with the corresponding steering command (1/r). Training with data from only the human driver is not sufficient. The network must learn how to recover from mistakes. Otherwise the car will slowly drift off the road. The training data is therefore augmented with additional images that show the car in different shifts from the center of the lane and rotations from the direction of the road.
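
A rough illustration of that augmentation idea is below: shift each frame sideways and correct its steering label so the network learns to recover toward the lane center. The shift range and steering-correction factor are assumptions made for the sketch, not values from the paper.

```python
import numpy as np

def augment_frame(image, steering, max_shift_px=40, steer_per_px=0.004):
    """Shift a frame horizontally and correct its steering label."""
    shift = np.random.randint(-max_shift_px, max_shift_px + 1)
    shifted = np.roll(image, shift, axis=1)
    # Blank the wrapped-around columns so no fake pixels leak in.
    if shift > 0:
        shifted[:, :shift] = 0
    elif shift < 0:
        shifted[:, shift:] = 0
    # A shift away from center implies a corrective steering command
    # (the sign convention here is only illustrative).
    corrected = steering - shift * steer_per_px
    return shifted, corrected
```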

How the system was trained:

The off-center cameras provide shifted viewpoints of the road. The error between the desired and predicted steering command is back-propagated to update the CNN’s weights and biases.

The network has about 27 million connections and 250 thousand parameters.

Features

The first layer of the network performs image normalization.

The convolutional layers were designed to perform feature extraction and were chosen empirically through a series of experiments that varied layer configurations.

We use strided convolutions in the first three convolutional layers with a 2×2 stride and a 5×5 kernel and a non-strided convolution with a 3×3 kernel size in the last two convolutional layers.
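
Putting those layer choices together, here is a minimal PilotNet-style sketch in Keras. The input size (66×200) and filter counts follow NVIDIA’s published architecture, but this is an illustration of the description above, not the project’s actual code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pilotnet():
    return models.Sequential([
        tf.keras.Input(shape=(66, 200, 3)),
        # First layer: hard-coded image normalization (not learned).
        layers.Rescaling(1.0 / 127.5, offset=-1.0),
        # Three strided 5x5 convolutions (2x2 stride) for feature extraction.
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        # Two non-strided 3x3 convolutions.
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        # Fully connected layers that act mostly as the controller.
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        # Single output: the steering command (1/r).
        layers.Dense(1),
    ])

model = build_pilotnet()
model.summary()
```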

There is no clean break between which parts of the network function primarily as the feature extractor and which serve as the controller. The collected training data is also labeled with:

  • road type & weather condition
  • driver’s activity (staying in a lane, switching lanes, turning, and so forth …)

During evaluation, the recorded video is dynamically shifted to follow the CNN’s steering commands. This lets the CNN behave as if it were actually driving, so its commands can be validated in a closed loop.
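
A toy version of that closed-loop trick is sketched below: each recorded frame is shifted by the virtual car’s accumulated lateral offset before being fed back to the network. The pixel-per-steering-unit gain is an assumption made up for the sketch.

```python
import numpy as np

def closed_loop_drift(frames, human_steering, predict, px_per_unit_steer=30.0):
    """Feed laterally shifted frames back to the network and track the drift."""
    offset_px = 0.0
    for frame, human_cmd in zip(frames, human_steering):
        shifted = np.roll(frame, int(round(offset_px)), axis=1)
        predicted = float(predict(shifted))
        # The virtual car drifts by how far the network's command
        # deviates from what the human driver actually did.
        offset_px += (predicted - human_cmd) * px_per_unit_steer
    return offset_px
```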

Basic Idea

Instead of asking whether an image shows a cat or a dog, you train on whether the car should steer left or right for each frame, and minimize the mean squared error between the steering command the network predicts and the command the human driver actually gave.
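
In Keras terms that is just a regression on the steering value. A minimal training sketch follows; the arrays are random placeholders standing in for the collected (frame, steering) pairs, and build_pilotnet is the architecture sketch from the previous section.

```python
import numpy as np
import tensorflow as tf

# Placeholder data standing in for real (frame, steering command) pairs.
images = np.random.randint(0, 256, size=(256, 66, 200, 3)).astype("float32")
steering = np.random.uniform(-1.0, 1.0, size=(256, 1)).astype("float32")

model = build_pilotnet()  # the PilotNet-style sketch shown earlier
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="mse")  # mean squared error on the steering command
model.fit(images, steering, validation_split=0.2, epochs=5, batch_size=32)
```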

Evaluation: roughly 3 hours, or about 100 miles, of simulated driving.

Using a convolutional network that learns the “features” of a road, the ML system can find a road and follow it, all learned by itself:

We have empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control. A small amount of training data from less than a hundred hours of driving was sufficient to train the car to operate in diverse conditions, on highways, local and residential roads in sunny, cloudy, and rainy conditions. The CNN is able to learn meaningful road features from a very sparse training signal (steering alone). The system learns for example to detect the outline of a road without the need of explicit labels during training.

Data Collection

Welcome to Udacity’s Self-Driving Car Simulator

Udacity’s method for “lane-keeping” data collection was the same concept used to train the Pixel Phone to understand lane detection.

The training data is very important. The first course did not have enough turns for the ML model to understand lane detection.

More turns were needed to train the car.

When training the ML model on the Pixel phone, the full camera image confused the model until only the bottom quarter of the frame was used:

Only use the bottom 25% of the image.
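
That preprocessing step is easy to express directly; here is a small sketch of cropping a frame to its bottom quarter before it is fed to the model (the function name is made up for the example).

```python
import numpy as np

def crop_to_bottom_quarter(frame):
    """frame is an H x W x C image array; keep only its bottom 25%,
    where the lane markings closest to the car appear."""
    height = frame.shape[0]
    return frame[int(height * 0.75):, :, :]

# Example: a 480x640 RGB frame becomes 120x640.
print(crop_to_bottom_quarter(np.zeros((480, 640, 3))).shape)
```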

The Neural Core was fast enough to process the images to:

  • Steer the car
  • Read signs
  • Plan the route
  • Avoid obstacles in real time

Hand label the signs

We use object detection for two purposes. One is for localization. Each car needs to know where it is in the city by detecting objects in its environment (in this case, we detect the traffic signs in the city). The other purpose is to detect other cars, so they won’t bump into each other.
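
As a toy illustration of the localization half of that idea, the sketch below maps detected sign labels to known positions in the miniature city and picks the most confident match as the car’s position fix. The sign names, positions, and detection format are all assumptions for the example.

```python
# Hypothetical landmark map: sign label -> (x, y) position in the city grid.
SIGN_LANDMARKS = {
    "stop_sign_3": (2, 5),
    "speed_limit_7": (4, 1),
}

def localize(detections):
    """Use the most confident known sign detection as the car's position fix."""
    known = [d for d in detections if d["label"] in SIGN_LANDMARKS]
    if not known:
        return None  # no recognizable landmark in view
    best = max(known, key=lambda d: d["score"])
    return SIGN_LANDMARKS[best["label"]]

print(localize([{"label": "stop_sign_3", "score": 0.91}]))  # -> (2, 5)
```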

In the real world, Google Maps with GPS would solve this. Newer phones have LiDAR and radar sensors which can be used to help the car find its way around.

The project also draws on NVIDIA’s self-driving resources.

NVIDIA Self Driving Resources

Autonomous Vehicles are Born in the Data Center
NVIDIA’s infrastructure for autonomous vehicles encompasses the complete data center hardware, software, and workflows needed to develop safe autonomous vehicles — from neural network development and training to testing and validation in simulation.

Accelerating the Future of AI-Defined Vehicles

DRIVE Sim Scenario Reconstruction, Powered by Omniverse

New AI-based tools for NVIDIA DRIVE Sim accurately replicate driving scenarios. These tools are enabled by breakthroughs from NVIDIA Research that leverage NVIDIA’s core technologies, including NVIDIA Omniverse and DRIVE Map.

Foretellix leverages NVIDIA Omniverse Cloud APIs to generate high-fidelity sensor simulations for autonomous vehicle development.

Software Repo

Detailed ML and Rust controller code coming soon …

TF Code / Rust Controller Code ... 

Reference Material

This could be a very interesting new product for LEGO®!

LEGO® Self-Driving Cars

Unleashing the Future of Play and Learning!

Imagine a world where children build, code, and drive the future — all with the power of LEGO® bricks!

This innovative concept merges classic LEGO® construction with cutting-edge self-driving car technology, igniting a passion for STEM learning in a fun and accessible way.

Here’s what sets this project apart:

  • Transformative Play: Goes beyond traditional LEGO® sets, introducing kids to AI, ML, and autonomous vehicles — a rapidly growing field.
  • Affordable Learning: Utilizes readily available LEGO® bricks and smartphones, making it a cost-effective way to introduce STEM concepts.
  • Empowering Open-Source Learning: This project goes beyond following instructions. Kids can delve deeper by exploring the open-source code, gaining a richer understanding of how the car functions. This fosters tinkering, problem-solving, and coding skills — valuable assets for future generations.

Two Ways to Play:

Download a User-Friendly App: A beginner-friendly app provides a quick and easy way to get started, perfect for immediate play. No coding required!

Build Their Own Code: For more advanced learners, the open-source code allows them to customize the car’s behavior, experiment with different algorithms, and unlock the full potential of self-driving technology.

This approach caters to various learning styles and skill levels, making it an inclusive and engaging learning experience for all.

Benefits for LEGO®:

  • Early Engagement: Positions LEGO® as a leader in introducing children to future technologies, fostering brand loyalty for years to come.
  • Expanded Play Value: Enhances the LEGO® experience, offering a whole new dimension of play and learning.
  • Educational Appeal: Appeals to parents and educators seeking engaging STEM learning tools for children.
  • Product Diversification: Opens doors for a new product line of LEGO® sets specifically designed for this project.

Building on Existing Innovation:

This project draws inspiration from Google’s Pixelopolis, showcasing self-driving miniature cars powered by smartphones. However, this concept replaces 3D printed parts with the versatility and familiarity of LEGO® bricks.

With this, we can create a revolutionary learning experience that empowers children to build, code, and drive the future!

~Ash
