AI. ML. AR. VR. In Your Pocket

Reference material for the GDG SF Dec 2020 Meetup

YLabZ
6 min read · Dec 25, 2023

Is there anything your smartphone can’t do?

My mobile coding/presentation spot … Can you guess where?

Description

Let’s end this crazy year with an entertaining & educational discussion about the power of today’s smartphones. We will look at cool projects being done in Artificial Intelligence (AI) and Augmented Reality (AR), with a review of the technology that makes them work.

Smartphones are changing the world …

Part 1 — AR

ARCore is Real to the Core!

We start with what Augmented Reality is and why it’s considered the next big thing in tech by exploring the history (Historic Tango Development), followed by some mind-bending demos and a review of the latest camera advancements (Samsung ToF [Time of Flight] & iPhone LiDAR). After building a solid understanding of the what and the why, we look into the how of building a native Android AR app using Android Studio with Sceneform / Kotlin.

Part 2 — AI/ML

Self Driving Car with Pixel 4’s “Neural Core” Edge TPU & TFLite

Wait … What??? Google used an Android phone to build a self-driving car! Yes, in this section we will review how Google used a Pixel phone to build a “smartphone car” that can drive completely autonomously, using its camera and sensors to detect and understand signals from the world around it (sense lanes, avoid collisions, and read road traffic signs). https://blog.tensorflow.org/2020/07/p...

We will review the core concepts that were instrumental in building this amazing demo. This talk is for anyone interested in AR/AI tech, not just mobile developers.

Meetup

Video

Reference Material

Get Your AI in Gear with a JetBot AI Robot Kit
Driven by AI. Powered by NVIDIA® Jetson Nano™.

End to End Learning for Self-Driving Cars

https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd0013

Lane-keeping — an LSTM that uses multiple previous frames.

Paper — NVIDIA Corporation, Holmdel, NJ 07735.

Car sensors needed to self-drive:

MIT Self Driving Car Project

Training the ML model

Diagram of Learning System

Training data contains single images sampled from the video, paired with the corresponding steering command (1/r). Training with data from only the human driver is not sufficient. The network must learn how to recover from mistakes. Otherwise the car will slowly drift off the road. The training data is therefore augmented with additional images that show the car in different shifts from the center of the lane and rotations from the direction of the road.
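
A minimal sketch of that augmentation idea in Python, assuming a plain horizontal image shift paired with a made-up per-pixel steering correction (the constant and the sign convention below are assumptions, not values from the paper):

import numpy as np

# Assumed steering correction per pixel of horizontal shift (illustrative only).
STEER_PER_PIXEL = 0.004

def shift_and_relabel(image, steering, shift_px):
    """Shift the frame sideways and nudge the steering label toward recovery."""
    shifted = np.roll(image, shift_px, axis=1)  # shift columns (width axis)
    if shift_px > 0:
        shifted[:, :shift_px] = 0   # blank the wrapped-around left edge
    elif shift_px < 0:
        shifted[:, shift_px:] = 0   # blank the wrapped-around right edge
    # Sign convention assumed: a rightward shift needs a rightward correction.
    return shifted, steering + STEER_PER_PIXEL * shift_px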

How the system was trained

The off-center cameras give additional viewpoints (depth). The desired turn and the resulting error are backpropagated to the CNN weights and biases.

27 million connections and 250 thousand parameters.

Features

The first layer of the network performs image normalization.

The convolutional layers were designed to perform feature extraction and were chosen empirically through a series of experiments that varied layer configurations.

We use strided convolutions in the first three convolutional layers with a 2×2 stride and a 5×5 kernel and a non-strided convolution with a 3×3 kernel size in the last two convolutional layers.
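
As a rough illustration, here is a Keras sketch of that layer stack. The input size and filter counts are assumed from the published PilotNet description, not the exact model used in the demo:

import tensorflow as tf
from tensorflow.keras import layers, models

def build_pilotnet(input_shape=(66, 200, 3)):
    """Sketch of the NVIDIA-style steering CNN described above."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        # First layer: hard-coded image normalization
        layers.Lambda(lambda x: x / 127.5 - 1.0),
        # Three strided 5x5 convolutions (2x2 stride) for feature extraction
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        # Two non-strided 3x3 convolutions
        layers.Conv2D(64, 3, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        # Fully connected layers that act mostly as the steering controller
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),  # steering command (1/r)
    ])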

With end-to-end training it is not possible to make a clean break between which parts of the network function primarily as feature extractor and which serve as controller.

The collected driving data is labeled with:

  • road type & weather condition
  • driver’s activity (staying in a lane, switching lanes, turning, and so forth …)

The video is dynamically shifted to follow the CNN’s commands. This lets the CNN believe it is driving and allows its commands to be validated.

Basic Idea

Instead of asking “is this an image of a cat or a dog?”, you train on “should the car go left or right?” for each image shown, and minimize the mean squared error between where the human driver places the car and where the ML system places the car.
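
A hedged training sketch using the build_pilotnet model from the architecture sketch above; the arrays below are dummy placeholders standing in for real camera frames and human steering labels:

import numpy as np

# Dummy data with the assumed input shape; replace with real frames/labels.
images = np.zeros((32, 66, 200, 3), dtype=np.float32)
steering = np.zeros((32, 1), dtype=np.float32)

model = build_pilotnet()                     # from the architecture sketch above
model.compile(optimizer="adam", loss="mse")  # mean squared error on steering
model.fit(images, steering, epochs=1, validation_split=0.25)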

Evaluation — 3 hours / 100 miles of simulated driving.

Using a convolutional network that determined the “features” of a road, the ML system can find a road and follow it. It learned this by itself:

We have empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control. A small amount of training data from less than a hundred hours of driving was sufficient to train the car to operate in diverse conditions, on highways, local and residential roads in sunny, cloudy, and rainy conditions. The CNN is able to learn meaningful road features from a very sparse training signal (steering alone). The system learns for example to detect the outline of a road without the need of explicit labels during training.

Data Collection

Welcome to Udacity’s Self-Driving Car Simulator

Udacity’s method for “lane-keeping” data collection was the same concept used to train the Pixel phone to understand lane detection.

The training data is very important. The first course did not have enough turns for the ML model to understand lane detection.

More turns were needed to train the car.

When training the ML model on the Pixel phone, the full image confused the model until they used only the bottom 1/4 of the image.

Only the bottom 25% of the image is used.
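
A small sketch of that cropping step, assuming the frame is a standard height × width × channels array:

import numpy as np

def crop_bottom_quarter(frame: np.ndarray) -> np.ndarray:
    """Keep only the bottom 25% of the frame before feeding the model."""
    h = frame.shape[0]
    return frame[int(h * 0.75):, :, :]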

The Neural Core was fast enough to process the images to:

  • Steer the car
  • Read signs
  • Plan the route
  • Avoid obstacles in real time

Hand-labeling the signs

We use object detection for two purposes. One is for localization. Each car needs to know where it is in the city by detecting objects in its environment (in this case, we detect the traffic signs in the city). The other purpose is to detect other cars, so they won’t bump into each other.
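
A minimal TFLite inference sketch for that kind of on-device detector; the model file name is hypothetical, and the output tensor layout depends on the actual detection model used:

import numpy as np
import tensorflow as tf

# Hypothetical model path; the talk's actual sign/car detector is not published here.
interpreter = tf.lite.Interpreter(model_path="sign_detector.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def detect(frame: np.ndarray):
    """Run one camera frame through the detector and return its raw output tensors."""
    # frame must already match the model's expected shape and dtype.
    interpreter.set_tensor(input_details[0]["index"], frame[np.newaxis, ...])
    interpreter.invoke()
    return [interpreter.get_tensor(d["index"]) for d in output_details]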

In the real world, Google Maps with GPS would solve this. Newer phones have LiDAR and radar, which can be used to help the car find its way around.
