Upcoming Events:

IGVC

The IGVC committee is currently on track to compete at the 26th annual Intelligent Ground Vehicle Competition, hosted by Oakland University beginning June 1st, 2018.

This competition tasks students with building a vehicle that can autonomously navigate a moderately sized obstacle course while staying within painted white lanes and maneuvering around a variety of construction barrels and simulated potholes.

Innovations

Building on the previous year's robot, we decided it would be best to rewrite our software entirely this time around. Using ROS (Robot Operating System), an open-source meta-operating system, we have already implemented nodes for our VectorNav VN-200 IMU, Jet3 motor controllers, 1080p digital cameras, and wireless Xbox controller.

What exactly is ROS?

ROS provides a versatile framework built around nodes. Each node can subscribe to certain topics, publish to others, or do both. Topics are named pipelines through which information travels, and the data sent through these pipes are messages. Messages can carry anything from float values to strings. The following infographic shows a simple network of nodes communicating with one another through the ROS framework.

ROS Node Diagram

Retrieved from MathWorks.
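For a concrete picture, here is a minimal sketch of a publisher node written with rospy, ROS's Python client library. The node name, topic name, and Float64 message type are illustrative choices, not our actual robot configuration.

```python
#!/usr/bin/env python
# Minimal rospy publisher sketch. The node/topic names and the
# Float64 message type are illustrative, not our actual setup.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node('heading_publisher')
    pub = rospy.Publisher('heading', Float64, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float64(0.0))  # placeholder heading value
        rate.sleep()

if __name__ == '__main__':
    try:
        main()
    except rospy.ROSInterruptException:
        pass
```

Any other node can then receive these values with rospy.Subscriber('heading', Float64, callback), where the callback runs once per incoming message.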

Sensor Fusion

Currently, we are using a complementary filter to fuse raw IMU data. We eventually expect a rudimentary Kalman filter to replace it; however, the complementary filter appears to be working just fine for now. Below is a comparison of a Kalman filter (green), a complementary filter (black), and a second-order complementary filter (yellow). As you can see, they all appear to perform within error of one another.

Kalman Filter Sample Graph

Retrieved from Robottini.
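For the curious, the idea behind a first-order complementary filter fits in a few lines. The sketch below fuses a gyro rate with an accelerometer-derived angle for a single axis; the blend coefficient of 0.98 is a common illustrative choice, not our tuned value.

```python
import math

def complementary_update(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One step of a first-order complementary filter for one axis.

    The gyro term tracks fast motion but drifts over time; the
    accelerometer term is noisy but drift-free, so it slowly pulls
    the estimate back. alpha = 0.98 is illustrative, not tuned.
    """
    gyro_angle = angle + gyro_rate * dt           # integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)    # gravity-based estimate
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```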

Lane Detection

Our cameras, on the other hand, take in visual information and publish it to the appropriate topic.

Car Lane Picture

Retrieved from car-finding-lane-lines on GitHub.

Using this information, we first convert the image from the RGB color space to the HSV and HSL color spaces, as these make it easier to detect white, the color of our lanes.

Car lanes in HSV color space Car lanes in HSL color space

Above: HSV and HSL color spaces, respectively.
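In OpenCV, which labels the HSL space as "HLS", this step might look like the following sketch; the file name and threshold bounds are illustrative placeholders, not our tuned values.

```python
import cv2
import numpy as np

# Hypothetical input frame; OpenCV loads images as BGR by default.
frame = cv2.imread('lane_frame.jpg')

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
hls = cv2.cvtColor(frame, cv2.COLOR_BGR2HLS)  # OpenCV calls HSL "HLS"

# White pixels have low saturation and high value/lightness.
# These bounds are illustrative starting points only.
white_hsv = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([179, 40, 255]))
white_hls = cv2.inRange(hls, np.array([0, 200, 0]), np.array([179, 255, 40]))
```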

Next, we apply a grayscale filter to reduce the computational load. A single intensity value per pixel is easier to work with than three (hue, saturation, and value/lightness).

Car lanes with grayscale filter
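Continuing the sketch above, the grayscale conversion is a single call:

```python
# Collapse three channels down to one intensity value per pixel.
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
```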

We then apply a Gaussian blur to suppress noise, smoothing out edges in the image. Gaussian blurring helps us avoid spurious detections caused by pixel-level irregularities.

Car lanes with Gaussian blur applied
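In OpenCV this is again one call; the 5x5 kernel size below is a typical illustrative choice rather than our tuned setting.

```python
# Smooth with a Gaussian kernel before edge detection; passing 0
# lets OpenCV derive sigma from the kernel size.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
```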

Once we have a smooth image, we are ready to use OpenCV's Canny edge detection. Canny works by computing intensity gradients and applying hysteresis thresholding to decide which pixels belong to edges.

Canny edge detection diagram

Retrieved from OpenCV.

Pixels with gradient values above maxVal are immediately classified as edges, while those below minVal are discarded. Pixels that fall between minVal and maxVal are kept only if they connect to pixels above maxVal. OpenCV allows us to tweak these min and max parameters as we see fit.

Car lanes with edges detected
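The corresponding call looks like the sketch below, with illustrative hysteresis thresholds standing in for our tuned ones.

```python
# minVal = 50 and maxVal = 150 are illustrative thresholds.
edges = cv2.Canny(blurred, 50, 150)
```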

Once edges have been established, a Hough line transform processes the image and provides us with the most prominent straight lines in each frame. Further testing will be required to find the thresholds and kernel parameters that best suit our lane detection.

Car lanes with Hough transform on edges
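A sketch of this step with OpenCV's probabilistic Hough transform follows; every parameter value is an illustrative starting point rather than our final configuration.

```python
# HoughLinesP returns line segments as (x1, y1, x2, y2) endpoints.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=20,
                        minLineLength=20, maxLineGap=300)

# Overlay the detected segments on the original frame.
if lines is not None:
    for x1, y1, x2, y2 in lines.reshape(-1, 4):
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), thickness=3)
```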

And there we have it! Straight lines overlaid directly on top of where the lanes should be. Extrapolating from this data, we can then establish lines that extend further out into the robot's field of view.

All of this image processing can be done with the OpenCV library. The following image shows a rundown of our Hough line transform code. Future revisions are pushed periodically to our GitHub repository.

Sample code

Future of IGVC

We are working on acquiring a LiDAR sensor, most likely either the RPLidar A2M6 or the Hokuyo URG-04LX. Once a LiDAR has been acquired, we can begin testing various SLAM algorithms, most likely one of the following: Hector Mapping, GMapping, or EKF-SLAM, as these methods have all been thoroughly explored.

Wrapping Things Up

Path planning will ultimately be the final step. We plan to implement an A* pathfinding algorithm that takes inputs from GPS, IMU yaw heading, localized positioning, and a 2D occupancy grid built with a proper SLAM approach. There is still much to be done, and I welcome you all to stop by and see our work at the RAS office in EER 0.822!
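For anyone curious what that planner might look like, here is a minimal A* sketch over a 2D occupancy grid (0 = free, 1 = occupied) using 4-connected moves and a Manhattan-distance heuristic; our eventual implementation's inputs and costs may differ.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* sketch on a 2D occupancy grid (0 = free, 1 = occupied).

    4-connected moves with unit cost and a Manhattan-distance
    heuristic; our final planner may use different costs and inputs.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]     # entries are (f = g + h, g, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                   # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if g > g_cost[cur]:
            continue                      # stale queue entry; skip it
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float('inf')):
                    g_cost[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None                           # goal unreachable
```

For example, a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)) routes around the occupied middle row to reach the goal.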

Author: Ricky Chen