IGVC - Mid Semester Update

With less than 12 weeks remaining before competition, we’re looking to move right along into the initial testing phases of our robot!

Everything from vision to path planning needs to be tested as thoroughly as possible so that we can fine-tune our robot to perform at its peak during the actual competition. Here's what we've been working on so far:

Encoders:

Recently the 2018 IGVC team purchased Hall effect magnetic encoders. By measuring the quadrature signals these encoders output, we can tell not only which direction a shaft is rotating but also how fast it is rotating. In other words, its angular velocity! This information is great for logging odometry data, i.e. data that tracks how far the robot has traveled.

quadrature encoder timing diagram

Retrieved from: allaboutcircuits.com
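To make the quadrature idea concrete, here's a minimal decoding sketch in Python. This is purely illustrative and generic to any quadrature encoder, not the actual driver code:

```python
# Channels A and B are square waves 90 degrees out of phase; which one
# leads tells you the direction of rotation. A "state" is the 2-bit value
# (A << 1) | B, and valid rotation moves through the Gray code sequence
# 00 -> 01 -> 11 -> 10 -> 00 (or the reverse for the other direction).

# Lookup table indexed by (previous_state, current_state).
# +1 / -1 are valid single steps; anything else counts as no movement.
TRANSITION = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00
        self.count = 0  # signed tick count; the sign encodes direction

    def update(self, a: int, b: int):
        """Feed in the latest sampled levels of channels A and B."""
        new_state = (a << 1) | b
        self.count += TRANSITION.get((self.state, new_state), 0)
        self.state = new_state
```

Sampling the tick count at a fixed interval gives angular velocity (ω = 2π · Δcount / (CPR · Δt), where CPR is the encoder's counts per revolution), and the same count scaled by the wheel circumference gives the linear distance traveled for odometry.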

In previous years, very rough mounting hubs were used to mount the personal homemade encoders of Cruz Monrreal II (a UT RAS alumnus). This time around we have opted for the AS5048A Hall effect magnetic encoders instead. One of our team members, Zahin Nambiar, modeled a mounting hub that positions these encoders at a very specific height off the motor's output shaft. Thanks, Zahin, for this mounting hardware!

encoder mount CAD | encoder mount on motor | encoder dimensions

If you have any questions about encoders, feel free to message @pineapple on Slack, aka Joseph Ryan. He's been working on the drivers to run these encoders for a little while now, and I'm personally excited to see what he'll be putting out over the next few weeks.

CAD Models:

In an attempt to realign with my roots in mechanical engineering, I modeled a basic 1:1 assembly of the entire chassis.

frame CAD

The solid part and .stl files are all on our Google Drive and GitHub repo. For those who are unaware, GitHub can display .stl files, which is totally rad. This was mostly just for fun, but it's nice to see our actual robot in simulation software such as Gazebo, as opposed to just a box with wheels.

Vision:

For the more software-oriented folks, we have also had some luck converting our lane detection output into laser scan messages. Here's a quick overview of what we have for lane detection so far:

processing flowchart

As you can probably tell, we have a lot going on in our vision code. Luckily for us, OpenCV handles most of these steps; we just need to call the right functions in a few lines of code.
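To give a flavor of what that looks like, here's a heavily simplified sketch of a single lane detection pass. The structure (grayscale → blur → edge detection → line extraction) mirrors the flowchart above, but the thresholds are placeholders rather than our tuned values:

```python
import cv2
import numpy as np

def detect_lane_lines(frame):
    """Simplified lane detection pass: grayscale -> blur -> edges -> lines.
    All threshold values here are placeholders, not our tuned parameters."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise
    edges = cv2.Canny(blurred, 50, 150)                  # edge map
    # Probabilistic Hough transform: returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    return lines if lines is not None else []
```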

However, we are by no means finished with vision. Camera calibration, better-optimized algorithms, and (ahem) replacing my failed attempt at efficiently sorting through a large array are just a few of the things left to work on in visual processing.
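As for the laser scan integration mentioned above, the rough idea is to treat detected lane points as obstacles, convert them to ranges and bearings in the robot frame, and publish them as a sensor_msgs/LaserScan so the navigation stack can consume them. Here's a hedged sketch of that packing step; the frame name and scan parameters are illustrative, not our actual configuration:

```python
import math
import rospy
from sensor_msgs.msg import LaserScan

def points_to_scan(points_xy, frame_id="base_link"):
    """Pack (x, y) obstacle points (in the robot frame, meters) into a
    LaserScan. Assumes a rospy node is already initialized; all parameter
    values are illustrative, not our tuned configuration."""
    scan = LaserScan()
    scan.header.stamp = rospy.Time.now()
    scan.header.frame_id = frame_id
    scan.angle_min = -math.pi / 2
    scan.angle_max = math.pi / 2
    scan.angle_increment = math.radians(1.0)
    scan.range_min = 0.1
    scan.range_max = 10.0
    n = int((scan.angle_max - scan.angle_min) / scan.angle_increment) + 1
    ranges = [float("inf")] * n
    for x, y in points_xy:
        angle = math.atan2(y, x)
        r = math.hypot(x, y)
        if scan.angle_min <= angle <= scan.angle_max:
            i = int((angle - scan.angle_min) / scan.angle_increment)
            ranges[i] = min(ranges[i], r)  # keep the nearest hit per beam
    scan.ranges = ranges
    return scan
```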

Path Planning:

This subsystem arguably needs the most work. If you recall from the previous blog post, we have been relying on GMapping SLAM and the ready-made, open-source AMCL + move_base packages that ROS provides. Although these are great to work with, we have little idea of what is happening behind the scenes. TurtleBot has excellent tutorials for them, but once you get stuck, it's often very challenging to find your way out… quite like the bots we have been simulating these past couple of months.

So here's what we've cooked up in response: throw ourselves into the world of extended Kalman filtering and A* pathfinding until we have a grasp of at least the most basic concepts.

Kalman filter illustration | A* pathfinding illustration

Retrieved from: rlabbe's GitHub and Vel Sekelj's Google Plus, respectively
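To show the flavor of what we're studying, here are two toy sketches. Both are study code under simplifying assumptions, not our robot's actual filter or planner. First, one predict/update cycle of a plain 1D Kalman filter; the extended version swaps the linear motion model for a linearized nonlinear one, but the skeleton is the same:

```python
def kalman_1d(x, p, z, u, q=0.1, r=0.5):
    """One predict/update cycle of a 1D Kalman filter.
    x, p: current state estimate and its variance
    z: new measurement; u: control input (e.g., commanded velocity * dt)
    q, r: process and measurement noise variances (made-up values)."""
    # Predict: push the state through the motion model, inflate uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement, weighted by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

And a bare-bones A* search over a 4-connected occupancy grid, where `grid` is a hypothetical list-of-lists with 1 marking a blocked cell:

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid with unit step cost; grid[y][x] == 1 means
    blocked. Returns a path from start to goal as (x, y) tuples, or None."""
    def h(a, b):  # Manhattan distance: admissible on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), 0, start, None)]  # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:  # walk parents back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    heapq.heappush(open_set,
                                   (ng + h((nx, ny), goal), ng, (nx, ny), node))
    return None
```

The hope is that once these toy versions feel obvious, packages like AMCL and move_base stop being black boxes.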

Worst case scenario, we opt for a simpler, more reactive bot rather than one that actively implements closed-loop mapping (despite how cool that would be).

More than just Path Planning:

Here are our plans for the remainder of the semester. This outline is subject to change depending on how we feel each subsystem is coming along.

As always, I highly recommend coming out to our weekly meetings on Fridays at 4 PM, EER 0.822.

Thanks for reading! Slack Channel: #igvc

Author: Ricky Chen