
Self-Stabilizing Camera System

Mechanical Skills Used: 

CAD 3D Modeling, Prototyping, Iterative Design, FDM 3D Printing, Mechanical Assembly 

 

Electrical and Programming Skills Used: 

Python, Circuit Design, IMU Systems, RF Communication, Arduino Coding, Electrical Circuit Analysis, Soldering

 

Software Used: 

SolidWorks, Python, Arduino IDE

2023 Master's Capstone Project

During my Master's degree at UC Berkeley, I took a leading role in a capstone project for the startup Squishy Robotics. Squishy Robotics develops tensegrity sensor robots that can be deployed by drone into hazardous environments to autonomously gather real-time condition data without exposing humans to danger.


What exactly is a tensegrity robot? These robots have a compliant structure of rigid rods held together by adjustable tension cables. Squishy Robotics' flagship design is a mobile tensegrity robot that moves by manipulating these cables, rolling like a ball. You can see the robot's locomotion below.


When I joined the project, Squishy Robotics had already designed multiple iterations of the robot. However, these were primarily laboratory prototypes and lacked the features required to function in real-world scenarios. The company tasked my team with refining the robot: improving communication reliability, creating user-friendly controls, and providing visual feedback to the operator.


I dedicated the first month to familiarizing myself with Squishy's software suite and hardware. This meant learning to connect to the robot, program with their libraries, and run its locomotion policies. During this period, I identified a significant usability bottleneck: the startup procedure. Each time the robot booted up, all 24 of its cables had to be individually tensioned to a precise length of 27 cm. Squishy Robotics' existing method was to measure each cable with a ruler and adjust it through software, a process that took an average of 45 minutes and made emergency deployments impractical.

The root of the problem was that the robot's encoders were not absolute, so the positions of the cable-winding motors were unknown at startup. With no additional sensors available, I used the motors' current-limiting function as my feedback mechanism.

 

My solution was to run each motor to both of its extreme endpoints, record the encoder positions at those extremes, and compute the midpoint "tensioned" position of the cable. I designed this autotensioning method and had one of my team members implement it so they could gain programming experience, since my own programming expertise was needed for more complex robot software tasks.
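The homing logic can be sketched as follows. The `Motor` class here is a simulated stand-in I've invented for illustration; Squishy Robotics' actual motor API differs, but the stall-based homing and midpoint calculation work the same way.

```python
class Motor:
    """Simulated cable-winding motor with a relative encoder.

    The real motors report a relative encoder count with an unknown offset
    at boot; here the two hard stops are stored internally to simulate
    the current-limit stall at each end of cable travel.
    """

    def __init__(self, slack_end: int, taut_end: int):
        self._slack_end = slack_end  # encoder count with cable fully unwound
        self._taut_end = taut_end    # encoder count with cable fully wound
        self.position = 0            # relative encoder: offset unknown at boot

    def run_until_stall(self, direction: int) -> int:
        """Drive in `direction` until the current limit trips; return the count."""
        self.position = self._taut_end if direction > 0 else self._slack_end
        return self.position


def autotension(motor: Motor) -> int:
    """Home a motor that has no absolute encoder.

    Find both hard stops using the current limit as feedback, then command
    the motor to the midpoint, which corresponds to the tensioned cable length.
    """
    slack = motor.run_until_stall(-1)  # unwind until the current limit trips
    taut = motor.run_until_stall(+1)   # wind until the current limit trips
    midpoint = (slack + taut) // 2
    motor.position = midpoint          # move to the "tensioned" position
    return midpoint
```

On the real robot this routine runs once per cable at startup, replacing the 45-minute manual ruler measurement with an automated sweep.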


My first contribution to the project was a visualization system for the robot's autonomous, path-planned locomotion. Before this, there was no way to view the robot's orientation and position as it moved; the visualization made autonomous testing significantly easier.

 

My most significant contribution was the development of a visual feedback system that enabled remote robot control without requiring a line of sight. Designing this system was far from straightforward. The robot's rolling locomotion would render an unstabilized camera feed unusable, as there would be no stable reference point or level horizon. To overcome this challenge, I designed a compact stabilization system capable of withstanding the rigors of being deployed from a drone.


My first idea was a gyroscope-style mechanical stabilization system, either passive, using a large weight, or electromechanical, similar to a handheld camera gimbal. Both approaches had drawbacks, including space constraints and the difficulty of shockproofing external components for drone deployment.

 

The solution I ultimately adopted combined software and hardware. Using a widely available 360-degree camera (in this case, a GoPro 360 Max) to capture the robot's surroundings, I developed Python software to stabilize the feed and maintain a level horizon as the robot rolls. This approach was compact and eliminated the need for external stabilizing components, since all stabilization occurred in software.

 

Budget constraints limited us to the GoPro 360 Max, and I worked with proprietary GoPro libraries to access its video data. The live video stream, however, suffered from high latency and an unusable frame rate, so I opted for still images, timing each capture to coincide with the completion of a full roll of the robot. Users could initiate a roll, review the resulting image, and repeat.

Properly orienting each image required the camera's internal IMU (Inertial Measurement Unit) data, which was inaccessible through the GoPro libraries. To work around this, I designed an external circuit attached to the GoPro containing an IMU for orientation sensing, a radio transmitter to send the IMU data to the control computer, and a battery for portable power. Combined with the GoPro, this circuit delivered both visual and orientation data to the control computer.
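A minimal sketch of how the IMU orientation data might be framed for the radio link. The 14-byte packet layout (a header byte, three little-endian floats, and an additive checksum) is my own assumption for illustration, not the format actually used on the project.

```python
import struct

HEADER = 0xA5  # hypothetical start-of-packet marker


def pack_orientation(roll: float, pitch: float, yaw: float) -> bytes:
    """Frame roll/pitch/yaw (radians) as a 14-byte packet for the radio."""
    body = struct.pack('<Bfff', HEADER, roll, pitch, yaw)  # 1 + 3*4 = 13 bytes
    checksum = sum(body) & 0xFF                            # simple additive checksum
    return body + bytes([checksum])


def unpack_orientation(packet: bytes) -> tuple:
    """Validate and parse a packet received on the control computer."""
    if len(packet) != 14 or packet[0] != HEADER:
        raise ValueError("malformed packet")
    if sum(packet[:-1]) & 0xFF != packet[-1]:
        raise ValueError("checksum mismatch")
    _, roll, pitch, yaw = struct.unpack('<Bfff', packet[:-1])
    return roll, pitch, yaw
```

The checksum lets the receiver drop corrupted frames rather than rotate an image by a garbage angle, which matters on a noisy RF link.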

 

As team leader, I allocated tasks based on each member's interests and learning goals, mentoring them through mechanical design, programming, and electrical circuit design. I assigned the circuit hardware to one teammate and the graphical user interface to the other, while I took responsibility for the entire backend: a Python program that connected wirelessly to the GoPro, monitored robot connections, and drove the visual feedback pipeline. That pipeline captured images, transformed the equirectangular 360-format images into perspective views, and oriented them using the IMU data. The reoriented images were then integrated into the graphical user interface designed by my teammate.
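The core reprojection step, turning an equirectangular panorama into a level perspective view, can be sketched with NumPy roughly as follows. The function name, nearest-neighbor sampling, and the particular pitch-then-yaw rotation order are my own simplifications for illustration; the real pipeline likely used the full IMU orientation and smoother interpolation.

```python
import numpy as np


def equirect_to_perspective(equirect, yaw, pitch, fov_deg=90.0, out_size=(240, 320)):
    """Sample a pinhole-camera view from an equirectangular panorama.

    `yaw`/`pitch` (radians) would come from the external IMU and re-level
    the view regardless of how the robot has rolled.
    """
    h_out, w_out = out_size
    h_in, w_in = equirect.shape[:2]
    f = (w_out / 2) / np.tan(np.radians(fov_deg) / 2)  # focal length in pixels

    # Cast a ray through each output pixel in camera coordinates.
    x = np.arange(w_out) - w_out / 2
    y = np.arange(h_out) - h_out / 2
    xv, yv = np.meshgrid(x, y)
    zv = np.full_like(xv, f)
    norm = np.sqrt(xv**2 + yv**2 + zv**2)
    dx, dy, dz = xv / norm, yv / norm, zv / norm

    # Rotate the rays by pitch (about x) then yaw (about y) to level the view.
    dy, dz = (dy * np.cos(pitch) - dz * np.sin(pitch),
              dy * np.sin(pitch) + dz * np.cos(pitch))
    dx, dz = (dx * np.cos(yaw) + dz * np.sin(yaw),
              -dx * np.sin(yaw) + dz * np.cos(yaw))

    # Convert rays to longitude/latitude, then to panorama pixel indices.
    lon = np.arctan2(dx, dz)                 # [-pi, pi]
    lat = np.arcsin(np.clip(dy, -1.0, 1.0))  # [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (w_in - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (h_in - 1)).astype(int)
    return equirect[v, u]  # nearest-neighbor sample of the panorama
```

With yaw and pitch set to zero the function simply crops a forward-facing view out of the center of the panorama; feeding it the IMU angles instead is what keeps the horizon level as the robot rolls.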

 

This project was a valuable learning experience: I honed my leadership skills, navigated an existing code base, and delivered an end-to-end designed and implemented product. The skills I gained will serve me well in future projects.
 
