Ray-Quasar

Submitted by mariaraju on

Ray-Quasar has developed a novel autonomous racing algorithm for a one-tenth-scale car. Our team combined LiDAR sensing with simple kinematics to create reliable, fast logic for optimal navigation of any race environment.
We employ a two-fold approach.
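The "simple kinematics" behind cars like this is often the kinematic bicycle model. A minimal sketch of one model step is below; the wheelbase, timestep, and the model itself are illustrative assumptions, as the abstract does not specify them.

```python
# A minimal kinematic bicycle-model step, a common "simple kinematics" choice
# for 1/10th-scale cars (illustrative; the abstract does not name the model).
import math

def bicycle_step(x, y, heading, speed, steer, wheelbase=0.33, dt=0.05):
    """Advance pose (x, y, heading) by one timestep of dt seconds.
    wheelbase is in meters; 0.33 m is typical for a 1/10th-scale car."""
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt
    return x, y, heading

# Driving straight ahead at 2 m/s for one 0.05 s step moves the car 0.1 m.
x, y, h = bicycle_step(0.0, 0.0, 0.0, speed=2.0, steer=0.0)
print(x, y, h)  # → 0.1 0.0 0.0
```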

PrecisionFit

Every day, most people begin their day by putting on a pair of shoes and end it by taking them off. We spend so much time in our shoes that a good fit is essential. For this reason, we created a smart sock with integrated pressure sensors that measures foot pressure at many locations in order to optimize shoe fit.
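One simple way such a sensor array can inform fit is by flagging locations where pressure exceeds a comfort threshold. The sketch below is illustrative only; the sensor names and threshold are assumptions, not PrecisionFit's actual design.

```python
# Illustrative sketch: flag "tight spots" from a sock's pressure readings.
# Sensor locations and the threshold are assumptions for illustration.
PRESSURE_THRESHOLD_KPA = 60.0  # above this, the shoe likely presses too hard

def tight_spots(readings: dict, threshold: float = PRESSURE_THRESHOLD_KPA) -> list:
    """Return sensor locations whose pressure exceeds the fit threshold."""
    return sorted(loc for loc, kpa in readings.items() if kpa > threshold)

sample = {"heel": 45.0, "arch": 20.0, "ball": 72.5, "big_toe": 64.0, "pinky_toe": 30.0}
print(tight_spots(sample))  # → ['ball', 'big_toe']
```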

AutoTenth

AutoTenth is a student-led project centered on the design and development of a fully autonomous 1/10th-scale F1 race car. The vehicle is capable of navigating competitive race tracks with high-speed precision and efficiency.
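A common reactive planner for LiDAR-equipped cars in this class is "follow the gap": steer toward the center of the widest run of unobstructed beams. The sketch below illustrates the idea under assumed beam layout and sign conventions; it is not necessarily AutoTenth's actual algorithm.

```python
# Minimal "follow the gap" sketch, a common reactive planner in 1/10th-scale
# autonomous racing (illustrative; not necessarily AutoTenth's method).
def steer_toward_widest_gap(ranges, fov_deg=180.0, min_clearance=1.0):
    """Pick the center of the widest run of 'free' LiDAR beams and return a
    steering angle in degrees (0 = straight ahead)."""
    free = [r > min_clearance for r in ranges]
    best_start, best_len, start = 0, 0, None
    for i, ok in enumerate(free + [False]):  # sentinel closes the last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        return 0.0  # no gap found: hold course (a real car would brake)
    center_beam = best_start + best_len // 2
    deg_per_beam = fov_deg / (len(ranges) - 1)
    return center_beam * deg_per_beam - fov_deg / 2

# Obstacle on one side: the widest free run is beams 4-8, so steer toward it.
print(steer_toward_widest_gap([0.5] * 4 + [3.0] * 5))  # → 45.0
```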

radIoQ

With today's modern communication technology, the environment is saturated with an ocean of radio-frequency signals. Our project aims to take advantage of those signals and their reflections to detect moving objects. Most current radar technology requires transmitting a radio signal and waiting for a reflection, much like how bats use echolocation. By sending a signal into the environment, however, a radar runs a high risk of broadcasting its own location.
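Passive radars of this kind typically work by cross-correlating a reference copy of the ambient signal against the received echo and reading off the lag that maximizes the correlation, which corresponds to the echo's extra path delay. A toy sketch, with made-up signal values:

```python
# Passive-radar sketch: instead of transmitting, correlate a reference copy of
# an ambient broadcast against the received echo to estimate the echo delay.
# The signals and delay below are made up for illustration.
def best_lag(reference, received, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation."""
    def corr(lag):
        return sum(reference[i] * received[i + lag]
                   for i in range(len(reference) - max_lag))
    return max(range(max_lag + 1), key=corr)

ref = [0.0, 1.0, 0.0, -1.0, 1.0, 1.0, -1.0, 0.0, 0.0, 0.0]
echo = [0.0] * 3 + [0.5 * x for x in ref[:-3]]  # attenuated copy, 3 samples late
print(best_lag(ref, echo, max_lag=4))  # → 3
```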

ROS2 Raiders

The ROS2 Raiders project centers on developing a fully autonomous 1/10th-scale race car built for the 2024–2025 F1TENTH competition. Our vehicle is equipped with an NVIDIA Jetson Orin Nano for onboard computation, a VESC MK6 for precise motor control, and a LiDAR sensor to enable real-time environmental perception. Using ROS 2 (Robot Operating System 2) as the middleware, we implemented modular systems for sensing, processing, and control.
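The sense → process → control split described above can be sketched without ROS by letting plain callables stand in for the nodes; in the real system each stage would be its own ROS 2 node exchanging messages over topics. All names and values here are illustrative.

```python
# A ROS-free sketch of the modular sense -> process -> control split.
# Real nodes would communicate over ROS 2 topics; plain functions stand in.
def sense():
    """Stand-in for a LiDAR driver node publishing a scan (ranges in meters)."""
    return [2.0, 2.5, 3.0, 1.0, 0.8]

def process(scan):
    """Stand-in for a perception node: index of the most open direction."""
    return max(range(len(scan)), key=lambda i: scan[i])

def control(target_index, n_beams):
    """Stand-in for a control node: map the target beam to a steering command
    in [-1, 1] (e.g., forwarded to the motor controller as a servo position)."""
    return 2.0 * target_index / (n_beams - 1) - 1.0

scan = sense()
cmd = control(process(scan), len(scan))
print(cmd)  # → 0.0 (beam 2 of 5 is straight ahead)
```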

CViSion

CViSion is a deep-learning-based Python program designed to assist the visually impaired in navigating their surroundings. Real-time video from a portable camera is processed to generate directional sound cues played through headphones. These sounds reflect the relative locations and distances of detected objects, and the user is alerted to potential dangers such as very close objects.
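One simple way to turn a detection into a directional cue is to pan the sound left/right based on the object's horizontal position in the frame and scale loudness by distance. The linear panning law and gain model below are assumptions for illustration, not necessarily CViSion's implementation.

```python
# Illustrative sketch: map a detected object's position and distance to
# stereo gains (the panning law and gain model are assumptions).
def audio_cue(x_norm, distance_m, max_range_m=10.0):
    """x_norm in [0, 1] across the camera frame (0 = far left).
    Returns (left_gain, right_gain), each in [0, 1]."""
    loudness = max(0.0, 1.0 - distance_m / max_range_m)  # nearer = louder
    return ((1.0 - x_norm) * loudness, x_norm * loudness)

# Object at the far right of the frame, 5 m away: sound only in the right ear.
print(audio_cue(x_norm=1.0, distance_m=5.0))  # → (0.0, 0.5)
```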

PulsIR

According to the CDC, 8.7 million Americans live with undiagnosed diabetes, highlighting the need for more accessible and proactive diagnostic tools. PulsIR uses a FLIR thermal camera in a compact, handheld device to generate real-time skin temperature measurements that reveal subtle disruptions in blood flow, providing doctors with physiological data not easily captured by traditional diagnostics.
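One kind of signal such temperature maps can surface is left/right asymmetry: comparing mean skin temperature between corresponding regions and flagging a large difference, which can indicate impaired blood flow. The threshold and region handling below are illustrative assumptions, not PulsIR's actual criteria.

```python
# Illustrative sketch: flag a thermal asymmetry between two corresponding
# skin regions (temperatures in Celsius; threshold is an assumption).
def asymmetry_flag(region_a, region_b, threshold_c=2.2):
    """Return True when the regions' mean temperatures differ by more than
    threshold_c degrees Celsius."""
    mean_a = sum(region_a) / len(region_a)
    mean_b = sum(region_b) / len(region_b)
    return abs(mean_a - mean_b) > threshold_c

left = [30.1, 30.4, 30.2]   # pixel temperatures from one region
right = [33.0, 33.4, 33.2]  # and the corresponding contralateral region
print(asymmetry_flag(left, right))  # → True
```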

HAWC

The Hybrid Autonomous Wayfinding Courier (HAWC) project is an interdisciplinary engineering effort developed for the Raytheon 2024/25 West Coast Autonomous Vehicle Competition. The project consists of two fully autonomous systems: a Scout Unmanned Aerial Vehicle (UAV) and an Unmanned Delivery Vehicle (UDV). The UAV's primary objective is to autonomously navigate a predefined 30-yard by 30-yard field, locate a designated ArUco marker, and transmit the marker's GPS coordinates to the UDV.
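The coordinate handoff the UAV performs can be sketched with the standard flat-earth approximation, which is accurate over a field this small: convert the marker's east/north offset within the field into latitude/longitude offsets from a known origin. The origin coordinates below are made up for illustration.

```python
# Sketch of converting a marker's offset within a small field (in yards)
# into a GPS coordinate, using the flat-earth approximation.
import math

YARDS_TO_M = 0.9144
EARTH_RADIUS_M = 6_371_000.0

def field_offset_to_gps(origin_lat, origin_lon, east_yd, north_yd):
    """Offset a lat/lon origin by east/north distances given in yards."""
    north_m = north_yd * YARDS_TO_M
    east_m = east_yd * YARDS_TO_M
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon

# Marker 15 yd east and 30 yd north of a made-up field origin.
lat, lon = field_offset_to_gps(33.42, -111.93, east_yd=15.0, north_yd=30.0)
```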

POGO

Technological advancement over the last half century can largely be attributed to the downscaling of transistors: smaller transistors fit onto chips in greater numbers, directly increasing the compute power we can achieve. ASML, the world's sole provider of the most advanced nanometer-scale fabrication machines, defines the frontier of this pursuit.

Viewpointe

Submitted by brianli on

Viewpointe ties into Alcon's Ngenuity visualization system, collecting stereo images from a microscope used by surgeons during cataract operations. Using an FPGA, the two image inputs will be processed into a display format suitable for a 3D monitor. The user will be able to choose between visual formats, including side-by-side, top-bottom, and traditional intersampled mosaic, and the input will support both HDMI and DisplayPort. All processing will happen on the camera itself, without requiring an external computer.
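The stereo formats mentioned above can be sketched with nested lists standing in for frames (the FPGA would do the same on pixel streams). Row interleaving is shown as a stand-in for the mosaic pattern, since the exact interleave is not specified here.

```python
# Illustrative sketch of stereo display formats; frames are lists of rows.
def side_by_side(left, right):
    """Each output row is the left-eye row followed by the right-eye row."""
    return [l + r for l, r in zip(left, right)]

def top_bottom(left, right):
    """Left-eye frame stacked above the right-eye frame."""
    return left + right

def row_interleaved(left, right):
    """Alternate rows from each eye (even rows left, odd rows right)."""
    return [row for pair in zip(left, right) for row in pair]

L = [["L0"], ["L1"]]  # tiny 2-row "frames" for demonstration
R = [["R0"], ["R1"]]
print(side_by_side(L, R))    # → [['L0', 'R0'], ['L1', 'R1']]
print(top_bottom(L, R))      # → [['L0'], ['L1'], ['R0'], ['R1']]
print(row_interleaved(L, R)) # → [['L0'], ['R0'], ['L1'], ['R1']]
```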