REU 2022 – Participants
I am a rising senior at the University of Central Florida majoring in Computer Science and minoring in Mathematics. I had already been introduced to Computer Vision prior to this summer, but this REU gave me the opportunity to broaden my knowledge and skills in the subject. I have always wanted to learn about the academic research process in Computer Vision, and this REU allowed me to gain a deeper understanding of it and immerse myself in the field. Over the summer, I worked with Dr. Mitchell Hill on using Energy-Based Models with conditional MCMC sampling for supervised learning. Because standard supervised learning uses a feed-forward network that leaves models vulnerable to adversarial attacks, we implemented a novel method that synthesizes labels from an energy surface conditioned on an image. This allowed us to improve robustness against adversarial attacks such as the PGD attack. To view my progress and for more detailed information, you may refer to my weekly presentations as well as my report. If you have any further questions, please feel free to reach out to me at email@example.com.
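For readers unfamiliar with it, PGD (Projected Gradient Descent) iteratively perturbs an input to increase the model's loss while projecting back into a small L-infinity ball around the original input. A minimal NumPy sketch on a toy logistic-regression model (the model, parameters, and step sizes here are illustrative, not from our project):

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """PGD on a toy logistic-regression model.

    x : input vector, y : label in {0, 1}, (w, b) : model parameters.
    Each step nudges x in the direction that increases the cross-entropy
    loss, then projects back into the L-infinity ball of radius eps.
    """
    x_orig = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid prediction
        grad = (p - y) * w                     # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)  # ascend the loss
        x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)  # project
    return x_adv
```

The same loop structure applies to deep networks, with the gradient obtained by backpropagation instead of the closed form above.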
I am a rising senior majoring in Computer Science at Old Dominion University. I would say I have a high-level understanding of computer vision and machine learning. For this reason, I am excited to dive into this innovative, cutting-edge field as a participant in the REU program. In particular, I hope to have the opportunity to learn more about the inner workings of autonomous vehicles. I find the technology used to develop these vehicles fascinating. It is also practical in today's fast-paced world as transportation continues to rapidly evolve. My goal is ultimately to work on building and improving these systems. Previous research has shown that model stealing attacks pose a threat not only to the security of machine learning models, but also to the privacy of the datasets used to train them. However, there is very little research on defending against such attacks. For this reason, I spent the summer working with my mentor James Beetham to answer the question “Can we detect data-free model stealing (DFMS)?” To answer this question, we implemented a novel two-step approach that enabled us to determine with high confidence both when DFMS is occurring and when stolen models are being used to attack a victim. If you would like to learn more about this project, please feel free to view my work on the REU site or to contact me directly at firstname.lastname@example.org.
I am a senior earning my Bachelor's in Computer Science with a Minor in Intelligent Robotic Systems at the University of Central Florida. I fell in love with computer vision while taking an elective course, and it was through the professor for that class that I learned about the REU. My dream is to start working in the field while continuing my education toward a Master's in Computer Vision / Machine Learning. I am very honored to be participating in the REU and will use this opportunity to hone my skills so I am prepared for whatever the future may hold! This summer, I worked with Dr. Gonzalo Vaca-Castano on creating an attention model using Spiking Neural Networks. Existing models developed by industry are ever-increasing in size and complexity, requiring hundreds of GPUs to train. Spiking Neural Networks are being explored as an energy-efficient alternative that attempts to mimic the neuron activation of the human brain. Vision Transformers provide state-of-the-art results, so we created a hybrid model that uses both SNN neurons and traditional non-linear activation functions. We were able to achieve similar accuracies on the MNIST dataset and lost some accuracy on CIFAR10. We look to improve these results as well as explore whether the image encoding unique to SNNs can provide some robustness to PGD attacks. If you have any questions, you can email me at email@example.com or Dr. Vaca-Castano at firstname.lastname@example.org
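The spiking neurons mentioned above are commonly modeled as leaky integrate-and-fire (LIF) units, which integrate input current into a leaky membrane potential and emit a binary spike when the potential crosses a threshold. A minimal sketch for intuition (the exact neuron model and constants used in our project may differ):

```python
import numpy as np

def lif_neuron(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire (LIF) neuron.

    The membrane potential v leaks toward zero while integrating the
    input; when v crosses v_thresh the neuron emits a spike (1) and
    v resets. Returns the binary spike train for the input sequence.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = v + dt * (-v + i) / tau   # leaky integration step
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset               # reset after spiking
        else:
            spikes.append(0)
    return np.array(spikes)
```

Because the output is a sparse binary train rather than a dense activation, SNN hardware can skip computation for silent neurons, which is the source of the energy savings.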
I am a rising junior at Western Michigan University majoring in computer science. I’ve always been interested in machine learning and artificial intelligence, so I’m really excited to explore and learn about this field, as well as gain experience working with it over the summer. During this REU, I worked with Aisha Urooj Khan on exploiting spatio-temporal graph generation for open-ended video question answering. VQA is challenging because it requires the model to recognize complex, detailed components of a video. For this task, we use a graph-based approach in which we focus on learning to predict raw actions and relationships from video frames, which are then used to create spatio-temporal graphs that help our model answer the given questions. We evaluated this model against many baselines on a new VQA benchmark and achieved promising results! For more details, check out my presentations, poster, and report. If you have any questions, feel free to email me at email@example.com
I'm a rising junior computer science student at the University of Virginia. I took a computer vision class in high school, and I've done a fair amount of work with machine learning and computer vision; some examples of projects I've worked on are a chess move tracker and a word search solver. This summer, I worked with Alec Kerrigan and Ishan Dave on Long Action Repetition Counting using Layer Distillation. Our goal was to extend the current capabilities of action repetition counting models through a staged learning process that can, in theory, produce a model able to count actions of arbitrary length. Current research requires workarounds such as speeding up videos to count longer actions, which ultimately hurts the performance of the model. Our approach unlocks the ability to count much longer actions, which is extremely beneficial to many of the applications of action repetition counting.
I am a sophomore at Washington State University majoring in computer engineering. I am interested in automation, facial recognition, and object detection. This REU will be my first formal introduction to computer vision and machine learning, and I am very excited to learn how these technologies work. This summer, I worked with Dr. Navid Kardan on comparing the image completion capabilities of the Masked Autoencoder (MAE) and neural fields. For our image completion task, inpainting, we masked a block of pixels and had our model reconstruct the image. We compared a vanilla neural field, Neural Knitwork, and the Masked Autoencoder. We reconstructed images from out-of-distribution datasets with the MAE and found it was able to generalize to these datasets very well. If you have any questions, feel free to contact me at firstname.lastname@example.org
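The block-masking setup described above can be sketched in a few lines of NumPy; the function names and block geometry here are illustrative, not the project's actual code:

```python
import numpy as np

def mask_block(image, top, left, size):
    """Zero out a square block of pixels.

    Returns the masked image and a boolean mask marking which pixels
    were removed; the inpainting model's task is to reconstruct the
    original values at exactly those locations.
    """
    masked = image.copy()
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[top:top + size, left:left + size] = True
    masked[mask] = 0.0
    return masked, mask

def masked_mse(original, reconstruction, mask):
    """Reconstruction error measured only on the masked pixels."""
    return float(np.mean((original[mask] - reconstruction[mask]) ** 2))
```

Scoring only the masked region, as `masked_mse` does, keeps the metric from being inflated by the unmasked pixels the model can simply copy.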
I am a junior at Ohio State majoring in computer science and engineering with a minor in Russian. I like to rock climb, unicycle, and mountaineer. I am interested in computer vision and machine learning. This summer I worked with Rohit Gupta and Dr. Shah on boosting the performance of drone detection models using synthetic data. We sourced drone images and were able to beat the state of the art by 3 points.
I am a senior majoring in Computer Science with a Mathematics minor at the University of Central Florida's Burnett Honors College. I am entering this REU program with some machine learning and AI experience, including image classification using a neural network, and I look forward to learning much more about computer vision throughout the summer! This summer, I worked with Dr. Chinwendu Enyioha and Diego Benalcazar on Learning to Control Network Contagion. Our objective was to develop a distributed, decentralized method for optimal resource allocation to contain the spread of an epidemic. We used reinforcement learning to train multiple agents in a networked environment to accomplish this goal while also obeying a shared resource budget. For more detailed information, refer to my presentations, report, and poster. You can also email me at email@example.com.
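One simple way to keep the agents' combined allocation inside a shared budget is to clip negative proposals and rescale the rest proportionally. This sketch only illustrates the constraint; it is an assumed simplification, not the enforcement method the project actually used:

```python
import numpy as np

def enforce_budget(allocations, budget):
    """Project per-agent resource allocations onto a shared budget.

    Negative proposals are clipped (resources are nonnegative), and if
    the clipped total exceeds the budget, all agents are scaled down
    proportionally so the total equals the budget.
    """
    a = np.clip(allocations, 0.0, None)
    total = a.sum()
    if total > budget:
        a = a * (budget / total)   # proportional scale-down
    return a
```

A projection step like this lets each agent propose freely while a shared constraint is enforced afterward, which is a common pattern in constrained multi-agent settings.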
I am a rising senior completing my Bachelor's in Robotics Engineering. I've done many robotics projects at my home institution; one of the major projects I'm currently working on is a humanoid, car-driving robot. I'm confident in my knowledge of the control of robotic systems, and the next valuable addition to my knowledge is a deep understanding of computer vision and artificial intelligence in general. For these reasons I am excited to start research-level work in the field of computer vision, which will force me to know the material deeply. This will also help in further development of the humanoid robot mentioned previously. This summer I worked with Aakash Kumar on transformers for point cloud data. In our work we focused on exploring and extending work on 3D object detection and classification using transformers. We focused extensively on the 3DETR model, extending it to the KITTI dataset. We chose KITTI because it contains point clouds collected from a real LiDAR rather than the point-sampled meshes found in the datasets the model was previously trained on. The main challenge in extending the model was the coordinate transformation from 3D camera to LiDAR coordinates, along with some pre- and post-processing. We were able to achieve detection on the KITTI dataset, but due to the transformer's scalability issues, other methods are being explored for multimodal detection. If you have any questions, you can contact me at firstname.lastname@example.org
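The camera-to-LiDAR transformation mentioned above is the inverse of the rigid-body transform KITTI supplies in its calibration files (the 3x4 `Tr_velo_to_cam` matrix, which maps LiDAR points into camera coordinates). A minimal NumPy sketch, assuming that standard [R | t] layout:

```python
import numpy as np

def cam_to_lidar(points_cam, Tr_velo_to_cam):
    """Map 3D points from camera coordinates back to LiDAR coordinates.

    Tr_velo_to_cam is the 3x4 [R | t] matrix for the LiDAR-to-camera
    direction, so we apply its rigid-body inverse:
        p_lidar = R^T (p_cam - t)
    points_cam : (N, 3) array of points in camera coordinates.
    """
    R = Tr_velo_to_cam[:, :3]
    t = Tr_velo_to_cam[:, 3]
    # for row vectors, (p - t) @ R is exactly R^T (p - t) applied row-wise
    return (points_cam - t) @ R
```

Because R is a rotation (orthonormal), its inverse is just its transpose, so no matrix inversion is needed.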
I am a rising sophomore at Cornell University majoring in Computer Science, and I am originally from South Florida. I am interested in computer vision, machine learning, and artificial intelligence. Through the REU program, I am excited to learn and develop my skills in computer vision as well as immerse myself in interesting research projects. Overall, I am most excited to delve deeper into machine learning and explore how it applies to computer vision. This summer I worked with Rajat Modi, Dr. Shruti Vyas, and Dr. Yogesh Rawat on Tiny Action Detection in Videos. Because existing research has used datasets in which the actions are clearly visible, we are using a new dataset called UCF-MAMA. It contains videos from CCTV cameras where the actions are much smaller and multiple actors are present in each video. We worked to evaluate the dataset on VidCapsNet, one of the models from our lab, and to determine baseline classification and detection scores for the dataset. We also worked to extend certain transformer-based networks to produce results on this new dataset. If you have any questions, feel free to reach out at email@example.com or to Rajat Modi at firstname.lastname@example.org
*All times are Eastern Standard Time (EST)