Aloimonos, Yiannis
Electrical and Computer Engineering
UMIACS
The Institute for Systems Research
Maryland Robotics Center
Brain and Behavior Institute
Professor Aloimonos holds a Ph.D. in Computer Science from the University of Rochester.
His research is devoted to the principles governing the design and analysis of real-time systems that possess perceptual capabilities, for the purpose of both explaining animal vision and designing seeing machines. Such capabilities have to do with the ability of the system to control its motion and the motion of its parts using visual input (navigation and manipulation) and the ability of the system to break up its environment into a set of categories relevant to its tasks and recognize these categories (categorization and recognition).
The work is being done in the framework of Active and Purposive Vision, a paradigm also known as Animate or Behavioral Vision. In simple terms, this approach suggests that Vision has a purpose, a goal. This goal is action; it can be theoretical, practical or aesthetic. When Vision is considered in conjunction with action, it becomes easier. The reason is that the descriptions of space-time that the system needs to derive are not general purpose, but are purposive. This means that these descriptions are good for restricted sets of tasks, such as tasks related to navigation, manipulation and recognition.
If Vision is the process of deriving purposive space-time descriptions as opposed to general ones, one is faced with the difficult question of where to start (with which descriptions). Understanding moving images is a capability shared by all "seeing" biological systems. It was therefore decided to start with descriptions that involve time. Another reason for this is that motion problems are purely geometric, and understanding the geometry amounts to solving the problems. This led to a consideration of the problems of navigation. Within navigation, once again, one faces the same question: in which order should navigational capabilities be developed? This led to the development of a synthetic approach, according to which the order of development is related to the complexity of the underlying model. The appropriate starting point is the capability of understanding self-motion. By performing a geometric analysis of motion fields, global patterns of partial aspects of motion fields were found to be associated with particular 3D motions. This gave rise to a series of algorithms for recovering egomotion through pattern matching. The qualitative nature of the algorithms, in conjunction with the well-defined nature of the input (the input is the normal flow, i.e., the component of the flow along the gradient of the image), makes the solution stable against noise.
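To illustrate the normal-flow input, here is a minimal sketch, not the group's implementation, of recovering normal flow from two frames via the brightness-constancy equation I_x·u + I_y·v + I_t = 0; the gradient operator and the validity threshold are illustrative choices:

```python
import numpy as np

def normal_flow(frame0, frame1, eps=1e-3):
    """Per-pixel normal-flow magnitude and unit gradient direction.

    Normal flow is the component of image motion along the gradient;
    it follows directly from brightness constancy without solving
    for the full (aperture-ambiguous) optical flow.
    """
    I_t = frame1.astype(float) - frame0.astype(float)  # temporal derivative
    I_y, I_x = np.gradient(frame0.astype(float))       # spatial gradients
    grad_mag = np.sqrt(I_x**2 + I_y**2)
    # Normal flow is defined only where the gradient is non-negligible.
    valid = grad_mag > eps
    u_n = np.zeros_like(grad_mag)
    u_n[valid] = -I_t[valid] / grad_mag[valid]
    # Unit gradient direction: the axis along which u_n is measured.
    nx = np.where(valid, I_x / np.maximum(grad_mag, eps), 0.0)
    ny = np.where(valid, I_y / np.maximum(grad_mag, eps), 0.0)
    return u_n, nx, ny
```

For a linear intensity ramp translating one pixel to the right, the sketch recovers a uniform normal flow of one pixel per frame along the x axis.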
Other problems, higher in the hierarchy of navigation, are independent motion detection, estimation of ordinal depth, and learning of space. To illustrate these topics, consider the case of ordinal depth. Traditionally, systems were supposed to estimate depth. Such metric information is too much to expect from systems that are supposed to just navigate successfully. Many tasks can be achieved by using an ordinal depth representation. Such a representation can be extracted without knowledge of the exact image motion or displacement. Recent studies on visual space distortion have triggered a new framework for understanding visual shape. A study of a spectrum of shape representations lying between the projective and Euclidean layers is currently underway.
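A toy illustration of why ordinal depth is cheap to obtain: under a purely lateral camera translation over a static scene, image motion magnitude at a point is inversely proportional to its depth, so sorting points by motion magnitude yields their depth order with no metric reconstruction. The motion values below are hypothetical:

```python
def ordinal_depth_order(motion_magnitudes):
    """Return point indices ordered nearest to farthest.

    Assumes a purely lateral camera translation and a static scene,
    so larger image motion means smaller depth.
    """
    return sorted(range(len(motion_magnitudes)),
                  key=lambda i: -motion_magnitudes[i])

# Three tracked points with image motions (pixels/frame):
order = ordinal_depth_order([0.5, 2.0, 1.2])
# -> [1, 2, 0]: point 1 is nearest, point 0 farthest.
```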
The learning of space can be based on the principle of learning routes. A system knows the space around it if it can successfully visit a set of locations. With more memory available, relationships between the representations of different routes give rise to partial geocentric maps.
In hand-eye coordination, the concept of a perceptual kinematic map has been introduced. This is a map from the robot's joints to image features. Currently under investigation is the problem of creating a classification of the singularities of this map.
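The singularity question can be sketched numerically: a joint configuration is singular when the Jacobian of the joints-to-features map loses rank. The two-link planar arm below is a hypothetical stand-in for the actual robot and feature set, not the system studied:

```python
import numpy as np

def numerical_jacobian(f, q, h=1e-5):
    """Central-difference Jacobian of feature map f at joint vector q."""
    q = np.asarray(q, dtype=float)
    f0 = np.asarray(f(q), dtype=float)
    J = np.zeros((f0.size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = h
        J[:, j] = (np.asarray(f(q + dq)) - np.asarray(f(q - dq))) / (2 * h)
    return J

def is_singular(f, q, tol=1e-6):
    """True when the perceptual kinematic map loses rank at q."""
    s = np.linalg.svd(numerical_jacobian(f, q), compute_uv=False)
    return s.min() < tol * max(s.max(), 1.0)

def arm_feature(q, l1=1.0, l2=1.0):
    """Toy map: 2-link planar arm, end effector seen as an image point."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])
```

For this toy arm the fully stretched configuration (second joint at zero) is the classic singularity: the observed feature momentarily cannot move along the arm's axis.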
The work on active, anthropomorphic vision led to the study of fixation and the development of TALOS, a system that implements dynamic fixation. Since fixation is a principle of Active Vision and fixating observers build representations relative to fixations, it is important to solve fixation in real time and demonstrate it in hardware. TALOS consists of a binocular head/eye system augmented with additional sensors. It is designed to perform fixation as it is moving, in real time.
The ideas of Purposive Vision have led to the study of Intelligence as a purposive activity. A four-valued logic is being developed for handling reasoning in a system of interacting purposive agents.
Links
Since the early 2000s he has been working on the integration of sensorimotor information with the conceptual system, bridging the gap between signals and symbols. This led to the introduction of language tools into the Robotics community. During the past five years his research has been supported by the European Union under the Cognitive Systems program in the projects POETICON and POETICON++, by the National Science Foundation under the Cyber-Physical Systems program in the project Robots with Vision that Find Objects, and by the National Institutes of Health in the project Human Activity Languages.
Here is an example of going from language to action (asking a robot to do something). Note how the robot announces that it has to think for a moment before performing the action, but does not reveal its thinking. Here, some of the thinking is revealed.
For the dual problem of going from action to language (observing an activity and describing in natural language what is going on), see our demos in the Telluride Neuromorphic Cognition Engineering workshops.
Research Awards
Research Posters
University of Maryland Has Strong Presence at ICRA 2024
Researchers detail advancements in navigation, trajectory planning.
MRC Faculty, Researchers, and Students Present 24 Papers at ICRA 2023
MRC researchers have a strong showing at ICRA 2023.
Congratulations May 2023 ISR graduates!
Here's a list of May graduates with ISR ties, at all degree levels.
Chahat Deep Singh's robot bee work featured in BBC video
The PhD student of Yiannis Aloimonos talks about his work building ever-smaller robots that could one day help with pollination.
UMD’s SeaDroneSim can generate simulated images and videos to help UAV systems recognize ‘objects of interest’ in the water
The suite’s simulated objects could fill a large dataset gap for the UAV-based computer vision systems used in searching for maritime objects.
Levi Burner named a Future Faculty Fellow
The Clark School program prepares engineering and computer science Ph.D. students for careers in academia.
Aloimonos, Sandini contribute chapter to MIT Press book, Cognitive Robotics
Their chapter defines and provides examples and explanations of 'computer vision.'
ISR-affiliated graduates, December 2022
ISR congratulates these students, all advised by ISR faculty!
Autonomous drones based on bees use AI to work together
The minute robots could one day provide backup to pollinators.
'OysterNet' + underwater robots will aid in accurate oyster count
The ability to mathematically model and simulate oysters will help scientists more quickly and accurately detect the density of the bivalves in the Chesapeake Bay.
The Modern Battle for Maryland’s Oysters
UMD researchers use AI and robotics to help revive a struggling industry.
Chahat Deep Singh named recipient of Wylie Dissertation Fellowship
The computer science student also was named a Clark School Future Faculty Fellow this year.
Nitin Sanket wins Drones 2021 Ph.D. Thesis Award
The international journal Drones focuses on UAVs, UAS and remotely piloted systems.
Chahat Deep Singh named a Future Faculty Fellow
The program prepares Ph.D. students for academic careers in engineering and computer science.
Congratulations to our December 2021 ISR graduates!
ISR advisors graduated 22 Ph.D., 2 M.S., and 4 M.S.S.E. new alumni in December!
Alum Nitin Sanket wins Larry S. Davis Doctoral Dissertation Award
The Computer Science Department award recognizes technical, significant and impactful work.
EVPropNet finds drones by detecting their propellers
This deep learning-based solution uses data collected by event camera sensors.
Using underwater robots to detect and count oysters
This UMD project is one part of a larger USDA NIFA grant to modernize shellfish aquaculture.
Bee drones featured on new Voice of America video
Active perception is an efficient, fast way for robots to complete tasks.
Perception and Robotics Group creates hive of ideas for drones
Palm-sized autonomous robots could help bees pollinate flowers, search out survivors and watch patients for problems.
Computer vision advances in contact-centered representations, models
Ego-OMG technology comes in first and second in international computer vision challenge.
Congratulations, May 2021 ISR graduates!
Institute for Systems Research faculty advised Ph.D., M.S., B.S. and 12 M.S.S.E. students who graduated in May 2021.
Three ECE Professors Ranked Top Scientists in the World by Guide2Research
They join seven other UMD faculty members breaking into the top 1000 scientist rankings based on their prolific research output.
UMD Researchers to Have a Strong Showing at ICRA 2021
University of Maryland researchers will present 15 papers at the IEEE International Conference on Robotics and Automation (ICRA 2021), to be held May 30 to June 5, 2021.
Ganguly, Fiaz, Suryan are first MRC GRA recipients
The new program is open to mid-career UMD Ph.D. students with Maryland Robotics Center advisors.
New undergraduate minor in robotics and autonomous systems
The Maryland Robotics Center will administer the program, which begins in Fall 2021.
'MorphEyes' stereo camera system improves quadrotor UAV navigation
The Perception and Robotics Group is the first to use morphable design to achieve a variable baseline stereo vision system on a quadrotor UAV.
MRC Faculty and Researchers to Present 16 Papers at International Robotics Conference - IROS 2020
International Conference on Intelligent Robots and Systems, from 25 October 2020 until 29 November 2020.
Maryland engineers receive $10M to transform shellfish farming
The team will help farmers tap the economic potential and environmental benefits of shellfish aquaculture.
Zampogiannis, Ganguly, Aloimonos and Fermüller author "Vision During Action," chapter in new Springer book
Modelling Human Motion presents a comprehensive overview of human motion essential for robotics applications.
Microrobots soon could be seeing better, say UMD faculty in Science Robotics
Commentary by Yiannis Aloimonos and Cornelia Fermüller notes this alternative to SLAM-reliant computer vision could one day permeate all robotics.
Deep learning helps aerial robots gauge where they are
Aloimonos, Fermüller, Sanket and Singh develop a simple way to estimate an aerial robot's ego-motion/odometry using deep learning and onboard sensors.
MRC Researchers to Present 16 Papers at ICRA 2020
International Conference on Robotics and Automation (ICRA) 2020 will be held online from May 31 to August 31. MRC researchers will present 16 papers at this conference.
IFIG framework helps robots follow instructions
Through deep reinforcement learning, neural networks can be trained to map joint representations of observations and instructions directly to actions.
ISR, ECE, CS, UMIACS faculty present 12 talks at Northrop Grumman University Research Symposium
ISR faculty's eight presentations are in machine learning, science of test, cybersecurity, and IoT sensing.
New EVDodgeNet is a dynamic obstacle avoidance system for quadrotors
The solution was developed by UMD's Perception and Robotics Group.
Hyperdimensional computing theory featured in Voice of America video
The video features the work of UMD faculty Yiannis Aloimonos and Cornelia Fermüller and their students.
Helping robots remember
Hyperdimensional computing theory could change the way AI works.
Maryland Robotics Center launches a new postdoctoral fellowship program
The program will foster multidisciplinary collaborations among Maryland Robotics Center faculty.
GapFlyt helps aerial robots navigate more like birds and insects
Bio-inspired sensorimotor framework allows quadrotors to fly through unknown gaps without building 3D maps.
Northrop Grumman contributes research funding for third consecutive year
Krishnaprasad, Srivastava, Aloimonos each receive $75K for individual research projects.
Aloimonos gives keynote address at Bremen University Talks
The event theme was 'Cognition-Enabled Robotics: Democratizing a Disruptive Technology.'
Maynord and Guha are Qualcomm Innovation Fellowship winners
They will receive a $100K fellowship for their project, "Feedback for Vision."
New research will help cyber-physical systems understand human activities
NSF grant funds Fermüller, Baras and Aloimonos to develop a three-layer architecture.
ARC Lab holds inaugural open house
Lab conducts research in autonomy, robotics and cognition.
Robots learn kitchen skills by watching YouTube videos
Autonomous robots can learn and perform complex actions via observation.
Martins, Gupta, Aloimonos speak at 'Fostering Excellence in Robotics'
Workshop at American Control Conference introduced high school students to robotics.
Aloimonos interviewed by All Things Considered
Research answers the question, "Does an orchestra play better with a conductor?"
Telluride newspaper writes about Neuromorphic Cognition Engineering Workshop
ISR faculty, staff, students key to the workshop's planning and organization.
Ching Teo and Yezhou Yang win in Qualcomm Innovation Fellowship competition
Teo is advised by ISR affiliate faculty member Yiannis Aloimonos.
ISR students Datta, Teo are part of finalist teams in Qualcomm competition
Qualcomm Innovation Fellowship competition recognizes outstanding Ph.D. students.
Aloimonos receives NSF grant for robots with vision that find objects
The research will allow robots to detect and find objects with Active Vision.
Yiannis Aloimonos becomes ISR affiliate faculty member
Professor's research interests are centered in active vision.
Toshiba's Yosuke Okamoto begins six-month visit
Engineer will conduct computer vision, image processing research.
Honda Visiting Scientist Morimichi Nishigaki gives final presentation
July 2004: Engineer summarizes his research into 'Ego-Motion Estimation Using Fewer Image Feature Points.'
- USDA NIFA: Transforming shellfish farming with smart technology and management practices for sustainable production
- Northrop Grumman $75K funding
- NSF CPS: MONA LISA—Monitoring and Assisting with Actions
- Robots with Vision that Find Objects
- NSF NeTS-NOSS: The BehaviorScope Project: Sensory Grammars for Sensor Networks