University of Michigan Innovation Partnerships

Completed Research Project Proposals

Toyota Research Institute/U-M Partnership

Current Research Project Proposal Summaries

Task-Oriented Active Perception and Motion Planning for Manipulating Piles of Stuff

Project Abstract/Statement of Work:
We envision domestic assistance robots that are capable of performing many practical tasks, such as cooking, cleaning, and laundry, for elderly or disabled people. To be effective at these kinds of tasks, a robot must be able to perceive and manipulate various types of objects in dense clutter. These objects vary in size, from small cups to large chairs; they may vary in structure, from rigid utensils to articulated home gadgets like can openers, even to fully deformable objects like clothing and blankets; they may be composed, such as stacks of cups; and they may commingle, such as a collection of plates, glasses, and utensils on a tray. The standard perceive-then-act approach, which registers CAD models of objects to sensor data before planning to manipulate, is impractical in such densely cluttered scenarios, as some objects may be unknown, some can change shape, and many will be partially or fully occluded. Instead, we propose to solve the problem of manipulating heterogeneous objects in dense clutter through task-oriented active perception and manipulation. The key insight of our approach is that task-oriented active perception allows us to probe only the task-relevant parts or properties of the environment, avoiding the complexity of fully segmenting and registering all objects.

PI and Co-PI:
Dmitry Berenson
Jason Corso

Developing Bicycle-Related Corner Case Scenarios and a Bicyclist Model for Testing Self-Driving Cars Using Naturalistic Driving Data and Crash Data

Project Abstract/Statement of Work:
In this project we will develop bicycle-related corner-case scenarios and a bicyclist behavioral model for testing self-driving cars using existing large-scale naturalistic driving data and crash data. The proposed work has three components: 1) we will examine bicycle-related crash reports from the available crash databases (e.g., Michigan Police crash reports, FARS, and GES); 2) we will develop bicycle-related corner cases using an existing driving dataset from a large-scale naturalistic driving study, the Safety Pilot Model Deployment (SPMD), conducted by the University of Michigan Transportation Research Institute (UMTRI); and 3) we will develop a quantitative bicyclist model that can predict bicyclists' behaviors, such as suddenly swerving to the left, using prior information about the bicyclist, traffic, and road environment.
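The third component, predicting a maneuver such as a sudden left swerve from recent observations, can be illustrated with a toy logistic model. This is only a hedged sketch: the features and coefficients below are illustrative placeholders, not values fitted to SPMD or crash data.

```python
import math

def swerve_probability(lateral_pos, dt=0.1, w_rate=8.0, w_dev=2.0, bias=-4.0):
    """Toy logistic model: probability that a bicyclist is beginning a
    leftward swerve, based on recent lateral positions (meters, left-positive).
    Coefficients are illustrative, not fitted values."""
    n = len(lateral_pos)
    # lateral velocity estimated by a finite difference over the window
    rate = (lateral_pos[-1] - lateral_pos[0]) / (dt * (n - 1))
    # net lateral deviation over the window
    dev = lateral_pos[-1] - lateral_pos[0]
    z = w_rate * rate + w_dev * dev + bias
    return 1.0 / (1.0 + math.exp(-z))

# steady riding vs. a ride that is drifting left
steady = [0.00, 0.01, 0.00, 0.01, 0.00]
drifting = [0.00, 0.10, 0.25, 0.45, 0.70]
```

A fitted version of such a model would learn the weights from labeled encounter windows and would likely also condition on traffic and road-environment features, as the abstract describes.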

PI:
Shan Bao

Embedded General-Purpose Cognitive Computing for Sensory Processing

Project Abstract/Statement of Work:
With the proliferation of deployed sensors to collect data from many sources and modalities, from cameras and acoustic sensors, to MEMS and GPS, there is an increasing demand to extract meaningful information for intelligence and decision making. Extracting meaningful information from a massive amount of noise-like data from multiple sensory inputs is a major computational challenge. The common approach of building accelerators for each type of sensory input is becoming unscalable due to the increasing development and integration cost. There is a need for a general-purpose computing platform for efficiently encoding and classifying data from multiple sensory inputs, and combining them.

Neuro-inspired computing has emerged as a strong contender for sensory data processing. Popular algorithms rely on deep (multilayer) feedforward convolutional neural networks (ConvNets) to project a sensory input onto a set of specialized kernels (features) for detection and classification tasks. The powerful deep ConvNet algorithms demand intense computation for practical applications, and the training of deep ConvNets is especially painstaking and requires very large labeled datasets. Yet deep ConvNets still do not offer assurance in dealing with unexpected environments and events in real-world driving scenarios. How to design versatile neuromorphic computing that can be efficiently trained, scaled up, and adapted for multiple sensory processing in practice remains a challenge.

Prior work has demonstrated solutions through chip-, package- and board-level integration and clever circuit designs to deliver massive-scale neuromorphic computing, including IBM’s TrueNorth, Stanford’s Neurogrid, and Manchester’s SpiNNaker. Despite these impressive strides, it is unclear how these architectures can be miniaturized to embedded platforms to carry out multi-sensory processing with a limited power source, and how they can be trained quickly.

In this pilot study, we plan to investigate algorithm, architecture, circuit, and device co-optimizations in designing multimodal sensory processing hardware to achieve fundamental improvements in function, efficiency, and scalability. The objective of this study is to develop general-purpose neuromorphic computing for extracting sparse representations of sensory inputs and fusing them. The approach builds on the latest work in sparse coding to learn better features, improve classification, and join inputs from multiple sources. The sparse neuromorphic architecture will take advantage of sparsity to significantly reduce the workload, achieving improved performance and energy efficiency for embedded platforms.

Through this pilot study, we will create sparse neuromorphic computing hardware architecture that provides three key capabilities for multi-sensory data processing: 1) to extract sparse representations of sensory inputs and fuse multiple sensory inputs; 2) to exploit sparsity for significant gain in computational performance and efficiency; and 3) to be easily configured and programmed for all types of sensory inputs, as opposed to using one accelerator for each sensor and function.
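As a rough illustration of the sparse-representation step (not of the hardware architecture itself), a sparse code for an input can be computed with iterative shrinkage-thresholding (ISTA), a standard sparse-coding solver. The dictionary below is a random stand-in for learned features; everything here is an assumed, simplified sketch.

```python
import numpy as np

def ista(D, x, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding: find a sparse code a with x ≈ D @ a.
    D is a stand-in dictionary; in a real system its columns would be
    learned features."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # safe gradient step size (1/L)
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - step * D.T @ (D @ a - x)          # gradient step on 0.5*||Da - x||^2
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return a

# recover a 3-sparse code of a 20-dimensional input from a 50-atom dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
a_hat = ista(D, D @ a_true)
```

The workload reduction the abstract mentions comes from the fact that most entries of `a_hat` are exactly zero after soft-thresholding, so downstream computation only touches the few active coefficients.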

PI and Co-PI:
Wei Lu
Zhengya Zhang

The Study on Vehicle Drivers and Bicyclists Interactions and Communications

Project Abstract/Statement of Work:
In this project, we investigate and analyze how drivers interact and communicate with bicyclists on roadways, and examine how varying environmental and vehicle factors may influence such interactions. The focus of this study is on the two most common types of driver-bicyclist interactions: (1) drivers overtaking bicyclists, and (2) drivers yielding to bicyclists at or near intersections. Algorithms are also developed to predict bicyclists' trajectories.

PI and Co-PIs:
Shan Bao
Fred Feng
Robert Hampshire

Accelerated Materials Design of Kirigami Optics Using Machine Learning

Project Abstract/Statement of Work:
Kirigami Nanocomposites and Their Relevance: This project will integrate Machine Learning (ML) with materials design of optical modulators inspired by the art of kirigami. Subwavelength-sized, three-dimensional (3D) kirigami nanocomposites (Fig. 1) add a literal new dimension to the optical components needed for 3D cameras and for acoustic, terahertz, radiofrequency, and laser radars (LIDARs). Nano-kirigami sheets make it possible to manipulate reflected and transmitted beams using periodic scattering patterns from the out-of-plane features defined by the periodic matrix of cuts. Such optical elements enable the replacement of heavy and costly optical components with patterned surfaces. Nano-kirigami composites can transform LIDARs and other remote sensors into flat, conformal, and modular units complying with the functional, aerodynamic, and aesthetic constraints of autonomous cars and home robotics.

Problems To Be Solved:
The difficulty of transitioning LIDAR and similar technologies from expensive high-end systems to human-oriented, everyday parts of a vehicle or a home stems, in large part, from the weight and cost of basic optical components: lenses, mirrors, polarizers, prisms, etc., for eye-safe infrared (IR) wavelengths. Replacing these components with flat, lightweight optical components based on the newest advances in optics is possible. One such technology is the nano-kirigami composite optics developed at the University of Michigan.1,2 It utilizes space-charge effects and out-of-plane surface patterns of composites made from plasmonic nanoparticles or nanocarbons, patterned in the tradition of kirigami artists using state-of-the-art lithography. Such patterns make light modulation possible over distances shorter than the wavelength of incident photons. At the same time, the high elasticity and durability of the nanocomposites enable rapid reconfigurability and scalability. We have recently demonstrated that nano-kirigami composites enable a beam-steering module with an unusually wide steering angle, an essential element of LIDAR and other remote sensors.

PI and Co-PI:
Sharon Glotzer
Nicholas Kotov

A Naturalistic Bicycling Study in the Ann Arbor Area

Project Abstract/Statement of Work:
In the past few years much progress has been made in self-driving technologies and related issues (e.g., legislation and regulation) by a variety of entities across the automotive and tech industries, academic institutions, and government organizations. However, great challenges remain. One critical challenge is that self-driving cars need to share the existing infrastructure with non-motorized road users such as bicyclists and pedestrians. Given the complexity of the real-world road environment and the presumably high variability in the behaviors of non-motorized road users, how self-driving cars should be designed, tested, and tuned to share the road with bicyclists and pedestrians safely and efficiently is a complicated yet crucial question. One way to help answer this question is to collect naturalistic data of people riding bicycles on their everyday trips on real-world roadways, and to use the collected quantitative data to create guidelines, supports, and test scenarios for developing the artificial-intelligence algorithms that allow self-driving cars to interact effectively with bicyclists in real-world environments.

PI and Co-PIs:
Shan Bao
Fred Feng
Anuj Pradhan
Dave LeBlanc
John Sullivan

Trust But Communicate: Implicit and Explicit AV Communications on Pedestrians’ Trust During a Real-World Experiment

Project Abstract/Statement of Work:
Despite the potential benefits of autonomous vehicles (AVs), public skepticism rooted in safety concerns remains a major barrier to their widespread adoption. This skepticism helps explain why trust is a vital precursor to the promotion and acceptance of AVs. Trust is particularly important in the context of AVs and pedestrians for several reasons. First, unlike drivers, pedestrians have not made a conscious decision to subject themselves to the AV. Second, interactions between pedestrians and AVs have the potential to produce frequent small errors that could result in accidents. Even where small errors do not lead to major accidents, they are likely to degrade the driver's and the non-AV road user's confidence in the AV and lead to public mistrust. Mistrust in the AV can lead to underutilization or even complete abandonment of the AV.

To address this issue, we plan to leverage our work from the Year 1 TRI grant "Trust, Control and Risk in Autonomous Vehicles." Our Year 1 preliminary results show that participants could statistically differentiate between different types of AV driving behavior.
For Year 2, we will examine the feasibility of conducting a real-world experiment at Mcity to understand the impact of explicit communications on pedestrians' trust in AVs. Our overall goal is to conduct a real-world experiment at Mcity that allows us to examine how an AV can effectively promote a pedestrian's trust through explicit communications via message boards. Objectives include:
To identify and address all barriers to conducting a real-world experiment at Mcity aimed at understanding pedestrians' trust in AVs through explicit communications via message boards.

To understand the benefits and limitations of relying on a real-world experiment as an approach to understanding pedestrians' trust in AVs.

PI and Co-PIs:
Lionel Robert
Anuj K. Pradhan
Dawn Tilbury
Xi Jessie Yang

Control and Estimation for Provably Safe Interactive Driver Assistance

Project Abstract/Statement of Work:
This project will investigate parallel autonomy to improve vehicle safety. We propose to develop tools for obtaining provably-correct safety barriers offline, and methods for using them online in order to interact with and assist the human driver. We propose a contract-based modular design approach for computation of these safety barriers. The main technical novelties include: (i) incorporating sensing/perception capabilities, captured by a contract, while computing safety barriers, (ii) integrating estimation to identify valid, less conservative safety barriers online, and (iii) providing principled means to evaluate trade-offs between sensing/perception/estimation uncertainty and safety using multi-scale dynamic models. As a side benefit, interactive displays of safety barriers, e.g., via a speedometer with colored indicators similar to a temperature gauge, can make drivers aware of the shared control and the safety margin of their actions.
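For intuition, a minimal safety-barrier supervisor for straight-line braking might look like the sketch below. This is an assumed, radically simplified illustration with hypothetical parameters; the project's barriers are computed offline via contracts and account for sensing, perception, and estimation uncertainty.

```python
def barrier(d, v, b_max=6.0, d_min=2.0):
    """Safety barrier for car following: h >= 0 means the ego vehicle,
    at distance d (m) and speed v (m/s) behind a stopped obstacle, can
    still stop before closing within d_min, braking at up to b_max (m/s^2).
    All numbers are illustrative, not values from the project."""
    return d - d_min - v * v / (2.0 * b_max)

def guardian(d, v, driver_accel, b_max=6.0, d_min=2.0, margin=0.5):
    """Parallel-autonomy supervisor: pass the driver's acceleration
    command through while the barrier holds a margin; otherwise
    command full braking."""
    if barrier(d, v, b_max, d_min) > margin:
        return driver_accel
    return -b_max
```

The margin here plays the role of the colored speedometer indicator mentioned in the abstract: the driver retains control well inside the safe set, and intervention occurs only near its boundary.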

PI and Co-PIs:
Necmiye Ozay
Dimitra Panagou
Sze Yong

Developing a Personalized Guardian System To Assist Aging Drivers Through Machine Learning, Sensor Fusion and Data Mining

Project Abstract/Statement of Work:
We propose to investigate innovative technologies that can be used to build a Guardian system to assist aging drivers through the use of machine learning, sensor fusion, and data mining. Driving is a complex operation that involves primary and often secondary cognitive and motor tasks. The rapid increase in the older-adult population worldwide, many of whom will continue to drive even with cognitive impairment or dementia, will require new approaches to help this population maintain driving safety, and novel ways to monitor and measure ongoing health status, especially in remote areas. We propose to develop enabling technologies to support long-term, real-time, in-vehicle monitoring, learning, and assessment of older adults' driving behavior and physiological signatures (characterized by heart rate, respiration, and skin conductance) under a set of well-defined, workload-related driving scenarios.

In this pilot study, we will focus on studying healthy older drivers and drivers diagnosed with Mild Cognitive Impairment (MCI). Research, including our own, shows that 15-20 percent of people age 65 or older have measurable declines in cognitive function that are noticeable to the person and to others, and that people with MCI have poorer driving abilities, generally related to the level of cognitive impairment. Research also shows that about 30-40 percent of people with MCI will develop dementia within 5 years. It has been estimated that up to one-third of older adults with dementia continue to drive, and they very likely still drive as frequently as other drivers.

PI and Co-PIs:
David Eby
Bruno Giordani
Lisa Molnar
Yi Murphey

Robust Instruction Following Via Deep Reinforcement Learning

Project Abstract/Statement of Work:
Humans will interact with home robots at least in part through natural-language instructions. At present, robust instruction following by robots is not achievable. We propose innovations in DeepRL (the combination of Deep Learning and Reinforcement Learning) to address the following challenges: 1) generalization from training on verb-object-location instructions, e.g., "pick up the box" or "bring me a pencil from the living room," to unseen combinations of verb-object-location pairings in test instructions; 2) generalization from training on tasks composed of sub-task sequences to tasks composed of unseen subtasks; 3) automatic hierarchical decomposition of high-level, complex task instructions into previously trained and untrained subtasks, with verbal explanation of subtask goals to the user for confirmation and feedback; and 4) dialog with the user for (sub)task clarification when needed or useful.
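Stripped to its bare minimum, the instruction-conditioning idea can be shown with tabular Q-learning in a toy corridor. This is only an illustrative sketch of conditioning a policy on an instruction; the project targets deep networks and natural-language instructions, not lookup tables, and none of the details below come from the proposal.

```python
import random

def act(Q, instr, cell):
    """Greedy action under the learned, instruction-conditioned values.
    Actions: 0 = step left, 1 = step right."""
    return 0 if Q.get((instr, cell, 0), 0.0) >= Q.get((instr, cell, 1), 0.0) else 1

def train(episodes=2000, seed=0):
    """Tabular Q-learning on a 5-cell corridor; instruction 0 = 'go left',
    instruction 1 = 'go right'. The policy is conditioned on the instruction
    by keying the Q-table on (instruction, cell, action)."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        instr = rng.randint(0, 1)
        cell = 2                                   # start mid-corridor
        goal = 0 if instr == 0 else 4
        for _ in range(10):
            if rng.random() < 0.2:                 # epsilon-greedy exploration
                a = rng.randint(0, 1)
            else:
                a = act(Q, instr, cell)
            nxt = max(0, min(4, cell + (1 if a == 1 else -1)))
            r = 1.0 if nxt == goal else 0.0
            best_next = max(Q.get((instr, nxt, b), 0.0) for b in (0, 1))
            old = Q.get((instr, cell, a), 0.0)
            Q[(instr, cell, a)] = old + 0.5 * (r + 0.9 * best_next - old)
            cell = nxt
            if r:
                break
    return Q
```

After training, the same agent follows opposite instructions from the same state; the generalization challenges in the abstract are precisely about making this kind of conditioning work for unseen instruction combinations rather than enumerated table entries.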

PI:
Satinder Singh

Developing Decision-Making Models for AV Movements at the Unsignalized Intersections

Project Abstract/Statement of Work:
This study aims to develop probability models of the behaviors of other vehicles and pedestrians to help AVs make decisions when encountering different types of unsignalized urban intersections. The UMTRI IVBSS naturalistic field-operational-test driving database, synchronized with videos, will be used to create the models by observing (1) what a driver operating a vehicle manually will do when other drivers violate right-of-way laws/rules in various unsignalized urban intersection scenarios, and (2) under what circumstances drivers will actually violate those laws/rules. With these models, an AV can better predict the likelihood that other vehicles will yield the right-of-way, or violate the right-of-way rules, and thus better anticipate how to proceed into an intersection.
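One simple way to turn such observations into a usable probability is frequency estimation with add-one smoothing. This is a sketch only; the record fields are hypothetical illustrations, not the IVBSS schema, and the project's models would condition on far richer scenario descriptions.

```python
def yield_probability(observations, scenario):
    """Estimate P(other driver yields | scenario) from labeled encounter
    records, using add-one (Laplace) smoothing so that rarely observed
    scenarios do not get extreme 0 or 1 estimates."""
    outcomes = [o["yielded"] for o in observations if o["scenario"] == scenario]
    return (sum(outcomes) + 1) / (len(outcomes) + 2)

# hypothetical encounter records
records = [
    {"scenario": "four_way_stop", "yielded": True},
    {"scenario": "four_way_stop", "yielded": True},
    {"scenario": "four_way_stop", "yielded": False},
    {"scenario": "t_intersection", "yielded": False},
]
```

Note that an unseen scenario returns 0.5 rather than failing, which is one reasonable default for a planner that must still act at an unfamiliar intersection.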

PI and Co-PIs:
James Sayer
Brian T. W. Lin

Formally Verified Guardians for Enhanced Driving: Emergency Braking, Swerving, and Combined Maneuvers

Project Abstract/Statement of Work:
Our research focuses on formal verification of cyber-physical systems, typically systems where software interacts with the physical world and its continuous dynamics. The objective of formal verification is to establish strong mathematical properties of a system and to provide rigorous formal proofs of those properties. Furthermore, these formal proofs are not only constructed by hand but also checked by computer software, giving us the utmost level of assurance.

This project aims to establish and formally prove safety conditions for emergency braking, emergency swerving, and combined emergency braking-swerving maneuvers. Emergency braking and adaptive cruise control have already received significant attention, but other maneuvers less so. It is also important to account for combined maneuvers, during which the turning capability of the vehicle may be reduced under heavy braking or acceleration.

Our work will start with emergency braking and emergency swerving, the two simplest maneuvers. For those maneuvers, we will study the relevant parameters and establish candidate safety conditions. After establishing such conditions, as well as a formal model of the system, we will formally prove the safety conditions in the KeYmaera hybrid-systems theorem prover. We will then extend our work to simultaneous braking (or accelerating) and swerving, following the same process: first establish safety conditions and a model of the system, then prove the conditions in the KeYmaera theorem prover. Our proofs typically need significant computing resources, for which we will have access to the Conflux computer cluster.
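As one concrete example of the kind of condition involved (a standard form from the hybrid-systems verification literature, not a result of this project), a controller that re-plans every $T$ seconds, brakes at up to $b$, and may accelerate at up to $A$ maintains safety at following distance $d$ and speed $v$ when

```latex
d \;>\; \frac{v^2}{2b} \;+\; \left(\frac{A}{b} + 1\right)\left(\frac{A T^2}{2} + T v\right)
```

where the first term is the stopping distance from speed $v$, and the second term bounds the worst-case distance and speed gained during one control cycle before braking can begin. Conditions of this shape are what get stated and machine-checked in a prover like KeYmaera.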

PI:
Jean-Baptiste Jeannin

Extracting Traffic Primitives From Millions of Naturalistic Driving Encounters – A Synthesized Method Based on Nonparametric Bayesian and Deep Unsupervised Learning

Project Abstract/Statement of Work:
Encounters in which multiple road users meet and coordinate with each other pose key challenges for self-driving and driving-assistance systems. Methods that can automatically process, cluster, and analyze driving encounters from a massive database have become imperative to reduce development cost and duration. The noisy, incomplete, and unbalanced nature of existing databases poses a great challenge to existing auto-encoding methods.

It is estimated that one hour of data requires 800 human-hours to label manually. In recent years, the idea of segmenting long-term time-series data into primitives has been applied to other research fields, such as human motion-trajectory learning. Similarly, we believe it is worthwhile to develop tools that can automatically extract primitives from millions of naturalistic driving encounters, making them applicable to automated driving for both Guardian and Chauffeur.
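To make the primitive idea concrete, here is a deliberately simplified greedy changepoint segmenter over a one-dimensional signal. This is a stand-in illustration only: the project's method is nonparametric Bayesian, not a fixed-threshold rule, and the threshold below is an arbitrary assumption.

```python
def segment_primitives(series, threshold=1.0):
    """Greedy changepoint segmentation: start a new segment whenever the
    next sample deviates from the running segment mean by more than
    `threshold`. Returns half-open (start, end) index pairs, each a
    crude 'primitive' of roughly constant behavior."""
    segments, start = [], 0
    seg_sum, seg_len = series[0], 1
    for i in range(1, len(series)):
        mean = seg_sum / seg_len
        if abs(series[i] - mean) > threshold:    # a new primitive begins
            segments.append((start, i))
            start, seg_sum, seg_len = i, series[i], 1
        else:
            seg_sum += series[i]
            seg_len += 1
    segments.append((start, len(series)))
    return segments
```

A Bayesian segmenter replaces the hard threshold with a posterior over segment boundaries and segment count, which is what gives the modularity and uncertainty quantification listed in the deliverables.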

In our previous research, we developed preliminary techniques to automatically extract traffic primitives and driving behavior primitives from large and multi-scale traffic data from a vision-based sensor (Mobileye) using nonparametric Bayesian learning.

In this project, we aim to synthesize the advantages of these two approaches in dealing with complex driving encounters and develop a new learning-based approach that is versatile to use yet mathematically tractable.

Deliverables:
A new method for automatically analyzing driving encounters that inherits the modularity, uncertainty quantification, robustness, and interpretability of the Bayesian approach while retaining the strengths of deep unsupervised learning.

An automatically labeled database extracted from the UM database, with an estimated 10 million driving encounters.

A pool of encounter primitives representing basic driver-interaction patterns for designing and testing Guardian and Chauffeur systems.

PI and Co-PI:
Ding Zhao
XuanLong Nguyen