Adherence to injectable therapies for chronic diseases (e.g., rheumatoid arthritis, multiple sclerosis, psoriasis, Crohn's disease) is low (45-60%), in part because of the inconvenience and anxiety associated with using needles and syringes. The biological medicines that treat these conditions cannot be formulated as pills, so technologies that replace needles and syringes have an enormous opportunity to transform the perception, approachability, and market penetration of such therapies. Portal has developed a next-generation needle-free drug delivery platform that is computer-controlled, easy to use, and preferred by patients. Real-time injection tracking via cloud-based connectivity enables patients and their care teams to manage their condition better and take charge of their wellbeing. Originating from Professor Ian Hunter's research at the MIT BioInstrumentation Lab, the technology leverages advances in multiple disciplines, including high-power-density electromagnetic actuators, ARM-based microelectronics and embedded software, and energy storage. The company is at the commercial stage, preparing to launch a drug/device combination product with Takeda Pharmaceuticals in the field of inflammatory bowel disease. A live demonstration of the Portal device will be presented at the Symposium.
The traditional lecture-and-laboratory approach to teaching science and engineering has dominated education at high schools and universities for centuries. Although classroom demonstrations are sometimes used to provide instructive and motivating examples of taught concepts, in large classes they are difficult to see, and without direct "hands-on" involvement of the students they have limited effect. Our initiative to address this shortcoming is MICA (Measurement, Instrumentation, Control and Analysis), an educational approach designed for subjects in Science, Technology, Engineering and Mathematics (STEM). Students interact with an experimental workstation (the MICA workstation) to conduct experiments, analyze data, undertake parameter estimation, and fit mathematical models, while learning the theory and relevant subject history under the guidance of a virtual tutor (the MICA avatar). As students interact with the MICA workstations, their skill level, rate of learning, and progress are quantified. Based on these data, deep learning techniques and mathematical modeling are then used to generate an individualized model of a student's state of knowledge, which is augmented every time the student interacts with a MICA workstation. This 'state of knowledge' model is then used by the MICA tutor to personalize (and eventually optimize) the teaching pace as well as the way in which subject material is delivered.
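MICA's actual student model is not specified here; as a hedged illustration of how a per-skill "state of knowledge" estimate could be augmented after every workstation interaction, the Python sketch below applies classic Bayesian knowledge tracing. The parameters (p_slip, p_guess, p_learn) and their values are assumptions for the example, not MICA's model.

```python
# Illustrative sketch only: a Bayesian knowledge-tracing update, one plausible
# way to maintain a per-skill mastery estimate that is refined after every
# workstation interaction. Parameter values are invented for the example.

def bkt_update(p_mastery, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Return the updated probability that the student has mastered a skill,
    given one observed response (correct or incorrect)."""
    if correct:
        # Posterior given a correct answer: mastered and did not slip,
        # or not mastered and guessed.
        evidence = p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess
        posterior = p_mastery * (1 - p_slip) / evidence
    else:
        evidence = p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess)
        posterior = p_mastery * p_slip / evidence
    # Account for learning that may occur during the interaction itself.
    return posterior + (1 - posterior) * p_learn

# Example: a near-novice student answers three items on one skill.
p = 0.2
for outcome in [True, False, True]:
    p = bkt_update(p, outcome)
    print(f"updated mastery estimate: {p:.3f}")
```

A model of this kind gives the tutor a running probability per skill, which is one simple way a teaching pace could be personalized, as the abstract describes.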
We are on the cusp of a step change, and again, schools like MIT are competing to lead it. This has set off an "arms race" in higher education that will shape the future people who will work with, and for, you. Universities are spending hundreds of millions of dollars in a competition to create innovation ecosystems that produce technology innovators with combined making and innovation skill sets.
You're going to want to know who these people are, who is best at educating and creating them, and how to gain a competitive advantage in hiring them. In this talk, I'm going to help you figure that out.
An open question in artificial intelligence is how to endow agents with the common sense knowledge that humans naturally seem to possess. A prominent theory in child development posits that human infants gradually acquire such knowledge through experimentation. According to this theory, even the seemingly frivolous play of infants is a mechanism for conducting experiments to learn about their environment. Inspired by this view of biological sensorimotor learning, I will present my work on building artificial agents that use the paradigm of experimentation to explore and condense their experience into models that enable them to solve new problems. I will discuss the effectiveness of my approach, and open issues, using case studies of a robot learning to push objects, manipulate ropes, and find its way in office environments, and of an agent learning to play video games based merely on the incentive of conducting experiments.
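The abstract does not spell out how "conducting experiments" becomes an incentive; one common formulation, sketched below under assumed toy dynamics, rewards the agent by the prediction error of a learned forward model, so that surprising transitions are intrinsically rewarding. The linear model, dimensions, and learning rate are illustrative assumptions, not the speaker's exact method.

```python
import numpy as np

class CuriosityModule:
    """Minimal sketch (assumed details): intrinsic reward is the squared error
    of a learned linear forward model predicting next_state from (state, action)."""
    def __init__(self, state_dim, action_dim, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr

    def reward_and_update(self, s, a, s_next):
        x = np.concatenate([s, a])              # model input: state and action
        err = s_next - self.W @ x               # prediction error ("surprise")
        self.W += self.lr * np.outer(err, x)    # SGD step: reduce future surprise
        return float(err @ err)                 # curiosity reward

# Toy dynamics the agent does not know in advance; it "experiments"
# with random actions and is rewarded for what it cannot yet predict.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) * 0.3
B = rng.normal(size=(4, 2)) * 0.3
curiosity = CuriosityModule(state_dim=4, action_dim=2)
s = rng.normal(size=4)
for t in range(201):
    a = rng.normal(size=2)
    s_next = A @ s + B @ a
    r = curiosity.reward_and_update(s, a, s_next)
    s = s_next
    if t % 50 == 0:
        print(f"step {t:3d}: intrinsic reward = {r:.4f}")
```

As the forward model improves, the intrinsic reward shrinks, which is what pushes an agent of this kind toward parts of the environment it has not yet understood.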
Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can't explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people's performance on military, healthcare, and manufacturing tasks when they are aided by intelligent machine teammates.
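As a hedged illustration of inferring a teammate's mental state from observed actions, the sketch below maintains a Bayesian posterior over a small set of hypothesized strategies; the strategy names, actions, and likelihood values are invented for the example and are not the models developed in this work.

```python
import numpy as np

# Illustrative sketch (assumed setup): a machine teammate observes a human's
# actions and maintains a posterior over hypothesized strategies, a simple
# Bayesian form of "inferring the mental state of a teammate".

strategies = ["triage_first", "restock_first", "assist_surgeon"]
# Hypothetical per-strategy likelihoods of each observable action.
likelihood = {
    "triage_first":   {"check_patient": 0.7, "fetch_supplies": 0.2, "assist": 0.1},
    "restock_first":  {"check_patient": 0.1, "fetch_supplies": 0.8, "assist": 0.1},
    "assist_surgeon": {"check_patient": 0.2, "fetch_supplies": 0.1, "assist": 0.7},
}

def update_belief(belief, action):
    """One Bayes step: P(strategy | action) is proportional to
    P(action | strategy) * P(strategy)."""
    posterior = np.array([belief[i] * likelihood[s][action]
                          for i, s in enumerate(strategies)])
    return posterior / posterior.sum()

belief = np.ones(len(strategies)) / len(strategies)   # uniform prior
for action in ["fetch_supplies", "fetch_supplies", "check_patient"]:
    belief = update_belief(belief, action)
    print(action, "->", dict(zip(strategies, belief.round(3))))
```

A belief of this form is what would let a machine teammate anticipate the human's next need instead of merely reacting to it.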
Our work addresses planning, control, and mapping for autonomous robot teams that operate in challenging, partially observable, dynamic environments with limited field-of-view sensors. In such scenarios, individual robots must be able to plan and execute safe paths on short timescales to avoid imminent collisions. Performance can be improved by planning beyond the robots' immediate sensing horizon using high-level semantic descriptions of the environment. For mapping on longer timescales, the agents must also be able to align and fuse imperfect and partial observations to construct a consistent and unified representation of the environment. Furthermore, these tasks must be performed autonomously onboard, which typically adds significant complexity to the system. This talk will highlight three recently developed solutions to these challenges: (1) robustly planning paths and demonstrating high-speed agile flight of a quadrotor in unknown, cluttered environments; (2) planning beyond the line of sight by utilizing learned context within the local vicinity, with applications in last-mile delivery; and (3) a multi-way data association algorithm that correctly synchronizes partial and noisy representations and fuses maps acquired by single or multiple robots, showcased in a simultaneous localization and mapping (SLAM) application.
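The multi-way data association algorithm itself is not described in this abstract; purely to illustrate the underlying problem of associating and fusing two robots' partial maps, the sketch below matches landmarks by mutual nearest neighbor in an invented descriptor space and then aligns the maps with a least-squares rigid transform. All data, dimensions, and noise levels are toy assumptions, not the talk's method.

```python
import numpy as np

# Toy sketch (assumed setup): two robots observe overlapping 2D landmarks with
# noisy descriptors. We associate landmarks by mutual nearest neighbor in
# descriptor space, then align the maps with a least-squares rigid transform,
# a much simpler stand-in for multi-way association and map fusion.

rng = np.random.default_rng(0)
n = 30
world = rng.uniform(0, 20, size=(n, 2))         # true landmark positions
desc = rng.normal(size=(n, 8))                  # true landmark descriptors

theta = 0.4                                     # unknown rotation between maps
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t = np.array([5.0, -2.0])
map_a = world + rng.normal(0, 0.05, (n, 2))
map_b = world @ R.T + t + rng.normal(0, 0.05, (n, 2))
desc_a = desc + rng.normal(0, 0.1, desc.shape)
desc_b = desc + rng.normal(0, 0.1, desc.shape)

# Mutual nearest-neighbor association in descriptor space.
d = np.linalg.norm(desc_a[:, None] - desc_b[None, :], axis=2)
ab, ba = d.argmin(axis=1), d.argmin(axis=0)
pairs = [(i, ab[i]) for i in range(n) if ba[ab[i]] == i]

# Least-squares rigid alignment (Kabsch/Procrustes) from the associated pairs.
P = np.array([map_a[i] for i, _ in pairs])
Q = np.array([map_b[j] for _, j in pairs])
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
R_est = Vt.T @ U.T
if np.linalg.det(R_est) < 0:                    # guard against a reflection
    Vt[-1, :] *= -1
    R_est = Vt.T @ U.T
t_est = Q.mean(0) - R_est @ P.mean(0)
angle_err = abs(np.arctan2(R_est[1, 0], R_est[0, 0]) - theta)
print(f"{len(pairs)} associations; rotation error = {angle_err:.4f} rad")
```

The hard part in practice, and the subject of the talk, is making the association step consistent across many robots despite wrong matches; this two-map sketch only frames the problem.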
Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects, localize them, and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search and rescue, demand a new generation of algorithms. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness: I present recent advances in the design of certifiably robust spatial perception algorithms that withstand extreme amounts of outliers and afford performance guarantees. These algorithms are "hard to break" and are able to work in regimes where all related techniques fail. The second effort targets metric-semantic understanding. While humans are able to quickly grasp both the geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present recent work on real-time metric-semantic understanding, which combines robust estimation with deep learning. I discuss these efforts and their applications to a variety of perception problems, including mesh registration, image-based object localization, and robot Simultaneous Localization and Mapping.
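The certifiably robust algorithms are not detailed in the abstract; as a toy illustration of why outlier-robust costs matter, the sketch below fits a line under heavy outlier contamination using a truncated least-squares cost, solved approximately by iteratively reweighted least squares with a gradually tightening threshold. The data, threshold schedule, and iteration count are assumptions, and this simple scheme offers none of the performance guarantees discussed in the talk.

```python
import numpy as np

# Toy illustration only: outlier-robust line fitting with a truncated
# least-squares cost, solved approximately by iteratively reweighted least
# squares (IRLS) with 0/1 weights and a shrinking inlier threshold.

rng = np.random.default_rng(0)
n, outlier_frac = 200, 0.6                      # 60% of points are outliers
x = rng.uniform(-5, 5, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, n)       # inliers follow y = 2x + 1
mask = rng.random(n) < outlier_frac
y[mask] = rng.uniform(-30, 30, mask.sum())      # gross, structureless outliers

A = np.column_stack([x, np.ones(n)])
theta = np.linalg.lstsq(A, y, rcond=None)[0]    # plain least squares, corrupted
print("least squares:", theta.round(2))

c, w = 10.0, np.ones(n)                         # start with a loose threshold
for _ in range(15):
    # Weighted fit: rows with w = 0 drop out of the problem entirely.
    theta = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
    w = (np.abs(A @ theta - y) < c).astype(float)
    c = max(0.5, 0.7 * c)                       # gradually tighten the threshold
print("truncated LS :", theta.round(2), f"({int(w.sum())} points kept)")
```

Plain least squares is dragged far from the true line by the 60% outliers, while the truncated cost simply stops paying for points beyond the threshold; the certifiable methods in the talk additionally prove when such a robust estimate is globally optimal.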
Moderator: David Keith

Panelists (4-minute statement each):
Kent Larson
Carlo Ratti
Sarah Williams
Jinhua Zhao
Given the severe mobility challenges in urbanizing areas, numerous visions for designing urban mobility systems are being discussed by policymakers, planners, and industry. These visions must anticipate technological and sociodemographic developments while accounting for the constraints of operator business models and environmental concerns. In this session, MIT faculty will share and discuss their ideas for urban mobility systems around the globe, considering both promising technologies and the heterogeneity among the world's urban centers.