Over the last decade, a team led by Professor Dina Katabi at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed "RF-Pose," which uses artificial intelligence (AI) to teach wireless devices to sense human posture and movement, even from behind obstacles.
The researchers use a neural network to analyse radio signals that bounce off the human body, generating a dynamic stick figure that mimics the person's posture and gait. To train the AI, the system was given synchronised wireless and visual inputs: footage of people from a regular video camera, plus the radio frequency signals reflected from their bodies. This way, the AI could work out which radio frequency patterns matched actions such as sitting, standing and walking. Once the system had been trained on which postures and gaits to identify, it no longer required the visual input, because it could track people just as clearly using the radio frequency alone. "If you think of the computer vision system as the teacher, this is a truly fascinating example of the student outperforming the teacher," says Professor Antonio Torralba.
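This teacher-student setup is a form of cross-modal supervision, and the idea can be sketched in a few lines. The sketch below is purely illustrative: the `teacher_pose` stand-in, the linear models, and all shapes are assumptions for demonstration, not RF-Pose's actual architecture. A vision-based "teacher" labels camera frames with keypoints, and a "student" learns to predict those same labels from the synchronised RF frames alone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_keypoints = 14          # e.g. a 14-joint stick figure (illustrative)
rf_dim, img_dim = 32, 32
W_true = rng.normal(size=(img_dim, 2 * n_keypoints))

def teacher_pose(camera_frame):
    # Stand-in for a vision-based pose estimator: maps a camera frame
    # to 2-D keypoint coordinates. Faked here as a fixed linear map.
    return camera_frame @ W_true

# Synchronised pairs: each RF frame corresponds to one camera frame.
# For the sketch, assume the two views are linearly related.
A = rng.normal(size=(rf_dim, img_dim))
rf_frames = rng.normal(size=(200, rf_dim))
camera_frames = rf_frames @ A

# Train the "student" to map RF frames to the teacher's keypoint labels.
labels = teacher_pose(camera_frames)
W_student, *_ = np.linalg.lstsq(rf_frames, labels, rcond=None)

# At test time, pose is predicted from the RF signal alone.
pred = rf_frames @ W_student
print(np.allclose(pred, labels, atol=1e-6))
```

The key property, mirrored in the quote above, is that once training is done the camera drops out entirely: only `rf_frames` are needed at prediction time.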
The team says the system could be used to monitor diseases such as Parkinson's and multiple sclerosis (MS), providing a better understanding of disease progression and allowing doctors to adjust medications accordingly. It could also help elderly people live more independently by providing the added security of monitoring for falls and injuries. The team is currently working with doctors to explore further healthcare applications, including 3-D representations that could capture even smaller micromovements. For example, the system might be able to detect whether an older person's hands are shaking regularly enough to warrant a check-up. "A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices," says Professor Katabi.
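To make the tremor idea concrete: a regular shake shows up as a peak in the frequency spectrum of the tracked hand position. The sketch below is a hypothetical illustration, not the team's method; the 50 Hz tracking rate and the synthetic 5 Hz tremor (roughly the band associated with Parkinsonian rest tremor) are assumptions.

```python
import numpy as np

fs = 50.0                          # assumed tracking rate, Hz
t = np.arange(0, 10, 1 / fs)       # ten seconds of hand positions

# Synthetic hand x-position: a small 5 Hz oscillation plus noise.
rng = np.random.default_rng(2)
hand_x = 0.02 * np.sin(2 * np.pi * 5.0 * t) + 0.005 * rng.normal(size=t.size)

# A regular tremor appears as a dominant spectral peak.
spectrum = np.abs(np.fft.rfft(hand_x - hand_x.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[spectrum.argmax()]
print(f"dominant tremor frequency: {peak:.1f} Hz")
```

A screening tool could flag trajectories whose dominant frequency falls persistently in a tremor band, prompting the check-up mentioned above.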
However, a key challenge the researchers had to address is that most neural networks are trained on data labelled by hand, whereas radio signals cannot easily be labelled by humans. To get around this, the researchers used both their wireless device and a camera to gather thousands of images of people performing activities such as walking, talking, sitting and opening doors. They then extracted stick figures from these images and showed them to the neural network alongside the corresponding radio signals. This combination of examples enabled the system to learn the association between the radio signal and the stick figures of the people in the scene.
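Building that paired dataset requires matching each camera frame to the RF frame recorded closest in time. The snippet below is a minimal sketch of such timestamp alignment, assuming illustrative sample rates; the real RF-Pose capture pipeline is not described in detail here.

```python
import bisect

def nearest(ts_list, t):
    """Index of the timestamp in sorted ts_list closest to t."""
    i = bisect.bisect_left(ts_list, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(ts_list)]
    return min(candidates, key=lambda j: abs(ts_list[j] - t))

camera_ts = [0.00, 0.04, 0.08, 0.12]                 # 25 fps video (assumed)
rf_ts = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06,
         0.07, 0.08, 0.09, 0.10, 0.11, 0.12]         # faster RF sampling (assumed)

# Each camera frame (source of a stick-figure label) is paired with
# the co-timed RF frame (the training input).
pairs = [(t, rf_ts[nearest(rf_ts, t)]) for t in camera_ts]
print(pairs)
```

Since the RF device typically samples faster than the camera, several RF frames go unused per video frame; only the closest-in-time match carries a label.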
Besides sensing movement, the authors also showed that wireless signals could be used to identify a particular individual out of a line-up of 100 people with 83% accuracy. This ability could be particularly useful in search-and-rescue missions, where it may be helpful to locate survivors and know the identity of specific people. RF-Pose could also enable new classes of video games in which players move around the house.
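The line-up metric the authors report is top-1 identification accuracy: given signatures for the enrolled people, each probe measurement is matched to its nearest enrolled signature, and accuracy is the fraction matched correctly. The sketch below shows how such a figure is computed; the random "gait signature" vectors and nearest-neighbour matcher are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_people, dim = 100, 16
enrolled = rng.normal(size=(n_people, dim))          # one signature per person

# Probes: a noisy re-measurement of each person's signature.
probes = enrolled + 0.1 * rng.normal(size=enrolled.shape)

# Nearest-neighbour identification over the 100-person line-up.
dists = np.linalg.norm(probes[:, None, :] - enrolled[None, :, :], axis=-1)
predicted = dists.argmin(axis=1)
accuracy = (predicted == np.arange(n_people)).mean()
print(f"top-1 identification accuracy: {accuracy:.0%}")
```

With realistic, noisier signatures the matches degrade, which is why a figure like 83% rather than 100% is reported.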
To address privacy concerns, CSAIL says that future iterations of the technology could include a "consent mechanism" to ensure those being monitored remain in control of when the system is in use, with users needing to perform a specific set of movements to activate it.
Eva, Consultant, Leyton UK