Robots Can Lie
Dr. Michal Yemini studies optimization and distributed learning networks with lying agents and intermittent connections, with a particular focus on information trust across robotic networks.
In scanning experiments held across several robotics groups around the world, conducted to explore and address the effect of malicious robots on autonomous robotic networks, researchers have managed to operate malicious robots in a way that makes trustworthy robots miss their designated target. “Most classic robotics algorithms assume that robots can maintain a continuous connection, and that they all tell the truth. But robots can also lie,” explains Dr. Michal Yemini. “Anyone who wants to interfere with a system can plant a robot or take over its software, thus making it lie to the next robot. In a more malicious attack, like the one outlined in one such experiment, a single malicious robot can pretend to have multiple identities and be multiple robots, and once it enjoys support from several sources, the entire system follows it. The issues of communication and reliability go hand in hand: if the agent is reliable, I can employ it to improve communication, but if it’s a lying agent, it can lead me astray.”
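The multiple-identity attack Dr. Yemini describes is known in the literature as a Sybil attack. A minimal sketch of why it works, and why trust scores help, is shown below. This is an illustrative toy (the function, the robot names, and the trust values are all made up for this example, not Dr. Yemini’s actual algorithm): each robot reports an estimate of the target’s position, and the network combines the reports by a weighted average.

```python
# Hypothetical illustration of a Sybil attack on a one-step consensus round.
# All names and numbers are made up; this is not the algorithm from the
# experiments described above.

def consensus_estimate(reports, trust=None):
    """Combine reported values into one estimate, optionally weighting by trust."""
    if trust is None:
        trust = {rid: 1.0 for rid in reports}  # naive: trust every identity equally
    total = sum(trust[rid] for rid in reports)
    return sum(trust[rid] * val for rid, val in reports.items()) / total

# Three honest robots agree the target is near position 10.0; one attacker
# spawns three fake identities that all claim 100.0 (the Sybil attack).
reports = {"r1": 10.0, "r2": 10.2, "r3": 9.8,
           "fake1": 100.0, "fake2": 100.0, "fake3": 100.0}

# Naive averaging: the fake identities outvote the honest robots.
naive = consensus_estimate(reports)  # 55.0 — pulled far off target

# With low trust assigned to the suspicious identities, the estimate
# stays near the honest robots' value.
trust = {"r1": 1.0, "r2": 1.0, "r3": 1.0,
         "fake1": 0.01, "fake2": 0.01, "fake3": 0.01}
weighted = consensus_estimate(reports, trust)  # ≈ 10.89
```

The point of the sketch is the asymmetry: a single attacker only needs to mint identities to dominate a naive average, whereas any scheme that discounts untrusted sources blunts the attack.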
Dr. Michal Yemini studies optimization and distributed learning networks with intermittent connections and lying agents. Embarking on her undergraduate studies while still in high school, as part of the Program for the Advancement of Mathematically Talented Youth at Bar Ilan, she completed her computer engineering degree at the Technion, followed by the combined dual Electrical Engineering Master’s and PhD track at Bar Ilan’s Faculty of Engineering. She moved on to a postdoc at Stanford University, where she spent three years, and then at Princeton, where she worked for another two years. “During my postdoc I started to study optimization and distributed learning networks, with intermittent connections and lying agents. Working on multi-agent systems is fascinating; I love this field, and I love the way it interfaces with many other systems and fields, not least robotics, which is really trending now. When you have robotic tasks to solve together, like scanning a big area or tracking, you send out several robots, assigning each its own section. However, they obviously have to be coordinated, which is where information trust comes into the picture. It doesn’t have to involve a hostile agent takeover: all it takes is for me to launch a drone cluster, for example, and if there’s one drone that I can’t reach directly due to fog, then I’d ask another drone to communicate with it. It takes as little as that to compromise trust.”
According to Dr. Yemini, communication trust is a means to an end. “What matters as far as I’m concerned is whether the network meets the end for which it was operated, whether I’ve managed to learn a given model well enough. Take distributed learning, for instance, where you have several agents that want to train a learning model together. Each such agent has its own data, but is reluctant to share these data with the other agents for privacy reasons. One possible solution could be for each of them to make their own calculation and send it over to the cloud, which could in turn regularly aggregate input from the participants and return information about the shared model. This way, they won’t have to send data they’re reluctant to share for privacy reasons,” she explains. “It’s one of the most popular distributed learning models today, applied in image classification, for example, in selecting travel itineraries, or in medical information, like an MRI scan trying to figure out if a patient has a tumor. My work is to see to it that the information is trusted and secure, whether in a scenario where somebody’s lying or where it concerns information privacy and security.”
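The cloud-aggregation scheme she describes is commonly known as federated learning. A minimal sketch of the idea follows; the client names, data, and the choice of fitting a simple mean are all illustrative assumptions, chosen so the aggregation step is easy to verify by hand.

```python
# A toy federated-averaging round: each client computes a model update on its
# private data and shares only that update; the server aggregates without
# ever seeing the raw data. Datasets and names are invented for illustration.

def local_update(data):
    """Each agent's private computation — here, simply a mean estimate."""
    return sum(data) / len(data)

def server_aggregate(updates, sizes):
    """Cloud-side step: data-size-weighted average of the clients' parameters."""
    total = sum(sizes)
    return sum(u * n for u, n in zip(updates, sizes)) / total

# Private datasets stay on the clients (e.g., hospitals with patient scans).
clients = {"hospital_a": [2.0, 4.0],
           "hospital_b": [6.0],
           "hospital_c": [8.0, 10.0, 12.0]}

updates = [local_update(d) for d in clients.values()]   # only these are sent
sizes = [len(d) for d in clients.values()]
global_model = server_aggregate(updates, sizes)
print(global_model)  # 7.0 — identical to the mean over all pooled data
```

For this simple mean-estimation task, the size-weighted aggregate exactly matches what training on the pooled data would give, which is what makes the raw data unnecessary to share in the first place.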
Dr. Yemini believes in multidisciplinary research, which involves her collaboration with several groups. “I find that research problems that comprise several elements are more interesting and hold more benefits for humanity. Plus, it takes a number of strong groups to offer an end-to-end solution,” she says. “I’m a theoretician, sure, but I’m also interested in overlapping disciplines, where you put together a comprehensive system at every level – software, hardware, algorithmics, learning. I’m also interested in ways to ensure there are bounds in place for the system’s behavior, to confirm that it behaves the way I’d like it to in the real world. I collaborate with theoretician groups that explore learning with the aim of obtaining information; I have recently embarked on such collaboration with the neuroengineering track, as well as the nanoelectronics track.”
One of her collaborations, ongoing for about five years, involves Harvard’s Dr. Stephanie Gil, who specializes in robotics. Together, the two women study the resilience and trust of distributed cyber-physical systems. This collaboration has recently earned them the prestigious Zuckerman Travel and Research STEM Fund support. “This grant is awarded to collaborative studies involving Harvard researchers and Israeli peers, and it’s highly competitive – only six groups have earned it, from across Harvard’s engineering and exact sciences tracks,” she tells us. “The fund was established with the aim of encouraging collaborations between Harvard and Israeli universities, while covering travel costs between the universities: I’m going to visit Dr. Gil, while she’s going to visit my lab with her students.”
Dr. Yemini is currently setting up her lab at Bar Ilan’s Faculty of Engineering, which she joined in 2023. In the coming year, as part of the data engineering track, she is set to teach an optimization and distributed learning seminar, as well as an introductory AI course. She is now looking for collaborators. “I’m looking for curious, motivated, mathematics-oriented Master’s and PhD students who wish to be part of the future of distributed learning and optimization networks, while collaborating with world-leading research groups,” she concludes.
Interested? Contact Dr. Yemini at email@example.com
Last Updated: 23/08/2023