Eduardo received his BSc (Hons) in Intelligent Systems from the University of Portsmouth (UK) in 2007, and his M.Eng. and Ph.D. degrees in robotics engineering from Osaka University (Japan) under the guidance of Prof. Hiroshi Ishiguro. During his graduate studies, Eduardo’s research focused on swarm robotic systems and how to achieve collective, cooperative, and collaborative groups of robots. Eduardo is currently a Marie Curie postdoctoral fellow at the MIT Media Lab (Human Dynamics Group), where he conducts research on the synergy of swarm robotics systems and blockchain technology. In his previous postdoc position at MIT, Eduardo designed, implemented, and tested a whole range of new robotic agriculture systems (the Food Computer) at the OpenAg initiative. His research interests include swarm and multi-agent robotic systems, decentralized and distributed control, bio-inspired robotic systems, and technology transfer procedures.
You are one of the most involved individuals in robotics and artificial intelligence (AI) I have had the pleasure to know. There’s a lot of confusion between the terms AI and machine learning (ML). Could you provide some clarification?
Artificial intelligence is basically the term we give to the field of computer science in which we design algorithms that do something we would call “smart.” However, AI is a very broad field these days.
What do you think will be the most immediate consequences of AI for the average user?
We now have the ability to combine algorithms that we invented 40 or 50 years ago (e.g., reinforcement learning) with the largest amount of data about human behavior in history. We now have a ton of data about ordinary activities: where we go when we walk around the city, what we search for online, how we behave when we want to buy something, how we behave when we are sick, and so on.
Does this raise concerns about the ethics regarding how the data will be employed?
You have to understand that ML algorithms are only as good as the data you feed them. So the data you use to train these systems and make them “smart” is key. What we currently fail to appreciate is that when we provide this data, or when we design these apparently smart algorithms, we are also projecting our biases into them. For example, suppose you code a very complex deep neural network that predicts people’s credit scores (another way to encode reputation), but you only train it on data from a certain population (e.g., high-income white males). Other profiles that use your product in the future (e.g., single Black mothers) might not be well represented and might be mistreated. So we need a new social contract around AI, to make sure these systems are deployed responsibly and trained on data that is as diverse as possible, so that they do not mistreat certain people simply because those people were not represented when the technologies were pioneered. Actually, my biggest fear about AI and ML is not super-intelligent machines but dumb ones.
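The failure mode Eduardo describes can be demonstrated in a few lines. The sketch below is entirely synthetic and hypothetical (the data, the single-feature “model,” and the group offsets are all invented for illustration): a threshold classifier is fit only on group A, where it performs well, and then applied to group B, whose true decision boundary sits elsewhere. Accuracy drops sharply for the group that was absent from training, even though nothing in the code is explicitly unfair.

```python
import random

random.seed(0)

def make_group(n, offset):
    """Synthetic applicants: one feature x, label 1 if truly creditworthy.

    The feature-to-label relationship is shifted by `offset`, so different
    groups have different true decision boundaries.
    """
    data = []
    for _ in range(n):
        x = random.gauss(0.0, 1.0)
        label = 1 if x > offset else 0
        data.append((x, label))
    return data

def accuracy(data, threshold):
    # Fraction of applicants where "predict 1 iff x > threshold" is correct.
    return sum((x > threshold) == (label == 1) for x, label in data) / len(data)

def fit_threshold(data):
    # "Training": pick the candidate threshold maximizing accuracy on `data`.
    return max((x for x, _ in data), key=lambda t: accuracy(data, t))

# Group A dominates the training data; group B never appears in it.
group_a = make_group(1000, offset=0.0)
group_b = make_group(1000, offset=1.0)

threshold = fit_threshold(group_a)   # trained on group A only
acc_a = accuracy(group_a, threshold)
acc_b = accuracy(group_b, threshold)

print(f"accuracy on group A (in training data):  {acc_a:.2f}")
print(f"accuracy on group B (not in training):   {acc_b:.2f}")
```

The model is near-perfect on the population it was trained on and substantially worse on the one it never saw, which is the quiet, “dumb” failure the interview warns about: no one coded the bias in, it arrived with the sampling.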