Volume 2 Issue 8 August 2017
In June, together with more than 500 experts in Artificial Intelligence (AI), I participated in the “AI for Good” Global Summit. Organised in Geneva by the ITU and the XPRIZE Foundation, the summit focused on a crucial and timely question: Can AI contribute to achieving the 17 Sustainable Development Goals that the UN has set to end poverty, protect the planet and ensure prosperity for all?
As an expert in automated decision-making, I know first-hand that AI is a uniquely powerful and transformative technology. AI can have a huge impact, not only in furthering the progress of wealthy countries but also in fostering the advancement of developing nations. For example, AI can teach people new skills and support lifelong learning. At the same time, the development of AI raises ethical and societal challenges for AI experts and policy-makers, who share the responsibility to deploy AI technology that is safe, reliable and fair.
Why is AI so special? As observed by Stephen Cave during the Summit, AI is a tool different from any other because of three crucial aspects: (i) AI is a universal tool, which will soon be incorporated in all other technologies (e.g. self-driving cars, smart homes, robotics, personalised medicine); (ii) AI can accelerate its own development, besides the development of other tools (e.g. machine learning is an AI tool that can improve itself as well as other AI algorithms); and (iii) AI is autonomous (AI agents make and implement decisions without constant human intervention). In addition, AI is based on data, which are often collected from people and contain sensitive information about them. Considering all these factors together, it becomes clear why AI generates excitement but also concern.
In the last few years, I have focused on the development of UAVs (unmanned aerial vehicles) for surveillance and disaster response applications. I have formulated techniques based on task planning and probabilistic reasoning to make UAVs smart enough to fly autonomously and strategically, achieving sophisticated goals specified by domain experts over a large geographical area and a long temporal horizon. The potential of intelligent vehicles in emergency scenarios is enormous, as resources are limited and time is critical. However, these are challenging situations in which decisions can have a life-changing impact, and human operators need to trust the machines and understand their behaviour.
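To give a flavour of what probabilistic task planning means in this setting, here is a deliberately simplified sketch (not my actual system): a UAV greedily picks the next region to survey by weighing the probability that an incident is present, and its importance, against the travel cost of getting there. The region names, coordinates and scoring formula are all illustrative assumptions.

```python
# Toy sketch of probabilistic task planning for a surveillance UAV.
# All regions, probabilities and the scoring rule are hypothetical.

def expected_value(region, position):
    """Expected payoff per unit of flight time for surveying a region:
    probability of an incident times its importance, discounted by
    Manhattan travel distance from the UAV's current position."""
    travel = abs(region["x"] - position[0]) + abs(region["y"] - position[1])
    return region["p_incident"] * region["importance"] / (1 + travel)

def plan_route(regions, position):
    """Greedy plan: repeatedly fly to the region with the highest
    expected value from wherever the UAV currently is."""
    remaining = list(regions)
    route = []
    pos = position
    while remaining:
        best = max(remaining, key=lambda r: expected_value(r, pos))
        route.append(best["name"])
        pos = (best["x"], best["y"])
        remaining.remove(best)
    return route

regions = [
    {"name": "harbour", "x": 0, "y": 5, "p_incident": 0.9, "importance": 2.0},
    {"name": "forest",  "x": 8, "y": 8, "p_incident": 0.2, "importance": 1.0},
    {"name": "suburb",  "x": 1, "y": 1, "p_incident": 0.5, "importance": 1.5},
]

print(plan_route(regions, (0, 0)))  # → ['harbour', 'suburb', 'forest']
```

A real planner would, of course, reason over long temporal horizons and update its incident probabilities from sensor observations, rather than committing greedily; the sketch only shows how uncertainty and cost can be traded off in a single decision rule.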
At the Summit, Professor Virginia Dignum formulated three principles on which AI development should be based, which I find particularly relevant:
- Accountability: an AI system needs to be able to justify its own decisions based on the algorithms and the data it uses. We have to equip AI systems with the moral values and societal norms that are used in the context in which these systems operate;
- Responsibility: although AI systems are autonomous, their decisions should be linked to all the stakeholders who contributed to developing them: manufacturers, developers, users and owners. All of them will be responsible for the system’s behaviour;
- Transparency: users need to be able to inspect and verify the algorithms and data used by the system to make and implement decisions.
I would like to conclude with a provocative remark that Professor Gary Marcus brought to the table during the Summit, and which I share: are we really as close to Strong AI as many people seem to think? Here, “Strong AI” means a system that integrates all aspects of intelligence: common sense, planning, reasoning, analogy, language and perception. I believe that, although AI can truly change the world, we still need fundamental advances first. Key to achieving them are interdisciplinarity and global collaboration. In particular, I would welcome multidisciplinary collaborations to make UAVs and drones truly effective in disaster response scenarios.