“In the 1950s and 1960s, we finally created a world where there was a ‘no surprises’ rule regarding nuclear testing. […] You have to start building a system where, because you’re arming yourself or getting ready, you then trigger the same thing on the other side. We don’t have anyone working on that, and yet AI is that powerful.”
This is how Eric Schmidt, former CEO of Google, talks about artificial intelligence. In his view, AI is as dangerous to modern societies as the atomic bomb was in the 1950s. He therefore recommends that world powers such as the United States and China reach agreements. He even goes so far as to call for deterrence treaties like those that exist today between countries possessing nuclear weapons.
“We are not ready for the negotiations that we need.” – @ericschmidt #AspenSecurity
—Aspen Security Forum (@AspenSecurity) July 22, 2022
Regulating artificial intelligence
Broadly speaking, these treaties were born in the aftermath of the Second World War. While the great powers equipped themselves with nuclear bombs as deterrent weapons, agreements were reached between nations. Since then, it has been forbidden to carry out nuclear tests without warning other foreign powers.
According to Eric Schmidt, states should come together to apply a similar policy to artificial intelligence. He believes that this technology may, in the future, represent a real threat to humanity. It is a view shared by Sundar Pichai, who told Recode in 2018:
Advances in artificial intelligence are still in their infancy, but I consider it the most profound technology that humanity will work on, and we need to ensure that we harness it for the benefit of society (…) Fire also kills people. We have learned to control it for the good of humanity, but we have also learned to contain its harms.
Rather than waiting for the authorities to take care of it, Eric Schmidt has taken matters (somewhat) into his own hands. Last February, he created the AI2050 fund. Reserved for academics, it aims to finance “research on ‘hard problems’ in artificial intelligence”. Among these “problems”, researchers will focus on algorithmic bias, misuse of the technology, and geopolitical conflicts. An initial donation of 125 million dollars will make it possible to launch the research.