Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers
Article written by: Sparrow, Robert
Abstract: When asked about humanity's future relationship with computers, Marvin Minsky famously replied, "If we're lucky, they might decide to keep us as pets". A number of eminent authorities continue to argue that there is a real danger that "super-intelligent" machines will enslave, perhaps even destroy, humanity. One might think it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the "Friendly AI problem". Roughly speaking, this is the question of how we might ensure that the AI that will develop from the first AI we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the "neo-republican" philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity's choices, and the capacity to do so without reference to our interests, it will dominate us and thereby render us unfree. The pets of kind owners are still pets, and that is not a status humanity should embrace. If we really think there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.
Language: English