Abstract
This paper argues that, in addition to assessing a design’s hard impacts in terms of safety, environmental issues, and health, soft impacts, such as ethical implications, also need to be systematically and proactively addressed. This is particularly important for (weak) AI technologies, which are increasingly shaping today’s society. When Microsoft released its AI chatbot Tay on Twitter in March 2016, Tay was supposed to learn to chat like an average American teenage girl. However, she quickly became sexist, racist, and anti-Semitic. Microsoft proved overly naïve about the intentions of the Twitter users who were to ‘train’ Tay, and in doing so the developers failed to properly acknowledge their ethical responsibility. Even though technology’s non-neutrality has become generally accepted within the fields of ethics and philosophy of technology, other disciplines, such as engineering and computer science, often still adhere to the view that technology itself is neutral. Yet technology mediates our actions and perceptions in numerous ways. Today, algorithms not only shape how we see the world; they can also predict our future behaviour. Neither algorithms nor datasets are inherently neutral; on the contrary, the biases of users and developers can seep into them. Overall, this paper draws attention to the non-neutrality of AI technologies, the ethical responsibility of their developers, and the soft impacts of existing and emerging AI technologies. In doing so, it discusses three ways in which the agenda of technology developers can be broadened to include a more systematic focus on assessing soft impacts.