June 6, 2023

An Ethics for AI or AI for Ethics?

— Author: Stefano Gallinaro

As Artificial Intelligence (AI) has become a trending topic, we have seen the emergence of both enthusiasts and detractors.

This is because the first response of human beings is often emotional. When faced with something new, two fundamental human emotions can be triggered: enthusiasm and fear. Depending on which emotion prevails, we may find ourselves supporting or opposing the new development.

Upon further reflection, we begin to question the reasons for our enthusiasm or fear and seek to verify them.

One argument frequently made regarding artificial intelligence is the importance of ethical considerations in its development and use. This is not a new or trivial question; indeed, Isaac Asimov built much of his fiction around it, most famously in his Three Laws of Robotics.

What moral criteria guide an AI system?

Consider the case of a self-driving car that must decide whether to hit a child who has run into the road or swerve, potentially killing the passenger inside (a modern variant of the classic trolley problem). There is no definitive right answer. We might accept any decision made by a human in such a situation because it would be the result of a split-second emotional reaction rather than a pre-programmed choice.

The ethics of machines, then, are a consequence of the choices made by the humans who program them. The power in the hands of these programmers is enormous, as their ethics can shape those of the machines and, in turn, our relationship with them. It is not difficult to imagine the dystopian potential of this situation.

However, this remains true only if we assume that:

  • Ethics are static and do not change over time
  • Machines are not endowed with intelligence

Regarding the first point, it is clear that ethics evolve alongside civilization.

Practices once considered acceptable, such as slavery, are now seen as barbaric, while previously unacceptable ideas, such as equality between men and women, have become the norm.

The second point was true until the advent of AI. The distinguishing feature of AI is not simply the ability to answer questions like a human (Siri already does that) but the ability to learn and adapt to the context in which it operates. The ultimate goal of an AI system is to make the best possible decision based on the information it has, and it can improve its decision-making by expanding its dataset and refining its model.
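To make that learning loop concrete, here is a minimal Python sketch (using scikit-learn and its bundled tumor-diagnosis dataset; the example is mine, not taken from any particular system): the same simple classifier generally makes better decisions as its training data grows.

```python
# Illustrative sketch of "expand the dataset, refine the model":
# train the same classifier on progressively larger slices of the data
# and watch its test accuracy (usually) improve.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A standard diagnostic dataset: tumor features -> benign/malignant.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, len(X_train)):  # progressively larger training sets
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n} examples -> accuracy {model.score(X_test, y_test):.3f}")
```

The exact numbers depend on the random split, but the trend is the point: more information, better decisions.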

If machines can improve in areas like diagnosing skin cancer, why couldn't they improve their ethical reasoning as well? Human advancements in science have been accompanied by social and cultural progress, including ethical development. This process has involved trial and error, acquiring information, and self-questioning—similar to the workings of AI.

The enthusiastic part of me is inclined to see the potential for AI to accelerate our cultural progress. Imagine the number of hypotheses that could be formulated and tested by a system that does not cling to a concept out of habit or received common sense, but evaluates it on its merits. Even if programmed with significant initial biases, AI systems can learn from their environment and improve their reasoning more rapidly than humans.

Machines already calculate, diagnose, and predict better than we do; why shouldn't they one day legislate or educate better as well?

However, the fearful part of me worries that this path will be longer than expected, not because of the machines themselves, but because of human resistance. In the end, what could be a tool for creating a better world may become an obstacle if we are unwilling to embrace that better world.
