Thursday, October 5, 2017

Missy Cummings — We need to overcome AI's inherent human bias


Like I've been saying: GIGO (garbage in, garbage out).

This is somewhat similar to the systems/game-theoretic approach to military strategy that was highly popular with strategists shaping policy at the time of the Vietnam War.
Missy Cummings | Director, Humans & Autonomy Laboratory

See also

Justifiably, there is a growing debate on the ethics of AI use. How do we roll out AI-based systems that cannot reason about some of the ethical conundrums that human decision-makers need to weigh – issues such as the value of a life and ending deep-seated biases against under-privileged groups? Some even propose halting the rollout of AI before we have answered these tough questions.

I would argue that it’s not acceptable to reject today’s AI due to perceived ethical issues. Why? Ironically, I believe it might be unethical to do so.

Greater good

At its core, there is a “meta-ethics” issue here.

How can we advocate halting the deployment of a technology solely because of a small chance of failure, when we know that AI technologies harnessed today could definitely save millions of people?
The basis of utilitarian, consequentialist ethics is "utility": "good" is defined in terms of the greatest good for the greatest number.

Deontological ethics is rule-based. Kantian deontological ethics rests on the rule that one should act only in ways that can be generalized as a universal principle, which is a philosophical way of stating the Golden Rule.

Virtue ethics is based on a constellation of virtues that do not necessarily align; practical wisdom must be applied as the criterion for reasoning among them.

Moral sentiments theories like those of David Hume and Adam Smith in The Theory of Moral Sentiments are based on a moral sensibility or refined feeling.

Situational ethics denies a universal approach to ethical decision-making in that every case is a special case and needs to be approached as such.
Situational ethics, or situation ethics, takes into account the particular context of an act when evaluating it ethically, rather than judging it according to absolute moral standards. In situation ethics, within each context, it is not a universal law that is to be followed, but the law of love. A Greek word used to describe love in the Bible is "agape". Agape is the type of love that shows concern about others, caring for them as much as one cares for oneself. Agape love is conceived as having no strings attached to it and seeking nothing in return; it is a totally unconditional love. Proponents of situational approaches to ethics include Kierkegaard, Sartre, de Beauvoir, Jaspers, and Heidegger.
Specifically Christian forms of situational ethics placing love above all particular principles or rules were proposed in the first half of the twentieth century by Rudolf Bultmann, John A. T. Robinson, and Joseph Fletcher. These theologians point specifically to agapē, or unconditional love, as the highest end. Other theologians who advocated situational ethics include Josef Fuchs, Reinhold Niebuhr, Karl Barth, Emil Brunner, Dietrich Bonhoeffer, and Paul Tillich.  Tillich, for example, declared that "Love is the ultimate law."
Fletcher, who became prominently associated with this approach in the English-speaking world due to his book (Situation Ethics), stated that "all laws and rules and principles and ideals and norms, are only contingent, only valid if they happen to serve love" in the particular situation, and thus may be broken or ignored if another course of action would achieve a more loving outcome. Fletcher has sometimes been identified as the founder of situation ethics, but he himself refers his readers to the active debate over the theme that preceded his own work.
Perennial Wisdom is in agreement with "the law of love" as supreme, while also emphasizing that there are categories of mutual upholding, such that different conditions entail different responsibilities independently of specific contexts and circumstances. For example, parents' first responsibility is to provide for their own families; citizens' first responsibility is to their own countries.

Ethical dilemmas should not halt the rollout of AI. Here’s why
Kartik Hosanagar | Professor, The Wharton School, University of Pennsylvania

3 comments:

Ignacio said...

It's a big problem, but it solves itself once you have AI that is actually capable of learning and evolving completely unassisted.

That would imply some level of self-awareness, so we are not even close to that.

Current "AI" ("neural" networks mostly) is more fixed than usually acknowledged and has very little capabilities in terms of topological morph and expansion.

Noah Way said...

We will know AI has been achieved when it acts as stupidly as people do, or when it starts exterminating humans.