Ethics, Analytics and the Duty of Care

Stephen Downes

Knowledge, Learning, Community

Oct 02, 2023

Excerpts from my article, which can be found in full here: https://docs.google.com/document/d/1WM3u6ddxDQdK4FvxfEJEBlhdx0mJelgFj9YRJcyIZp8/edit?usp=sharing (warning: long and detailed, yet still not as comprehensive as it could be - comments are very welcome).

Ethics should make us joyful, not afraid. Ethics is not about what's wrong, but what's right. It speaks to us of the possibility of living our best life, of having aspirations that are noble and good, and gives us the means and tools to help realise that possibility. We spend so much more effort trying to prevent what's bad and wrong when we should be trying to create something that is good and right.

Similarly, in learning analytics, the best outcome is achieved not by preventing harm, but rather by creating good. Technology can represent the best of us, embodying our hopes and dreams and aspirations. That is the reason for its existence. Yet, "classical philosophers of technology have painted an excessively gloomy picture of the role of technology in contemporary culture," writes Verbeek (2005:4). What is it we put into technology and what do we expect when we use it? In analytics, we see this in sharp focus.

Ethics, at first glance, appears to be about 'right' and 'wrong', perhaps as discovered (Pojman, 1990), perhaps as invented (Mackie, 1983). The nature of right and wrong might be found in biology, rights, fairness, religion, or any number of other sources, depending on who is asked. Or instead, ethics may be based in virtue and character, as described by Aristotle (350 BCE/2003, I.III) in ancient Greece. Either way, ethics is generally thought of as speaking to what actions we 'should' or 'ought' to take (or 'should not' or 'ought not' to take).

In this paper, however, I argue that ethics is based on perception, not principle. It springs from that warm and rewarding sensation that follows when we have done something good in the world. It reflects our feelings of compassion, of justice, of goodness. It is something that comes from inside, not something that results from a good argument or a stern talking-to. We spend so much effort drafting arguments and principles as though we could convince someone to be ethical, but the ethical person does not need them, and if a person is unethical, reason will not sway them.

We see the same effect in analytics. Today's artificial intelligence engines are not based on cognitive rules or principles; they are trained using a mass of contextually relevant data. This makes them ethically agnostic, and it means they defy simple statements of what they ought not do. And so the literature on ethics in analytics expresses the fears of alienation and subjugation common to the traditional philosophy of technology. We lose sight not only of the good that analytics might produce, but also of the best means of preventing harm.
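
To make this concrete, here is a minimal sketch (in Python, using scikit-learn; the example sentences, labels, and the kind/unkind task are invented for illustration, not drawn from the paper) of the difference between an engine that follows a stated rule and one trained on data:

```python
# A hand-written rule: explicit, easy to state, easy to audit.
def rule_based(text: str) -> str:
    return "unkind" if "worthless" in text.lower() else "kind"

# A trained model: no stated rule, just contextually relevant examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

examples = [                          # hypothetical training data
    "you did a wonderful job",
    "that was thoughtful of you",
    "nobody wants you here",
    "what a worthless effort",
]
labels = ["kind", "kind", "unkind", "unkind"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(examples, labels)

# The trained model's behaviour is implicit in learned weights;
# there is no single principle to point to, inspect, or amend.
print(model.predict(["a thoughtful job"]))
```

The sketch only illustrates the shape of the problem: the rule can be audited and corrected directly, while the trained model's 'ethics' are distributed across its weights, which is why simple statements of what it ought not do are so hard to enforce.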

What, then, do we learn when we bring these considerations together? That is the topic of this essay. Analytics is a brand new field, coming into being only in the last few decades. Yet it wrestles with questions that have occupied philosophers for centuries. When we ask what is right and wrong, we ask also how we come to know what is right and wrong, how we come to learn the distinction, and to apply it in our daily lives. This is as true for the analytics engine as it is for the person using it.

...

"Feminist ethical theory deals a blow to the exclusively rational systems of thought that may have as their grounding and inherent disregard for the inherently personal, and sometimes, gender-based nature of knowledge construction." (Craig Dunn and Brian Burton writing an Encyclopedia Britannica) What this means is that it moves ethical knowledge from the realm of explicit knowledge to what Polanyi would describe it as the realm of tacit knowledge. 

Our ethical actions are not deductions. They're not inferences. So, what are they? We might say they're like what Jack Marshall calls 'ethics alarms', the feelings in your gut. "Emotions and their embodiments thus become central to the construction of knowledge and knowing-subjects, and in particular knowledges about education and pedagogies of inclusion/exclusion, justice/injustice" (Motta & Bennett, 2018:634).  It's more like a sensation than a type of cognition. We can call this a moral sense, or as David Hume would describe it, a moral sentiment.

To expand on this as a story about moral sense, we can draw from Elizabeth Radcliffe, who suggests that moral distinctions depend on our experience, sentiments or feelings. This is not a theory of innateness or natural morality, nor are we saying that we have an inborn awareness of what morality is. It's not a sort of Cartesian certainty like "I think, therefore I am, therefore I am moral." The idea is that we can learn ethics, but that we learn ethics in such a way that we feel or experience a moral sense, rather than acquiring fully formed general principles.

It's important here to be clear that this is different from moral intuition. Speaking of the ethics of care, people may equate what they're talking about with intuition, as in "women's intuition", for example. But that's not what's intended. It's more like a sentiment or a feeling. It's more equivalent to your sense of balance; you get this feeling when you're off balance, and you wouldn't describe that as an intuition. It is often experienced at a subsymbolic (or 'ineffable') level; ethics is not (contra Kant) a matter of rationality but rather one of sympathy. How we react in a particular case depends on our ethical background and is the result of multiple simultaneous factors, not large-print key statements.

How can you learn a sense? Think about training your taste buds. A sommelier, for example, a taster of wine, will over time learn how to distinguish different types of wines. Similarly, someone who is a coffee aficionado will learn to distinguish different types of coffee. Moral sensations are like that: a sort of affective feeling that we might have, not an emotion such as anger, fear, hope or desire, but a much more gentle and subtle kind of feeling. Similarly, it is arguable that such feelings "have shaped the cultural evolution of norms. For example, groups share autonomy norms in part because these norms resonated with moral feelings of respect and were therefore favoured in cultural transmission."
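
The sommelier analogy can also be given a machine-learning reading. As a hedged sketch (again Python with scikit-learn; the 'tasting' features, labels, and numbers are invented for illustration), a classifier acquires its discrimination through repeated labelled exposure rather than from any stated rule:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical 'tasting notes': [acidity, tannin, sweetness]
pinot = np.array([[0.8, 0.3, 0.2]])
cabernet = np.array([[0.4, 0.9, 0.1]])

model = SGDClassifier(random_state=0)

# Repeated exposure: each 'tasting' is one incremental update.
# No rule is ever stated; the discrimination sharpens over time.
for _ in range(200):
    model.partial_fit(
        np.vstack([pinot, cabernet]),
        np.array([0, 1]),             # 0 = pinot noir, 1 = cabernet
        classes=np.array([0, 1]),
    )

# The acquired 'sense' is a disposition exercised case by case:
# a new sample is judged by resemblance, not by a stated criterion.
print(model.predict([[0.75, 0.35, 0.25]]))  # likely [0], i.e. pinot
```

Like the moral sense described above, the classifier's judgement is acquired through exposure and expressed one case at a time, not derived from a general principle.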

...

Ethics based on virtue, duty, or beneficial outcomes is not satisfactory in fields like AI and learning analytics. We don't agree on what 'the good' is. We can't predict what the consequences will be. We can't repair bad consequences after the fact. Ethics, especially in the professions, is typically defined in terms of social contracts, rights or duties, and as such, as statements of rules or principles. But these don't take into account context and particular situations. They also don't take into account the larger interconnected environment in which all this takes place. And they don't take into account how analytics and AI themselves work.

Instead, as the feminist philosophies of care show us, ethics, including the ethics of AI, is about relationships: how we interact with and care for each other. And as a key point of these interactions, our analytics are always going to reflect us (think of Michael Wesch's 'The Machine is Us/ing Us'; think of the case of Tay, the chatbot that turned racist after training on tweets). The ethics of AI is based, in a concrete practical sense, on what we do and what we say to each other. This is the ethics we apply when we ask "what makes so-and-so think it would be appropriate to post such-and-such?" If there is a breakdown in the ethics of AI, it is merely reflective of a breakdown in the social order generally (Belshaw, 2011).

This breakdown is what motivates us to study the Duty of Care, a feminist philosophical perspective that takes a relational and context-bound approach toward morality and decision-making and, more importantly, looks at moral and ethical relationships that actually work. These are based on different objectives: not 'rights' or 'fairness' but things like a sense of compassion; not a rigid set of principles but an attitude or approach of caring and kindness; not constraining or managing our temptation to do wrong, but finding ways to do good.

In the end, ethics are derived from our own lived experiences, and thus reflect the nature of a community as an entire system, rather than one individual making a decision. We need to keep in mind how we're all connected. What's important here is how we learn to be ethical in the first place (as opposed to the specific statement of a set of rules defining what it is to be ethical). How should this be approached in practice, in learning, in a workplace, and in society? By creating an ethical culture (rather than an emphasis on following rules), by encouraging a diversity of perspectives to create a wider sense of community, and by encouraging openness and interaction (art, drama, etc.) to develop empathy and the capacity to see from the perspective of others. None of these are ethical principles, but they are the ways we arrive at an ethical society.



