Friday, March 19, 2021

Why is ‘AI and Ethics’ work mostly a waste of time? It’s rife with amateurism in ‘ethics’, confirmation bias, anthropomorphism and activism.

I run an AI company, have invested in AI companies, taught the application of AI in my field, give talks and podcasts on the subject and have written a book about it. Yet most of the questions I get on AI and data are about what people see as 'ethical' issues. The problem is that they're mostly not.

Actual ethics

There is little in the way of actual ethics among those in this field. Questions about what moral philosophy is being brought to the table are met with blank stares. Ethics has been a serious philosophical subject since the Greeks. From Plato's Socratic dialogues on the nature of 'good' and Aristotle's Ethics, through Hume's and Kant's brakes on the application of reason in the ethical sphere, the subject has a long pedigree. Yet how many could state the difference between the Golden Rule and the Categorical Imperative? Then there is Utilitarianism: for all the talk about happiness and well-being, few realise that this debate was covered intensively by Bentham, Mill and many others. Ethics is a subject of depth and importance, yet many have little interest or background in the subject, the very subject in which they profess to be 'experts'.

Design issues

Many so-called ethical issues are simply design issues. Steven Pinker and others have identified the psychology behind this confusion of design with ethics. Many issues are quite simply glitches in the development of AI solutions. As AI learns from data, you literally have to 'train' models on data, selected or more general, and they make mistakes, a bit like a child saying 'sheeps' as the plural of 'sheep'. An algorithm may well confuse X with Y, but it has no comprehension that it has. More importantly, the methods for eliminating mistakes and known problems, such as overfitting, are well understood, and a huge amount of effort goes into reducing errors. This is why we see AI solutions improve, sometimes dramatically, over time. My Alexa's voice recognition struggled with my Scottish accent at first; it no longer does. We must not confuse ethics with design. On the grander scale, this is why many, such as Pinker, think the idea of AI as an existential threat to our species is wrong-headed. We simply engineer it not to be. The chances of it becoming one on its own are next to nil, as there are and will be ample opportunities to stop that happening. Let's identify these design issues first before we get into a complete tizzy over ethical concerns.
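To make the point concrete, here is a minimal, hypothetical sketch (not from the original post; the dataset and models are stand-ins) of the routine engineering check for one of those known problems, overfitting: compare a model's accuracy on its training data with its accuracy on data it has never seen, and constrain the model until the gap closes. A design fix, not an ethical one.

```python
# Hypothetical illustration of catching overfitting as a design issue,
# using scikit-learn on a synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree memorises its training data almost perfectly...
deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
# ...while a depth-limited tree generalises better to unseen data.
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, model in [("unconstrained", deep), ("depth-limited", shallow)]:
    print(f"{name}: train={model.score(X_train, y_train):.3f}, "
          f"test={model.score(X_test, y_test):.3f}")
```

A large train/test gap signals memorisation rather than learning; shrinking it is ordinary engineering work of exactly the kind the paragraph above describes.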

Anthropomorphism

Anthropomorphism, the reading of human qualities into non-human entities, is rife in this field. The commonest mistake is the false attribution of responsibility, which we see in often silly discussions about robots. In truth, all AI is competence without comprehension. It can perform wonderfully well and beat you at checkers, chess, Go, poker and many computer games, even outperform you at identifying tumours on scans and at some levels of decision making. It automates much of what we used to do, but that does not mean it is us or even like us. Reading ethical qualities into software misses the point. Nass and Reeves researched this anthropomorphism decades ago. Computers, in particular, seem to draw it out of people.

Confirmation bias

Many of the accusations of bias in AI are driven by negativity and confirmation bias in the accusers. So keen are they to apportion blame, often because their research grant or organisation expects it, that they look for ethical problems in the wrong place: in the technology or the identity group they don't like. Rather than looking for solutions to the problem, their first port of call is misallocated blame. Then there's the negativity bias: AI is tech, it's new, so it must be problematic. There are potboilers such as Weapons of Math Destruction and umpteen 'best-sellers' as thin as prison gruel, yet they grab the attention of the lazy reader. Negativity sells.

Activism

Ethics is the study of moral principles, of what is good as well as what is bad, yet 'AI and Ethics' seems to have quietly dropped the moral and good side. It is cyclopic in its focus on the bad. This is not ethics, it is activism. People have beefs, often centred in identity politics or political stances around 'capitalism', and go for the tech like predators after prey. This is by far the worst form of imbalanced, subjective politicking. Yet it is common in the one place that should pride itself on objectivity, our universities.

Leave it to the experts

No one denies that there are ethical issues around AI and data. All tech is of ethical concern. Cars kill 1.5 million people a year in horrible, mangled deaths and injure many more, the equivalent of a world war every year, yet we pay little attention to this as a global moral concern. Yet if a hand sanitiser dispenser is poorly calibrated, we cry racism. In fact, there is plenty of attention and effort going into the ethical concerns around this technology, by the EU, IEEE and others. We need quality, not quantity, as Stuart Russell argues in his excellent Human Compatible; what we have at the moment is every man, woman, uncle, auntie and their dog on the case. The issues are highly technical, ethical, legal and practical. You need a multi-disciplinary approach, not thinly disguised activism.

We have a model here in healthcare. Pharmaceuticals are highly regulated: you must be able to prove efficacy and safety, and strict rules apply to what you can claim for your product. That is fair. Note that many drugs have proven efficacy and safety without our knowing exactly how they work. The focus is on the outputs, not full transparency. One need only apply this to cases where AI could do harm. Many applications are quite benign and will need little or no regulation.

Conclusion

The problems here are plentiful. First, it may stop good things from happening. The atmosphere in some areas of academe is so extreme that it is putting a brake on good research and outcomes in health and many other areas of human endeavour. Second, it distracts from actually coming up with solutions. So much effort is put into diagnosing sometimes imaginary illnesses that little is done on finding cures. Third, much of it is a waste of money, with huge amounts of duplication, shallow work and outcomes that fall stillborn from the press. Much is quite simply a waste of time. Fourth, there is a tendency to want to throw babies out with the bathwater, even the baths themselves. Just bear in mind that AI is not as good as we think nor as bad as we fear.
