As we keep trying to point out, AI is supposed to be biased
We can’t help but think there’s a category error here:
There are many issues relating to AI about which we should worry. None of them has to do with sentience. There is, for instance, the issue of bias. Because algorithms and other forms of software are trained using data from human societies, they often replicate the biases and attitudes of those societies. Facial recognition software exhibits racial biases, and people have been arrested on the basis of mistaken matches. AI used in healthcare or recruitment can replicate real-life social biases.
Timnit Gebru, former head of Google’s ethical AI team, and several of her colleagues wrote a paper in 2020 that showed that large language models, such as LaMDA, which are trained on virtually as much online text as they can hoover up, can be particularly susceptible to a deeply distorted view of the world because so much of the input material is racist, sexist and conspiratorial.
We want our AIs to be useful in the real world - for their pattern recognition to tell us something useful about real-world patterns.
If human beings are biased - and given the definitions being used of “racist”, “sexist” and so on, we all are - then so too must the AIs be biased if they’re to describe that real world.
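The point can be made concrete with a toy sketch - entirely hypothetical numbers, not any real system or dataset. A naive “model” that does nothing but learn outcome frequencies from historical records will faithfully reproduce whatever bias those records contain:

```python
# Toy illustration with invented data: a frequency-based "model" that
# learns hire rates per group from biased historical hiring records.
from collections import defaultdict

# Hypothetical records of (group, hired): group "B" was hired less
# often for reasons unrelated to ability - a biased historical record.
records = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

# "Training": tally outcomes per group.
totals, hires = defaultdict(int), defaultdict(int)
for group, hired in records:
    totals[group] += 1
    hires[group] += hired

def predicted_hire_rate(group):
    # The model's prediction is just the historical frequency.
    return hires[group] / totals[group]

print(predicted_hire_rate("A"))  # 0.8 - the pattern in the data
print(predicted_hire_rate("B"))  # 0.3 - bias faithfully reproduced
```

The model isn’t malicious; it is accurately describing the patterns it was shown. That’s the whole point - an accurate description of a biased world is itself biased.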
Of course, it would be possible to train AIs on resolutely non-racist and non-sexist material. Say, nothing but the tweets of Owen Jones and Laurie Penny. It’s just that any output from such models is unlikely to have much relevance to that universe outside the windows. Which rather defeats the purpose of making the models and AIs in the first place. A bit like making convex shovels - really not the point of the effort at all.
At heart we think this is the same as the GOSPLAN delusion. Lovely models of how the economy should work can be constructed. But if base human nature is ignored, then such models will depend upon the likes of New Soviet Man turning up to make them work - he didn’t, so it didn’t. The same will be true of any model of human behaviour or society that assumes - or insists - that we’re all free of those current sins of racism or sexism, or whatever the other modern claims are. They might well end up being very pretty models, but they’re not going to be of any practical use. So, why bother?
If we want to model humans then we’ve got to model them as they are, not as some might wish them to be. This is as true of AIs as it is of economic models.