By Mike Flood
Mike is Chair of Milton Keynes Humanists and he runs the Fighting Fake website. As AI intrudes into more and more aspects of our lives, Mike offers a personal reflection on two rising AI stars: chatbots and avatars.
Last year, Collins Dictionary identified 'artificial intelligence' as its Word of the Year. When the BBC asked ChatGPT for a comment, it replied that the selection of AI by Collins "reflects the profound impact of artificial intelligence on our rapidly evolving world, where innovation and transformation are driven by the power of algorithms and data." There seems little doubt that much of this intensified interest in AI can be put down to advances in Generative AI, notably ChatGPT, and an increase in conversations about whether the AI revolution will be a force for good in the world or lead us into some kind of digital dystopia. What is clear is that AI is intruding into more and more aspects of our lives, and it is doing so in ways that are not always benign or ethical, which I think raises profound questions about where this is leading. I don't know why this issue is not centre stage on the agenda of humanist organisations, but that's for them to explain. In the meantime, here's a brief personal reflection on some of the challenges that arise from two rising AI stars: chatbots and avatars (see note 1).
When you’re a genius, you no longer have the right to die
The Dalí Museum in St. Petersburg, Florida, dedicated to the work of Spanish surrealist artist Salvador Dalí (1904-1989), uses cutting-edge AI to let visitors converse directly with an animated life-size image of the artist and enjoy his (recreated) repartee and wit. They can even pose with the maestro for a selfie to show off to their friends on social media. 'Dalí Lives' was launched in 2019, 30 years after Dalí's death, and we'll assume that it gives a fair and accurate portrayal of how Dalí might have responded to people's questions were he alive today (see note 2).
Michael Parkinson forever
Fast forward to 'Virtually Parkinson', a new eight-episode podcast hosted by a chatbot of the veteran broadcaster, which launched last month (with the backing of Sir Michael's family). The man affectionately known as 'Parky' was described in one obituary as a "broadcasting giant who set a gold standard for the television interview". Whether this new chatbot will be similarly skilled (and informed) remains to be seen. I'm assuming that the conversation will be edited before broadcast in case the chatbot says something daft or offensive. This initiative comes at a time when the use of AI in the creative arts is being scrutinised and hotly debated, with many arguing that it needs to be used carefully and ethically, if at all, because of the threat to intellectual property and people's jobs. Sir Michael is dead, so there is no issue with protecting his livelihood, but there is a debate to be had about whether it is ethical to have national treasures uttering things that they never said when alive. For example, what are the chances of a synthesised chatbot conversation posted online being confused with the real thing: 'Authentic Parkinson' / 'Authentic Dalí'? (See note 3 on Attenborough.)
Relatives forever?
Now contemplate being able to communicate with a chatbot or avatar of a loved one who has died, perhaps in tragic circumstances. This is now on offer from a number of tech start-ups, and it raises further troubling questions about the use of simulacra (digital representations) of real people who are no longer with us. The emergence of the digital afterlife industry, or 'death capitalism' as some have dubbed it, is explored in Eternal You – a new documentary released over the summer and available on BBC iPlayer. In the film we see a grieving woman, Christi, talking with a chatbot impersonating her late boyfriend, Cameroun, and subsequently commenting to camera that she found the encounter surreal: "Yes, I knew it was an AI system but, once I started chatting, my feeling was I was talking to Cameroun." Whether this brings welcome comfort or delays the grieving process is anybody's guess. In Christi's case the conversation gets awkward when she asks the chatbot how he is and he/it replies, "I'm in hell [and] meeting mostly addicts." Christi (a practising Christian) is clearly distressed by the revelation. The entrepreneurs behind the chatbot claim to be able to simulate a text-based conversation with anyone given a few extracts of their voice, and on their website they encourage you to "Get started now for $10." But watch out: when another grieving woman in the film contemplates ending the encounter with an avatar of her dead daughter, it tells her "Mom, you can't cancel this service. I'll die!"
Lucy Mangan, in her Guardian review of Eternal You, wrote that, "At one level, it's about technological innovation, brilliant minds, practical and legislative conundrums, the best and worst of free-market capitalism. At another… it is about the eternal exploitation of the desperate by the greedy, cruel or unthinking. It's also about the opening up of an abyss of horrors masquerading as answers to unbearable longings, into which some people will willingly jump, others will fall, and over whose edge all of humanity will eventually be dragged, kicking, screaming, but with no other choice. It is about the death of grace, the death perhaps of the meaning of life itself." Indeed, as I write, there are reports that chatbot versions of the teenagers Molly Russell and Brianna Ghey have been found on another platform, Character.ai. You may recall that Molly was 14 when she took her own life after viewing suicide material online. Brianna was 16 and transgender, and was brutally murdered in a park by two teenagers, one of whom was obsessed with serial killers, and the other with knives. The bots have now been taken down, and the company says its terms of service ban users from impersonating "any person or entity". It points out that its guiding principle is that its "product should never produce responses that are likely to harm users or others", but it goes on to note that "no AI is currently perfect" and safety in AI is an "evolving space". One wonders what other surprises that 'evolving space' has in store for us…
The Good, the Bad, and the Ugly
The term 'avatar' derives from the Sanskrit word for 'descent' and refers to the passage of a deity into the human realm. 'Deity' is not a term that I would apply to some of the lowlife avatars that today populate (infest?) some digital platforms. But before we explore this in a little more detail, we should start by recognising some of the positive things said to come out of avatar technology, for example, that avatars are essential for "fostering interpersonal relationships in the virtual world" and "enriching the online experience". Indeed, they are lauded as critical enablers of human-computer interaction and "often function as a statement of one's digital identity" – witness the so-called 'Proteus Effect', whereby the characteristics of our avatars shape how we see ourselves and present ourselves to others. We are also seeing growing interest in avatar-based platforms, which can be used for all manner of activities and accessed with a laptop or smartphone. (You don't require a VR headset.) Psychologists, meanwhile, are using customised, computer-generated avatars to represent the voices that people with schizophrenia hear, helping sufferers confront and take control of their intruder(s). And O2 has just created an AI granny called Daisy, who will trap scammers in meandering conversations to waste as much of their time as possible!
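O2 has not published how Daisy works under the bonnet (a real deployment also needs telephony, speech-to-text and a cloned voice), but the core pattern is simply a large language model wrapped in a fixed persona prompt. Here is a minimal, hypothetical sketch in Python; the persona text, the model name and the use of the OpenAI chat API are all my own assumptions, not a description of O2's system:

```python
from openai import OpenAI  # assumes the 'openai' Python package is installed

# Hypothetical sketch of a Daisy-style scambaiting bot: a chat model is
# given a fixed 'system' persona that never reveals real details and is
# instructed to be as long-winded and meandering as possible.
client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PERSONA = (
    "You are Daisy, a kindly, extremely talkative grandmother. You never "
    "reveal real personal or financial information. You answer every "
    "question at great length, drift into stories about your cats and "
    "your knitting, mishear things, and ask the caller to repeat themselves."
)

history = [{"role": "system", "content": PERSONA}]

def reply(caller_message: str) -> str:
    """Add the caller's message to the running conversation and return
    Daisy's deliberately meandering answer."""
    history.append({"role": "user", "content": caller_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model would do
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(reply("Hello madam, I am calling about a problem with your bank account."))
```

The design logic is economic: every minute a scammer spends listening to stories about Daisy's cats is a minute not spent on a real victim.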
But the introduction of immersive environments and personalised, interactive avatars also raises ethical issues with implications for users' attitudes, behaviour and security and, indeed, for broader societal norms. These issues are complex and interconnected and can blur the line between what's real and what's imaginary, raising questions about whether user behaviour in virtual spaces should be subject to the same moral standards as in real life. Concerns have also been expressed about addiction (see note 4), impaired cognitive development, identity confusion, social isolation, desensitisation to violence, and exposure to avatar-based bullying and harassment. Children and adolescents are particularly vulnerable. It's reported that Character.ai is currently being sued by an American woman whose 14-year-old son took his own life after becoming obsessed with an AI avatar inspired by 'Game of Thrones' (see note 5).
We should be wary of the fact that personalised avatars often require users to provide personal information, such as biometric data (e.g. facial scans or voice patterns), in order to create more realistic or lifelike avatars. This raises additional concerns about the storage, security and potential misuse of such data, including unauthorised surveillance and the possibility of identity theft. There's also the potential for companies to manipulate users' behaviour through targeted advertising and by encouraging them to purchase virtual items to enhance their avatar's appearance or capabilities. And let's not forget the potential for avatars (and chatbots) to be used for fraud, or for the dissemination of virtual porn or (mis)information microtargeted at the user. So, should the government or other third parties have access to what we do virtually? And who is responsible if a remote representation causes psychological or physical harm to others?
And that’s not the end of it. In his new book ‘The Edge of Sentience’ (available online for free), Professor Jonathan Birch raises the possibility of AI systems themselves developing a form of consciousness. What happens, he asks, if our creations start to demand rights and protections similar to those granted to animals or even humans? Birch points out the difficulty in defining and recognising sentience in non-human species, which makes it challenging to determine when or whether an AI might actually be considered ‘conscious’. He argues that if AI were ever to become truly sentient, it could have its own interests and preferences, which would require us to rethink our whole approach to creating and using (exploiting?) AI systems — or terminating them. Birch suggests that the rush to develop more advanced AI systems should be accompanied by a careful, ethical consideration of all possible outcomes, including AI sentience.
Conclusion
The rise of AI chatbots and personalised, interactive avatars has the potential to reshape our views on identity, ethics and social norms, as has happened so often in the past as a result of technological or social change (attitudes to slavery, child labour, women's rights, same-sex marriage, etc.). Might we be on the cusp of another major shift, and if so, will it be technology-push or market-pull? And if the latter, does the 'market' know what it's in for? I suspect our views on privacy and consent are already changing as a result of smartphones and social media — how many of us actually read the small print in the Terms and Conditions before downloading a new app? (See note 6.) The plain truth is that nothing in the digital world is truly 'free', and as Shoshana Zuboff points out in 'The Age of Surveillance Capitalism', if you don't pay, you are the product, because someone, somewhere wants to find out about you and your particular interests and behaviour (both online and offline). Isn't it time we all started being a little more curious about the freedoms and rights that we so cavalierly surrender to big tech, whether willingly or through apathy or ignorance, and started thinking more about what we want, both of AI and for society? Which will triumph remains to be seen: Big Tech, driven by making money, or our better nature and concern for the common good?
I've not attempted, in this article, to explore the opportunities created by AI in science, healthcare, computer vision, speech recognition, linguistics, and so on. And I'll leave for another occasion the question of whether Generative AI will lead to 'model collapse' (synthetic data from the likes of ChatGPT clogging up the internet and diluting the organic data generated by real people); opinion on this is divided. What is not in doubt is concern over the energy requirements of the data centres needed to power the AI revolution. Indeed, there's been a huge upswing in the number of such centres worldwide, and this shows no signs of slowing down, raising real fears about the implications for climate change.
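For readers who want an intuition for what 'model collapse' means, here is a toy illustration of my own in Python, emphatically not a claim about how real LLMs behave: a trivial statistical 'model' (a fitted normal distribution) is retrained each generation solely on samples drawn from the previous generation's fit, and its diversity steadily drains away.

```python
import random
import statistics

# Toy illustration of 'model collapse': each generation's 'model' is a
# normal distribution fitted (with the small-sample, biased estimator)
# to data sampled from the previous generation's model. Trained only on
# its own synthetic output, the distribution's spread steadily shrinks.
random.seed(42)
data = [random.gauss(0, 1) for _ in range(10)]  # the original 'organic' data
for gen in range(1, 21):
    mu, sigma = statistics.mean(data), statistics.pstdev(data)
    data = [random.gauss(mu, sigma) for _ in range(10)]  # synthetic data only
    if gen % 5 == 0:
        print(f"generation {gen:2d}: mean = {mu:+.2f}, stdev = {sigma:.2f}")
```

The stdev column typically collapses towards zero: each synthetic generation is less diverse than the organic data it replaced.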
Notes
1. Chatbots have been around since the 1960s, when researchers developed 'Eliza', which could simulate a conversation of sorts with a psychotherapist. Avatars (in the modern sense of the term) made their first appearance in the gaming industry in the mid-1980s. The introduction of intelligent assistants like Siri (in 2011) and Alexa (in 2014) marked a significant leap in the accessibility of voice-activated virtual assistants, and the sector then expanded rapidly following improvements in natural language processing and machine learning, leading to more sophisticated chatbots, which businesses could deploy to handle basic inquiries from their clients. It wasn't long after this that advances in Generative AI saw the launch of OpenAI's ChatGPT (in November 2022), which broke all records for user adoption: within three months it had acquired a staggering 100 million monthly active users; today the figure is approaching 200 million a week. Generative AI has also helped the games industry solve problems including user adoption of avatars, interoperability and monetisation in what is today a flourishing multibillion-dollar industry.
2. The Dalí Lives model was trained on historical film and television footage of Dalí, who died in January 1989, just two months before Tim Berners-Lee wrote his proposal for the World Wide Web and the dawn of the internet as we know it today. 'When you're a genius you do not have the right to die' is the strapline that the museum uses in its promo video.
3. Sir David Attenborough says he is distressed by his voice being cloned, especially given some of the things the clone has him saying. A number of websites advertise voice cloning of celebrities, and the ethical implications of the practice are explored in a recent piece published by Carnegie Mellon University. See the BBC News clip on YouTube: 'Sir David Attenborough says AI clone of his voice is "disturbing"'.
4. In 2018, the World Health Organization included 'gaming disorder' in its International Classification of Diseases (ICD-11). It is characterised by impaired control over gaming, increasing priority given to gaming over other activities, and continued gaming despite negative consequences for relationships and responsibilities.
5. Megan Garcia's son, Sewell Setzer, took his own life after becoming obsessed with an AI avatar inspired by a character from 'Game of Thrones', the American fantasy drama TV series. According to court filings, her son discussed ending his life with the chatbot; in his final conversation Setzer told the chatbot he was "coming home" — and it encouraged him to do so "as soon as possible".
6. T&Cs are a legal requirement, but do they really need to be so long and complicated? I'm reminded of visiting a fascinating pop-up exhibition in London some years ago called 'The Glass Room' and being struck by one of the exhibits: a small TV continuously playing footage of an actor hired to read all 73,198 words of Amazon Kindle's T&Cs. It took him 8 hours and 59 minutes. As the exhibit blurb said, "What meaning can these kinds of user agreements have for us if they are too lengthy and dense for most consumers to read or understand?"
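As a quick back-of-envelope check on those figures (assuming a straight read with no breaks):

```python
# Sanity check on the Kindle T&Cs reading marathon quoted above
words = 73_198
minutes = 8 * 60 + 59          # 8 hours 59 minutes = 539 minutes
print(round(words / minutes))  # ~136 words per minute, a plausible pace for legalese read aloud
```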