
Tech firms must beware AI’s ‘faux-thenticity’ and keep lonely humans in mind

How cruelly ironic that London Tech Week and Loneliness Awareness Week coincided on the same seven days in mid-June. It’s not hard to argue that the content of the former – whizzy technology discussed in breathlessly excited tones – begets the latter.

On the Monday of that week, Gallup published its latest State of the Global Workplace report. Buried below the headline-capturing figures – only 23% of the planet’s workforce is engaged, with “active disengagement” costing the global economy almost $9tn annually – it revealed that 20% of the world’s employees experience loneliness daily.

It’s worth dwelling on this statistic. One in five workers experience loneliness – feeling sad because they have no friends or company – every day. Unsurprisingly, according to the Gallup research, the loneliest are fully remote workers – those ‘enabled’ to do their jobs by, yes, whizzy technology.

Of course, people felt compelled to talk about artificial intelligence on stage at London Olympia throughout London Tech Week. But as AI gallops on, and few seem willing – or able – to take the reins, we risk running roughshod over those who supposedly stand to gain the most: us humans.

If the numerous technology, HR and workplace events I’ve attended recently are a good bellwether, then the lead sheep ringing the way is misguided, and human considerations are increasingly an afterthought. Is there really an ‘I’ in AI? I’m seeing evidence to the contrary.

Increasingly, even in ‘future of work’ conversations, people are in the shade. It’s one of the reasons I’m trying to recalibrate the scope of my writing and speaking outputs and change the conversation to focus on ‘human-work evolution’.

Undoubtedly, AI and other technology tools will significantly shape how work looks and feels in the coming years. However, despite having written about tech for almost a decade, I feel a visceral reaction to AI products that, in particular, mimic human behaviour.

On stage at Intuit Mailchimp’s FROM: HERE, TO: THERE conference, on 13 June, I winced as Ameca, a robotic humanoid that interacts with humans using generative AI, was paraded as someone considered a ‘freak’ would have been at a circus a century or more ago.

‘She’ was slow to answer questions, and her robot dance would not worry Peter Crouch. I posted a short video on LinkedIn, and as someone commented: “You could drop ChatGPT in a toaster and get better responses.”

This uneasiness with AI mimicking human behaviour extends beyond robotic humanoids to customer service and personal interactions. For instance, at Qualtrics’ X4, there was great fanfare about a new generative AI-powered tool that provides polished messages to employees or customers.

But are empathetic messages from AI-powered systems sincere, or do they risk alienating those receiving them?

‘Faux-thenticity’

Also in June, at a Twilio event, I heard Azeem Azhar, author of Exponential, float the concept of “faux-thenticity”. It’s a portmanteau of false authenticity, and the idea is that it can leave people feeling more disconnected than ever.

We saw this play out tragically in February 2023, when officials at Vanderbilt University, in Tennessee, used ChatGPT to craft a condolence message following a shooting, triggering a significant backlash.

As we navigate this new terrain, transparency is paramount. Perhaps we’ll see disclaimers like ‘some of this content was AI-generated, with human editing’ becoming commonplace. But will such honesty be enough, or will it only heighten people’s discomfort?

The stakes are high. The tragic case of a Belgian man who took his own life after extended interactions with an AI chatbot underscores the potential consequences of blurring the lines between human and artificial interactions. It’s a stark reminder that AI, no matter how advanced, lacks a nuanced understanding of human emotions and of the weight of its words.

The coinciding of London Tech Week and Loneliness Awareness Week poignantly reminds us of the potential disconnect between technological advancement in AI and human well-being.

As we stride ever deeper into the digital age, it’s clear that we need more than technological hand-holding. We require authentic human guidance and connection to understand and thrive in this complex new world.

The post Tech firms must beware AI’s ‘faux-thenticity’ and keep lonely humans in mind appeared first on UKTN.
