Will AI destroy the art of human communication?

We’ve asked what we can gain, but have we asked what we may lose? Generative AI is revolutionising communication, but overreliance on it risks language atrophy, a loss of critical thinking, and an erosion of trust in human interactions, argues Senior UX Researcher Matt Dalla Rosa.

Matthew Dalla Rosa

28 August 2024

6 minute read

It is AI month at Luminary, as we continue to explore what role generative AI will have in the business and the wider tech industry. We’re working through how we can ethically incorporate generative AI into our workflows, focusing on:

  • what to use it for and what not to use it for
  • our responsibilities when using generative AI, and 
  • how to effectively engage with generative AI.

In society in general, but especially in digital and tech, there is a tendency to hail the benefits of new technologies, to praise the utopias that will spring forth from their implementation and the new world that awaits – the futurist movement kept alive and well.

Yet this often comes at the cost of stopping to think about what happens when we actually use these technologies. How will they change us at the margins? How will they change the way we relate to each other? What ideologies are embedded in them, and how might those influence the way we behave? We’ve seen this time and again with the rise of the internet, smartphones and social media, to name just a few. With the benefit of hindsight, what might we be asking for when it comes to generative AI?

Atrophy of language

We are learning to talk and prompt like a machine in order to elicit human-like responses. As we get better at crafting prompts and refining and massaging responses, do we surrender the mind to machine-like thought patterns?

We ask AI to write responses with a specific tone instead of applying it ourselves. We ask AI to summarise copy without fully considering what we truly want the summary to capture. Our ability to use language and bridge the gap between people – a notoriously difficult task – is facing the possibility of collective atrophy.

The ability to communicate is being outsourced to the machine. In our efforts to save time and expedite research and communication, we speed up until we obliterate the meaning behind anything we’re trying to communicate. The Italian philosopher Franco ‘Bifo’ Berardi describes the replacement of human language with artificial intelligence in his book ‘The Third Unconscious’ as “The generation of automatic signs whose meaning is established by the code. Automated signification is nothing”.

Machines talking to machines, empty symbols met by empty symbols. What does it look like when the work we create and emails we send are heavily influenced by AI language? Will our ability to communicate degrade and stagnate without the aid of our virtual assistants?

The process matters

When we look to expedite understanding and meaning-making – using AI to summarise source notes, complete a first pass at synthesising data, or write the first version of a report – we continually degrade the quality of our source material and our ability to form deeper thoughts.

This sentiment is explored at length by short fiction writer Ted Chiang in his article ‘ChatGPT Is a Blurry JPEG of the Web’: “Your first draft isn’t an unoriginal idea expressed clearly; it’s an original idea expressed poorly, and it is accompanied by your amorphous dissatisfaction, your awareness of the distance between what it says and what you want it to say. That’s what directs you during rewriting, and that’s one of the things lacking when you start with text generated by an AI.”

For example, in UX research, when synthesising interview notes, the process of reading through, sorting and theming them is as much about building an intricate understanding of what has been said, what hasn’t been said, and how it starts to connect in novel and interesting ways as it is about grouping them together. Walking into someone else’s drafts or work, even when you’ve ‘briefed’ them, is often a discombobulating experience, and it comes with a shallower understanding that requires us to rework and backtrack. Yet for many, the use of AI in these spaces is seen as a time-saving tool you’d be unwise not to take advantage of.

Not all work needs to be inherently creative or original. Much of what we do can feel rote or expected, even in research. Yet by using tools that devalue the creative process, we will only ever find the cursory, the superficial meaning skimmed from the top of a pool, never knowing the hidden depths below the surface that may uncover issues, illuminate problems in a different way or change our perceptions of what we thought we understood.

Trust is a hard thing to come by

People are fallible by nature. With this comes the perception that machines and AI can be more effective, efficient and accurate than people in almost every respect. For many tasks this can be true, but communication, understanding and the making of meaning feel like dangerous territory. It’s not just a question of ‘Is this information accurate? Has it considered all the given data?’ but of how it has chosen to interpret, prioritise and summarise any given set of information. These are decisions that have consequences for the work we do, how we do it and the impact we have on others. What happens when the foundations are built on this type of trust?

Does using AI to communicate for us and organise our thoughts do a disservice to the people we are interviewing, engaging with and communicating with?

Ghostwriting is nothing new. However, its presence in direct, day-to-day communication has become increasingly common with the rise of email, text and direct messages. And even then, it was still another human on the other end, speaking through us. When people ask us for something, share stories with us or talk to us, and we reply with responses heavily mediated by generative AI, what does that say about how we view the other person, the time they’ve taken to talk, or the stories they’ve told?

Final thoughts

Many of us, Luminary included, are in the early stages of figuring out what role generative AI should play in our work from an ethical, accountability and efficiency perspective. The key questions revolve around:

  • What are the key tasks generative AI can help with? Often these revolve around search and finding specific information.
  • What is our approach to generative AI in communication, from client work to emails and content creation? Can it be used at all, and if so, how should it be used?

To ignore how generative AI has been introduced and is already being used would be a terrible oversight. Few are advocating for full trust in generative AI and unmediated use, but how could it play a more supportive, advisory role?

As we wade into the use of generative AI, I think asking many of these questions is worthwhile. How will the technology be useful, how will it be a hindrance and how will it fundamentally change our expectations of how people communicate? 

This article has not been written by generative AI or with AI assistance.
