Is Our Adoption of Generative AI in the Workplace Moving Too Fast?
If you were hoping for the generative AI craze to die down, you’re going to have to wait a little longer. We’ve already transitioned from speculation about AI in the workplace to more and more practical uses cropping up every day. McKinsey has reported that roughly 50% of their staff use AI on a daily basis in their roles.
From summarizing emails to constructing entire presentations, generative AI is quickly becoming a staple in workers’ lives. It makes sense – the tech is more advanced than anything we’ve seen prior, it’s widely available with a very low barrier to entry, and it can save hours of work with minimal input.
But are we at risk of integrating it a little too quickly into our workflows? Some of us might be, yes.
(Quick disclaimer: For the sake of this piece, we’ll limit ourselves to only looking at the most widely adopted language model, OpenAI’s ChatGPT.)
As we all know, ChatGPT is incredibly efficient, and we all know the business world has an obsession with efficiency – one that sometimes borders on problematic. We have to streamline, we have to optimize!
AI condenses hours of work into minutes. And this is the whole reason it’s captured our collective attention so substantially. It’s certainly not because the quality is any better than if a human were to do it – in most cases it's far worse.
We’d be wise to remember that efficiency is not the apogee of productivity; productivity is the combination of efficiency and accuracy.
ChatGPT wants to resolve its queries more than it wants to be right. As a result, it prioritizes speed and completion (the definition of completion here changing from query to query) over accuracy.
Generative AI is not so much generative as it is regurgitative. It does not create in the same way that we do; it reassembles information that’s been fed into its models. Beyond AI’s obvious shortcomings with its mimicry, there have been many reports of its pulling information out of thin air. If it cannot find a viable source to complete a certain task, it is liable to completely fabricate information.
This is anecdotal, but whenever we use ChatGPT internally, we make sure to ask it, “Is there anything else you didn’t include?” and inevitably it will surface some other bits of information that typically end up being fairly important.
So What To Make of All of This?
Many large players in the tech space are placing restrictions on what tools their employees can and cannot use. Some of these concerns don’t revolve so much around the efficacy of the tools but more so around the matter of data privacy and protection. Regardless, if the biggest evangelists of technology have reservations about AI, maybe we all should, too.
To be clear, AI will undoubtedly revolutionize the way we work and navigate the world; we do not doubt that in the slightest. In many ways it will benefit us greatly. But it doesn’t hurt us to tread carefully. We don’t need to overhaul each role within our company overnight with something that the large majority of us barely understand. This goes double for roles in areas such as finance or legal, where one mishap could spell disaster.
We’ve already begun implementing AI in some of our projects with clients and to great success, but that doesn’t mean we’re ready to let AI run unchecked. We have to be judicious about its applications, ensuring we’re doing extensive research and QA testing.
At the end of the day, the tools on the market are not final solutions – they’re proofs of concept. Nevertheless, they’re enticing and will only get better with time.
For now though, maybe we don’t need to all hop on the rocket ship if we’re just going to the grocery store.