Why Generative AI Isn’t Ready to Go Solo
Human input is still the key to creating copy that does the job, without the risk
Hollywood hasn’t done the best job of singing AI’s praises. From defence networks that send muscular, Austrian cyborg assassins to do their bidding to rogue robots fighting Will Smith in 2035 (just 11 years to go!) – AI hasn’t been the hero.
The Hollywood strikes of 2023 reflected this sentiment: writers and actors walked out of studios for 118 days to demand better compensation, residual increases and protections from the threat of AI. Again, artificial intelligence was painted as the villain, but this time in real life.
Most films featuring AI-powered machines end in some sort of cataclysmic event for humanity. We’re not quite there yet… but there are still risks to AI in its current form.
That’s not to say AI doesn’t have a place in marketing. There is a place at the table for something that can quickly analyse data and offer inspiration in seconds.
But human input is necessary to mitigate risk and ensure information is accurate.
If you’ve been considering incorporating AI into your processes or using it to create copy, here’s why you must have steps in place for human eyes to review what the robots may have missed.
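For teams folding AI into an editorial process, those "steps in place" can be as simple as a gate that blocks publication until a named human has signed off. Here is a minimal sketch of that idea; all the names (`Draft`, `human_review`, `publish`) are hypothetical, not a real tool or API:

```python
# A minimal human-in-the-loop sketch: AI-generated drafts carry no
# approval by default, and publishing refuses to run until a human
# reviewer has explicitly signed off.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Draft:
    text: str
    source_urls: list = field(default_factory=list)  # claims should cite sources
    approved_by: Optional[str] = None  # stays None until a human signs off


def human_review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    """Record the reviewer's decision; only approved drafts can be published."""
    if approve:
        draft.approved_by = reviewer
    return draft


def publish(draft: Draft) -> str:
    if draft.approved_by is None:
        raise ValueError("Draft has not been human-approved; refusing to publish.")
    return f"Published (approved by {draft.approved_by}): {draft.text}"


draft = Draft(text="AI-written product blurb", source_urls=["https://example.com/stat"])
draft = human_review(draft, reviewer="editor@example.com", approve=True)
print(publish(draft))
```

The point of the sketch is the default: a draft is unpublishable until a person has looked at it, which is the opposite of letting generated copy flow straight to your site.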
Why is human input essential when using AI?
I asked ChatGPT this very question, and it understands its own limitations (AKA it pulled the answers from sources across the web). AI still requires human input and checks for the following reasons:
- Accuracy and reliability
- Bias and ethical considerations
- Security
- Building user trust
Marketers – and workers across many industries, I’m sure – are feeling the pressure to use AI more in their roles to improve efficiency, or risk being left behind.
Research from HubSpot backs this, with 57% of respondents in its AI Trend Report noting that they feel they must learn AI.
HubSpot also found that 43% of marketers are using AI to create content – 46% of these are using AI to write copy, while 41% are utilising it to generate outlines for their content.
Human input ensures accuracy
AI hasn’t always been particularly forthcoming with its sources. You can ask it to display these to some extent now, but it is still important to check and recheck any AI-generated content and the facts it puts forward.
This is particularly true for any YMYL (Your Money or Your Life) content. This content typically covers anything medical-related or in the finance sector where poor advice can negatively impact an individual's life or a business’s reputation.
An expert in the industry should have eyes on any AI-generated copy for a YMYL article, verifying stats and always ensuring accuracy. A good piece of content should always highlight original sources anyway, whether it’s AI-generated or written from scratch.
Without human input, bias can influence the output
You would think that something without feeling or life experiences would not be biased in any way. But of course, humans still had a role to play in creating AI and feeding its data, and that’s where biases can come into play.
Some examples of AI biases in recent years include:
- Data used to train predictive policing tools produced biased results, leading to misallocated patrols and racial profiling.
- Twitter’s image-cropping algorithm came under fire in 2020, when its machine learning (ML) system appeared to be racially biased, cropping Black faces out of images more often than white faces.
AI bias was an issue well before ChatGPT’s rise in popularity, and it’s something we must continue to monitor and tackle when spotted.
AI puts business security at risk
AI must get its data from somewhere, and this means it may pull information from private or sensitive sources. Human input and sense-checking can ensure the output is compliant with privacy standards and any other regulations your industry must follow.
Using sensitive or non-compliant data can put a business at risk of legal action – something best avoided to protect both your reputation and your finances.
It’s not just data that poses a risk to security. AI can also inadvertently produce harmful content. Human checks can make sure nothing slips through that could be misinterpreted, cause offence or be construed as dangerous.
AI can impact user trust
If you’re not checking anything generated by AI, you could be putting your business at risk of negatively impacting user trust. Misinformation and biased content can reflect poorly on your business and its values, leaving a bitter taste in the mouths of potential consumers.
Human checks that catch and remove issues before they reach your customers help reduce the risk of being perceived negatively and eroding user trust.
What can generative AI be relied upon to do without human intervention?
Nothing – and this shouldn’t change. AI can crunch numbers in seconds and share information in mere moments, but everything it produces should always be checked and verified.
In the future, its accuracy may improve – especially as AI companies continue to strike deals with leading publications and newsrooms – but humans must still check what is being generated to reduce risk.
AI: New territory that must be explored by humans
AI is an exciting, shiny new tool for efficiency and inspiration, but don’t rely on it to generate everything your business needs to produce. Understand the risks, make sure there are steps for humans to get involved and verify accuracy, and enjoy a little more time back where AI can get the job done.
Our AI+ proposition aims to reduce risk while saving time
Generative AI helps us get things done at hyper speed, but we wanted to ensure that risk is always minimal. Our AI+ proposition ensures that human expertise is involved at every stage, from briefing to final reviews.
Our content experts work with our proprietary AI tool – Brand Voice – to generate content that can be tailored for all needs. We feed in our clients’ tone-of-voice documents and example copy, resulting in focused and accurate content that can be carefully checked and used across sites and social. Get in touch to learn more about this exciting tool and how it can support your business.