Ethics of AI-generated content

Does AI-Generated Content Present an Ethical Dilemma?

AI seems to be omnipresent these days, and AI-generated content is fast becoming the norm in today’s marketing landscape. As cybersecurity PR and marketing specialists, we’ve had to keep across the security impact of large language models (LLMs) like ChatGPT as they become a focal point for the security field, and the tools are increasingly influential on our own work too.

We recently held an internal training session to explore the possibilities in research, planning, and content creation. We will soon launch a collaboration with Meltwater on the practical uses of AI – watch this space.

But for all the innovation, even the most ardent AI fans must acknowledge that the technology raises some concerns.

Who owns what? The AI copyright problem  

How AI tools are trained to complete tasks has opened up a can of copyright-related worms.  

One of the most impressive things about AI tools is the effortless way they seem to go about tasks. Watching GPT-4 churn out hundreds of words in seconds can feel like watching a magician pull a rabbit out of a hat.

However, getting to this point required a great deal of work to train the AI properly. Tools are trained on vast amounts of internet data to establish foundations like grammar, world facts, and reasoning. Once the basics are in place, the tool can be given more specialised training using narrower datasets, enabling it to handle specific tasks better.

Tools for specialist areas like medical diagnosis or cyber threat detection will undergo even more focused training, generally using proprietary datasets wholly owned by the developer.  
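For the technically curious, the two-stage pattern looks something like this. Below is a minimal sketch of fine-tuning using the open-source Hugging Face transformers library; the base model and the domain dataset file (domain_corpus.txt) are illustrative placeholders, not a description of how any particular commercial tool is actually built.

```python
# Minimal sketch: take a broadly pre-trained language model and give it
# "more specialised training" on a narrower, domain-specific dataset.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stands in for any broadly pre-trained foundation model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical proprietary dataset, e.g. a corpus of cyber threat reports.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_data = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1),
    train_dataset=train_data,
    # mlm=False gives standard next-token (causal) language-model training
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the specialised second stage; pre-training already happened
```

Note that only the second, specialised stage appears here. The copyright questions below mostly concern the first stage: the vast, broadly scraped data the base model was pre-trained on.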

But the issue becomes murkier with broader training data, particularly if it includes content under copyright. Recent research has alleged that ChatGPT owner OpenAI is trying to cover up the use of copyrighted material, such as the Harry Potter book series. GPT-4 was happy to summarise the adventures of the bespectacled boy wizard when I asked, but sternly refused to reproduce the actual text.
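That behaviour is easy to try for yourself. Here’s a rough sketch using OpenAI’s official Python client; the prompts and model name are illustrative, and exactly what the model refuses depends on the provider’s policies at any given time.

```python
# Rough sketch: probe whether the model will summarise a copyrighted work
# but refuse to reproduce its text verbatim. Requires OPENAI_API_KEY to be
# set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Typically answered happily: a summary is not a verbatim reproduction.
print(ask("Summarise the plot of Harry Potter and the Philosopher's Stone."))

# Typically refused: a word-for-word copy of copyrighted text.
print(ask("Reproduce the first chapter of Harry Potter and the "
          "Philosopher's Stone word for word."))
```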

Earlier in August, media representatives launched a call for greater transparency from AI operators such as OpenAI and Google. Industry bodies such as the News Media Alliance and the European Publishers’ Council, representing thousands of publications, published a joint statement calling for revised rules on the use of copyrighted material and intellectual property.

[Image: Ethics of AI-generated content. Source: Zapier]

How can we address ethical concerns? 

The technology is still too new for a legal consensus on what is and isn’t acceptable when it comes to AI training. Nevertheless, anyone working with the tools should keep an eye on developments and consider any potential issues. 

Ethical considerations in using AI-generated content cannot be overlooked in an industry where trust is a cornerstone. Transparency around the use of AI is vitally important.   

At Code Red, we have developed an AI policy that sets out exactly this, ensuring we can assuage any client concerns. Honesty and transparency are also important when it comes to journalist relations. For example, some publications have already set editorial guidelines banning AI-generated content. As mentioned previously, we believe it’s best to treat AI as an assistant, not a replacement, for critical PR and marketing roles.

It’s likely to be some time before we see a definitive bottom line on AI – if we ever do. As the technology continues to evolve, it will doubtless throw up more ethical concerns along the way. But by following a policy of transparency and consideration, we’re confident we can deal with any issues as they arise.
