What’s the Media Narrative on AI in 2024?
When ChatGPT became a viral sensation in 2022, discussions around AI were overwhelmingly positive, and the technology’s “transformative” capabilities commanded the lion’s share of media attention. In 2024, however, the media narrative around AI has shifted markedly.
The spotlight is now on the critical challenges and risks of AI tools, particularly fraud and disinformation. Let’s take a look at how media discourse is turning towards AI’s darker side and the key themes that keep making headlines.
Disinformation, Deepfakes, and a Critical Election Year
A major theme dominating the media is the use of AI in elections, particularly for disinformation and deepfakes. As the 2024 US election approaches, concerns about AI-generated misinformation have become more prominent. AI’s ability to create realistic but fake videos, audio, and images has been weaponised to manipulate public opinion and spread false narratives.
From national media to tech and IT publications, AI’s role in this year’s elections has been a key topic. Recent events, such as the deepfaked robocall impersonating President Biden and AI-targeted political advertising, have pushed the term ‘AI election’ into the headlines. This narrative will continue to gain media attention throughout 2024 as AI becomes a dominant factor in the US election later this year.
AI-powered Fraud and Shadow Engineering
The rise of AI-powered fraud is another prominent subject in the media, as reports on how AI is being abused for malicious activity gain traction. AI-driven schemes, such as sophisticated phishing attacks and automated fraud, have become more prevalent. Cybersecurity discussions have clearly shifted to how AI lowers the barrier to entry for cyberattacks and what proactive measures organisations and individuals can adopt to counter them.
These AI-enabled tactics make it easier for cybercriminals to deceive and exploit individuals and organisations, heightening the urgency for strong cybersecurity measures. The media’s focus on these abuses reflects a growing concern about AI’s dark side and its potential to cause harm.
Emerging topics like Shadow Engineering and the risks associated with low-code/no-code (LCNC) platforms are capturing the interest of technical and security publications. Shadow Engineering refers to unauthorised or unsanctioned AI development within organisations, typically outside the oversight of IT and security teams, which poses significant security risks.
Leading publications such as Help Net Security, Infosec, and Dark Reading frequently cover the application security (AppSec) challenges of LCNC platforms and practices. The ease of building apps with LCNC tools can introduce vulnerabilities if not properly managed, making this a crucial area for AppSec discussions.
For cybersecurity companies, experts, and vendors, engaging with these stories is essential. Positioning themselves as thought leaders on AI-related risks and solutions not only enhances their credibility but also maximises engagement and exposure. By actively contributing to these conversations, they can shape the narrative and add valuable insight to the broader discussion.
At Code Red, we bring over two decades of specialised cybersecurity PR expertise to help you become a part of these emerging discussions and gain wider exposure as a thought leader. From rapid response to tier-1 and national media coverage, we can help your firm improve its media presence and unlock more partnership and commercial opportunities.
If you’d like to discuss how we can help improve your media presence, please book a call with our CEO, Robin Campbell-Burt.