Tackling Challenges in Media Health and Safety in 2024: Navigating Online Abuse, Conflict Trauma and AI’s Influence

Published On: December 15, 2023 | Categories: Geopolitical, Health & Safety, Media

Throughout the challenges posed by the pandemic, geopolitical tensions, and cultural conflicts characterising recent years, Michael Byrne, Group Head of Health & Safety at News UK, has played a pivotal role in guiding the journalism juggernaut through these turbulent times. Here, he shares his insights on two critical hurdles faced in 2023 and offers a glimpse into how AI might reshape the future landscape of media health and safety.

Michael Byrne, Group Head of Health & Safety, News UK

“Risk assessment is not just about documentation and ticking a box. It is a consultative process involving those potentially affected by the risks and ensuring that hazards and controls are properly balanced against each other. This process necessitates a blend of expertise, experience, qualifications and interpersonal skills.” – Michael Byrne

Rising Tide of Online Attacks

The notion of freedom of speech has become an increasingly contentious issue in recent years, often leading to a divisive social landscape. Regardless of one’s stance in a debate, there is an escalating risk of facing severe backlash and pressure on social media platforms.

Countless journalists have fallen victim to ‘pile-ons’ or targeted doxing simply for expressing their opinions or sharing articles. During a pile-on, a barrage of toxic comments floods the victim’s social media feeds, attacking everything from their competence to personal attributes such as race and gender. In doxing attacks, organised groups attempt to uncover and circulate personal information about a target and their family.

Who is applying this pressure and how can we stop it?

The source of these attacks varies depending on the topic, its significance and its relevance. Attackers range from hostile states to organised pressure groups and ideologically driven public collectives. Unfortunately, stemming these groups at their roots is an impossible task, which is a frustrating answer to have to give. Some also dismiss or downplay the issue, treating it as simply part of the job, especially when no physical harm has been done.

But the reality is starkly different. The impact of online attacks is far-reaching and distressing for victims subjected to this vitriol, often leaving them feeling isolated and vulnerable.

In today’s interconnected world, social media forms an integral part of personal and professional identities. For better or worse, many also gain a sense of validation from these networks, and organisations increasingly recruit public-facing talent to gain access to their follower bases. One cannot simply switch off or unsubscribe from social media.

An attack on this digital persona can be as distressing as physical harm and should be viewed no differently from the vulnerability and helplessness experienced when being mugged at knifepoint.

Why? Online attacks are insidious, carrying the potential to undermine the willingness of journalists to contribute the diverse perspectives so crucial for societal progress.

Navigating Trauma Stemming from Conflict Coverage

There are two levels of trauma exposure that journalists and media workers have to navigate: acute and chronic.

Journalists often witness harrowing scenes of human suffering while reporting on conflicts, crime and disasters, acutely exposing them to experiences most people will never encounter or be able to relate to. While it is easy to recognise how a civilian might suffer from PTSD after exposure to the consequences of war, the impact on journalists remains a less acknowledged facet.

Behind the front lines, Picture Desk teams grapple with chronic exposure to trauma when reviewing and selecting imagery and video content from war zones and other distressing environments. Repeated exposure to such material can desensitise staff and trigger anxiety or detachment disorders.

Opening up to managers or peers about the personal impact of witnessing such atrocities is not easy, particularly in a corporate setting where understanding the visceral realities of war is challenging. It is critical to move beyond reactive, passive Employee Assistance Programmes, or neglecting mental health altogether, and to proactively support staff in decompressing and readjusting when they return from or finish mentally demanding assignments.

As geopolitical tensions continue to surge, it is imperative for practitioners to upskill so they can identify and address psychological risk, while leveraging tools to aid the risk assessment process.

Risks on the Horizon – Artificial Intelligence

‘Look at this risk assessment ChatGPT just wrote. I bet it’s as good as something you could write,’ said a friend recently. And you know what, the risk assessment wasn’t bad on a first scan.

The allure of AI in risk assessment is undeniable, often touted for its efficiency. Yet, as with most content generated by ChatGPT or other AI platforms, it tends to be vague, non-committal and lacking the breadth of detail needed for robust and holistic protection.

Of course, some may argue that AI does the same as a human, particularly one who lacks extensive experience: it researches a topic and applies the knowledge gained to a specific scenario. Additionally, generic risk assessments are nothing new in the industry and, on this view, the role of AI in health and safety is simply one of streamlining and optimising existing practices.

However, risk assessment is not just about documentation and ticking a box. It never has been. It is a consultative process involving those potentially affected by the risks and ensuring that hazards and controls are properly balanced against each other. This process necessitates a blend of expertise, experience, qualifications and interpersonal skills.

Relying on AI might seem convenient, but it risks outsourcing critical thinking and consultation, eventually rendering the purpose of risk assessment redundant. This leads me to conclude that, in its current form, AI is more a threat than a solution in the safety sphere: the real issue is not simplification but the peril of relinquishing human thought processes. As roles converge and bandwidths strain, there is a looming danger of creating a workforce reliant on AI, ultimately devaluing the importance of critical thinking in risk assessment.

At RiskPal, we empower safety and security leaders to drive safety engagement within their organisation. RiskPal is a smart risk assessment platform that streamlines safety processes. It provides users with best practice safety guidance for hundreds of activities and threats, and makes past assessments easy to find, update and use again. Scrolling through inboxes searching for old forms becomes a thing of the past.

We are dedicated to making safety simple and compliance straightforward. Reach out if you have any questions or need assistance in enhancing your safety and risk management processes.

