This article explores the growing impact of AI technologies on our lives, highlighting both their potential benefits and serious risks in the realm of cybersecurity. It urges readers and decision-makers to critically evaluate the consequences of AI adoption and to consider its implications for security, governance, and society, keeping people of all backgrounds at the centre of such decision-making.
Scratching the AI Surface: Will It Hurt or Help More?
We're still in the early stages
of AI application development, and the road ahead promises even more advanced
and complex tools—tools that could erode trust and, ultimately, threaten the
social order. We may face a growing need to invest in protective measures and redress systems. Even then, businesses, governments, and individuals may struggle to keep up, especially where such investments drain funding from more pressing needs. Technology and online systems are
already under constant attack, and AI only adds fuel to the fire. The evidence
is clear: cybercriminals are exploiting vulnerabilities faster than ever. This
isn't just a Luddite fear—it's rooted in the reality of our rapidly changing
digital world. (Read more on this trend: Threat Actors Are Exploiting Vulnerabilities Faster Than Ever).
Cyberattacks: A Growing Threat
For many, the idea of a
cyberattack conjures images of data breaches or financial fraud. But these
attacks extend far beyond banking. Critical sectors such as healthcare,
utilities, manufacturing, and public services are increasingly targeted.
Take a look at some notable
cyberattacks outside the finance industry:
- Not if, but when: Cyberattacks threaten hospital systems
- Significant Cyber Incidents | Strategic Technologies Program | CSIS
- New Report: Cyber Security Threats in Manufacturing Industry
- Why Does Manufacturing See the Most Cyber Attacks? | Cyber Magazine
- 14 recent cyber attacks on the transport & logistics sector
With AI tools making inroads into
industries like healthcare and manufacturing, a targeted attack could have serious consequences. Imagine a rogue AI triggering false diagnoses in a hospital, or an autonomous manufacturing system overridden to malfunction deliberately, sending dangerous products to consumers. These risks, often dramatized in films as science fiction, are now
inching closer to reality. We once trusted that businesses and authorities
would safeguard us, but can we still place that same trust in them?
Cybersecurity: Can We Rely on It?
Given the escalating risks,
businesses are pouring resources into cybersecurity. The cybersecurity industry
is growing rapidly, with some estimates suggesting a 14% annual growth rate
that will continue for the rest of the decade. However, this growth comes with
its own challenges. One of the most pressing is the stark inequality in
cybersecurity resilience between rich and poor nations, as well as between
large, well-funded organizations and smaller, under-resourced ones. A report by the
World Economic Forum highlights this gap:
"The distance between
organizations that are cyber resilient enough to thrive and those that are
fighting to survive is widening at an alarming rate. The least capable
organizations are perpetually unable to keep up with the curve, falling further
behind and threatening the integrity of the entire ecosystem."
Moreover, by one estimate, over 82% of small businesses had at least one successful cyber-attack in 2021. (See: 35 Alarming Small Business Cybersecurity Statistics for 2024 | StrongDM).
This growing divide spells trouble not just for businesses,
but for consumers as well. Demand for cybersecurity professionals is rising, yet qualified workers remain in short supply, placing additional stress on the existing workforce. The pressure of constantly evolving threats leads to burnout and high turnover among professionals. (See: 24% of Cybersecurity Leaders Are Ready to Quit Jobs: Here’s Why - Techopedia).
The skills gap is a significant challenge, but it also presents opportunities
for those entering the field.
Explore the demand and challenges:
- Skills shortage persists in cybersecurity with many jobs going unfilled | VentureBeat
- Nearly 4 Million Cybersecurity Jobs Are Vacant: Here’s Why You Should Consider Breaking Into This Sector
- Booming Job Market: 3 Reasons Why Cybersecurity Jobs Will Reign Supreme - Business2Community
- 4 Million Job Openings Await Skilled Cybersecurity Professionals
At the same time, generative AI
and other advanced technologies are being leveraged to improve cybersecurity
capabilities. AI can speed up software development, automate vulnerability
testing, and even help detect breaches faster. However, its potential to fuel cybercrime has also raised concerns. (See: How AI Is Shaping the Future of Cybercrime). As the speed of
technological advancement continues to outpace the development of protective
measures, there is a growing fear that rogue AI could exploit system weaknesses
before cybersecurity systems can adapt. This raises important questions: can we
truly safeguard AI-driven systems, and who is responsible when they fail?
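Since the paragraph above mentions AI helping detect breaches faster, here is a minimal sketch of one common building block: anomaly detection over activity logs. It uses scikit-learn's IsolationForest on synthetic login telemetry; the feature set, the contamination rate, and the data are all illustrative assumptions for this sketch, not a description of any particular product's method.

```python
# Minimal sketch: anomaly-based breach detection with an Isolation Forest.
# Features, thresholds, and data below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical login telemetry: [hour_of_day, failed_attempts, mb_downloaded]
normal_activity = np.column_stack([
    rng.normal(13, 3, 500),   # logins cluster around business hours
    rng.poisson(1, 500),      # occasional failed attempts
    rng.normal(50, 15, 500),  # typical download volume
])

# Train on historical "normal" behaviour; contamination is the assumed
# fraction of outliers in the training data.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# Score new events: predict() returns -1 for anomalies, 1 for inliers.
new_events = np.array([
    [14.0, 1, 48.0],    # ordinary mid-afternoon session
    [3.0, 25, 900.0],   # 3 a.m. login, many failures, huge download
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{event} -> {status}")
```

In practice, a model like this would feed a triage queue for human analysts rather than block activity outright, since anomalous is not the same as malicious.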
The Role of Governance
Governance frameworks for AI are
emerging, with some governments introducing regulations to manage its
development and deployment. However, many countries, particularly in the
developing world, are lagging. Even where laws exist, enforcement is often weak, and there is a growing risk of exploitation by malicious actors operating outside regulated environments. The result is a dangerous situation in which technology may be used irresponsibly, putting consumers and citizens at risk.
What happens to those who use
these technologies with malicious intent? Who is protecting the everyday person
from these threats? (See: AI
governance trends: How regulation, collaboration, and skills demand are shaping
the industry | World Economic Forum).
The Big Picture: Who Bears the Burden of Responsibility?
As the battle between
cybercriminals and cybersecurity experts intensifies, it's essential for us, as
citizens, workers, and consumers, to question our increasing reliance on
technology. At some point, we may need to set limits on its use. Drawing clear lines around where we allow technology to intervene is becoming one of the most important decisions we must collectively make.
Governments must engage proactively in this conversation and make decisions from a strategic,
long-term perspective. Unfortunately, with many governments struggling to keep
up with the complexities of AI and cybersecurity, it’s unlikely these issues
will be addressed without widespread public demand.
Until we can develop more secure
systems—an effort that could take years—citizen education and awareness are
crucial. As much as we focus on the benefits of technology, we must also devote
equal attention to its vulnerabilities and potential harms. This means
evaluating risks and lobbying for technology adoption that considers both the
upside and the downside. Only by carefully weighing these factors can we make
informed decisions about where and how to use AI.
It’s essential to involve a broad
range of perspectives in this debate. We must consider the impact on all
people, including the elderly, those with cognitive challenges, and communities
in regions with limited resources. AI development cannot be driven by
convenience and profit alone; it must prioritize people’s well-being. The
metrics for AI adoption should not focus solely on efficiency or growth but
should reflect democratic values and social responsibility. We need to create
spaces where citizens, not just tech companies, have a voice in determining the
direction of AI’s adoption. After all, technology is not an inevitable path in
every domain; it must be navigated thoughtfully, through democratic processes,
with ongoing review and adjustment.
Conclusion
We must educate
ourselves on how these technologies will affect our work and lives and
contribute meaningfully to conversations within our communities and workplaces.
As AI continues to evolve, we need to consider how it impacts different groups
of people—those who are less tech-savvy, the elderly, those with disabilities,
and people in lower-income countries or under-resourced organizations. It's
crucial that AI development puts people at its centre, not just the convenience
of a few or the profits of tech companies.
Adoption and usage should not be
the only goals; they certainly shouldn’t be the most important ones. We must
have the ability to change course, to reverse or redirect AI adoption when
necessary—not as dictated by capitalists or tech enthusiasts, but through
collective decision-making by everyday people. Only through thoughtful
evaluation, democratic consultation, and regular review can we ensure AI’s
integration into society is beneficial and safe. When it comes to technology
adoption, let’s make sure it’s not just a one-way street.
Final Notes
- Consider how AI and other technologies are
impacting your life, your work, and the society around you.
- Who is influencing decisions on technology
adoption in your area? Do you have a way to communicate your questions and
concerns to them?