AI: A Double-Edged Sword for Security Teams in New Zealand

Originally published by The Fast Mode, and Sygnia. Updated with new examples and local context.

Artificial Intelligence is reshaping cybersecurity. It offers unprecedented benefits to defenders, but it also equips adversaries with tools that are faster, more scalable, and increasingly deceptive. This tension, between capability and control, is what makes AI such a double-edged sword.

AI is transforming the way we manage networks. It enables real-time optimisation, predictive fault detection, and autonomous mitigation of routine issues. Google’s AI-based fuzzing, for example, uncovered a twenty-year-old vulnerability that humans had missed. The ability to analyse large datasets and find subtle anomalies gives security teams a real advantage in the race to detect threats.

But attackers are moving just as fast.

Sophisticated fuzzing tools powered by AI can identify exploitable flaws within hours. Reconnaissance and social engineering tasks can now be automated. AI-generated phishing emails, fake LinkedIn profiles, or custom payloads tailored to specific environments are no longer theoretical, they are in active use.

Zero-day vulnerabilities such as those seen in Citrix, Ivanti, Palo Alto, and Fortinet are now being weaponised more quickly than many teams can respond. Attackers can scan, exploit, and persist within a target environment before detection systems catch up. In this arms race, smart does not mean safe.

The risk is highest for organisations that see AI as a shortcut rather than a capability to be governed. Deploying AI without appropriate oversight can introduce new vulnerabilities. Model poisoning, supply chain compromise, and prompt injection attacks are all real-world examples of what happens when AI is left exposed.

Treat AI as an attack surface from day one. That means placing AI systems under governance, monitoring for abnormal interactions, protecting training data, and segmenting AI resources just like other privileged assets. Zero Trust architecture helps here, not as a silver bullet, but as a way to isolate blast radius if something goes wrong.


Deepfakes Are Easier, Cheaper, and Already in Use

One area where this threat is rapidly evolving is in synthetic media.

Creating convincing deepfakes no longer requires advanced tooling or insider access. For the cost of a nice dinner, anyone can generate a realistic impersonation of a colleague, executive, or supplier. Some services offer synthetic voice and face combinations delivered in less than a day. Others provide open-source toolkits with easy-to-use interfaces that run locally.

This is not hypothetical. These capabilities are already being used in the wild by adversaries and penetration testers alike.

Social engineering campaigns now include video messages from what appear to be trusted individuals. Some phishing emails are followed up with deepfaked phone calls or Zoom appearances. The result is a confusing and often high-pressure experience for the target. When urgency or seniority is used to rush decisions, it becomes much harder for staff to push back.

In New Zealand, smaller organisations may assume they are below the radar. But synthetic media removes the cost barrier. Threat actors can launch broad campaigns against local councils, small businesses, or schools without needing scale or custom effort. Automation does the work.

Security teams must adjust. Here are several approaches that have been effective:

  • Contextual identity challenges. Instead of asking static details like ID numbers or manager names, consider questions tied to recent internal events. “What was the codename of last quarter’s project?” or “Which team presented at the last all-hands?” These are harder to fake and not likely to be in training data.

  • Operational use of MFA. If a user is contacted and claims urgency, push an MFA prompt to verify. Internally, some organisations rotate short sign/countersign phrases visible only on intranet dashboards or chat channels. These are harder to fake in real time.

  • Permission to pause. One client avoided a fraud attempt because a staff member paused a video call, saying, “I just need to check something internally.” That behaviour—defaulting to verify—is what good training looks like.

  • Train on the weird. Deepfakes are improving, but they are not flawless. A mispronunciation, odd intonation, or visual flicker can signal something is off. Simulating these anomalies in awareness training helps build human intuition.'

  • Security teams can also flip the script. Ethical deepfakes can be used in red team simulations. Impersonating senior leaders in phishing tests uncovers policy weaknesses and blind trust in hierarchy. If done carefully and with consent, this has proven far more effective than static e-learning.

The takeaway? AI is eroding traditional trust signals. The defence is not necessarily more tech. It is cultural. Teaching people that it is okay to challenge, to pause, and to verify—that is what builds resilience.


Final Thoughts

AI is not just a new tool. It is a new battlefield. While it offers real defensive power, it also enables adversaries to scale, hide, and adapt faster than ever before. If left ungoverned, AI will not solve your security challenges. It will become one.

Organisations in Aotearoa should act early. Secure your AI systems like you would any Tier Zero asset. Monitor them, govern them, and plan for their failure modes. At the same time, prepare your people for the strange. Because sometimes, the voice on the other end of the line is not who they say they are.

Rob Kehl
Rob Kehl is a Principal Cybersecurity Adviser and educator based in Aotearoa New Zealand. Originally from the United States, his career spans the U.S. Air Force and global consultancies like Sygnia and Cognizant. Rob specialises in architecture assessments, incident response, security operations, and AI security strategies. He applies his international experience to support cybersecurity resilience across sectors in New Zealand.

Get in touch