February 19, 2025
The world of artificial intelligence has made leaps and bounds in the past few years. However, these same breakthroughs have also opened up new avenues for malicious actors. Let's explore four emerging dangers of AI that I think organisations need to be aware of, and then discuss what can be done to mitigate these risks.
1. AI-Assisted Interviews
The Problem
With more companies embracing remote interviews, candidates are increasingly turning to AI tools to assist them—sometimes without the interviewer’s knowledge. From real-time language generation tools that supply sophisticated responses, to AI-driven voice modulation software that can “polish” spoken answers, an interviewer may be speaking to a tool as much as the applicant. This can lead to serious issues of misrepresentation and a lack of accurate assessment of a candidate’s true skills.
Key Concerns:
- Authenticity: Employers might hire someone who cannot actually perform the role once the AI crutch is removed.
- Ethical Implications: It undermines the trust inherent in a hiring process.
The Solution
- Verification Measures:
- Structured Interviews: Use behaviour-based questions that require personal experiences, making it harder for AI to fabricate coherent, detailed stories in real time.
- Beware of: Interviewees repeating your questions back to you. This is often a sign the question is being read aloud into an AI tool that supplies answers in real time.
- Policy Updates:
- Organisations should state their AI usage policies explicitly for applicants. As recommended by CERT NZ in their remote work guidelines, clear communication around what is permitted and what isn’t can deter misuse.
2. AI-Enhanced Phishing
The Problem
One of the most alarming developments in 2025 is the release of DeepSeek, a powerful language model that, when run offline, lacks many of the built-in safeguards users have come to expect from hosted AI services. Threat actors can use it to generate hyper-realistic phishing emails, text messages, images or voice deepfakes, exploiting human psychology with uncanny precision. According to the 2024 Cybersecurity Threat Insights Report from ISACA, AI-driven phishing attacks have surged by 40% compared to the previous year.
Key Concerns:
- Realistic Content: Highly personalised messages that replicate an organisation’s tone and branding.
- Scalable Attacks: Automation allows criminals to target thousands (or millions) of users with minimal effort.
- Voice & Video Deepfakes: AI can impersonate senior executives, tricking employees into making fraudulent wire transfers or sharing confidential information.
The Solution
- Ongoing Training:
- Regular phishing simulations and awareness programmes can help staff recognise suspicious links and attachments, even if they appear authentic.
- Multi-Factor Authentication (MFA):
- Ensure employees use MFA for critical systems and platforms. This significantly reduces the success rate of phishing attempts aimed at stealing credentials.
- AI Detection Systems:
- Advanced email gateways and security platforms now employ their own AI to detect anomalies in email patterns; the sketch after this list shows the idea in miniature.
- Clear Protocols:
- Encourage staff to verify requests (particularly financial or data-related) via a second communication channel (e.g., a phone call or messaging platform) before acting. This would ideally be part of an electronic payment policy and/or AI policy.
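For illustration, here is a minimal Python sketch of the kind of anomaly scoring an AI-assisted email gateway might perform. The feature columns, baseline data, and model choice (scikit-learn's IsolationForest) are assumptions made for this example; production platforms draw on far richer signals.

```python
# Illustrative only: score an incoming email against a learned baseline of routine mail.
# Feature columns and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [link count, urgency keywords, display-name/address mismatch (0/1),
#            sent outside business hours (0/1)]
baseline = np.array([
    [1, 0, 0, 0],
    [0, 0, 0, 0],
    [2, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline)  # learn what routine mail for this mailbox looks like

# Many links, urgent tone, spoofed display name, sent at 2 a.m.
suspect = np.array([[6, 4, 1, 1]])
print(model.predict(suspect))  # -1 means the model considers the message anomalous
```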
3. AI-Coded Malware
The Problem
Tools like ChatGPT, DeepSeek, and other emerging language models are drastically lowering the barrier to writing malicious code. Cybercriminals can use these models to generate, refine, and obfuscate malware. Furthermore, there are well-documented cases of hacking groups developing their own custom LLMs specifically designed to hunt for vulnerabilities and produce exploits.
Key Concerns:
- Rapid Development: Malware can be created in days or even hours, complete with advanced obfuscation techniques to bypass standard antivirus solutions.
- Continuous Evolution: AI can continually rework code to evade newly introduced detection signatures.
- Weaponised LLMs: Private language models, trained on massive code repositories, can quickly identify and exploit zero-day vulnerabilities in widely used software.
The Solution
- Robust Endpoint Protection:
- Employ advanced endpoint detection and response (EDR) solutions that use machine learning to spot unusual behaviour on devices, rather than relying solely on known malware signatures (a simple behavioural check is sketched after this list).
- Frequent Patching & Updates:
- Regularly update software and operating systems to close known vulnerabilities. According to CERT NZ, timely patch management can prevent many AI-crafted exploits from succeeding.
- Threat Intelligence Sharing:
- Participate in industry groups and information-sharing platforms so that new threats and tactics uncovered in one organisation can be quickly disseminated to others.
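To make the behavioural idea concrete, here is a minimal Python sketch that flags process launches whose parent/child pairing never appears in baseline telemetry. The process names and baseline events are hypothetical, and real EDR products learn from far more signals than this simple frequency count.

```python
# Illustrative only: flag process chains never seen in baseline telemetry.
from collections import Counter

# Pretend this is weeks of normal parent/child process telemetry from the fleet.
baseline_events = [
    ("explorer.exe", "winword.exe"),
    ("explorer.exe", "chrome.exe"),
    ("services.exe", "svchost.exe"),
] * 100

known_pairs = Counter(baseline_events)

def score_event(parent: str, child: str) -> str:
    """Return a crude verdict based on how often this pairing occurs in the baseline."""
    if known_pairs[(parent, child)] == 0:
        return "ALERT: never-seen process chain"
    return "ok"

# Word spawning PowerShell is a classic sign of a malicious macro.
print(score_event("winword.exe", "powershell.exe"))  # ALERT: never-seen process chain
print(score_event("explorer.exe", "chrome.exe"))     # ok
```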
4. Zero-Day Discovery by Malicious LLMs
The Problem
One of the most concerning developments is that some threat groups have developed or modified large language models solely for discovering and exploiting software vulnerabilities. Unlike traditional security researchers who disclose vulnerabilities responsibly, these models pump out zero-day exploits to be weaponised immediately. One unconfirmed yet widely discussed case involves a criminal organisation allegedly discovering at least half a dozen zero-day vulnerabilities in mainstream software in just a few months.
Key Concerns:
- High Impact: Zero-days can target critical infrastructure and cloud service providers, leading to massive disruptions or data breaches.
- Secretive Nature: Without disclosure, software vendors have no chance to develop patches, leaving users exposed indefinitely.
- Arms Race: As soon as one group develops such an LLM, others scramble to follow suit, accelerating an arms race in cyber warfare that is already under way.
The Solution
- AI-Driven Defensive Tools:
- Fighting fire with fire – security companies are now training their own AI models to proactively discover vulnerabilities. This helps shift from a reactive security stance to a preventative one.
- Strong Network Segmentation:
- Isolate critical systems from less-secure areas of the network, ensuring that a single exploit can't compromise the entire organisation (a simple reachability check is sketched after this list).
- Global Regulatory Frameworks:
- Government agencies in New Zealand and around the world are increasingly discussing regulations around AI development and data sharing. While regulation lags behind technology, frameworks that encourage transparency and accountability could slow the rampant creation of malicious LLMs.
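As a small illustration of checking segmentation in practice, the Python sketch below attempts TCP connections from a low-trust segment to a few critical services and reports anything that is unexpectedly reachable. The addresses and ports are hypothetical and would need to reflect your own network design.

```python
# Illustrative only: confirm critical services are unreachable from a low-trust segment.
# Run from a machine inside the segment being tested; hosts/ports are hypothetical.
import socket

CRITICAL_SERVICES = [
    ("10.0.50.10", 1433),  # e.g. a database server
    ("10.0.50.20", 3389),  # e.g. RDP on a sensitive server
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CRITICAL_SERVICES:
    verdict = "REACHABLE (segmentation gap)" if is_reachable(host, port) else "blocked, as expected"
    print(f"{host}:{port} -> {verdict}")
```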
Final Thoughts
AI offers incredible opportunities, but like any tool, its potential for harm runs parallel to its ability to do good. By staying informed, investing in robust security measures, and fostering an organisational culture of awareness and accountability, we can harness the power of AI while minimising its dangers.
Here at Layer3, we utilise a range of tools and practices to stay ahead of evolving threats. We rely on the Todyl security platform—built on Elastic Security—to deliver advanced EDR, MXDR, and SIEM services, allowing us to swiftly detect and respond to anomalies across our network. To bolster our email defences, we use Avanan, an AI-driven security platform that helps filter out even the most sophisticated phishing and malware attempts. Additionally, we run real-time vulnerability monitoring and conduct vCIO sessions on a quarterly basis to track each organisation’s security posture, review emerging threats, and refine our AI policy—ensuring that responsible and secure AI usage remains a cornerstone of our approach.