Artificial Intelligence (AI) is driving major changes in the way we live and work. There is much to say about how AI is impacting sectors like healthcare, finance, education, retail, transportation, and energy, but today our focus is exclusively on cybersecurity.
On one side of the barricade, we find threat actors (or hackers) who launch cyberattacks against various targets, particularly businesses, in a deliberate attempt to cause damage. On the other side, we find cybersecurity experts who try to prevent or mitigate such threats. Both sides are leveraging Artificial Intelligence to step up their game and get ahead. So… who is winning?
Offensive AI vs. Defensive AI
“Offensive AI evolves faster than defensive AI due to its opportunistic and unrestricted nature. There are no rules, no limits. Threat actors can use literally anything to exploit weaknesses and access unauthorised data or manipulate individuals into specific actions. Defensive AI, while advancing, is regulated by laws and data protection frameworks, which slow its development due to compliance requirements and regulatory processes”, Omar Jellouli, Information Security Analyst, begins to explain.
On the offensive side…
Threat actors are quick to exploit any vulnerability using AI’s creativity to generate polymorphic malware, automate phishing, and even simulate realistic deepfakes. For example, offensive AI can use generative models to craft e-mails that mimic trusted contacts or inject subtle triggers that later enable model jailbreaks.
On the defensive side…
Cybersecurity experts leverage AI defensively on two levels: prevention and mitigation. As Omar explains, “it’s like in any other field: the police try to prevent criminals from committing a crime; doctors try to prevent patients from getting sick; and in cybersecurity we try to prevent cyberattacks from happening in the first place.”
How is this achieved? By collecting threat intelligence – data about threat actors’ motives, targets, and behaviour – which significantly enhances the accuracy of preventive tools.
Defensive tools are also deployed during incident response. “However,” Omar clarifies, “they often rely on behavioural analysis, which can lead to false positives due to variability in user or system behaviour.”
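To see why behavioural analysis is prone to false positives, consider a minimal sketch of behaviour-based anomaly detection. This is an illustration only, assuming scikit-learn and two made-up telemetry features (login hour and session length); real systems model far richer behaviour.

```python
# A minimal sketch of behaviour-based anomaly detection using scikit-learn's
# IsolationForest on synthetic login telemetry. Features and thresholds are
# illustrative assumptions, not a production detection pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behaviour: login hour (roughly 9-17) and session length in minutes.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # typical working hours
    rng.normal(30, 10, 500),  # typical session length
])

# A few off-hours, long-session events that should stand out.
suspicious = np.array([[3.0, 240.0], [4.0, 180.0]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for anomalies. Note that a legitimate user working
# late can land in the same region; the false positives Omar describes are
# inherent to this kind of behavioural baseline.
for event in suspicious:
    label = "anomalous" if model.predict([event])[0] == -1 else "normal"
    print(event, label)
```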
The role of Machine Learning (ML)
Machine Learning algorithms form the foundation of modern AI systems. A prominent ML approach is, according to Omar, Supervised Learning – a technique where the model is trained on labelled datasets with known inputs and outputs. “The algorithm identifies patterns and associations during training to make accurate predictions or classifications. After training, it is tested on unseen data to measure its reliability and effectiveness,” he elaborates. (A minimal code sketch of this workflow follows the list below.)
This method is widely used in:
- Vulnerability scanning tools
Models trained on known vulnerability data to detect weaknesses.
- Incident response systems
ML models analyse historical incidents to provide early warning signals.
- Pentesting and Red Teaming
Reinforcement learning can autonomously simulate attack paths in a network, helping identify misconfigurations and weak credentials (with careful tuning needed to avoid false positives).
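For readers who want to see the supervised workflow Omar describes in code, here is a minimal sketch using scikit-learn. The dataset is synthetic and stands in for labelled security telemetry; real tools would train on curated vulnerability or incident data.

```python
# Train on labelled examples, then measure reliability on unseen data,
# mirroring the supervised learning workflow described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled telemetry: label 1 = malicious, 0 = benign.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)

# Hold out a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Evaluate on the unseen split to estimate real-world reliability.
print(classification_report(y_test, clf.predict(X_test)))
```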
What about Large Language Models (LLMs)?
A Large Language Model (LLM) is a type of Machine Learning model capable of understanding, processing, and generating human language. Trained on vast amounts of text data, LLMs like ChatGPT, Google Gemini, or Claude are used in cybersecurity primarily for threat intelligence analysis. They extract Indicators of Compromise (IOCs) from open-source feeds, dark web monitoring, and threat reports (a brief code sketch of this appears after the list below).
LLMs also support:
- Incident response
They analyse extensive log data, generate draft incident response playbooks, and provide context-aware recommendations based on historical incidents.
- Phishing detection
LLMs examine e-mail content, tone, and linguistic cues to help identify potential phishing attempts, though dedicated security tools typically handle the detection and blocking of malicious links.
- Vulnerability management
These models interpret vulnerability data, correlate known vulnerabilities with an organisation’s systems, and suggest remediation strategies, complementing the insights provided by specialised scanning tools and expert analysis.
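As an illustration of the threat intelligence use case above, here is a hedged sketch of LLM-assisted IOC extraction using the OpenAI Python SDK. The model name, prompt, and sample report are assumptions for demonstration; any chat-capable LLM could fill this role, and its output should be validated (for example, against regular-expression extractors) before it feeds downstream tooling.

```python
# A sketch of asking an LLM to pull IOCs out of an unstructured threat report.
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Sample report text; the IP and domain use reserved documentation ranges.
report = """
The actor staged payloads on 203.0.113.42 and used the domain
update-check.example.net; the dropper's MD5 hash was
44d88612fea8a8f36de82e1278abb02f.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    messages=[
        {"role": "system",
         "content": "Extract every IP address, domain, and file hash from the "
                    "text. Respond with a JSON object with keys ips, domains, "
                    "and hashes."},
        {"role": "user", "content": report},
    ],
)
print(response.choices[0].message.content)
```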
How AI is changing the architecture behind security services
Artificial Intelligence is also transforming how cybersecurity services are designed and deployed. “It pushes organisations to redesign their security architectures to handle large-scale data processing and improve detection and response. Cloud-based AI solutions play a big role, offering scalability and real-time capabilities,” Omar explains.
Major cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud offer their own AI-powered threat detection tools (e.g.: AWS GuardDuty, Microsoft Defender, Google Chronicle) that process massive datasets in real time to identify threats quickly.
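For a concrete sense of how such services are consumed programmatically, here is a minimal sketch that pulls recent GuardDuty findings with boto3. It assumes AWS credentials are configured, GuardDuty is enabled, and at least one finding exists; pagination and error handling are omitted for brevity.

```python
# List and print the most recent Amazon GuardDuty findings.
import boto3

guardduty = boto3.client("guardduty")

# GuardDuty groups findings under a detector; most accounts have exactly one.
detector_id = guardduty.list_detectors()["DetectorIds"][0]

finding_ids = guardduty.list_findings(DetectorId=detector_id,
                                      MaxResults=10)["FindingIds"]

# Fetch the full finding records and print a short triage summary.
for finding in guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)["Findings"]:
    print(f'{finding["Severity"]:>4}  {finding["Type"]}  {finding["Title"]}')
```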
Despite these advantages, challenges such as data privacy and cloud dependency persist. Nonetheless, there are ways around these challenges. “Hybrid architectures are emerging, with AI models trained in the cloud and deployed locally for faster response,” Omar discloses.
The pros and cons of AI adoption in cybersecurity
Implementing AI-powered tools in cybersecurity is “non-negotiable,” Omar asserts. “It’s like the early days of the Internet. It’s unstoppable. We have this love-hate relationship with AI that is steering us, so it’s natural that there will be positive and negative consequences.”
Benefits
- Direct access to advanced tools
Companies can implement cutting-edge threat intelligence and detection technologies.
- Improved threat detection and response
Tools like Microsoft Sentinel and AWS GuardDuty analyse massive datasets in real time to identify and mitigate threats quickly.
- Greater automation
AI automates routine tasks (e.g.: vulnerability scanning, compliance monitoring), reducing workloads and increasing efficiency (e.g.: AWS Security Hub, Azure Security Center).
- Enhanced network and endpoint security
Solutions like Cloudflare and Zscaler block malicious traffic, while tools like SentinelOne and CrowdStrike Falcon use behavioural analysis to protect endpoints.
- New job opportunities
Although AI automates some tasks, it also creates roles for managing and supervising AI systems.
Challenges
- Data privacy concerns
Many AI tools process sensitive data in the cloud, which creates a risk of exposure if that data is compromised or misused. Hybrid architectures that keep sensitive data on-premises can help mitigate this risk.
- Cloud dependency
Over-reliance on providers like AWS, Azure, and Google Cloud can introduce vulnerabilities if those platforms experience outages or breaches.
- False positives
Bias in AI models can lead to incorrect threat detections, flagging legitimate activities as malicious.
- Integration complexity
Merging new AI tools with existing systems, especially in hybrid environments, can be challenging.
- Lack of transparency
Organisations often have limited insight into how AI models make decisions, which complicates accountability.
- Vulnerability to attacks
AI models themselves can be targets for adversarial attacks or data poisoning, requiring ongoing monitoring and updates.
Data poisoning vs. adversarial attacks

Definitions
- Data poisoning occurs when an attacker contaminates the training dataset, skewing what the model learns. For example, modifying values in a financial database could lead to miscalculations. (A toy demonstration follows below.)
- Adversarial attacks manipulate a model – either during or after training – by subtly altering inputs. For instance, an autonomous vehicle might misinterpret a STOP sign if adversaries tamper with its appearance.

Solutions
- Adversarial training: Strengthens AI models against manipulated inputs.
- Data validation pipelines: Filter and detect malicious data before it reaches the AI system.
- Model integrity checks: Regularly verify that AI systems function as intended.
- Access control: Restrict access to critical AI systems and data.
- Regular updates and patching: Maintain security in cloud-based AI solutions.
- Hybrid AI architectures: Keep sensitive data on-premises to minimise exposure.
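To make the data poisoning definition above concrete, the toy sketch below flips a fraction of training labels and shows the measurable drop in accuracy. Synthetic data and scikit-learn are assumed; real poisoning attacks are far subtler than random label flips.

```python
# Compare a classifier trained on clean labels with one trained on
# deliberately corrupted ("poisoned") labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison 20% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[idx] = 1 - poisoned[idx]

dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print("clean accuracy:   ", round(clean.score(X_te, y_te), 3))
print("poisoned accuracy:", round(dirty.score(X_te, y_te), 3))
```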
What does the future hold?
AI is written all over our future – there is no doubt about that. But where exactly are we headed, and how will AI transform the field of cybersecurity, specifically? Omar Jellouli shares his predictions:
- Federated learning
A decentralised approach that enables organisations to collaboratively train AI models without sharing sensitive data, reducing privacy risks and improving accuracy (see the sketch after this list).
- AI-driven XDR (Extended Detection and Response)
XDR integrates data from network, endpoint, and identity layers into one comprehensive threat detection platform.
- Security teams’ mindset
AI will empower cybersecurity professionals to perform dynamic Red Teaming exercises, simulating real-world attacks to proactively test defences.
- Hybrid architectures
Combining on-premises control with cloud-based AI enables organisations to leverage global threat intelligence while protecting sensitive data.
- Expansion of Natural Language Processing (NLP)
NLP is transforming threat intelligence by extracting actionable insights from unstructured data like blogs and dark web forums.
- Intersection with Quantum Computing
While still in its early stages, Quantum Computing has the potential to perform complex computations at speeds far beyond traditional systems, enabling advanced threat detection and analysis. However, because it also risks breaking traditional encryption methods, it will drive the development of post-quantum cryptography – and AI will help lead this shift by automating the development of quantum-resistant algorithms.
- Growth of biocomputing
Emerging research in biocomputing – such as lab-grown mini-brains capable of learning – demonstrates AI’s potential beyond cybersecurity, paving the way for highly adaptive systems.
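To give the federated learning prediction a concrete shape, here is a toy sketch of federated averaging (FedAvg): each organisation trains on its own private data, and only the fitted model weights (never the raw records) are shared and averaged. numpy and scikit-learn are assumed; real deployments add secure aggregation, dataset-size weighting, and many training rounds.

```python
# Toy federated averaging: average locally trained model weights.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)  # the shared underlying task

def local_update(X, y):
    """Train on private data; only the fitted weights leave the site."""
    clf = SGDClassifier(loss="log_loss", random_state=0).fit(X, y)
    return clf.coef_, clf.intercept_

# Three organisations, each holding data it cannot share.
updates = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    y = (X @ true_w > 0).astype(int)
    updates.append(local_update(X, y))

# The coordinator averages the weights without ever seeing raw records.
global_coef = np.mean([coef for coef, _ in updates], axis=0)
global_intercept = np.mean([b for _, b in updates], axis=0)
print("aggregated coefficients:", global_coef.round(2))
```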
To sum it up: “AI will redefine cybersecurity by becoming a more intelligent and integral part of how we defend systems. It won’t just make existing tools faster or more accurate – it will create entirely new ways of thinking about security,” Omar believes.
Best practices and recommendations
To keep cybersecurity strategies as up to date as possible, organisations should:
- Enhance data security protocols
Regularly audit and secure publicly accessible services using robust access controls, encryption, and network segmentation.
- Monitor for data poisoning and backdoor attacks
Develop continuous monitoring systems to detect anomalous behaviour that may indicate data poisoning (a minimal validation sketch follows this list).
- Maintain human oversight
Complement automated AI tools with human review, especially in incident response and threat intelligence operations.
- Ensure transparent data curation
Implement rigorous processes for assembling and sanitising training datasets to minimise biases and malicious content.
- Stay informed on emerging technologies
Regularly review academic and industry research on federated learning, Neural Processing Units (NPUs), and quantum-safe cryptography to adapt your security strategies.
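As a minimal illustration of the poisoning-monitoring and data-curation recommendations above, the sketch below screens incoming training records against a trusted reference distribution before they reach a model. The z-score threshold is an illustrative assumption; production pipelines also check provenance, schema, and label consistency.

```python
# Reject training records that deviate sharply from a trusted baseline.
import numpy as np

def validate_batch(batch: np.ndarray, reference: np.ndarray,
                   z_threshold: float = 4.0) -> np.ndarray:
    """Keep rows whose features stay within z_threshold standard deviations
    of the trusted reference distribution."""
    mean = reference.mean(axis=0)
    std = reference.std(axis=0) + 1e-9  # avoid division by zero
    z = np.abs((batch - mean) / std)
    return batch[(z < z_threshold).all(axis=1)]

rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(1000, 4))        # vetted historical data
incoming = np.vstack([rng.normal(0, 1, (50, 4)),  # plausible new records
                      [[40, -40, 40, -40]]])      # an obvious poisoning attempt

clean = validate_batch(incoming, trusted)
print(f"kept {len(clean)} of {len(incoming)} records")
```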
Real-world examples
Here are a few recent events and innovations that illustrate various aspects of the AI and cybersecurity relationship:
- DeepSeek database exposure
A stark example of security oversights is the DeepSeek incident. A publicly accessible ClickHouse database exposed over one million lines of sensitive data, including chat history, API secrets, and backend details.
Read more about this incident here.
- Data poisoning and model jailbreaks as emerging threats
Recent discussions reveal that attackers are seeding malicious triggers into training data. Specific seed phrases, when used in prompts, can force a model to “jailbreak” its safety protocols. In one case, it took around six months for the impact to be observed in a publicly available model.
Read more about these discussions here.
- Red Team insights: getting around DeepSeek’s censorship
Ethical hackers such as Simone Fuscone are actively testing AI content controls. By employing techniques like substituting vowels with numbers and structured prompt engineering, Fuscone managed to bypass the censorship filters of DeepSeek’s web interface, revealing vulnerabilities that require continuous improvement.
Read more about Simone Fuscone’s work here.
- Federated learning and NPUs at the edge
Emerging trends include the adoption of federated learning combined with Neural Processing Units (NPUs) in endpoints. NPUs – integrated into modern CPUs (e.g., Intel Core Ultra or Qualcomm Snapdragon devices) – enable on-device model training and threat detection without sending sensitive data to central servers.
Learn more about NPUs here.
- Democratising AI: replicating DeepSeek R1 for under $30
A Berkeley AI Research team led by PhD candidate Jiayi Pan has successfully replicated key technologies of DeepSeek R1’s reasoning model for less than $30. This breakthrough demonstrates that sophisticated AI capable of complex reasoning can emerge from modest, cost-effective systems.
Read the full report here.
Conclusion
The intersection of AI and cybersecurity is a double-edged sword. While AI enhances threat detection and incident response capabilities, it also equips adversaries with powerful tools for automated phishing, data poisoning, and model jailbreaks. Real-world cases like the DeepSeek database exposure and breakthroughs such as replicating DeepSeek R1 for under $30 emphasise the importance of addressing both advanced and basic security vulnerabilities.
To navigate these challenges, organisations must enforce strict data security measures, continuously monitor for emerging threats, balance AI-driven automation with human oversight, and maintain transparency in data curation.
By incorporating best practices from frameworks such as the NIST Cybersecurity Framework and OWASP Top Ten, companies can build more resilient defences and foster a safer digital future.
Read more in this article about how AI is being used for cyberattacks.
