Artificial Intelligence (AI)
AI Security & Strategy

Secure. Explainable. Responsible.

Artificial Intelligence (AI)

Artificial intelligence offers enormous potential – but only when it is interpretable, robust, and securely deployed. We help organisations get AI right.

Regulatory Landscape for AI

AI regulation is converging across multiple frameworks. Organisations need to understand which regulations apply to their AI systems and how they interact. We map this landscape clearly and help you meet requirements efficiently across frameworks.

EU AI Act

The world's first comprehensive AI regulation. Classifies AI systems by risk level – prohibited, high-risk, limited risk, minimal risk. Applies to all organisations using or providing AI in the EU.

ISO/IEC 42001

The international standard for AI management systems – the AI equivalent of ISO 27001. Provides a structured framework for responsible AI deployment and is increasingly requested as a B2B qualification.

DORA (AI components)

Financial entities using AI in ICT risk management must demonstrate systematic security testing. DORA TLPT requirements increasingly cover AI-based systems in production.

GDPR Article 22

Right to explanation for fully automated decisions affecting individuals. Directly applies to AI-driven credit scoring, HR screening, and similar high-stakes automated decisions.

EU AI Act

High-Risk AI: What the EU AI Act Requires

High-risk AI systems – covering employment, education, credit, healthcare, critical infrastructure, and law enforcement – face the most extensive requirements: risk management system, data governance, technical documentation, logging, transparency, human oversight, and accuracy and robustness obligations. Full compliance for high-risk AI applies from August 2026.

Art. 9
Risk Management
Art. 10
Data Governance
Art. 13
Transparency
Art. 14
Human Oversight

Frequently Asked Questions about AI Security

What are the main security risks for AI systems?

AI systems face a distinct set of security risks compared to conventional software. Adversarial attacks manipulate model inputs to cause incorrect outputs. Prompt injection in LLM applications allows attackers to override system instructions. Data poisoning corrupts training data to alter model behaviour. Model extraction allows attackers to replicate proprietary models. These risks require specialised testing methods that go beyond conventional penetration testing.

When does the EU AI Act apply to our organisation?

The EU AI Act applies to any organisation that places AI systems on the EU market or uses AI systems in the EU – regardless of where the organisation is headquartered. The obligations depend on your role (provider vs. deployer) and the risk classification of the AI system. Prohibited practices have applied since February 2025; high-risk AI requirements apply from August 2026; General-Purpose AI Model obligations from August 2025.

What is ISO/IEC 42001 and do we need it?

ISO/IEC 42001 is the international standard for AI management systems – the AI equivalent of ISO 27001. It provides a structured framework for responsible AI deployment and is increasingly requested by enterprise customers as a qualification criterion. Certification is not legally mandated under the EU AI Act but can demonstrate responsible AI governance and is often more achievable than organisations expect, particularly for those already ISO 27001 certified.

How does AI security testing differ from conventional penetration testing?

Conventional penetration testing assesses known vulnerability classes in software and infrastructure. AI security testing extends this with model-specific attack vectors: adversarial inputs, prompt injection, jailbreaks, RAG data source poisoning, and model extraction. The testing methodology must account for the probabilistic, non-deterministic behaviour of AI systems – the same input can produce different outputs, making systematic test coverage more complex than for deterministic software.

Kontakt aufnehmen

Responsible AI for Your Organisation

Strategy, security, compliance, and monitoring – we help you deploy AI that is secure, explainable, and regulatory-compliant.