Your organisation is already using AI tools that no one approved. The question is not whether—it is how many, where, and what data they are touching.
Shadow AI is not a compliance failure. It is a visibility failure. Leadership believes AI usage is limited. Reality is different—by a wide margin.
Employees paste confidential documents, source code, and customer datasets into external AI tools daily—often without realising the implications.
ISO 42001 requires an AI system inventory. NIS2 mandates supply chain oversight. Unmanaged AI usage directly undermines both obligations.
AI tools authorised via OAuth silently access email, calendars, and cloud files. These integrations are invisible without identity log analysis.
Engineering teams embed AI SDKs and APIs into production systems without security review—creating AI-in-the-loop without governance or oversight.
Auditors are beginning to ask for AI inventories. Organisations without one face findings. Organisations with unknown shadow AI face much worse.
A structured, non-disruptive assessment combining network telemetry, identity logs, endpoint audit, developer monitoring, and employee interviews. No agents installed. No productivity interrupted.
Establish a working definition of shadow AI for your organisation and document which AI platforms are currently approved. Everything else becomes a candidate for investigation.
Analyse DNS requests and HTTPS traffic for connections to AI provider endpoints. Identify unexpected outbound flows from endpoints, microservices, and CI/CD pipelines.
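The traffic review above can be sketched as a simple log filter. A minimal sketch, assuming a plain-text DNS log of the form `timestamp source_host queried_domain`; the provider domain list is a small illustrative sample, not a complete inventory:

```python
# Illustrative AI endpoints only -- maintain a current list for real use.
AI_PROVIDER_DOMAINS = {
    "api.openai.com",
    "claude.ai",
    "generativelanguage.googleapis.com",
}

def flag_ai_queries(dns_log_lines):
    """Return (source_host, queried_domain) pairs that hit AI endpoints.

    Assumes each line is 'timestamp source_host queried_domain'.
    """
    hits = []
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        source, domain = parts[1], parts[2]
        # Match the domain itself or any subdomain of it.
        if any(domain == d or domain.endswith("." + d) for d in AI_PROVIDER_DOMAINS):
            hits.append((source, domain))
    return hits
```

Flagged sources that are servers or CI runners, rather than user laptops, usually point at embedded integrations and deserve priority.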
Review identity provider authorisation logs (Okta, Entra ID) for third-party AI app permissions, unusual data scopes, and automated data export to AI services.
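In practice this step is a filter over exported consent events. A sketch under assumed field names (`app_name`, `scopes`) rather than any vendor's actual log schema; the keyword and scope lists are illustrative:

```python
# Assumed, illustrative values -- not an official scope or app taxonomy.
BROAD_SCOPES = {"Mail.Read", "Files.Read.All", "Calendars.Read"}
AI_APP_KEYWORDS = ("gpt", "copilot", "ai assistant")

def risky_ai_grants(consent_events):
    """Return consent events where an AI-looking app holds a broad data scope.

    Each event is a dict with assumed keys 'app_name' and 'scopes'.
    """
    flagged = []
    for event in consent_events:
        app = event.get("app_name", "").lower()
        scopes = set(event.get("scopes", []))
        if any(k in app for k in AI_APP_KEYWORDS) and scopes & BROAD_SCOPES:
            flagged.append(event)
    return flagged
```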
Enumerate installed browser extensions, local AI assistants, and desktop copilots. Identify extensions that transmit corporate data to external AI APIs.
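Once manifests have been collected from endpoints (via MDM export or a collection script, which is left to your tooling), triage is mechanical. A sketch that flags extensions able to read every page or to call external AI hosts; the host keywords are assumptions:

```python
def suspicious_extensions(manifests):
    """Flag extension manifests with exfiltration potential.

    Takes parsed manifest.json contents as dicts; returns extension names.
    """
    flagged = []
    for m in manifests:
        perms = set(m.get("permissions", [])) | set(m.get("host_permissions", []))
        reads_everything = "<all_urls>" in perms
        # Illustrative host keywords only -- extend for real triage.
        ai_hosts = [p for p in perms if "openai" in p or "anthropic" in p]
        if reads_everything or ai_hosts:
            flagged.append(m.get("name", "unknown"))
    return flagged
```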
Review repository audit logs, dependency manifests, and cloud service logs for AI SDK additions, API keys issued to AI providers, and LLM calls in production code.
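The dependency-manifest check reduces to matching package names against a watch list. A minimal sketch for `requirements.txt`-style files; the SDK list is a small illustrative sample, and equivalent logic applies to `package.json` or lockfiles:

```python
# Illustrative sample of AI SDK package names -- extend as needed.
AI_SDKS = {"openai", "anthropic", "google-generativeai", "langchain"}

def ai_sdks_in_requirements(requirements_text):
    """Return AI SDK names found in a requirements.txt-style string."""
    found = set()
    for line in requirements_text.splitlines():
        # Strip version pins like 'openai>=1.0' or 'openai==1.2'.
        name = line.split("==")[0].split(">=")[0].strip().lower()
        if name in AI_SDKS:
            found.add(name)
    return found
```

Running this across all repositories, then cross-referencing hits with security-review records, quickly separates approved integrations from shadow ones.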
Review Data Loss Prevention alerts for uploads of confidential files, prompts containing proprietary data, and unusual copy-paste activity to AI tool domains.
Short structured interviews with department representatives to surface unofficial AI workflows, automation scripts, and AI integrations invisible to technical monitoring.
Each discovered AI tool is classified by data sensitivity, system role (assistive vs. decision-making), provider risk, and operational impact, then rated Low / Medium / High / Critical.
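The four-dimension classification can be expressed as a simple scoring function. The 1-4 scales and the band thresholds below are illustrative assumptions to be calibrated to your own risk appetite, not the assessment's fixed weighting:

```python
def classify_tool(data_sensitivity, system_role, provider_risk, operational_impact):
    """Score each dimension 1 (low) to 4 (severe); return a risk band.

    Thresholds are illustrative -- calibrate before relying on them.
    """
    total = data_sensitivity + system_role + provider_risk + operational_impact
    if total >= 14:
        return "Critical"
    if total >= 10:
        return "High"
    if total >= 7:
        return "Medium"
    return "Low"
```

A customer-data chatbot with a high-risk provider would land in High or Critical, while an assistive grammar tool on public text stays Low.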
Assess vendor data retention policies, training data usage, geographic data processing, and contractual controls for the most significant AI providers identified.
Transition from discovery to action: AI allow-list, prompt data policies, enterprise AI gateway recommendations, and an AI awareness training brief for staff.
Every engagement produces a structured, audit-ready set of documents: evidence-based findings, no more and no less than what you need to act.
Complete inventory of discovered AI tools including source, data exposure level, authorisation status, and risk classification.
Each tool scored across four dimensions: data sensitivity, system role, provider risk, and operational impact. Prioritised action list included.
Mapping of identified shadow AI usage against ISO/IEC 42001 AI management system requirements and NIS2 supply chain obligations.
Prioritised recommendations: AI allow-list, prompt data policy, enterprise gateway options, and an awareness training brief for employees.
One-page board-ready summary of findings, risk exposure, and top three immediate actions. Suitable for presentation to leadership or supervisory boards.
A structured review session four weeks after delivery to assess progress, answer questions from internal stakeholders, and refine the governance approach.
Shadow AI is rarely malicious. In most cases it signals productivity demand exceeding governance frameworks. Organisations that treat it purely as a security problem struggle. Those that treat it as a governance and workflow transformation challenge succeed.
The deeper question is not simply where shadow AI exists—but why employees feel the need to bypass official tools in the first place. Discovery reveals this. Governance addresses it.
This assessment is calibrated for organisations under active regulatory pressure—not for organisations seeking to build an enterprise security operations centre.
Findings are structured to feed directly into your existing compliance obligations. One discovery exercise, multiple framework benefits.
The assessment is structured, time-bounded, and produces audit-ready output. A 30-minute scoping call is sufficient to determine whether your organisation is the right fit.
Book a Scoping Call: meet@axelhoehnke.com