2KR IT Solutions

Shadow AI and POPIA: How Cape Town Businesses Can Audit and Govern Hidden AI Use

The Three Moments That Changed Everything

Three real scenarios we see every week in Cape Town businesses: a staff member pastes confidential client data into ChatGPT to “clean it up.” A project manager enables an AI note-taker on a confidential strategy call. A marketer feeds a draft contract into an AI writing tool.

None of it was approved by IT. None of it was logged. None of it showed up on any compliance register.

And under the Protection of Personal Information Act (POPIA), all of it could be a reportable breach.

This is shadow AI — the unsanctioned use of AI tools inside your business — and for South African SMBs, it’s moved from emerging risk to active liability. The Information Regulator has made it clear: you are responsible for where personal information goes, including into AI systems you didn’t authorise.

The risk isn’t theoretical. One 2026 study found that 38% of employees admit they’ve shared sensitive work information with AI tools without permission. They’re not trying to cause problems. They’re trying to work faster. But they’re making risky decisions as they go, and your business bears the compliance risk.

Here’s what most Cape Town business leaders overlook: the problem isn’t whether to ban AI. The problem is what you can’t see. If you can’t see what tools are in use, what data is being shared, or whether that data is being retained or used for training, you’ve lost control of a critical data governance issue.

And if you lose that visibility, you can’t prove compliance to the Information Regulator. You can’t tell your clients where their data went. You can’t defend yourself if something goes wrong.

The good news: you don’t need to shut down innovation to stay POPIA-compliant. You need a structured shadow AI audit.


Why Shadow AI Matters in South Africa Right Now

Shadow AI is spreading through every layer of your business — marketing teams, HR, client support, engineering workflows. Without clear rules, it becomes your next IT headache. It’s not always a shiny new app someone signs up for. It’s an AI add-on inside Microsoft 365, a browser extension, or a feature buried in a SaaS tool you already use. That makes it invisible to most IT teams and impossible to govern without structure.

And the stakes in South Africa are specific.

POPIA sets a hard rule: Your organisation is responsible for personal information in its care, whether it’s stored on-site, in cloud systems, or fed into third-party AI tools. Section 72 of POPIA creates restrictions around cross-border data transfers — and most AI tools process data in the US or EU, outside South Africa’s regulatory reach.

The Information Regulator has real enforcement power. POPIA penalties run up to R10 million or 10 years imprisonment for serious non-compliance. More immediately, enforcement notices force costly remediation, reputational damage, and mandatory breach notifications to affected data subjects.

Consulting and engineering firms carry extra risk. Your intellectual property is your product. When staff feed project designs, financial models, or client strategies into AI tools, they’re leaking the assets your business is built on. That’s not just a compliance issue — it’s a competitive vulnerability.

Your clients are watching. Professional services firms now routinely audit their vendors’ AI governance as part of due diligence. If you can’t explain how you prevent shadow AI, you lose deals.

This isn’t about being paranoid. It’s about being professional. And it starts with visibility.


The Two Ways Shadow AI Governance Fails

Shadow AI governance fails in two predictable ways. Understanding them will help you avoid them.

1. You Can’t See What Tools Are Being Used or What Data Is Being Shared

Shadow AI doesn’t arrive as a formal request. It spreads as convenience. An AI add-on gets enabled inside an existing SaaS platform. A browser extension arrives in a team member’s Chrome profile. A feature only shows up for certain users. Before IT realises it’s happening, 10 people are using it across three departments.

Without visibility, you can’t apply controls. And without controls, sensitive data leaks unchecked.

The visibility problem is compounded by personal accounts. Many staff use personal Gmail accounts, personal ChatGPT subscriptions, or free versions of tools that bypass your managed identity system entirely. Your SSO and identity logs won’t catch them. Your endpoint telemetry might flag the traffic, but it won’t tell you what data was sent.

Treat this as your first problem: if you can’t reliably discover where AI is being used, you can’t apply consistent controls to prevent data leakage.

2. You Have Visibility, But No Meaningful Way to Manage or Limit It

Even when you know which tools are in use, shadow AI governance still fails if you can’t enforce consistent behaviour.

This happens when:

  • AI activity lives outside your managed identity systems. Staff log in with personal accounts, so there’s no audit trail through Entra ID or your identity provider.
  • Tools bypass normal logging. A ChatGPT conversation leaves no trace in your SaaS logs or security tools. You can’t answer “what data went into that conversation?”
  • There’s no clear policy. No one knows what’s acceptable. “Don’t share passwords” is clear. “Don’t share confidential client data with AI tools” sounds clear, but staff interpret it differently every time.

You’re left with “known unknowns”: people assume it’s happening, but no one can document it, standardise it, or enforce it.

This quickly becomes a governance crisis. Your team loses confidence in where data flows. Your compliance team can’t prove you’re meeting POPIA obligations. Your clients get worried about due diligence. And then a breach surfaces and you can’t explain how it happened.


How to Conduct a Shadow AI Audit: A Five-Step Framework

A shadow AI audit should feel like routine maintenance, not a crackdown. The goal is to gain clarity quickly, reduce the most significant risks first, and keep the team moving without disruption.

Here’s how we guide our clients through it.

Step 1: Discover Usage Without Disruption

Start by reviewing the signals you already have. Don’t send a company-wide “AI confession” email yet.

Practical places to look:

  • Identity and access logs. Review who is signing in to which tools, whether accounts are managed (SSO) or personal, and whether anyone is using unusual access patterns (e.g., repeated login failures, logins from new locations, or unusual data exports).
  • Browser and endpoint telemetry on managed devices. Most endpoint protection tools (Huntress, Defender, CrowdStrike) can surface web traffic and flag AI tool usage by domain. You won’t see what data was sent, but you’ll see where it went; a simple domain-matching pass over exported logs (sketched after this list) is often enough to start.
  • SaaS admin settings and enabled features. Log into platforms you already use — Microsoft 365, Google Workspace, Slack, Salesforce — and check what AI features are enabled. You might be surprised.
  • A brief, nonjudgmental self-report. Send a short survey: “What AI tools or features are helping you save time right now? We want to support this safely.” Shadow AI spreads because people are trying to work faster, not because they’re trying to break rules. You’ll get better answers when discovery feels like support, not surveillance.
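If your tooling can export web-traffic logs to a spreadsheet, that first domain-matching pass can be scripted. The sketch below is only an illustration, assuming a hypothetical web_traffic.csv export with user and domain columns and a hand-maintained list of AI domains; adapt the names to whatever your endpoint or firewall product actually produces.

```python
# Illustrative sketch: flag visits to known AI-tool domains in an exported
# web-traffic log. Assumes a hypothetical "web_traffic.csv" with "user" and
# "domain" columns; adjust to match your own tool's export format.
import csv
from collections import defaultdict

AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com", "midjourney.com",
}

hits = defaultdict(set)  # user -> AI domains they visited
with open("web_traffic.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["domain"].strip().lower() in AI_DOMAINS:
            hits[row["user"]].add(row["domain"].strip().lower())

for user, domains in sorted(hits.items()):
    print(f"{user}: {', '.join(sorted(domains))}")
```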

Step 2: Map the Workflows

Don’t get lost in tool names. Map where AI actually touches real work.

Build a simple tracking view for each AI touchpoint you discover:

Workflow | AI Touchpoint | Input Type | Output Use | Owner
Client proposal drafting | ChatGPT | Draft proposal text, internal notes | Refined proposal sent to client | Sarah (Business Development)
Meeting notes | Microsoft Copilot in Teams | Confidential strategy discussion | Summary emailed internally | Operations team
Invoice processing | Zapier AI routing | Invoice PDFs with client data | Auto-categorised in Xero | Samantha (Finance)

This view serves two purposes: it forces you to name exactly what’s happening, and it puts an owner’s name against each risk. Ownership is where accountability lives.
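If you prefer to keep this map machine-readable (so the triage in Step 4 can be scripted against it), the same rows can live in a simple register. The snippet below is an illustrative sketch, assuming made-up field names and a file called ai_touchpoints.csv; a shared spreadsheet works just as well.

```python
# Illustrative sketch: the Step 2 tracking view as structured records,
# using the example rows from the table above. Field names are assumptions.
import csv

touchpoints = [
    {"workflow": "Client proposal drafting", "ai_touchpoint": "ChatGPT",
     "input_type": "Draft proposal text, internal notes",
     "output_use": "Refined proposal sent to client",
     "owner": "Sarah (Business Development)"},
    {"workflow": "Meeting notes", "ai_touchpoint": "Microsoft Copilot in Teams",
     "input_type": "Confidential strategy discussion",
     "output_use": "Summary emailed internally",
     "owner": "Operations team"},
    {"workflow": "Invoice processing", "ai_touchpoint": "Zapier AI routing",
     "input_type": "Invoice PDFs with client data",
     "output_use": "Auto-categorised in Xero",
     "owner": "Samantha (Finance)"},
]

# Write the register to CSV so it can be shared, versioned and re-scored later.
with open("ai_touchpoints.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(touchpoints[0].keys()))
    writer.writeheader()
    writer.writerows(touchpoints)
```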

Step 3: Classify What Data Is Being Put Into AI

This is where shadow AI governance becomes practical and stops being theoretical.

Use simple, business-language buckets that your team can apply without legal translation:

  • Public: Anything your business has already published (website copy, marketing materials, published case studies). Low risk if lost or exposed.
  • Internal: Information used internally but not confidential (internal process docs, non-strategic meeting notes, general announcements). Medium risk.
  • Confidential: Information that could harm the business if leaked (client lists, pricing, internal financial data, project designs, strategic plans, employee records). High risk.
  • Regulated: Information subject to POPIA, healthcare privacy, financial services rules, or client contracts (client PII, employee personal data, financial records, health information). Critical risk — exposure triggers mandatory breach notification and potential regulatory action.

Go through your Step 2 map and assign a data classification to each input. This is the single most important decision you’ll make, because it determines which tools stay and which go.

Step 4: Triage Risk Quickly

You’re not building a perfect inventory. You’re identifying the highest risks right now so you can fix them first.

Use a simple scoring model. For each workflow, rate it on these dimensions:

1. Sensitivity of the data involved

  • Public data = 1 point
  • Internal data = 2 points
  • Confidential data = 3 points
  • Regulated data = 5 points

2. Account type used

  • Personal email or account = 2 points
  • Managed/SSO account = 1 point
  • (Why it matters: personal accounts have no audit trail; managed accounts do)

3. Tool’s data retention and training policies

  • Data used for training the AI model = 2 points
  • Data retained indefinitely = 2 points
  • Data deleted after 30 days = 1 point
  • Data not used for training = 1 point
  • (Why it matters: once your data goes into training, it’s out of your control forever)

4. Export and sharing capabilities

  • Data can be easily downloaded or forwarded = 1 point
  • Data can be exported with restrictions = 0.5 points
  • Data cannot be exported = 0 points

5. Audit logging available

  • No audit trail = 2 points
  • Tool logs some activity but not fine-grained = 1 point
  • Complete activity logs available = 0 points

Total score: Add them up. Workflows scoring 8+ are critical. 5–7 are high-risk. 3–4 are medium-risk. Below 3 are manageable.
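To make the arithmetic concrete, here is a minimal sketch of that scoring model in code. The point values come straight from the lists above; the structure, names (Workflow, score, band) and the example workflow are illustrative assumptions, not a standard tool.

```python
# Illustrative sketch of the Step 4 triage scoring, using the point values
# from this article. Names and structure are assumptions, not a product.
from dataclasses import dataclass

SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3, "regulated": 5}
ACCOUNT     = {"personal": 2, "managed": 1}
RETENTION   = {"used_for_training": 2, "retained_indefinitely": 2,
               "deleted_after_30_days": 1, "not_used_for_training": 1}
EXPORT      = {"easy": 1, "restricted": 0.5, "none": 0}
LOGGING     = {"none": 2, "partial": 1, "complete": 0}

@dataclass
class Workflow:
    name: str
    sensitivity: str
    account: str
    retention: str
    export: str
    logging: str

def score(w: Workflow) -> float:
    return (SENSITIVITY[w.sensitivity] + ACCOUNT[w.account] +
            RETENTION[w.retention] + EXPORT[w.export] + LOGGING[w.logging])

def band(total: float) -> str:
    if total >= 8:
        return "critical"
    if total >= 5:
        return "high-risk"
    if total >= 3:
        return "medium-risk"
    return "manageable"

# Example: the ChatGPT proposal-drafting workflow from Step 2, run on a
# personal account with no audit logging.
w = Workflow("Client proposal drafting", "confidential",
             "personal", "used_for_training", "easy", "none")
print(w.name, "->", score(w), band(score(w)))  # 10 -> critical
```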

The goal here isn’t academic perfection. It’s to move fast and focus on the biggest problems first. If you spend three months perfecting the audit and fix nothing, the risk hasn’t changed.

Step 5: Complete Your Shadow AI Audit With Clear Decisions

For each workflow, make one clear call:

Approved: This tool and workflow are permitted for these specific use cases. Use a managed/SSO account. Log activity. No regulated or confidential data. Microsoft 365 is often the natural approved platform because it integrates with your identity system, logs activity, and protects data. Document the decision.

Restricted: This tool is allowed only for low-risk inputs (public or internal data only). No confidential or regulated data. Enforce this through policy and training.

Replaced: Transition this workflow to an approved alternative. (E.g., if staff are using ChatGPT for document drafting when they should be using Microsoft Copilot in Word with enterprise data protection, migrate them and phase out the unsanctioned tool.)

Blocked: This tool poses unacceptable risk, or it can’t be governed with reasonable controls. Stop new use and create a deprecation timeline for existing use.

The best decision is one your team can remember and follow without constant policing. If your policy says “staff can use ChatGPT for brainstorming but not for anything with client data,” that’s a line people can hold. “You can’t use any AI ever” is a policy people will ignore.
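One way to keep those calls enforceable is to record them in a small register that answers one question: can this tool receive this class of data? The sketch below is illustrative only; the tools, statuses and limits are examples consistent with the definitions above, not recommendations.

```python
# Illustrative sketch: a Step 5 decision register plus a simple
# "is this use allowed?" check. Tools, statuses and limits are examples only.

# Highest data classification each tool may receive under the decision made.
DECISIONS = {
    "Microsoft Copilot in Word": {"status": "approved",   "max_data": "internal"},
    "ChatGPT (free)":            {"status": "restricted", "max_data": "internal"},
    "AI file-sorting tool":      {"status": "blocked",    "max_data": None},
}

ORDER = ["public", "internal", "confidential", "regulated"]

def allowed(tool: str, data_class: str) -> bool:
    entry = DECISIONS.get(tool)
    if entry is None or entry["status"] == "blocked":
        return False
    return ORDER.index(data_class) <= ORDER.index(entry["max_data"])

print(allowed("ChatGPT (free)", "internal"))      # True:  brainstorming on internal notes
print(allowed("ChatGPT (free)", "confidential"))  # False: no client data
print(allowed("AI file-sorting tool", "public"))  # False: blocked outright
```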


Making Shadow AI Governance Stick: From Audit to Operation

Running a shadow AI audit once is good. Making it a discipline is better.

Here’s what we recommend after the audit is complete:

Build an AI Acceptable Use Policy. This is your governance anchor. It should:

  • Name which tools are approved, restricted, or blocked
  • Define what data can and can’t be shared with AI
  • Require staff to use managed accounts where available
  • Make clear that any use of AI for business purposes, including through personal accounts, is subject to audit
  • Connect to POPIA and your organisation’s compliance obligations

This doesn’t need to be a 40-page legal document. A clear, one-page policy that staff can actually remember is more valuable than a tome no one reads.

Assign an owner. Someone needs to own the shadow AI audit process and the policy. Typically this is your IT lead, operations manager, or (for larger firms) a dedicated compliance role. Whoever it is, make sure they have time allocated and executive backing.

Make it repeatable. Run this audit quarterly or semi-annually. The threat landscape changes, new tools emerge, and team members turn over. Your audit needs to adapt. Use the same five-step framework each time so it becomes routine, not a special project.

Link it to your existing security program. Shadow AI governance isn’t separate from your broader security and compliance program — it’s part of it. Connect it to your incident response plan, your data classification standard, your access control framework, and your POPIA compliance obligations.

Stay connected to your Information Regulator responsibilities. POPIA requires that you know where personal information is stored and processed. Shadow AI tools that process personal data need to be logged in your Records of Processing Activities (RoPA) and assessed for risk. If a breach surfaces, you need to be able to explain to the Information Regulator how you discovered it, what you did about it, and how you’re preventing it in future.
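In practice, “logged in your RoPA” means at least one entry per AI tool that touches personal information, recording what goes in, where it is processed, and how long it is kept. The fields below are an illustrative sketch, not a legal template; check them against your own POPIA documentation requirements.

```python
# Illustrative sketch of a RoPA-style entry for one AI tool.
# Field names and values are examples, not a legal template.
ropa_entry = {
    "tool": "Microsoft Copilot in Teams",
    "purpose": "Meeting summaries for internal project teams",
    "personal_information": ["Attendee names", "Client details mentioned in discussion"],
    "data_classification": "confidential",
    "processing_location": "Vendor cloud outside South Africa (assess under section 72)",
    "retention_and_training": "Confirm vendor retention terms; no training use permitted",
    "popia_justification": "Confirm per use case (e.g., consent or legitimate interest)",
    "owner": "Operations team",
    "review_cycle": "Each quarterly shadow AI audit",
}
```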


A Real-World Example: What We Found at One Cape Town Engineering Firm

We worked with a 40-person engineering firm in the Southern Suburbs who came to us concerned about IP leakage. Their founders were worried that staff were sharing design files, calculations, and site data with AI tools they’d never heard of.

We ran a shadow AI audit. Here’s what we found:

  • 14 different AI tools in active use across the organisation
  • 4 were using personal accounts (no audit trail, no managed controls)
  • One tool was processing confidential client survey data without any retention or training restrictions
  • Three tools had been enabled inside existing platforms (Microsoft 365 add-ins) without IT oversight
  • Staff had no written guidance on what data could or couldn’t be shared

The firm scored two workflows as critical risk (client data processing in ChatGPT, project files uploaded to an AI file-sorting tool).

Here’s what they did:

  1. Immediate: Disabled the AI file-sorting tool and the ChatGPT account being used for client data. Migrated those workflows to approved alternatives (Microsoft Copilot in Word for drafting, SharePoint’s built-in classification for file sorting).
  2. Week 1: Built an AI Acceptable Use Policy. Named four approved tools for specific use cases (Copilot for drafting, Microsoft Copilot in Teams for meeting notes, a cloud storage tool for file management, and one approved summarisation tool for long documents). Prohibited use of personal AI accounts for business purposes.
  3. Week 2: Trained the team on what data could be shared, why it mattered, and how to use approved tools instead.
  4. Month 2: Implemented Entra ID identity governance to force staff use of managed accounts. Enabled audit logging on all AI tools in their Microsoft 365 environment.

The result: shadow AI didn’t disappear, but it became visible and governed. They went from “we have no idea what’s happening” to “we can see what’s happening and enforce policy.”

And critically, they could now defend their POPIA compliance to any client or regulator who asked.


The Bottom Line: Start With Visibility, Move to Control

Shadow AI governance isn’t about shutting down innovation. It’s about making sure sensitive data doesn’t flow into tools you can’t monitor, govern, or defend.

A structured shadow AI audit gives you a repeatable process:

  1. Identify what’s in use (without judgment)
  2. Understand where it intersects with real workflows
  3. Classify the data being shared
  4. Prioritise the biggest risks first
  5. Make decisions that actually stick

Do it once, and you reduce risk immediately. Do it quarterly, and shadow AI stops being a surprise. Add an AI Acceptable Use Policy on top of it, and you’ve turned a compliance gap into a controlled process.

The cost of not doing this is growing. The Information Regulator is paying closer attention to AI governance. Your clients are asking about it in due diligence. And the risk of accidental data leakage into uncontrolled systems is real.


Ready to Assess Your Shadow AI Risk?

If your team is using AI tools but you’re not certain where the data is going, we can help.

Take our free Technology Risk Assessment — a 10-minute online assessment that scores your shadow AI risk, identifies your biggest vulnerabilities, and shows you exactly where to start. No sales pitch. Just clarity.

Take the Assessment

Or if you’d prefer a conversation first, book a 30-minute AI governance discovery call with one of our team. We’ll walk through what you’re seeing, what your actual exposure is, and what a practical fix looks like for your business.

Most Cape Town firms we talk to are shocked at what they find in their first audit. But they’re relieved to know what they’re dealing with. And that clarity is where real compliance starts.


About the author: Kyle Appollis is the Founder and CEO of 2KR IT Solutions, a Cape Town-based managed services provider serving consulting and engineering firms across the Western Cape. 2KR specialises in security-native IT infrastructure, Microsoft 365 hybrid migration, and AI governance for professional services firms. Kyle is also the founder of The 2KR Initiative, a faith-based monthly family support program rooted in the principle of “cheerful giving.”


Frequently Asked Questions

Q: What counts as shadow AI? A: Any AI tool used without IT approval or oversight. This includes ChatGPT, Claude, Midjourney, and AI features embedded in tools you already use (like Copilot in Microsoft 365). It also includes browser extensions and mobile apps.

Q: Is using ChatGPT for work a POPIA breach? A: Not automatically. It becomes a breach if you share personal information (client data, employee records, health information, financial details) with ChatGPT’s free version, which uses conversations for training. It’s lower risk if you use ChatGPT Enterprise with a managed account and data protection agreements in place. The safest approach is to have clear policy on what tools are approved and for what purposes.

Q: What does the Information Regulator actually enforce? A: The Information Regulator has issued enforcement notices to organisations that failed to protect personal information, including cases where data was shared with unauthorised tools. POPIA violations can result in fines up to R10 million or criminal penalties. More commonly, they trigger mandatory breach notifications, remediation costs, and reputational damage.

Q: Can we just block all AI tools? A: You could, but it won’t work. Shadow AI spreads precisely because people find it useful. Instead of banning it, govern it. Approve specific tools for specific use cases, require managed accounts, log activity, and train staff on data classification. That’s sustainable.

Q: How often should we run a shadow AI audit? A: We recommend quarterly for most firms, and semi-annually for smaller firms with stable tool ecosystems. Your first audit will be thorough (2–4 weeks). Subsequent audits are much faster — you’re mostly checking for new tools, changes in data handling, and staff turnover. Make it routine.

Q: What’s the difference between ChatGPT and Copilot? A: ChatGPT (free or Plus) may use your data for training. Copilot in Microsoft 365 (Enterprise) has built-in data protection and integration with your identity system. For business use, Copilot in Word/Excel/Teams is safer because it respects your organisational data governance. But Copilot still requires clear policy on what data is acceptable to share.
