Aug 11, 2025

René Herzer
An employee asks the new AI system: "Show me all projects from Team Alpha with their current budgets." The system searches through project management tools, time tracking, and financial databases. Within seconds, it presents a comprehensive analysis. But a crucial question remains: Should this employee really be able to see all this information?
Here we encounter a fundamental problem: Traditional permission systems are designed for direct, system-specific access – not for AI-mediated, cross-system queries.
The Problem in Practice
Fragmented permissions
Today you operate several separate permission systems:
Active Directory for basic identities
Database permissions for table access
Application-level permissions in every tool
API permissions for interfaces
Every system has its own rules. A project manager may view budget data for their project, but not the salary information of their team. This separation works as long as people work directly with individual systems.
AI breaks through boundaries
AI systems aggregate data from various sources. The query "How profitable is our largest project?" requires:
Customer data from the CRM
Cost data from the ERP
Efforts from project management
Personnel costs from HR systems
Suddenly, a user can access information through a single request that would be locked away from them in separate systems.
The Inference Problem
Even more critical: AI can draw unauthorized conclusions from permitted data. An employee may view project budgets and team sizes; from those two figures alone, average salaries can be inferred. Is that allowed?
Why existing solutions don't work
Roles are too rigid: "Project manager" says nothing about specific AI requests.
Individual permissions are too complex: Defining rules for every possible data combination is impossible.
Manual approvals are too slow: AI should deliver real-time responses.
What really works
1. Policy Engine as gatekeeper
Instead of managing permissions in every system, a central engine decides on every AI request:
"May User X ask this type of question?"
"Is this request compatible with their role?"
"Could the result expose sensitive data?"
In practice: systems like Open Policy Agent (OPA) can do this today.
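The gatekeeper idea can be sketched in a few lines of Python. All role and query-type names below are hypothetical, and a production setup would delegate the decision to a dedicated engine such as OPA (with rules written in Rego) rather than evaluating them in application code:

```python
# Hypothetical policy table: which roles may run which query types.
POLICIES = {
    "project_budget_report": {"project_manager", "finance"},
    "customer_profitability": {"sales_manager", "finance"},
    "salary_analysis": {"hr"},
}

def is_allowed(user_role: str, query_type: str) -> bool:
    """Central decision: may this role ask this type of question?"""
    return user_role in POLICIES.get(query_type, set())

def handle_ai_request(user_role: str, query_type: str) -> str:
    """The engine decides BEFORE any backend system is queried."""
    if not is_allowed(user_role, query_type):
        return "denied"
    return "executing"
```

The key design choice is that the decision happens centrally and before any data source is touched, so the individual systems never need to know about AI-specific access rules.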
2. Intent-based permissions
Instead of asking "May they access Table Y?" ask "May they perform profitability analyses?"
Example rule: "Sales managers may analyze customer profitability, but not derive individual salaries."
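The example rule above could be encoded as an intent-based policy like this sketch, where intent and role names are hypothetical and an explicit deny list takes precedence over the allow list:

```python
# Hypothetical intent-based rules: permissions are attached to what the
# user is trying to do, not to tables or columns.
RULES = {
    "sales_manager": {
        "allow": {"customer_profitability_analysis"},
        "deny": {"individual_salary_derivation"},
    },
}

def check_intent(role: str, intent: str) -> bool:
    """Deny entries win over allow entries; unknown intents are blocked."""
    rule = RULES.get(role, {"allow": set(), "deny": set()})
    if intent in rule["deny"]:
        return False
    return intent in rule["allow"]
```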
3. Dynamic filtering
The AI system receives the request, but the Policy Engine filters the result:
Shows only data the user is allowed to see
Removes sensitive combinations
Logs everything for compliance
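The three filtering steps above can be sketched as a single post-processing function. Field names and the logging setup are illustrative assumptions, not a prescribed schema:

```python
import logging

def filter_result(rows, allowed_fields, user):
    """Strip fields the user may not see and log the access for compliance."""
    filtered = [
        {key: value for key, value in row.items() if key in allowed_fields}
        for row in rows
    ]
    # Audit trail: who saw which fields, and how many rows.
    logging.info(
        "user=%s fields=%s rows=%d", user, sorted(allowed_fields), len(filtered)
    )
    return filtered
```

Because the filter runs on the result rather than the query, the AI system can plan freely across sources while the policy layer still controls what ultimately reaches the user.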
The pragmatic approach
Start small:
Implement a Policy Engine for your first AI system
Define 5-10 basic rules
Expand gradually
Use existing tools:
Your existing identity systems stay in place
Just add a central decision layer
No complete re-architecture needed
Focus on critical data:
Not everything needs to be perfect
Protect salary data, finances, personnel data first
The rest can wait
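A starter rule set along these lines might look like the following sketch. All intent and role names are hypothetical; the point is that five rules covering salary, finance, and personnel data, combined with deny-by-default, already give you a usable first policy layer:

```python
# Hypothetical starter rules: protect the most sensitive data first.
STARTER_RULES = [
    {"intent": "salary_lookup",     "allowed_roles": {"hr"}},
    {"intent": "finance_report",    "allowed_roles": {"finance", "cfo"}},
    {"intent": "personnel_record",  "allowed_roles": {"hr"}},
    {"intent": "project_status",    "allowed_roles": {"project_manager", "team_lead"}},
    {"intent": "customer_overview", "allowed_roles": {"sales_manager"}},
]

def lookup(intent: str, role: str) -> bool:
    """Deny by default: any request without a matching rule is blocked."""
    for rule in STARTER_RULES:
        if rule["intent"] == intent:
            return role in rule["allowed_roles"]
    return False
```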
The reality
Without solving this problem, AI systems remain limited to non-critical areas. The permissions problem becomes a bottleneck for serious AI usage.
The good news: You don't have to solve everything at once. Start with a Policy Engine for your first productive AI system. Learn from it. Expand gradually.
The alternative is to keep AI permanently in the sandbox.
This is the third part of our series on AI integration beyond pilot projects. Next week: How to develop a realistic roadmap for AI-native organizations.