The Problem of Responsibility: Who is liable when AI decides?

Sep 11, 2025

René Herzer, founder basebox

An AI system rejects a loan application. The customer complains to BaFin, the German financial supervisory authority. The supervisory authority asks: "Who made this decision and why?" The bank responds: "Our AI system." BaFin continues: "Who in your organization is responsible for this?"

Silence.

This reveals a fundamental problem: In traditional IT systems, responsibilities are clear. With AI systems that make autonomous decisions, these boundaries blur. The result is a responsibility vacuum that puts organizations at legal and operational risk.

Why traditional responsibility models fail

Clear chains in deterministic systems

In conventional software, the chain of responsibility is unambiguous:

  • The developer writes code according to specifications

  • The business unit defines the business rules

  • The user performs conscious actions

  • The system executes exactly what it was instructed to do

When something goes wrong, the cause can be traced back: programming error, incorrect specification, or user error.

Blurring boundaries with AI

AI systems dissolve these clear assignments:

  • The model makes decisions based on patterns, not explicit rules

  • Training data shapes decisions in ways no one can fully trace

  • The algorithm develops its own "logic" that no one fully understands

  • The context at the time of decision can influence the outcome

When the AI system makes a wrong decision, it's often unclear where the error lies.

The responsibility gaps in practice

The Data Scientist: "I only trained the model"

"I'm not responsible for business decisions. I built a technical system that recognizes patterns in data. How the system is used is decided by others."

The Business Unit: "I don't understand the technology"

"I said what we want to achieve: better credit decisions. How this is technically implemented is not my area. I can't assess whether the model is working correctly."

The IT Manager: "We only operate the infrastructure"

"We ensure the system runs and is available. We're not responsible for business logic. The model is a black box to us."

The Management Team: "We commissioned experts"

"We hired qualified teams and engaged external consultants. The responsibility lies with the subject matter experts who developed these systems."

Why this is dangerous

Legal risks

Regulated industries must be able to justify decisions. "The AI system decided" is not a legally defensible answer.

Banking supervision example: BaFin can impose fines if credit decisions cannot be explained. Without clear responsibilities, that exposure falls on the entire organization.

Operational risks

When no one feels responsible, problems are not detected or resolved in time.

Example: An AI system for fraud detection develops a bias against certain customer groups. Who monitors this? Who intervenes? Who decides on corrections?

Reputational risks

AI errors are publicly discussed. Organizations that don't have clear responsibilities appear unprofessional and negligent.

New responsibility models for AI

The AI Product Owner

A person who is responsible end-to-end for an AI system:

  • Business responsibility: Defines goals and success criteria

  • Technical oversight: Understands the model well enough to make informed decisions

  • Operational responsibility: Monitors performance and intervenes when problems arise

  • Compliance responsibility: Ensures regulatory requirements are met

The AI Governance Board

An interdisciplinary body for strategic AI decisions:

  • Business unit: Defines business requirements

  • Data Science: Evaluates technical feasibility

  • Legal/Compliance: Reviews legal risks

  • IT: Evaluates operational implementability

Shared responsibility with clear boundaries

Model Owner (Data Science Team):
  • Technical quality of the model

  • Documentation of limitations

  • Recommendations for use cases

Business Owner (Business Unit):
  • Definition of use cases

  • Evaluation of business results

  • Decision on model updates

Operations Owner (IT):
  • Technical availability

  • Performance monitoring

  • Incident response

Compliance Owner (Legal/Risk):
  • Regulatory compliance

  • Audit documentation

  • Risk assessment
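
One way to make this split explicit is to record the four owners in a machine-readable form next to the model itself. The following Python sketch is only an illustration, assuming a simple dataclass; the field names and the credit-scoring example are invented for this post, not part of any standard or product:

from dataclasses import dataclass, field

@dataclass
class AIOwnership:
    # Ownership record kept alongside the model artifact
    model_name: str
    model_owner: str        # Data Science: model quality, documented limitations
    business_owner: str     # Business unit: use cases, evaluation of business results
    operations_owner: str   # IT: availability, monitoring, incident response
    compliance_owner: str   # Legal/Risk: regulatory compliance, audit documentation
    documented_limitations: list[str] = field(default_factory=list)

# Hypothetical example for a credit-scoring model
credit_scoring = AIOwnership(
    model_name="credit-scoring-v3",
    model_owner="Data Science team lead",
    business_owner="Head of credit department",
    operations_owner="ML platform team",
    compliance_owner="Model risk / Legal",
    documented_limitations=["not validated for self-employed applicants"],
)

Whether this lives in code, a model registry, or a governance tool matters less than the rule it enforces: every model has exactly one named owner per dimension.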

Practical implementation

RACI matrix for AI systems

Define for each AI decision:
  • Responsible: Who performs the task?

  • Accountable: Who is ultimately responsible?

  • Consulted: Who must be consulted?

  • Informed: Who must be informed?

Credit decision example (captured as code in the sketch after this list):
  • Responsible: AI system (technical execution), loan officer (final decision)

  • Accountable: Credit department head

  • Consulted: Risk Management, Data Science

  • Informed: Compliance, Audit
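
The example above can be captured directly in code or configuration so that the assignment is testable rather than tribal knowledge. A minimal Python sketch, with the parties and role names purely illustrative:

from enum import Enum

class RACI(Enum):
    RESPONSIBLE = "R"
    ACCOUNTABLE = "A"
    CONSULTED = "C"
    INFORMED = "I"

# Hypothetical RACI assignment for the credit decision described above
credit_decision_raci = {
    "AI system / loan officer": RACI.RESPONSIBLE,
    "Credit department head": RACI.ACCOUNTABLE,
    "Risk Management": RACI.CONSULTED,
    "Data Science": RACI.CONSULTED,
    "Compliance": RACI.INFORMED,
    "Audit": RACI.INFORMED,
}

def accountable_party(raci: dict) -> str:
    # There must be exactly one Accountable party per decision type
    accountable = [party for party, role in raci.items() if role is RACI.ACCOUNTABLE]
    if len(accountable) != 1:
        raise ValueError("A RACI matrix needs exactly one Accountable party")
    return accountable[0]

A simple check like accountable_party() turns "when everyone is responsible, no one is responsible" into something a review process can actually verify.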

Define escalation paths

Clear rules for when human intervention is required:

  • Automatic escalation: For confidence scores below a defined threshold (see the sketch after this list)

  • Manual escalation: For customer complaints or unusual cases

  • Systematic escalation: For detected bias or performance problems
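
As a rough illustration of the first rule, automatic escalation can be a single routing function in front of the decision. The threshold value and routing labels below are assumptions for the sake of the example, not recommendations:

CONFIDENCE_THRESHOLD = 0.85  # illustrative value; set per use case and risk appetite

def route_decision(decision: str, confidence: float, customer_complaint: bool = False) -> str:
    # Manual escalation: complaints and unusual cases always go to a human
    if customer_complaint:
        return "manual_review"
    # Automatic escalation: the model is not confident enough to decide alone
    if confidence < CONFIDENCE_THRESHOLD:
        return "manual_review"
    # Otherwise the decision may be applied automatically, but it still lands in the audit trail
    return "auto_apply"

Systematic escalation for bias or performance problems typically comes from separate monitoring jobs rather than from the individual decision path.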

Documentation requirements
  • Decision log: Every AI decision with context and justification (sketched below)

  • Responsibility proof: Who made or confirmed which decision when

  • Audit trail: Traceable chain from data through model to decision
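
A decision log does not need to be elaborate to be useful; it needs to be complete and append-only. A minimal sketch of one log entry in Python, with field names invented for this example rather than taken from any regulatory schema:

import json
from datetime import datetime, timezone

def log_ai_decision(model_name: str, model_version: str, inputs: dict,
                    decision: str, confidence: float,
                    confirmed_by: str | None) -> str:
    # One audit-trail entry: context, outcome, and the human who confirmed it
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "inputs": inputs,                  # context at the time of decision
        "decision": decision,
        "confidence": confidence,
        "confirmed_by": confirmed_by,      # who made or confirmed the decision, if anyone
    }
    return json.dumps(entry)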

The most common pitfalls

Delegating responsibility to technology

"The AI system decides" is not a solution. Humans must take responsibility.

Too many responsible parties

When everyone is responsible, no one is responsible. Clear, unambiguous assignments are necessary.

Responsibility without competence

Those who are responsible must also have the competence to make informed decisions.

What really works

Start with clear roles

Define who is responsible for what before your first AI system goes into production.

Train your responsible parties

AI Product Owners need both business and technical competence.

Document everything

Responsibility without documentation is worthless. Create traceable processes.

Test your processes

Simulate problem cases: Do your escalation paths work? Are responsibilities clear?

The uncomfortable truth


AI systems make decisions, but they cannot take responsibility. This remains with humans. Organizations that ignore this expose themselves to legal, operational, and reputational risks.

The question is not whether you need responsibility models for AI – the question is whether you define them before or after something goes wrong.

Clear responsibilities are not just a compliance issue. They are the foundation for AI systems to be operated trustworthily and successfully.


This is the fourth part of our series on AI integration beyond pilot projects. Next week: Why traditional IT budgeting fails with AI systems and what new cost models you need.

© 2025 basebox GmbH, Utting am Ammersee, Germany. All rights reserved.

Made in Bavaria | EU-compliant