DCS-T-709-AIES-2025 Main
Link to Lecture Notes



AI in Practice: What Can Go Wrong?


Introduction

As AI systems move from lab environments to real-world applications, new kinds of ethical and practical problems emerge.

Why does this matter?

  • AI systems may work well in testing but fail unpredictably in the real world.
  • Ethical risks are often hidden in data, deployment context, or incentives.
  • Failures can scale quickly and impact real lives: discrimination, safety issues, loss of trust.
  • Understanding how and why AI goes wrong is essential to preventing future harm.

What Can Go Wrong in Practice?

Type of Issue | Description | Example
Bias and Discrimination | Biased training data leads to biased outputs, replicating and reinforcing social inequalities | Hiring tools, predictive policing
Lack of Transparency | Users do not understand and cannot challenge AI decisions; systems act as "black boxes" | Credit scoring, medical diagnosis
Overreliance / Automation Bias | People trust AI even when it is clearly wrong or misaligned with the context | GPS directions, autopilot
Function Creep | AI used for one purpose expands silently into others (e.g., law enforcement use of commercial data) | Smart speakers, surveillance
Data Privacy Violations | Personal data collected without proper consent, or reused and sold | Smart homes, mental health apps
Unsafe Deployment | AI deployed in real-world contexts without sufficient testing or safeguards | Self-driving cars, robotic surgery
Feedback Loops | System learns from its own outputs, reinforcing narrow behavior (e.g., filter bubbles); a small simulation sketch follows the table | Recommender systems, ad targeting
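A minimal sketch (not from the lecture notes) of how a feedback loop can arise: a popularity-based recommender that retrains on its own click logs ends up concentrating on a narrow set of items. The item names, click probability, and the popularity rule are invented for illustration.

  import random

  random.seed(0)

  items = ["A", "B", "C", "D", "E"]
  clicks = {item: 1 for item in items}   # start with a uniform click history

  def recommend(k=2):
      # Recommend the k items with the most recorded clicks so far.
      return sorted(items, key=lambda i: clicks[i], reverse=True)[:k]

  for step in range(50):
      shown = recommend()
      for item in shown:
          # Users mostly click what they are shown, regardless of quality,
          # so the system keeps learning from its own output.
          if random.random() < 0.8:
              clicks[item] += 1

  print(clicks)   # click counts pile up on whichever items were recommended early

After a few iterations the early recommendations dominate the click history, so the same items keep being recommended: the narrowing is produced by the loop itself, not by user preferences.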

Why These Failures Happen

Cause | Explanation
Biased Training Data | Data reflects past human biases (e.g., racist policing records, gendered job roles); illustrated in the sketch below
Lack of Contextual Testing | Systems tested in narrow environments do not generalize (e.g., from private roads to city streets)
Misaligned Objectives | Optimizing for engagement, clicks, or efficiency can ignore fairness or well-being
No Human Oversight | Systems make decisions without accountability mechanisms or the possibility of intervention
Incentive Misalignment | Companies optimize for speed or profit rather than ethics or safety
Lack of Regulation or Standards | No legal limits on harmful deployment or poor design
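As a toy illustration of the first cause, the following sketch (hypothetical data and group labels, not from the lecture notes) shows how a naive screening rule fit to biased historical hiring decisions simply reproduces that disparity for equally qualified applicants.

  # Historical records: (group, qualified, hired). In this invented history,
  # qualified applicants from group "X" were hired far more often than
  # equally qualified applicants from group "Y".
  history = [("X", 1, 1)] * 80 + [("X", 1, 0)] * 20 \
          + [("Y", 1, 1)] * 30 + [("Y", 1, 0)] * 70

  def hire_rate(group):
      # "Model" = empirical hire rate per group learned from the history.
      # Note that the qualification flag is ignored entirely.
      records = [hired for (g, q, hired) in history if g == group]
      return sum(records) / len(records)

  # Applied to two equally qualified applicants, the learned rule
  # simply echoes the historical disparity.
  print("score for group X applicant:", hire_rate("X"))   # 0.8
  print("score for group Y applicant:", hire_rate("Y"))   # 0.3

Nothing in the code is malicious; the unfairness comes entirely from the data the system was trained on, which is why "the algorithm decided" is never a sufficient explanation.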