AI in Practice: What Can Go Wrong?
Introduction
As AI systems move from lab environments to real-world applications, new kinds of ethical and practical problems emerge.
Why does this matter?
- AI systems may work well in testing but fail unpredictably in the real world.
- Ethical risks are often hidden in data, deployment context, or incentives.
- Failures can scale quickly and impact real lives: discrimination, safety issues, loss of trust.
- Understanding how and why AI goes wrong is essential to preventing future harm.
What Can Go Wrong in Practice?
| Type of Issue | Description | Example | 
| Bias and Discrimination | Biased training data leads to biased outputs that replicate and reinforce social inequalities | Hiring tools, predictive policing |
| Lack of Transparency | Users cannot understand or challenge AI decisions; systems act as “black boxes” | Credit scoring, medical diagnosis |
| Overreliance / Automation Bias | People trust AI even when it is clearly wrong or misaligned with context | GPS directions, autopilot | 
| Function Creep | AI used for one purpose expands silently into others (e.g., law enforcement use of commercial data) | Smart speakers, surveillance | 
| Data Privacy Violations | Personal data collected without proper consent. Data being reused or sold | Smart homes, mental health apps | 
| Unsafe Deployment | AI deployed in real-world contexts without sufficient testing or safeguards | Self-driving cars, robotic surgery | 
| Feedback Loops | System learns from its own outputs, reinforcing narrow behavior (e.g., filter bubbles); see the sketch below | Recommender systems, ad targeting |
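To make the feedback-loop failure concrete, here is a minimal Python sketch. It is illustrative only: the item counts, the popularity-based ranking rule, and the uniform click model are assumptions, not a description of any real recommender. Because the system only shows its top-scored items and then learns from the resulting clicks, exposure collapses onto a handful of items.

```python
import random

# Minimal feedback-loop sketch (illustrative, not a real system).
# The recommender ranks items by accumulated clicks; since only the
# top-k items are ever shown, they accumulate even more clicks.

NUM_ITEMS = 10          # hypothetical catalog size
ROUNDS = 1000           # number of recommendation rounds
TOP_K = 3               # only the top-k items are ever shown

clicks = [1] * NUM_ITEMS  # start with uniform pseudo-counts

random.seed(0)
for _ in range(ROUNDS):
    # Rank items by past clicks and show only the top-k.
    shown = sorted(range(NUM_ITEMS), key=lambda i: clicks[i], reverse=True)[:TOP_K]
    # The user clicks one of the shown items; the system learns from its own output.
    clicks[random.choice(shown)] += 1

total = sum(clicks)
print("Click share per item:", [round(c / total, 2) for c in clicks])
# Typically a few items absorb nearly all clicks -- a filter bubble in miniature.
```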
Why These Failures Happen
| Cause | Explanation |
| Biased Training Data | Data reflects past human biases (e.g., racist policing records, gendered job roles) |
| Lack of Contextual Testing | Systems tested in narrow environments don’t generalize (e.g., from private roads to city streets) | 
| Misaligned Objectives | Optimizing for engagement, clicks, or efficiency can ignore fairness or well-being | 
| No Human Oversight | Systems make decisions without accountability mechanisms or intervention | 
| Incentive Misalignment | Companies optimize for speed or profit, not ethics or safety | 
| Lack of Regulation or Standards | No legal limits on harmful deployment or poor design | 
Categories of AI Risk
| Risk Category | Description | Real-World Example |
| Epistemic risk | The system is wrong or misleading | Self-driving car misclassifies child as inanimate object | 
| Moral risk | The system does harm or violates values | Hiring AI favors privileged groups | 
| Political risk | The system shifts power, often unequally | Surveillance AI targets protesters. Data sold to authoritarian regimes | 
| Social risk | The system changes relationships or behaviors | Recommender AI increases polarization and echo chambers | 
| Legal risk | The system conflicts with the law or lacks legal clarity | Chatbot offers health advice that violates medical regulations | 
How Can We Prevent These Failures?
| Preventive Action | Goal | 
| Bias auditing | Detect and address fairness issues in data and model outcomes (sketched below) |
| Explainability features | Help users understand and contest AI decisions | 
| Ethical impact assessments | Identify potential harms and affected groups before deployment | 
| Stakeholder inclusion | Bring real-world users into design discussions | 
| Transparency reports | Show how AI is trained, tested, and used | 
| Independent testing | Test systems in realistic, high-variance environments | 
| Fail-safe design | Build human override, logging, and rollback capabilities (sketched below) |
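As a concrete illustration of bias auditing, the following minimal Python sketch computes a disparate impact ratio on made-up hiring-tool outputs. The data, the group labels, and the 0.8 threshold (the informal “four-fifths rule”) are illustrative assumptions, not a prescribed audit procedure; a real audit would use the deployed model’s actual outcomes.

```python
# Minimal bias-audit sketch: compare selection rates across groups
# (the "disparate impact" ratio). Data and group labels are made up.

decisions = [  # (group, hired) pairs -- hypothetical hiring-tool outputs
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 0),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def selection_rate(group):
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = selection_rate("A"), selection_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
# The informal "four-fifths rule" flags ratios below 0.8 for closer review.
if ratio < 0.8:
    print("Potential disparate impact -- investigate before deployment.")
```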
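And as one possible shape for fail-safe design, this sketch wraps a model decision with logging and a human-override path. The function names, case IDs, and the 0.9 confidence threshold are hypothetical; the point is that every decision is logged for later audit or rollback, and low-confidence cases defer to a person.

```python
import logging

logging.basicConfig(level=logging.INFO)
CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff for automated action

def escalate_to_human(case_id):
    # Route low-confidence cases to a human reviewer (placeholder).
    logging.info("case=%s escalated for human review", case_id)
    return "pending_human_review"

def decide(case_id, model_label, model_confidence):
    # Log every decision so it can be audited and rolled back later.
    logging.info("case=%s label=%s confidence=%.2f",
                 case_id, model_label, model_confidence)
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return model_label  # automated path
    return escalate_to_human(case_id)  # human-override path

print(decide("c-001", "approve", 0.97))  # acted on automatically
print(decide("c-002", "deny", 0.62))     # deferred to a human
```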
Takeaways
- AI systems fail not only due to bugs, but also due to ethical blind spots.
- What works in lab conditions can be harmful in society.
- AI failures often reinforce existing social inequalities.
- Most failures are predictable and therefore preventable.
- Ethical foresight is part of responsible design, not idealism.