DCS-T-709-AIES-2025
As AI systems move from lab environments to real-world applications, new kinds of ethical and practical problems emerge.
Why does this matter?
| Type of Issue | Description | Example |
| --- | --- | --- |
| Bias and Discrimination | Biased training data leads to biased outputs. Replicates and reinforces social inequalities | Hiring tools, predictive policing |
| Lack of Transparency | Users don’t understand or cannot challenge AI decisions. Systems act as “black boxes” | Credit scoring, medical diagnosis |
| Overreliance / Automation Bias | People trust AI even when it is clearly wrong or misaligned with context | GPS directions, autopilot |
| Function Creep | AI used for one purpose expands silently into others (e.g., law enforcement use of commercial data) | Smart speakers, surveillance |
| Data Privacy Violations | Personal data collected without proper consent. Data being reused or sold | Smart homes, mental health apps |
| Unsafe Deployment | AI deployed in real-world contexts without sufficient testing or safeguards | Self-driving cars, robotic surgery |
| Feedback Loops | System learns from its own outputs, reinforcing narrow behavior (e.g., filter bubbles); see the toy simulation below | Recommender systems, ad targeting |
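
To make the feedback-loop issue concrete, here is a minimal toy simulation (a hypothetical sketch, not part of the lecture material): a recommender that always shows its most-clicked item ends up promoting a single item even though every item is equally appealing, because it only ever learns from its own outputs.

```python
import random

random.seed(0)

# Hypothetical setup: five items with identical true appeal to users.
true_appeal = {item: 0.5 for item in "ABCDE"}
click_counts = {item: 0 for item in "ABCDE"}

for _ in range(1000):
    # The recommender shows whichever item has been clicked most so far,
    # breaking ties at random -- it learns only from its own past outputs.
    best = max(click_counts.values())
    shown = random.choice([i for i, c in click_counts.items() if c == best])

    # Users can only click what they are shown, so one early lucky click
    # locks the system into recommending the same item from then on.
    if random.random() < true_appeal[shown]:
        click_counts[shown] += 1

print(click_counts)  # one item gathers nearly all clicks despite equal appeal
```

Despite identical appeal, one item ends up with essentially all of the clicks: the narrowing behavior the table describes for recommender systems and ad targeting.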

Why do these problems arise?

| Root Cause | Description |
| --- | --- |
| Biased Training Data | Data reflects past human biases (e.g., racist policing records, gendered job roles) |
| Lack of Contextual Testing | Systems tested in narrow environments don’t generalize (e.g., from private roads to city streets) |
| Misaligned Objectives | Optimizing for engagement, clicks, or efficiency can ignore fairness or well-being; see the sketch after this table |
| No Human Oversight | Systems make decisions without accountability mechanisms or intervention |
| Incentive Misalignment | Companies optimize for speed or profit, not ethics or safety |
| Lack of Regulation or Standards | No legal limits on harmful deployment or poor design |
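
The misaligned-objectives cause can be shown with a small ranking sketch (hypothetical item names, scores, and harm weighting, purely illustrative): an engagement-only objective picks the most clickable item, while an objective that also penalizes predicted harm picks a different one.

```python
# Hypothetical content items: (name, predicted clicks, predicted harm score).
items = [
    ("outrage_bait", 0.90, 0.80),
    ("useful_guide", 0.60, 0.05),
    ("calm_update",  0.40, 0.02),
]

def engagement_only(item):
    _, clicks, _ = item
    return clicks

def engagement_minus_harm(item, harm_weight=1.0):
    # Assumed trade-off for illustration: subtract a weighted harm penalty.
    _, clicks, harm = item
    return clicks - harm_weight * harm

print(max(items, key=engagement_only)[0])        # outrage_bait
print(max(items, key=engagement_minus_harm)[0])  # useful_guide
```

The point is not the particular harm_weight, which is an assumption here, but that the chosen objective, rather than the model's accuracy, determines which item wins.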

What kinds of risk do these issues create?

| Risk Category | Description | Real-World Example |
| --- | --- | --- |
| Epistemic risk | The system is wrong or misleading | Self-driving car misclassifies child as inanimate object |
| Moral risk | The system does harm or violates values | Hiring AI favors privileged groups |
| Political risk | The system shifts power, often unequally | Surveillance AI targets protesters. Data sold to authoritarian regimes |
| Social risk | The system changes relationships or behaviors | Recommender AI increases polarization and echo chambers |
| Legal risk | The system conflicts with the law or lacks legal clarity | Chatbot offers health advice that violates medical regulations |

What preventive actions can be taken?

| Preventive Action | Goal |
| --- | --- |
| Bias auditing | Detect and address fairness issues in data and model outcomes (see the sketch after this table) |
| Explainability features | Help users understand and contest AI decisions |
| Ethical impact assessments | Identify potential harms and affected groups before deployment |
| Stakeholder inclusion | Bring real-world users into design discussions |
| Transparency reports | Show how AI is trained, tested, and used |
| Independent testing | Test systems in realistic, high-variance environments |
| Fail-safe design | Build human override, logging, and rollback capabilities |
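
As a concrete illustration of the bias-auditing action, the sketch below (a hypothetical example; the group labels, counts, and choice of metric are assumptions, not course material) computes per-group selection rates and the demographic parity gap for a set of audited hiring decisions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected in {0, 1}."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        chosen[group] += selected
    return {g: chosen[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (group, model decision) pairs.
audit = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

print(selection_rates(audit))                     # {'A': 0.6, 'B': 0.3}
print(round(demographic_parity_gap(audit), 2))    # 0.3 -- a gap this size warrants review
```

A fuller audit would also examine error-rate differences, the four-fifths (80%) rule, and intersectional subgroups, but even this minimal check surfaces the kind of disparity the hiring-tool example in the first table describes.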