===== What Can Go Wrong in Practice? =====

|  **Type of Issue**  |  **Description**  |  **Example**  |
|  Bias and Discrimination  | Biased training data leads to biased outputs. Replicates and reinforces social inequalities | Hiring tools, predictive policing |
|  Lack of Transparency  | Users don’t understand or cannot challenge AI decisions. Systems act as “black boxes” | Credit scoring, medical diagnosis |
  
  
===== Categories of AI Risk =====

|  **Risk Category**  |  **Description**  |  **Real-World Example**  |
|  Epistemic risk  | The system is wrong or misleading | Self-driving car misclassifies a child as an inanimate object |
|  Moral risk  | The system does harm or violates values | Hiring AI favors privileged groups |
|  Political risk  | The system shifts power, often unequally | Surveillance AI targets protesters. Data sold to authoritarian regimes |
|  Social risk  | The system changes relationships or behaviors | Recommender AI increases polarization and echo chambers |
|  Legal risk  | The system conflicts with the law or lacks legal clarity | Chatbot offers health advice that violates medical regulations |

===== How Can We Prevent These Failures? =====

|  **Preventive Action**  |  **Goal**  |
|  Bias auditing  | Detect and address fairness issues in data and model outcomes (see the first sketch below) |
|  Explainability features  | Help users understand and contest AI decisions |
|  Ethical impact assessments  | Identify potential harms and affected groups before deployment |
|  Stakeholder inclusion  | Bring real-world users into design discussions |
|  Transparency reports  | Show how AI is trained, tested, and used |
|  Independent testing  | Test systems in realistic, high-variance environments |
|  Fail-safe design  | Build human override, logging, and rollback capabilities (see the second sketch below) |

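To make the bias auditing action concrete, here is a minimal sketch that compares favourable-outcome rates across groups (demographic parity). The function names, the 0/1 prediction encoding, and the example data are illustrative assumptions rather than part of any specific auditing toolkit; a real audit would also compare error rates per group and examine the training data itself.

<code python>
# Minimal bias-audit sketch: compare favourable-outcome rates across groups.
# Assumes binary predictions (1 = favourable) and one sensitive attribute;
# names and data below are illustrative, not taken from a specific library.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Favourable-outcome rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favourable, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: fav / total for g, (fav, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Illustrative hiring-tool decisions for two hypothetical groups.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, group))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, group))  # 0.5
</code>

A large gap does not by itself prove discrimination, but it flags where a deeper investigation of the data and the model is needed.
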
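The fail-safe design action can likewise be read as code. The sketch below is one assumed way to do it (the class name, confidence threshold, and stand-in models are illustrative): it logs every decision, defers low-confidence cases to a human reviewer, and keeps a known-good model version to roll back to.

<code python>
# Minimal fail-safe wrapper sketch: logging, human override, and rollback.
# Class name, threshold, and stand-in models are illustrative assumptions.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_decisions")

class FailSafeModel:
    def __init__(self, model, fallback_model, confidence_threshold=0.8):
        self.model = model                    # current model version
        self.fallback_model = fallback_model  # known-good version to revert to
        self.threshold = confidence_threshold
        self.review_queue = []                # cases deferred to a human reviewer

    def decide(self, case_id, features):
        label, confidence = self.model(features)
        log.info("case=%s label=%s confidence=%.2f", case_id, label, confidence)
        if confidence < self.threshold:
            # Human override: defer instead of acting on an uncertain decision.
            self.review_queue.append((case_id, features))
            return None
        return label

    def rollback(self):
        # Revert to the previous known-good model if failures are detected.
        log.warning("Rolling back to fallback model")
        self.model = self.fallback_model

# Illustrative use with stand-in models that return (label, confidence).
def current(features):
    return ("approve", 0.65)   # stand-in for the deployed model

def previous(features):
    return ("approve", 0.90)   # stand-in for the last known-good model

system = FailSafeModel(current, previous)
print(system.decide("applicant-42", {"years_experience": 4}))  # None -> human review
</code>

The point is structural: the system never acts silently on uncertain cases, and every decision leaves an audit trail that also supports the transparency reports listed above.
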
===== Takeaways =====

  * AI systems fail not only due to bugs, but also due to ethical blind spots.
  * What works in lab conditions can be harmful in society.
  * AI failures often reinforce existing social inequalities.
  * Most failures are predictable and therefore preventable.
  * Ethical foresight is part of responsible design, not idealism.