DCS-T-709-AIES-2025
Traditional ethics in technology is often reactive: Something goes wrong → Reflection → Fix
Ethics-by-Design and Participatory Ethics aim to change this: Ethics becomes part of the initial design and development process.
| Ethics-by-Design | |
|---|---|
| What is it? | A proactive design approach that seeks to embed ethical values into AI systems from the start, in terms of both architecture and use context. |
| Purpose | Ensure that AI reflects human values and legal/social norms, even in ambiguous or high-risk contexts. |
| Main features | Design with ethical principles in mind (fairness, accountability, etc.) |
| Participatory Ethics | |
|---|---|
| What is it? | A process that involves stakeholders and affected parties directly in identifying ethical goals and risks in system design. |
| Motivation | Developers can't predict all social impacts on their own. |
| | Ethics-by-Design | Participatory Ethics |
|---|---|---|
| Primary focus | Embedding values into the technical system | Involving people in defining what values matter |
| Who defines ethics? | Designers, developers, ethicists | Affected communities, stakeholders |
| Strength | Prevents issues early; compatible with engineering practice | Brings real-world perspectives and power critique |
| Limitations | Risk of blind spots; overly abstract values | Slower; can be co-opted or reduced to PR |
| Best used when… | Value conflict is low and technical trade-offs dominate | Systems affect vulnerable or diverse communities |
A large company develops an AI system to score job applicants based on CVs and prior hiring data. The system performs well initially, but public testing reveals racial and gender bias in its predictions.
| Scenario | What happens |
|---|---|
| Default | No audit of training data; no consultation with stakeholders; no explainability module. Bias goes undetected during development, and the system causes unfair outcomes upon release. |
| With Ethics-by-Design | Training data audited for representation; scoring adjusted to ensure fairness across sensitive attributes; human review required for borderline cases; explanation UI for rejected applicants implemented. |
| With Participatory Ethics | Job candidates, HR staff, and anti-discrimination experts consulted; values like transparency, respect, and accessibility identified as priorities; rejection messages redesigned based on candidate feedback; opt-out option for automated review added. |
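The "audited for representation" and "fairness across sensitive attributes" steps above can be sketched as a minimal selection-rate audit. This is only an illustration, not the company's method: the data, the group labels, and the 0.8 cutoff (the "four-fifths rule" commonly used as a rough adverse-impact heuristic) are all assumptions.

```python
from collections import defaultdict

def selection_rates(records):
    """Per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest group selection rate.
    Values below ~0.8 are a common red flag for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Toy data: (group, was_selected)
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(records)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # ≈ 0.33 → flags adverse impact
```

Such a check catches only statistical disparities in outcomes; the participatory steps in the table exist precisely because many harms (tone of rejection messages, lack of recourse) do not show up in rates at all.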
| Trade-off | Key question |
|---|---|
| Fairness vs. Accuracy | Should the system reduce overall performance to reduce disparities? |
| Transparency vs. Security | Can we make decisions explainable without leaking sensitive information? |
| Autonomy vs. Harm Reduction | Should users be allowed to override AI safety features? |
| Individual vs. Common Good | Should we collect sensitive data to benefit broader groups (e.g., pandemic prediction)? |
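The "Fairness vs. Accuracy" row can be made concrete with a toy sketch: equalizing selection rates across groups by shifting one group's decision threshold can cost overall accuracy. All scores, labels, and thresholds below are invented for illustration; real systems face the same tension with far messier data.

```python
# (group, model_score, truly_qualified) — invented toy data
data = [
    ("A", 0.9, 1), ("A", 0.8, 1), ("A", 0.6, 0), ("A", 0.4, 0),
    ("B", 0.7, 1), ("B", 0.5, 0), ("B", 0.4, 1), ("B", 0.2, 0),
]

def evaluate(thresholds):
    """Return (accuracy, selections per group) for per-group thresholds."""
    correct = 0
    selected = {"A": 0, "B": 0}
    for group, score, qualified in data:
        decision = 1 if score >= thresholds[group] else 0
        correct += decision == qualified
        selected[group] += decision
    return correct / len(data), selected

# One shared threshold: higher accuracy, uneven selection across groups.
acc, sel = evaluate({"A": 0.7, "B": 0.7})
print(acc, sel)    # 0.875 {'A': 2, 'B': 1}

# Group-specific thresholds: equal selection, but accuracy drops.
acc2, sel2 = evaluate({"A": 0.7, "B": 0.45})
print(acc2, sel2)  # 0.75 {'A': 2, 'B': 2}
```

Which point on that trade-off is acceptable is exactly the kind of value question the table poses; it cannot be settled by the code alone, which is why both design-time principles and participatory input matter.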