[[/public:t-709-aies:AIES-25:main|DCS-T-709-AIES-2025 Main]] \\ [[/public:t-709-aies:AIES-25:lecture_notes|Link to Lecture Notes]] \\
\\
====== Ethics-by-Design and Participatory Ethics ======

===== From Retrospective to Prospective Ethics =====

Traditional ethics in technology is often reactive:

Something goes wrong → Reflection → Fix

Ethics-by-Design and Participatory Ethics aim to change this: ethics becomes part of the initial design and development process.

=== Why this shift is necessary ===

  * AI systems can scale quickly and affect many people before harm becomes visible.
  * Bias and unfairness are often baked in before deployment.
  * Retrofitting ethics after deployment is often too late (and too expensive).
  * Ethics-by-Design helps prevent problems rather than fixing them afterward.

===== Ethics-by-Design =====

| What is it? | A proactive design approach that seeks to embed ethical values into AI systems from the start, in terms of both architecture and use context. |
| Purpose | Ensure that AI reflects human values and legal/social norms, even in ambiguous or high-risk contexts. |
| Main Features | Design with ethical principles in mind (fairness, accountability, etc.) |

  * Consider intended and unintended consequences.
  * Translate ethical values into technical system constraints.
  * Use ethics as a requirement, not an add-on.

Implementation examples:
  * Limit data collection to what is necessary (privacy-by-design).
  * Create interfaces that support informed decision-making.
  * Allow users to challenge or explain outcomes (e.g., explainability modules).

===== Participatory Ethics =====

  * Tries to answer the question: "Who gets to decide what 'ethical' means?"
  * Participatory ethics says: not just developers or companies, but users, communities, and, generally, those affected.

| What is it? | A process that involves stakeholders and affected parties directly in identifying ethical goals and risks in system design. |
| Motivation | Developers can't predict all social impacts. |

  * Some users are more vulnerable than others.
  * Inclusion improves both ethics and usability.
  * Participation promotes the democratic legitimacy of AI deployment.

Participation methods:
  * Co-design sessions
  * Public consultation forums
  * Focus groups or structured interviews

Challenges:
  * Risk of tokenism (including people but not listening to them)
  * Conflicting values: fairness for one group may mean exclusion for another
  * Time- and resource-intensive
  * Not all voices are equally loud or equally heard

===== Comparison =====

| | **Ethics-by-Design** | **Participatory Ethics** |
| Primary Focus | Embedding values into the technical system | Involving people in defining what values matter |
| Who defines ethics? | Designers, developers, ethicists | Affected communities, stakeholders |
| Strength | Prevents issues early; compatible with engineering practice | Brings real-world perspectives and power critique |
| Limitations | Risk of blind spots, overly abstract values | Slower; can be co-opted or reduced to PR |
| Best used when... | Value conflict is low and technical trade-offs dominate | Systems affect vulnerable or diverse communities |

===== Example: Predictive Hiring Tool =====

| A large company develops an AI system to score job applicants based on CVs and prior hiring data.\\ The system performs well initially, but after public testing it reveals racial and gender bias in its predictions. |
| Default | **No audit of training data, no stakeholder consultation**\\ - No explainability module,\\ - bias goes undetected during development,\\ - and the system causes unfair outcomes upon release. |
| With Ethics-by-Design | **Training data audited for representation**\\ - Scoring adjusted to ensure fairness across sensitive attributes,\\ - human review required for borderline cases,\\ - and an explanation UI for rejected applicants implemented. |
| With Participatory Ethics | **Job candidates, HR staff, and anti-discrimination experts consulted**\\ - Values like transparency, respect, and accessibility identified as priorities,\\ - rejection messages redesigned based on candidate feedback,\\ - and an opt-out option for automated review added. |

===== Typical Design Conflicts and Trade-offs =====

| Fairness vs. Accuracy | Should the system reduce overall performance to reduce disparities? |
| Transparency vs. Security | Can we make decisions explainable without leaking sensitive information? |
| Autonomy vs. Harm Reduction | Should users be allowed to override AI safety features? |
| Individual vs. Common Good | Should we collect sensitive data to benefit broader groups (e.g., pandemic prediction)? |
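The fairness vs. accuracy trade-off, and the "audit the scoring for fairness" step in the hiring example, can be made concrete with a small audit sketch. The following is a minimal, hypothetical Python example (the applicant scores, group labels, and the 0.5 selection threshold are all invented for illustration) that computes the //demographic parity gap//: the largest difference in selection rates between any two groups of applicants.

```python
# Minimal fairness-audit sketch for a hiring-score model.
# All data and thresholds below are hypothetical illustrations.

def selection_rates(scores, groups, threshold=0.5):
    """Per-group selection rate: the fraction of each group's
    candidates scored at or above the threshold."""
    rates = {}
    for g in set(groups):
        members = [s for s, grp in zip(scores, groups) if grp == g]
        selected = [s for s in members if s >= threshold]
        rates[g] = len(selected) / len(members)
    return rates

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Largest difference in selection rates between any two groups.
    One common fairness metric among several; values near 0 are better."""
    rates = selection_rates(scores, groups, threshold)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: model scores for eight applicants and their group labels.
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]

gap = demographic_parity_gap(scores, groups, threshold=0.5)
print(f"Demographic parity gap: {gap:.2f}")  # prints "Demographic parity gap: 0.25"
```

Demographic parity is only one possible operationalization of fairness; shrinking this gap (e.g., by adjusting scores or thresholds per group) typically costs some overall predictive accuracy, which is exactly the Fairness vs. Accuracy tension in the trade-off table above.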