




Principles of AI Ethics


Main Principles of AI Ethics

  • Transparency
  • Justice & Fairness
  • Safety
  • Responsibility
  • Privacy

Taken from: Jobin et al. “The global landscape of AI ethics guidelines.” Nature Machine Intelligence 1.9 (2019)

Principle added by UNESCO (taken from: UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” UNESDOC Digital Library (2022))

  • Proportionality

Transparency

What is it? Understandability of a system's decisions/operations/plans
Importance Building trust, enabling scrutiny of issues, and minimizing harm
Issues - Accountability issues of black-box systems, e.g., LLMs for medicine
- Safety issues: Without transparency, systems cannot be improved, making them less safe
- Others?
Implementation - Knowledge representation and reasoning using explicit causal networks, decision trees, etc.
- Explainability techniques (post-hoc and real-time)
- Documentation including, for example, (ANNs) open training (and test) data and model types, or (Symbolic AI) reasoning types
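The post-hoc explainability item above can be sketched with permutation importance: shuffle one feature column and measure how much a model's accuracy drops. The toy model, data, and function names below are illustrative assumptions, not part of the lecture material.

```python
# Minimal sketch of post-hoc explainability via permutation importance.
# `predict` is any per-row prediction callable; the data are toy values.
import random

def permutation_importance(predict, X, y, feature_idx, metric, n_repeats=10, seed=0):
    """Average drop in the metric when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = metric(y, [predict(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
        drops.append(baseline - metric(y, [predict(r) for r in X_perm]))
    return sum(drops) / len(drops)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy model: copies feature 0 as its prediction and ignores feature 1.
predict = lambda row: row[0]
X = [[0, 1], [1, 0], [0, 0], [1, 1]] * 5
y = [row[0] for row in X]

print(permutation_importance(predict, X, y, 0, accuracy))  # large drop: feature 0 matters
print(permutation_importance(predict, X, y, 1, accuracy))  # zero drop: feature 1 is irrelevant
```

A large drop marks a feature the model relies on; this kind of explanation is computed after training, without access to the model's internals.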

Justice and Fairness

What is it? The promotion of equality by avoiding bias and discrimination. This includes:
- Equality in use/access (e.g., open-source)
- Equality in training/opportunity: No one must be excluded from AI training
- Equality in AI-based judgement: Decisions must not be based on protected attributes such as gender and race
Importance Technology must be accessible and useful to everyone.
Issues - Conflict of interest between technology providers and users in the competitive AI market.
- Biases in the dataset and data-sensitive algorithms that use these datasets, leading to unequal impacts.
Implementation - Following guidelines for data collection
- Reducing the reliance on data-dependent methods
- Bias detection and correction algorithms.
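The bias-detection item above can be made concrete with a demographic parity check: compare positive-outcome rates across groups. The toy loan data and the idea of flagging a large gap are illustrative assumptions.

```python
# Minimal bias-detection sketch: demographic parity gap between two groups.

def positive_rate(decisions, groups, group):
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_gap(decisions, groups, group_a, group_b):
    return abs(positive_rate(decisions, groups, group_a)
               - positive_rate(decisions, groups, group_b))

# Toy loan decisions (1 = approved) for applicants from groups "A" and "B".
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups, "A", "B")
print(f"parity gap: {gap:.2f}")  # 0.80 vs 0.20 approval rate: flag for review
```

Demographic parity is only one of several fairness metrics (others include equalized odds), and which one is appropriate depends on the application.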

Safety

What is it? Making sure systems do not cause harm to individuals (or society).
Importance “Do no harm” is more important than “Doing good”
Issues Privacy violations, physical harm, or psychological damage, for example:
- Misuse of private information by companies (e.g., facial recognition)
- Physical harm due to malfunctioning or improper use (e.g., accidents in autonomous cars)
- Stress and social anxiety due to interaction with AI systems/AI companion chatbots
Implementation - Using testing and monitoring techniques
- Anomaly detection
- Safety guidelines for the creation and use of AI systems for specific domains
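The monitoring and anomaly-detection items above can be sketched with a simple z-score check over sensor readings; the threshold and the toy data are illustrative assumptions.

```python
# Sketch of runtime monitoring: flag readings whose z-score exceeds a
# threshold. The threshold (2.0) and data are illustrative, not a standard.
import statistics

def anomalies(readings, threshold=2.0):
    mean = statistics.mean(readings)
    std = statistics.stdev(readings)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / std > threshold]

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 35.0, 20.2]  # one faulty spike
print(anomalies(readings))  # flags index 5 (the spike)
```

In a deployed system such a check would trigger a safe fallback (e.g., handing control back to a human) rather than just logging the event.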

Responsibility

What is it? Accountability of actions and decisions
Importance The need for ethical conduct by AI developers and users
Issues It is usually not clear who is responsible for the mistakes of AI systems: the AI, the designer, or the user?
Implementation - Ethical training regarding the use and development of AI
- Promoting a culture of integrity within AI development teams
- Proper documentation for the system

Privacy

What is it? Protection of personal data
Importance Protecting the rights of the individual
Issues It is challenging to find the balance between privacy and the need for large datasets for data-driven AI systems development
Implementation Technical methods, such as:
- Data minimization techniques
- Privacy-by-design approaches
As well as:
- Data protection regulations
- Increased public awareness of privacy rights
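The data-minimization item above can be sketched as a per-purpose allow-list of fields plus pseudonymization of the identifier. The field names, purpose map, and hashing scheme are illustrative assumptions.

```python
# Data-minimization sketch: keep only the fields a stated purpose needs
# and replace the raw ID with a stable pseudonym.
import hashlib

PURPOSE_FIELDS = {"billing": {"user_id", "amount"}}  # allow-list per purpose

def minimize(record, purpose):
    allowed = PURPOSE_FIELDS[purpose]
    out = {k: v for k, v in record.items() if k in allowed}
    if "user_id" in out:
        out["user_id"] = hashlib.sha256(str(out["user_id"]).encode()).hexdigest()[:12]
    return out

record = {"user_id": 42, "name": "Alice", "email": "a@example.com", "amount": 9.99}
print(minimize(record, "billing"))  # name and email are never stored
```

Note that hashing an ID is pseudonymization, not anonymization: the record can still be linked across systems, so it remains personal data under regulations such as the GDPR.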

Proportionality

What is it? - AI systems must not be used beyond what is necessary to achieve a legitimate aim.
- Developers and deployers should assess and prevent harms, requiring that any use of AI be appropriately scaled and carefully considered relative to its purpose
Importance - Helps ensure AI does more good than harm by insisting that risk is weighed and that harm prevention is central.
- Protects individuals, society and rights (e.g., fairness, privacy).
- Helps maintain public trust, avoids overreach (e.g., surveillance, misuse), and ensures ethical limits to what AI can do.
Issues - What counts as “necessary” or “legitimate aim” is often contested.
- Harms can be indirect, delayed, or unseen, making assessment difficult.
- Risk of mission creep: What starts as a minor system may expand beyond its original scope.
- Proportionality sometimes conflicts with other principles, such as fairness or transparency.
- Asymmetric power: Actors with more power may define what is “necessary” in ways that favor themselves.
Implementation - Use risk assessment and harm impact assessment before, during, and after deployment.
- Limit scope: Ensure capabilities are aligned with what is needed.
- Define and document legitimate aims explicitly.
- Ensure there is the ability to stop or scale back systems.
- Build in legal or regulatory constraints to enforce limits.
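The risk-assessment item above can be sketched as a coarse likelihood-times-severity score gated against the stated benefit. The 1-5 scales, the cutoff, and both example scenarios are illustrative assumptions, not UNESCO's methodology.

```python
# Sketch of a proportionality gate: risk must be small relative to benefit.
# Scales (1-5) and the cutoff are illustrative.

def risk_score(likelihood, severity):
    """Both on a 1-5 scale; returns 1-25."""
    return likelihood * severity

def proportionate(likelihood, severity, benefit, max_risk_per_benefit=2.0):
    """Approve deployment only if risk is small relative to the stated benefit."""
    return risk_score(likelihood, severity) / benefit <= max_risk_per_benefit

# Broad surveillance rollout: high severity, modest benefit -> rejected.
print(proportionate(likelihood=3, severity=5, benefit=4))   # False
# Narrow spam filter: low severity, clear benefit -> accepted.
print(proportionate(likelihood=2, severity=1, benefit=3))   # True
```

Such a score only forces the trade-off to be stated explicitly; the contested part, as the Issues row notes, is who gets to assign the numbers.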
/var/www/cadia.ru.is/wiki/data/attic/public/t-709-aies-2025/aies-2025/principle_ai_ethics.1758186664.txt.gz · Last modified: 2025/09/18 09:11 by leonard
