public:t-709-aies-2025:aies-2025:principle_ai_ethics

|  Responsibility  | It is still hard to determine who is responsible for the decisions made. |
|  Privacy  | Anonymizing training data (challenging for large datasets). |

==== Example ====

=== ChatGPT ===
  * Generative Pre-trained Transformer (GPT) is one of the largest LLMs.
  * GPT-4 was trained on about 45 TB of (unfiltered) data; GPT-5 reportedly on as much as 280 TB.
  * GPT-4 has roughly 1.7 trillion parameters; GPT-5 is estimated at up to 5-10 trillion (it could also be fewer than GPT-4, e.g. due to a possible multi-model architecture; the details are not public).
  * Context window of up to 400,000 tokens.

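The parameter counts above give a rough sense of scale: even storing the weights alone at 16-bit precision takes terabytes. A back-of-the-envelope sketch, using the unofficial estimates from the list above:

```python
# Back-of-the-envelope memory needed just to store model weights.
# The parameter counts are the (unofficial) estimates quoted above,
# not confirmed figures.

def weight_memory_tb(n_params: float, bytes_per_param: int = 2) -> float:
    """Terabytes (decimal) needed to hold the weights; 2 bytes/param for fp16."""
    return n_params * bytes_per_param / 1e12

print(f"GPT-4 (~1.7T params, fp16): {weight_memory_tb(1.7e12):.1f} TB")
print(f"GPT-5 (~5T params, fp16):   {weight_memory_tb(5e12):.1f} TB")
```

This counts only the weights; serving the model also needs activation and key-value-cache memory, so the real footprint is larger.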
**Ethical issues**:

  * Transparency: Very limited; it is unclear how some responses are generated.
  * Justice and fairness: Use of ChatGPT may violate copyrights, etc.
  * Safety: Generating harmful content or misinformation, or being misused for unethical purposes.
  * Responsibility: Who is responsible for the model's output?
  * Privacy: Massive amounts of training data, some of which can be extracted. See for example [[https://not-just-memorization.github.io/extracting-training-data-from-chatgpt.html| Extracting Training Data from ChatGPT]]
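The extraction work linked above verifies leaked training data by matching model output verbatim against known corpora. A minimal sketch of such a verbatim-overlap check; the whitespace tokenization and the 8-token threshold are illustrative assumptions, not the paper's exact setup:

```python
# Sketch of a memorization check in the spirit of the linked extraction work:
# flag a response as "likely memorized" if it shares a long verbatim token run
# with a reference corpus. Tokenization (whitespace split) and the n=8
# threshold are simplifying assumptions for illustration.

def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All contiguous n-token runs in the token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def likely_memorized(response: str, corpus: str, n: int = 8) -> bool:
    """True if any n-token run of the response appears verbatim in the corpus."""
    return bool(ngrams(response.split(), n) & ngrams(corpus.split(), n))

corpus = "the quick brown fox jumps over the lazy dog every single morning"
print(likely_memorized("fox jumps over the lazy dog every single morning too", corpus))
```

In the real attack the corpus side is web-scale training data, so the lookup is done with indexed suffix structures rather than in-memory sets; the matching idea is the same.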
Last modified: 2025/09/22 12:31 by leonard
