00.Q+.530 Trustworthy AI: Fairness, Interpretability and Privacy

Course details

Instructor: Mattia Cerrato

Course type: Lecture/Exercise

Timetable display name: Trustworthy AI

Weekly contact hours (SWS): 4

Credits: 6.0

Language of instruction: English

Min. | max. number of participants: - | -

Prerequisites / organizational notes:
Two-part course: lecture (Tuesdays) and tutorial/discussion (Thursdays). Attending both is recommended but not mandatory.

Admission requirements


  • The instructor is only able to teach in English.
  • To fully benefit from the class, some background in probability and statistics is advisable. However, programming skills are not strictly necessary.

Requirements:

  • Final essay on the intersection between Trustworthy AI and the student’s background (e.g. law, philosophy, social sciences…) (6-8 pages)
  • Participation in the weekly discussions, or, alternatively, completion of short essays (3-5 pages)

Content:
A full description is available at https://pibborn.github.io/trust-class/

While this course is designed for computer science students, the concepts of fairness, interpretability, and privacy are highly relevant to other disciplines as well, whether law, social sciences, philosophy, or others. We therefore invite interested students from all disciplines to take part and share their perspectives.

Machine Learning models, especially those based on neural networks, are now part of our everyday life, deployed in smartphones and embedded systems alike. Computer Vision algorithms recognize our pets in the photos we take; Natural Language Processing models are able to fix our grammar and even generate human-passing text.

While the new wave of so-called "deep learning" systems displays impressive performance on a wide range of tasks, these models are hard to understand in equally many ways. To cite one issue among others, GPT-3 employs 175 billion learnable parameters. When something goes wrong, it is almost impossible to understand "why" such a model has made a particular decision.

The technical optimism and excitement around Machine Learning have also pushed businesses to apply it in situations where it may impact people’s well-being directly, such as loan applications, candidate selection for job offers, and assessing the risk of re-offending for people who have committed crimes. Computer vision applications based on neural networks have even been employed to judge beauty contests.

In such contexts, opaque models are particularly problematic, as there is a concrete risk of discrimination against certain groups of people. Protected characteristics such as gender and ethnicity might be used in the computation of the final decision, which is both morally problematic and unlawful. The existence of the gender pay gap, for example, shows that there are complex correlations between law-protected attributes (gender) and other, non-sensitive attributes such as a person’s yearly salary.
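The proxy problem above can be made concrete with a small simulation. This is a minimal sketch on synthetic data: the group labels, salary figures, and decision threshold are all made up for illustration.

```python
import random

random.seed(0)

# Hypothetical synthetic data: "salary" acts as a proxy for the
# law-protected attribute "gender" because the data contains a pay gap.
n = 10_000
data = []
for _ in range(n):
    gender = random.choice([0, 1])             # two groups, coded 0 and 1
    base = 50_000 if gender == 0 else 42_000   # assumed pay gap in the data
    salary = base + random.gauss(0, 5_000)
    data.append((gender, salary))

# Even with "gender" removed from the features, a model that sees
# "salary" can partially reconstruct it: a single threshold rule
# already predicts the group membership well above chance.
threshold = 46_000
correct = sum((salary < threshold) == (gender == 1) for gender, salary in data)
print(f"proxy accuracy: {correct / n:.2f}")  # well above the 0.50 chance level
```

The point is not the specific numbers but the mechanism: removing the sensitive column does not remove the sensitive information when correlated attributes remain.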

If models are learning from biased data, it follows that they will learn to output biased decisions; if we are unable to explain those decisions, we are left with very little human control over what ultimately is a software process. On top of being philosophically troubling and unethical, these practices may also be unlawful under recent legislation. Taking the General Data Protection Regulation in the European Union as an example, transparency and fairness have to be guaranteed to individuals who are subject to automatic decision-making software systems.

During this class, you will learn about the current discussion in the AI and ML literature on how to control AI and ML algorithms so that they are trustworthy. We will focus on three characteristics in particular:


  • Fairness: how can we learn non-discriminating models from biased data?
  • Interpretability: how can we make sense of a model’s decisions?
  • Privacy: how do we make sure that a model trained with our personal data does not leak our data to third parties?
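As a taste of the fairness topic above, one widely used formal criterion is demographic parity: a classifier's positive-decision rate should be (approximately) equal across groups. A minimal sketch, with purely illustrative toy data:

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: 0/1 classifier outputs; groups: 0/1 group membership.
    """
    rate = {}
    for g in (0, 1):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    return abs(rate[0] - rate[1])

# Toy data: group 0 is approved 3 times out of 4, group 1 once out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

A gap of 0 would mean both groups are approved at the same rate; the class discusses when this criterion is appropriate and what it trades off against.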

Additional information:
Mattia Cerrato: I work as a postdoctoral researcher in the Data Mining group at Johannes Gutenberg-Universität Mainz, under the guidance of Prof. Stefan Kramer. I am a Principal Investigator for the TOP-ML project, which deals with understanding the trade-offs between properties of machine learning algorithms beyond predictive performance. Before that, I was a PhD student in the Machine Learning Group at the University of Torino, my hometown. Prof. Roberto Esposito was my advisor for three years.
My research interests are centered around deep neural networks - I mostly research algorithms to constrain and understand them better. I have published papers on algorithmic fairness, interpretability, and the connections between them. I care deeply about making sure that AI applications can be beneficial to everyone - or at least not harm anybody.

Sessions
Date From To Room Instructor
1 Tue, 24 Oct 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
2 Thu, 26 Oct 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
3 Tue, 31 Oct 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
4 Thu, 2 Nov 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
5 Tue, 7 Nov 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
6 Thu, 9 Nov 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
7 Tue, 14 Nov 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
8 Thu, 16 Nov 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
9 Tue, 21 Nov 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
10 Thu, 23 Nov 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
11 Tue, 28 Nov 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
12 Thu, 30 Nov 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
13 Tue, 5 Dec 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
14 Thu, 7 Dec 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
15 Tue, 12 Dec 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
16 Thu, 14 Dec 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
17 Tue, 19 Dec 2023 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
18 Thu, 21 Dec 2023 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
19 Tue, 2 Jan 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
20 Thu, 4 Jan 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
21 Tue, 9 Jan 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
22 Thu, 11 Jan 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
23 Tue, 16 Jan 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
24 Thu, 18 Jan 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
25 Tue, 23 Jan 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
26 Thu, 25 Jan 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
27 Tue, 30 Jan 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
28 Thu, 1 Feb 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
29 Tue, 6 Feb 2024 10:15 11:45 Lecture - FB08 - room to be announced Mattia Cerrato
30 Thu, 8 Feb 2024 14:15 15:45 Tutorial/Discussion - FB08 - room to be announced Mattia Cerrato
Course-specific examinations
Description Date Instructor Mandatory
1. Essay no date booking Yes