Why it matters

Not all data harm is malicious, and not all well-intentioned products end up being good.
Discriminatory outcomes manifest in the design of AI services through the use of personal data. We provided examples of this in the form of a pyramid of algorithmic harm, ranging from ignorance to malicious intent. It is important to be aware of these subtler, enabling behaviors that allow for more overt forms of harm.
Explore the Project Vault
How we did it

Through participatory workshops, we gathered perspectives, case studies, and experiences directly from practitioners, which fed into the larger research project.
In our first workshop, on the 2nd of February, we set out to map harmful and discriminatory practices and outcomes throughout the AI pipeline, along with current work on mitigating them, together with a wide range of AI practitioners. You can find the pre-read links and a summary of workshop #1 outcomes here, and access the Miro board here.
In the second workshop, a week later on the 9th of February, we identified barriers and red flags regarding the role and influence of design(ers) in creating non-discriminatory AI, and the challenges in overcoming them. You can find the summary of workshop #2 learnings here, and access the Miro board here.
In the third and final workshop, hosted last week on the 16th of February, we imagined and co-created new visions for data citizenship, data rights by design, and embracing dissent as parts of an outcomes-based approach to the role of design in non-discriminatory AI. You can find the pre-read links and a summary of workshop #3 insights and prototypes here, and access the Miro board here.
Explore the Reading List behind the project
Output
What we made