Current research demonstrates that technology can discriminate against minorities and marginalized groups due to their physical appearance. In order to create more inclusive technological products, the representation of various identities needs to be ensured during the design process. This study aims to explore an alternative and inclusive approach to quantifying the diversity of humankind in the context of technology. The challenge is that physical variations of humans are fluid and multifaceted rather than clearly defined and, therefore, complex to "measure".
To test the hypothesis that diversity can be quantified using interactive classification systems, an ideation workshop was conducted to generate ideas on how sensitive data can be entered. Additionally, semi-structured interviews were held to gain in-depth insights into the topics of inclusion, identity, and classification. The acquired knowledge was applied to design a user interface and interaction concept before developing an application that collects data on diversity.
The research shows that computational categories offer enough flexibility to convert information on diversity into data types. However, designing an inclusive classification system is a difficult challenge. Moreover, further steps need to be taken to ensure ethical data collection and analysis.
Over the last decade, facial analysis software has gained considerable popularity and is now implemented in various technologies to support everyday tasks. From unlocking smartphones to passport control checkpoints and surveillance technology, facial analysis algorithms are applied to detect, recognize, and identify faces all over the world.
Nevertheless, recent research demonstrates that facial analysis algorithms discriminate against people of color, transgender people, and other marginalized groups. These technologies carry racial bias and inaccuracies, with the consequence that people are misidentified or not detected at all. As explained earlier, technological systems are often created without considering the ethical concerns of underrepresented groups. In the case of commercial facial analysis software, algorithms are often developed using a homogeneous image pool that does not represent humanity's diversity. Therefore, most software never learns how to analyze people who look different.
This thesis builds on existing research concerning algorithmic discrimination and proposes a new concept for creating a dataset of diverse images of faces. The research focuses on the question of how diversity can be accurately quantified by classifying key physical traits of humans.
Historically, systems of classification have been used as tools of power to oppress, dominate, and exclude. Consider slavery in the United States, genocide in Nazi Germany, or Apartheid in South Africa. In this context, an alternative approach is explored to examine how classifying and counting can be used to strive for more equality and social justice.
The problem therefore arises that information on diversity, such as phenotypic appearance, has to be placed into categories in order to be counted. Moreover, to be processed by software, information needs to be converted into data, a computational category. So, how can humans with their intertwining, fluid characteristics be categorized and encoded into discrete data types? What alternative design is possible — without building on traditional classification systems which tend to reinforce stereotypes and exclude underrepresented groups? The purpose of the dataset is to ensure diversity within the context of facial analysis software. For that reason, the classification system focuses on key physical traits such as age, gender, and phenotype.
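To make this tension concrete, the following minimal sketch (written in TypeScript purely for illustration) shows one way such fluid, self-described traits could be encoded as a flexible data type: optional fields, free-text self-descriptions, and multi-value lists rather than a fixed set of mutually exclusive labels. All type and field names here are hypothetical assumptions, not the thesis' final classification system.

```typescript
// Hypothetical sketch: encoding self-described traits without forcing
// respondents into a fixed, mutually exclusive set of labels.
interface SelfClassification {
  // Age as an optional number or a self-chosen range, never inferred.
  age?: number | { from: number; to: number };
  // Gender as a self-written description plus optional additional terms.
  gender: {
    selfDescription: string;        // free text, chosen by the person
    additionalTerms?: string[];     // multiple labels may apply at once
  };
  // Phenotype split into separately describable traits, each optional.
  phenotype: {
    skinTone?: string;
    hairTexture?: string;
    otherTraits?: string[];
  };
  // Explicit consent and provenance travel with every record.
  consentGiven: boolean;
  createdAt: string;                // ISO 8601 timestamp
}

// Example record created entirely from self-reported input:
const example: SelfClassification = {
  age: { from: 25, to: 34 },
  gender: { selfDescription: "non-binary", additionalTerms: ["genderqueer"] },
  phenotype: { skinTone: "medium-deep", hairTexture: "coily" },
  consentGiven: true,
  createdAt: new Date().toISOString(),
};
```

The design choice in this sketch is that the only hard constraint is the shape of the record, not its values, which keeps the categories open-ended while remaining a well-defined computational data type.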
Following the literature review, traditional classification systems for the three categories are evaluated with a focus on Germany and the United States. An expert interview is conducted to acquire qualitative, in-depth knowledge concerning inclusion and classification architecture. Subsequently, an ideation workshop is held to develop interactive concepts for entering such sensitive information. To gain qualitative insights into the perspectives and experiences of individuals, interviews are conducted with people of multifaceted backgrounds.
The final interface is designed through multiple ideation loops in which the interviewees' feedback is incorporated into the concept. The technical solution is demonstrated by developing a smartphone application connected to a cloud-hosted database. The user takes a picture of their face and self-classifies their appearance before saving the image with its descriptors in the dataset.
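As a rough illustration of this flow, the sketch below shows how a captured image and its self-classified descriptors might be sent to a cloud-hosted backend as a single record. The endpoint URL, the payload shape, and the saveToDataset function are assumptions made for demonstration only; the actual application may rely on a different database client or API.

```typescript
// Hypothetical sketch of the upload step: the self-classified descriptors
// travel together with the captured image to a cloud-hosted endpoint.
async function saveToDataset(
  imageBlob: Blob,
  classification: SelfClassification, // type from the previous sketch
): Promise<void> {
  const form = new FormData();
  // The image and its self-reported descriptors are stored as one record,
  // so the photo is never kept without the context its owner provided.
  form.append("image", imageBlob, "face.jpg");
  form.append("classification", JSON.stringify(classification));

  // The endpoint below is a placeholder, not the thesis' actual backend.
  const response = await fetch("https://example-dataset-api.test/entries", {
    method: "POST",
    body: form,
  });

  if (!response.ok) {
    throw new Error(`Upload failed with status ${response.status}`);
  }
}
```

Bundling image and descriptors into one request reflects the self-classification principle: no face enters the dataset without the labels its owner chose.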
Current research indicates that technological products can have a discriminatory effect on minorities and oppressed groups. To develop more inclusive technology, the representation of diverse identities must be ensured during the design process. This study develops an alternative and inclusive approach to quantifying the diversity of humankind in relation to technological products. The difficulty lies in the fact that human appearance does not exhibit "measurable" boundaries but rather multifaceted, fluid transitions.
To support the hypothesis that diversity can be quantified by means of interactive classification systems, an ideation workshop was held in which concepts for entering sensitive data were developed. Furthermore, interviews were conducted to gather knowledge and experiences concerning the topics of inclusion, identity, and classification. The knowledge gained served to design a user interface and interaction concept. Subsequently, an app was developed with which data on diversity can be collected.
The thesis demonstrates that computer-based categories possess enough flexibility to convert information on diversity into data types. However, designing an inclusive classification system is a major challenge. Furthermore, ethical data collection and analysis must be ensured.