I am in the process of finalizing my thesis project and will update this page on April 16th. However, my last project from Thesis Studio I is documented below.


Studio II: Final Deliverables ~ Spring 2019

Abstract

If machines are run on biased data, what does that mean for the future of humanity, especially for those who are targeted?

SAL demonstrates the dangers of biased data in technology. It watches and examines the user throughout the experience and judges them based on corrupt datasets. The goal of this project is to highlight and stress the impact of human bias in datasets and what it could mean for the future of technology.

Studio I: Final Deliverables ~ Fall 2018

Abstract

If machines are run on biased data, what does that mean for the future of humanity, especially for those who are targeted?

SAL demonstrates the dangers of biased data in technology. It watches and examines the user throughout the experience and judges them based on corrupt datasets. The goal of this project is to highlight and stress the impact of human bias in datasets and what it could mean for the future of technology.

Thoughts & Goals

The questions fueling me throughout this process are a mixture of personal and ethical thoughts and ideas. As a Black creative who is interested in machine learning, I cannot help but think about the dangers of misidentification or biased data in machines. Machine learning and artificial intelligence have been automating our lives and making things easier. What happens when devices are unintentionally fed biased data?

In the end, I believe that through this experience users will leave with a feeling of understanding, empathy, or contemplation. To show this, I am building a TSA-style system that demonstrates the problem and lets the user experience it firsthand. Depending on who is taking it, their results will vary. By placing the power in the hands of the system, I'm replicating the power that TSA agents have over travelers.

I am making this to highlight the importance of accurate representation in data and technology. I experience acts of micro-aggression and racism all the time, and like many other minorities I've noticed that the treatment changes based on where I am and what I'm wearing. I constantly think about the future and what it will be like, both the good and the bad. If machines are programmed by people with certain biases or feelings towards others, what does that future feel like? As a creative, I can't help but think about the implications of the things I'm creating and who I'm creating them for.

The goal of this project is to let people see a side of technology that they don't usually see. Unless you are interested in it, you wouldn't really know how your data is being used on you or how you are being classified. I want people to be aware of this and of how dangerous it can be if used incorrectly against a specific group of people. We've already seen this sort of classification and treatment rolled out in various nations, like the US (the Muslim ban) and China (social credit ranking).

Research

I am studying the possible dangers of biased data in machines. I chose this because, as someone who experiences micro-aggressions and racism, I worry about what will happen when people teach machines this behavior. They may not do it intentionally, and that's even scarier. I spent the last three months exploring what the future could feel like for someone like me.

The domains of research and interest for this project varied a lot over the last few months. As my ideas and concepts shifted, some domains came and went, but two always remained: human-computer interaction and biased data. Aside from the personal reasons, I'm interested in this area because I'm a developer, and it is something that I must hold myself accountable for as well. Speaking to people, visiting museums, and researching existing work have been my greatest sources of information and knowledge. The trip to the Whitney Museum in particular has proven to be the most helpful.

This research is highlighted in the Community of Practice Report, which serves as my main research document.

Audience & Experience

This project isn't for a specific group of people, although some may relate to it more than others. The stakeholders are the general public who interact with technology on a daily basis, whereas the community of practice in this case is geared more towards developers and creators. While testing with my audience, I've noticed that I've only gotten a majority white or Asian group of people to try it out; I need to get a much more diverse set of people involved. The core experience of my project is realizing what's going on as you're going through it, and realizing that you can't stop it.

Prototype

Prototype 1: Goddard OS

Initially my form existed only as a piece of software, and that was how I wanted it to be. I was going to create an OS that could be operated through gestures and voice. This is where I first began flirting with machine learning and biased data. I was exploring what a computer would be like if it behaved like its user.



I was given feedback about how it could be more aggressive and really take on the negative side of people and their behaviors. I liked those ideas and tied that thread to the idea of biased data in machines.

Prototype 2: Super American License

About midway through the semester I pivoted and went in another direction. With the concentration now on biased data, I wanted to build something that could show it. SAL (Super American License) does this by putting the user through an application process and running their profile through biased datasets as they go. The end result shows whether the user passed or failed the application given the biased dataset, and why; it gets printed out on a piece of paper. I initially wanted to go with a booth as the form, to replicate the voting process that is so pivotal in America. (See photos by Santi in the Community of Practice Report.)



To make sure the user experiences some bias, I added a single if/else statement to the code that determines the outcome from the beginning. When the user takes their picture, SAL analyzes them and determines their race; if they're identified as white, they automatically pass. Also, during the Pledge of Allegiance, SAL takes a photo of the user, and if they have their hand over their chest they gain extra points. This was just for the sake of the prototype; in the future iteration the final outcome will be based on various other factors as well.
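Since the prototype's bias comes down to a couple of hard-coded branches, a minimal sketch of that logic might look like the following. This is an illustration only: score_applicant, classify_race, and hand_over_chest are hypothetical placeholders standing in for the image-analysis calls in the actual SAL build.

# A hypothetical sketch of the hard-coded bias described above.
# classify_race() and hand_over_chest() are placeholders for whatever
# image-analysis calls the real prototype makes.

def classify_race(photo):
    # Placeholder: the real build would run an image classifier here.
    return photo.get("race_label", "unknown")

def hand_over_chest(pledge_photo):
    # Placeholder: the real build would detect the hand position here.
    return pledge_photo.get("hand_over_chest", False)

def score_applicant(photo, pledge_photo):
    """Return (passed, reason) using the deliberately biased rules."""
    # The single biased branch: anyone classified as white passes outright.
    if classify_race(photo) == "white":
        return True, "Application approved"

    # Everyone else is scored, with bonus points for the Pledge pose.
    score = 1
    if hand_over_chest(pledge_photo):
        score += 2

    if score >= 3:
        return True, "Application approved"
    return False, "Application denied"

# Example: two applicants going through the same process.
print(score_applicant({"race_label": "white"}, {"hand_over_chest": False}))
print(score_applicant({"race_label": "black"}, {"hand_over_chest": True}))

In other words, the outcome is decided before the rest of the application even matters, which is exactly the point the prototype is trying to make.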

User Testing

During the Thesis Popup show I was able to get a lot of feedback from various testers. The main problem I encountered was that the instructions weren't clear enough, which often caused the message to get lost along the way. I was also told frequently to push it further and make the bias more subtle rather than so in-your-face. The directness took away from the purpose, and the national anthem along with the Pledge made it seem like too much of a joke to some users. I have to find the right balance of satire to add to the project.





By making the process extremely biased I was able to have users experience what it feels like to be ruled out by a machine, but the approach didn't let them feel it the right way. The directness of the bias made it less impactful. Moving forward I plan on implementing this in a far more subtle and realistic manner.