Author: N/A
Publisher: Speevr
Publication Year: 2020
Summary: The following article discusses how data ethics concerns arose once artificial intelligence (AI) moved from theory into practice: these models now make decisions that directly affect millions of people. The problem is that the people who curate training data carry human biases, so models developed on that data send out human-biased decisions at scale. These machines are simply doing what they were programmed to do; they cannot self-reflect as we can. Therefore, the responsibility for keeping these models in check falls on those who program them. A more diverse team can hopefully catch and mitigate some of those biases, but that responsibility also falls on those leading the projects. Errors can and usually will occur; models are not perfect, and obtaining an ideal model can be nearly impossible for some problems. While much can be done on the modeling side to mitigate bias, simple augmentation-style work on the data itself can also minimize it. Ultimately, we can draw on many frameworks from human rights that apply to data ethics, so we do not have to reinvent the wheel. We must keep these ethics in mind throughout the modeling process to create ethical models and be honest data scientists.