This is what happens when you train machine learning apps on biased data. If you trained a machine learning app exclusively on data from LRC, it would be heavily biased towards runners being bigoted asshats.
If she wanted a “professional headshot,” then maybe she should’ve gone to an actual human “professional photographer” instead of relying on artificial and unreliable technology.
The AI meant that absent eyelids and Asian skin tones are unprofessional.
What’s an Asian skin tone? Asians have the same skin tone.
I don’t really know, but I’ve heard it described as more translucent. Or maybe less. And they are slightly darker on average. Machine learning can classify ethnicity from photos with decent accuracy, so there’s a visually detectable difference.
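A minimal sketch of the kind of classifier that claim alludes to, assuming a hypothetical folder `faces/` with one sub-folder of labelled photos per group. A pretrained ResNet-18 is used only as a fixed feature extractor; cross-validated accuracy well above chance would mean the labels correspond to some visually detectable difference in this particular data, nothing more.

```python
# Hedged sketch: generic image classifier on labelled photos.
# The "faces/<label>/*.jpg" layout is purely hypothetical.
import numpy as np
import torch
from torchvision import datasets, transforms, models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("faces", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=False)

# Pretrained ResNet-18 with its classification head removed,
# used only to turn each photo into a 512-dim feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images).numpy())
        labels.append(targets.numpy())
X, y = np.concatenate(features), np.concatenate(labels)

# 5-fold cross-validated accuracy of a simple linear classifier on those features.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```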
It means the training data (the internet) thinks the picture on the right is more desirable. AI is just a complex view of the dataset. It doesn’t care what people think is fair or right, just what the data shows.
The developers can add guardrails to try to overcome the inherent bias in the pictures people post on the internet, or they can make an effort to hand-feed the model different training data.
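On the “different training data” option, one crude approach is to re-balance how often under-represented groups are sampled during training instead of letting the raw internet distribution dominate. A minimal sketch using PyTorch’s WeightedRandomSampler; the group labels and counts below are made-up placeholders, not anything Playground AI actually does.

```python
# Hedged sketch: oversample under-represented groups so a model
# sees all groups equally often during training.
from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

# Placeholder data: internet-scraped sets are often heavily skewed toward one group.
group_labels = torch.tensor([0] * 900 + [1] * 100)
images = torch.randn(1000, 3, 224, 224)   # stand-in for real image tensors
dataset = TensorDataset(images, group_labels)

# Weight each sample inversely to its group's frequency,
# so every group is drawn with equal probability.
counts = Counter(group_labels.tolist())
weights = torch.tensor([1.0 / counts[g] for g in group_labels.tolist()])
sampler = WeightedRandomSampler(weights, num_samples=len(dataset), replacement=True)

loader = DataLoader(dataset, batch_size=32, sampler=sampler)
# Any model trained from `loader` now sees group 0 and group 1 equally often,
# which is one blunt way of "hand-feeding it different training data".
```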
It would be interesting to find out what Playground AI actually did in response to her instruction. I think it went out and searched LinkedIn for a similar photo, just like TinEye or Google Images does when you upload a photo to see what the source might be.
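That’s only a guess about what Playground AI does internally, but reverse image search tools like TinEye do work roughly this way: compute a perceptual hash of the query photo and return the stored photos whose hashes are closest. A minimal sketch with the imagehash library; the “headshots” folder and “query.jpg” are hypothetical.

```python
# Hedged sketch of reverse image search via perceptual hashing.
from pathlib import Path
from PIL import Image
import imagehash

def build_index(folder: str) -> dict:
    """Hash every .jpg in a folder once, up front."""
    return {p: imagehash.phash(Image.open(p)) for p in Path(folder).glob("*.jpg")}

def most_similar(query_path: str, index: dict, top_k: int = 3):
    """Rank indexed images by Hamming distance to the query's hash (smaller = more similar)."""
    q = imagehash.phash(Image.open(query_path))
    return sorted(index.items(), key=lambda kv: q - kv[1])[:top_k]

index = build_index("headshots")           # hypothetical photo folder
print(most_similar("query.jpg", index))    # closest matches first
```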
Just goes to show the flaws in AI. Even AI is influenced by fake news. The fake news tells you that it is advantageous to be white, while every other metric (earnings, education, employment, etc.) tells you it is advantageous to be Asian.