The trouble with emotion-reading AI



Some companies have voluntarily rejected emotion AI. Microsoft, for example, announced in June 2022 that it would retire the Azure Face API’s emotion-recognition capabilities (along with inference of gender, age, smile, facial hair, hair, and makeup) as part of an overhaul of its Responsible AI Standard. 

The company’s Chief Responsible AI Officer, Natasha Crampton, explained the change by citing “the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.” Microsoft also worried that such technology “can subject people to stereotyping, discrimination, or unfair denial of services.”

So while emotion AI has real and helpful uses in some cases, the science behind it is weak, the results are often misleading, employees generally dislike it and find it stressful, bias is likely built in, and privacy violations are likely. It may not even be legal in every country, or in every US state.


