Does AI-driven cloud computing need ethics guidelines?


Just ask any marketing person: it's their job to keep demand for a product or service high, so they rely on advertising and other methods to build brand recognition and a sense of demand for what they sell.

These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly, sometimes without disclosing that they are a paid lackey.

We’re getting better at influencing humans, whether through traditional advertising methods, such as keyword advertising, or, even scarier, by leveraging AI technology to change hearts and minds. Often “the targets” don’t even realize that their hearts and minds are being changed.

Researchers have discovered a challenge presented by the AI-powered text generator GPT-2, released by OpenAI in 2019. The AI research lab’s language model excited the tech community with its ability to generate convincingly coherent text from nearly any prompt.

Shortly after GPT-2’s release, observers warned that the powerful natural language processing model wasn’t as innocuous as it first appeared. Many pointed out an array of risks the tool could pose, especially in the hands of those who might weaponize it for less-than-ethical ends. The core concern was that text generated by GPT-2 could persuade people to break ethical norms established over a lifetime of experience.

This is not Manchurian Candidate stuff, where you’ll be able to activate a zombie-like killer; it’s really more about gray-area decisions. Consider, for example, a person who would likely not stretch the rules for personal gain, such as by stealing a customer from another salesperson. Could that moral person be swayed by an AI system trained to influence human behavior?

Copyright © 2021 IDG Communications, Inc.
