141: Can brands control generative AI?

Bo Sacks sent me an email about an article in The Guardian regarding the death of a 21-year-old water polo coach. Microsoft created an AI-generated poll that appeared alongside the story. The poll asked readers, "What do you think is the reason behind the woman's death?" and offered three choices: murder, accident, or suicide.

People got pretty upset about that.

It reminds me of the opening of the movie "The Best Exotic Marigold Hotel," in which a woman takes a marketing call, mentions that her husband has just died, and the call-center rep carries on with the script as if nothing had happened.

In other words, the problem of inappropriate or distasteful responses is not limited to AI. Any system that mimics human response without human sensibilities can create the same issues. You could get a similarly inappropriate situation with an ad placement.

My friend Lev Kaye says you almost always want a human in the loop, and I agree. But when every visitor to your website could be getting a different ad or a different survey, how do you monitor it all? How do you put the human in the loop?
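
Here's one partial answer, sketched in Python. The idea: a human can't pre-approve every personalized item, but a human can audit a random sample and gate anything near a sensitive topic. The term list, sample rate, and function names below are my own assumptions, not anybody's production system.

```python
import random

# Hypothetical sketch: hold risky items for human review,
# spot-check a random sample of everything else after the fact.
SENSITIVE_TERMS = {"death", "died", "suicide", "murder", "assault"}
AUDIT_SAMPLE_RATE = 0.05  # humans spot-check 5% of auto-published items

def route_generated_item(text: str, review_queue: list) -> bool:
    """Return True if the item can go live immediately."""
    words = {w.strip(".,?!\"'").lower() for w in text.split()}
    if words & SENSITIVE_TERMS:
        # Anything near a sensitive topic waits for a human.
        review_queue.append(("hold", text))
        return False
    if random.random() < AUDIT_SAMPLE_RATE:
        # Publish now, but flag a copy for after-the-fact review.
        review_queue.append(("audit", text))
    return True

queue = []
print(route_generated_item("What caused the woman's death?", queue))  # False
print(queue)  # [('hold', "What caused the woman's death?")]
```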

We hear a lot about the potential for "disinformation" from AI, but "distasteful" might be the bigger threat to many brands.

It's easy to say that you'll review everything from generative AI before it goes live, but is that really possible? I don't think any human could have reviewed that Microsoft poll before it appeared. The system probably worked something like this: when a story is about X, look up other stories about X, see what people talk about in the comments, and make a poll out of that.
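
Purely as speculation, that logic might look like the toy sketch below. Nothing here is Microsoft's actual code -- the comment archive and keyword counting are stand-ins -- but it shows how a pipeline like this can be locally reasonable and globally tone-deaf.

```python
from collections import Counter

def generate_poll(topic: str, comment_archive: dict) -> dict:
    """Toy pipeline: build a poll from comments on similar past stories."""
    # 1. Look up other stories about the same topic.
    comments = comment_archive.get(topic, [])
    # 2. See what people talk about in the comments.
    counts = Counter(w.strip(".,?!").lower()
                     for c in comments for w in c.split())
    options = [w for w, _ in counts.most_common() if len(w) >= 6][:3]
    # 3. Make a poll out of that. Note what's missing: any sense that
    #    some topics should never become a poll at all.
    return {"question": f"What do you think is behind this {topic}?",
            "options": options}

archive = {"death": ["Was it murder?", "Maybe an accident.",
                     "Could have been suicide."]}
print(generate_poll("death", archive))
# {'question': 'What do you think is behind this death?',
#  'options': ['murder', 'accident', 'suicide']}
```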

What do you do? Here are some ideas.

First, the obvious: review the things that can be reviewed, like images or static text.

Second, when something isn't static, hold a "worst-case scenario" brainstorming session. It might even be a fun exercise for your employees. Then ask whether dynamically generated content is worth the potential risk.
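
One way to make that brainstorm concrete: turn the worst cases into a hard "no dynamic content" list, so high-risk stories get no generated polls or ads at all. A minimal sketch, with category names that are illustrative assumptions rather than any standard taxonomy:

```python
# Hypothetical kill switch: story categories where generated widgets
# are never worth the risk, drawn from the worst-case brainstorm.
NO_DYNAMIC_CONTENT = {"death", "crime", "disaster", "abuse", "serious-illness"}

def allow_generated_widgets(story_categories: set) -> bool:
    """Suppress AI-generated polls and ads entirely on high-risk stories."""
    return not (story_categories & NO_DYNAMIC_CONTENT)

print(allow_generated_widgets({"sports"}))          # True
print(allow_generated_widgets({"death", "crime"}))  # False -- no poll at all
```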

AI is going to start taking over things like customer service call centers, and that promises to save a lot of money. But is it worth the risk of a horribly inappropriate comment?

Consider this. Maybe we shouldn't try to make these AI replicants sound like humans. If ChatGPT makes a silly mistake that a human wouldn't make, you don't get mad -- you think it's funny. But that's because you know it's ChatGPT and not a person.

If you disguise AI -- and try to make it sound like a human -- you might be creating more problems than you're solving. You might be better off owning up to the fact that a computer is answering the phone and making your automated customer service system sound like a computer.

Let's take that idea back to the poll.

What if the website had said the poll was generated by AI? Would people have been as offended by its distasteful question? Maybe not. Maybe that's the safer path.
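
Mechanically, disclosure is cheap. Here's a minimal sketch, assuming a site that renders HTML widgets; the class names and wording are mine, not any publisher's:

```python
def with_ai_disclosure(widget_html: str) -> str:
    """Wrap generated content in an explicit AI-authorship label."""
    return (
        '<div class="ai-generated">'
        '<p class="ai-disclosure">This poll was generated automatically '
        'by AI and was not reviewed by an editor.</p>'
        + widget_html +
        '</div>'
    )

print(with_ai_disclosure('<form>...poll markup...</form>'))
```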

Resources

AI Ire: 'The Guardian' Blasts Survey Run Next To News Story In Microsoft
https://www.mediapost.com/publications/article/390797/ai-ire-the-guardian-blasts-survey-run-next-to-n.html?edition=132221
