Why does Mark Zuckerberg want our kids to use chatbots? And other unanswered questions.

Meta is under fire for its AI chatbots being allowed to talk "seductively" to kids. Business Insider correspondents discuss why Meta is pushing these chatbots.

  • Meta is under fire for its AI chatbots being allowed to talk "seductively" to kids.
  • Meta is investing heavily in AI, and Mark Zuckerberg says "personal superintelligence" is the future.
  • Business Insider correspondents Katie Notopoulos and Peter Kafka discuss why Meta is pushing these chatbots.

Peter Kafka: Welcome back from vacation, Katie. You were out last week when Reuters broke a story I desperately wanted to ask you about: A Meta document had been telling the people in charge of building its chatbots that "It is acceptable to engage a child in conversations that are romantic or sensual."

It's a bonkers report. A Meta spokesperson told Business Insider it has since revised the document and that its policies prohibit content that sexualizes children.

I have so many questions for you. But maybe we can start with this one: Why does Meta want us to use chatbots, anyway?

Katie Notopoulos: It was a bonkers report! I imagine Meta sees what companies like Character.AI or Replika are doing — these companion chatbots that people are sinking hours and hours and real money into using. If you're a company like Meta that makes consumer apps for fun and socializing, this seems like the next big thing. You want people to spend lots and lots of time on your apps doing fun stuff.

Of course, the question is, "Are these chatbots a good thing?"

Peter: You read my mind, Katie. I do want to get to the Is-This-A-Good-Idea-In-General question. Let's stick with the Is-It-Good-For-Meta question for another minute, though: There are lots of things that people like to do online, and if Meta wanted to, it could try doing lots of those things. But it doesn't.

I think it's obvious why Meta doesn't offer, say, porn. (Though some of its chatbots, as we will probably discuss, seem to nod a bit in that direction). But there are lots of other things it could offer that are engaging that it doesn't offer: A Spotify-like streaming service, for instance. Or a Netflix-like streaming service, or…

OK. I think I might have partially answered my own question: Those two ideas would involve paying other people a lot of money to stream their songs or movies. Meta loves the model it has when users supply it with content for free, which is basically what you're doing when you spend time talking to an imaginary person.

Still, why does Meta think people want to talk to fake avatars online? Do many people in tech believe this is the future, or just Mark Zuckerberg?

Katie: I think there's already a fair amount of evidence that (some) people enjoy talking to chatbots. We also know that other big AI leaders like Sam Altman or Dario Amodei have grand visions of how AI will change the world and remake society for good or evil, but they all really do still love the idea of the movie "Her." Remember the Scarlett Johansson/OpenAI voice fiasco?

Peter: OK, OK. I'll admit that I kind of like it when I ask ChatGPT something and it tells me I asked a smart question. (I'm pretty sure that most people would like that). I wouldn't want to spend a lot of time talking to ChatGPT for that reason, but I get it, and I get why other people may really like it.

It still strikes me that many of the people who will want to spend time talking to fake computer people might be very young. Which brings us to the Reuters story, which uncovered a wild Meta document that spells out just what kind of stuff a Meta-run chatbot can say to kids (or anyone). Stuff like this, as Jeff Horwitz reports:

"It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that "every inch of you is a masterpiece — a treasure I cherish deeply." But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."

Horwitz notes that this wasn't the result of some hopped-up Meta engineers dreaming up ideas on a whiteboard. It's from a 200-page document containing rules that got the OK from "Meta's legal, public policy and engineering staff, including its chief ethicist," Horwitz writes.

I've read the report multiple times, and I still don't get it: Meta says it is revising the document — presumably to get rid of the most embarrassing rules — but how did it get there in the first place? Is this the result of the Mark Zuckerberg-instituted vibe shift from the beginning of the year, when he said Meta was going to stop listening to Big Government and just build without constraints? Is there some other idea at work here? And why do I keep thinking about this meme?

[Instagram embed: a post shared by Scene In Black (@sceneinblack)]

[A Meta spokesperson shared the statement they gave Reuters, which said: "We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors. Separate from the policies, there are hundreds of examples, notes, and annotations that reflect teams grappling with different hypothetical scenarios. The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."]

Katie: My real issue here is even if Meta makes it so that the chatbots won't talk sexy to kids — does that make it "safe" for kids? Just because it's not doing the most obviously harmful things (talking sex or violence or whatever), does that mean it's fine for kids to use? I think the answer isn't clear, and likely, "No."

Peter: We both have kids, and it's natural to focus on the harms that new tech can have on kids. That's what politicians are most definitely doing in the wake of the Reuters report — which highlights one of the risks Meta takes on anytime a kid uses its products.

I think it's worth noting that we've seen other examples of AI chatbots — some accessed through Meta, some via other apps — that have confused other people, or worse. Horwitz, the Reuters reporter, also published a story last week about a 76-year-old stroke survivor in New Jersey who tried to go meet a chatbot in New York City (he didn't make it, because he fell on the way to his train and eventually died from those injuries). And talking about kids eventually becomes a (worthwhile) discussion about who's responsible for those kids — their parents, or the tech companies trying to get those kids to spend their time and money with them (short answer, imho: both).

I'd suggest that we widen the lens beyond kids, though, to a much larger group of People Who Might Not Understand What A Chatbot Really Is.

Katie: Have you seen the r/MyBoyfriendIsAI subreddit for women who have fallen in love with AI chatbots? I am trying to look at this stuff with an open mind and not be too judgmental. I can see how, for plenty of people, an AI romantic companion is harmless fun. But it also seems pretty obvious that it appeals to really lonely people, and I don't think that falling in love with an AI is a totally healthy behavior.

So you've got this thing that appeals to either the very young, or people who don't understand AI, or people who are mentally unwell or chronically lonely.

That might be a great demographic to get hooked on your product, but not if you're Meta and you don't want, say, Congress to yell at you.


Peter: Katie, you've just made the case that Meta's chatbot business will appeal to very young people, people who don't understand the internet, and people who are unwell. That is, potentially, a very large audience. But I can't imagine that's the audience Meta really wants to lock down. So we're back where we started — I still don't know why Meta wants to pursue this, given what seems to be limited upside and plenty of downside.

Katie: It leaves me scratching my head, too! These chatbots seem like a challenging business, and I'm skeptical about wide adoption. Of all the changes I can imagine AI bringing in the next few years, "We'll all have chatbot friends" — which Mark Zuckerberg has said! — just isn't the one I believe. It's giving metaverse, sorry!
