Remember Microsoft Tay? The 2016 Twitter chatbot was the target of a concerted effort to teach it all manner of vile things, prompting Microsoft to withdraw the service and apologize. That experience is top of mind for pretty much everyone talking about Meta’s new chatbot, BlenderBot 3, now available for public testing in the U.S. Its developers insist that they, too, had the Tay experiment in mind when they designed the process for teaching the new chatbot how to behave. In fact, the beta lets people report when the bot says something inappropriate that it may have learned from another user, and that feedback is used to teach it what not to say. Will it work?
The next monthly, long-form episode of FIR will drop on Monday, August 22. FIR “shorts” — episodes under 15 minutes — will appear once or twice weekly.
We host a Communicators Zoom Chat each Thursday at 1 p.m. ET. For credentials needed to participate, contact Shel or Neville directly, request the credentials in our Facebook group, or email email@example.com.
Special thanks to Jay Moonah for the opening and closing music.
Links from this episode:
- Meta is putting its latest AI chatbot on the web for the public to talk to
- Meta unleashes BlenderBot 3 upon the internet, its most competent chat AI to date
- Blender Bot 2.0: An open source chatbot that builds long-term memory and searches the internet
- ‘Overrepresented among America’s super rich’: Facebook’s new Blender Bot 3 chatbot makes antisemitic comments
- Facebook’s new AI can tell lies and insult you – so don’t trust it, firm warns
- Meta’s AI chatbot has some election-denying, antisemitic bugs to work out after the company asked users to help train it
- The human error of Tay (from Neville Hobson’s blog in 2016)
- Midjourney (AI image generation service)