Duration: 1:39:40 (40.8 MB download)
All generative Artificial Intelligence (AI) works in fundamentally the same way: neural networks learn from large training sets, gleaning patterns from the contents of those sets in order to create original content based on those patterns. When the companies behind these AI tools use content available on the web for training, do they need to ask permission from the content creators? You and I don't. We can look at as much content as we like and learn as much as we can from it. Is the same true for AI training sets, or is this something else altogether, more akin to Napster distributing artists' music without compensating them? Neville and Shel are on opposite sides of the debate. Also in this episode:
- The University of Iowa's school of business has introduced a program to teach students how to tell stories. Storytelling is a crucial business skill, yet one that too few businesses value. Communicators can help change that.
- A BBC football analyst (a contractor, not an employee) made partisan remarks on a social network and was punished for violating standards that employees are required to abide by. The incident reopened a long-dormant discussion about social media policies.
- Did Silicon Valley Bank communicate too little, contributing to its failure? Or did it communicate too much? Is it possible the bank did both?
- Microsoft has introduced Copilot, a tool that will let users of its software tap into the power of AI.
Dan York shares news from WordPress, WhatsApp, the world of chatbots, and more in his Tech Report.