That was worth watching, but I felt they spent the last half conflating topics.
Manipulating people's suggestibility by carefully targeting what you present to them is a scary/powerful tool. But that stuff is driven by humans, not really AI in my view.
And the part about Microsoft's chatbot turning into a troll made it seem like the thing turned hateful just from being exposed to the public at large, but I'd guarantee that was the consequence of Anons targeting it to shape it. Still spooky, but the threat vector isn't really the bot, it's the folks behind it manipulating it.
The scariest stuff is what Musk was talking about, where the bot has control and a 'mission', and then executes that mission without regard for things we'd consider important but forgot to program into it.
It's still really hard to see the point where the bot 'learns' enough on its own to program its own mission and limits (or ethics, or whatever you want to call them). But I guess that'd be when we're the ants.