
DeepMind’s new AI chatbot, Sparrow, is being hailed as an important step toward creating safer, less-biased machine learning systems, thanks to its use of reinforcement learning trained on feedback from human research participants. 

The British subsidiary of Google parent company Alphabet says Sparrow is a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers.” The agent is designed to “talk with a user, answer questions, and search the internet using Google when it’s helpful to look up evidence to inform its responses.” 

But DeepMind considers Sparrow a research-based, proof-of-concept model that is not ready to be deployed, said Geoffrey Irving, safety researcher at DeepMind and lead author of the paper introducing Sparrow.

“We have not deployed the system because we think that it has a lot of biases and flaws of other types,” said Irving. “I think the question is, how do you weigh the communication advantages — like communicating with humans — against the disadvantages? I tend to believe in the safety needs of talking to humans…I think it is a tool for that in the long run.” 
