
The rapid growth in machine learning (ML) capabilities has led to an explosion in its use. Natural language processing and computer vision models that seemed far-fetched a decade ago are now commonly used across multiple industries. We can build models that generate high-quality, complex images from never-before-seen prompts, deliver cohesive textual responses from a simple initial seed, or even carry out fully coherent conversations. And it's likely we are just scratching the surface.

Yet as these models grow in capability and their use becomes widespread, we need to be mindful of their unintended and potentially harmful consequences. For example, a model that predicts creditworthiness needs to ensure that it does not discriminate against certain demographics. Nor should an ML-based search engine only return image results of a single demographic when looking for pictures of leaders and CEOs.

Responsible ML is a set of practices designed to avoid these pitfalls and ensure that ML-based systems deliver on their intent while mitigating unintended or harmful consequences. At its core, responsible ML requires reflection and vigilance throughout the model development process to ensure you achieve the right outcome.
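One concrete form this vigilance can take is measuring whether a model's decisions differ systematically across demographic groups, as in the creditworthiness example above. The sketch below is illustrative rather than tied to any particular library or dataset: it computes a simple demographic-parity gap, the difference in approval rates between groups, for a hypothetical credit model's predictions. The group labels and decision values are assumptions made for the example.

```python
# Illustrative sketch: comparing a credit model's approval rates across groups.
# The predictions and group labels below are hypothetical placeholders.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Return the largest gap in positive-prediction (approval) rates between groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Example: binary approval decisions for applicants from two demographic groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
grps = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Approval-rate gap between groups: {demographic_parity_gap(preds, grps):.2f}")
```

A large gap would not by itself prove the model is unfair, but it flags exactly the kind of disparity that responsible ML practices aim to surface and investigate before a system is deployed.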

...