In this edition of our Day in the Life series, we follow Sr. Principal Research Manager Cecily Morrison of Microsoft Research Cambridge’s Equitable AI team.

What Does a Typical Workday Look Like for You?

Every day looks different. Many days we are working together as a team or with users. Below is an example of what a day might look like when I am doing individual work from home. 

  • 6 am: Team meeting (including UK and Australia)

  • 7 am: 1:1 with direct report

  • 7:30 am: Mentoring call

  • 8 am: Work time (review paper for a colleague)

  • 10 am: Partner meeting

  • 11 am: Intern supervision

  • 12 pm: Lunch

  • 12:30 pm: Maths lesson with my (blind) son

  • 1:15 pm: Email

  • 2 pm: Meeting with HR about new policy

  • 3 pm: Pick up kids from school

With Your Current Research Focus, What Major Problems Are You Trying To Solve?

The world is a kaleidoscope of people, rich in history, cultural nuance, and different ways of being. My research team focuses on how we bring that plurality into AI models. AI should mirror the full richness of our societies, but it is currently limited by data, architectures, and evaluation approaches.

We’ve started to imagine what the next generation of AI could look like through thinking about the key building blocks of AI: data and evaluation. While people have long focused on data, we have taken the stance that data and evaluation cannot be separated. To choose the right data to train a system on, we must also define what we want the outcome to be and find ways that we can evaluate that outcome at scale. 

We have brought this idea from theory to practice in building AI data stewardship tools for marginalized communities, helping them define what “good” representation means for their community in AI media generation and then supporting that notion of “good” with data and metrics. In doing so, we make space for many community voices to shape what AI produces. 

Can You Share a Specific Challenge You’ve Faced Recently in Your Research and How You Approached Solving It?

Communities know what matters to them, but they don’t always know how to express that to an AI system. A recent example of this challenge was creating highlights, often referred to as bounding boxes in the AI literature. Highlights tell the AI system what matters in an image during training. In a recent project, we asked community leads to highlight image elements they considered important for representing their community. In the example below, community leads annotated general items, such as a wall clock and a coat hanger, rather than the representational aspirations of their community: that the image showed a politician of short stature using adaptive furniture. To address this mismatch between how people think and what an AI system needs, we made several changes to the user experience. We limited annotations to two community-relevant categories, ‘objects’ and ‘people/animals’, and capped each image at five bounding boxes to encourage meaningful selections. By tailoring our highlighting instructions to these categories, we coupled the annotation task more closely to community-specific elements and representational aspirations, which improved the relevance of annotations over more generic labels.
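To make the constraints concrete, here is a minimal sketch of how an annotation tool might enforce the two categories and the five-box cap described above. The names and structure are illustrative assumptions, not the team’s actual tooling:

```python
from dataclasses import dataclass

# Hypothetical constants mirroring the constraints described above.
ALLOWED_CATEGORIES = {"objects", "people/animals"}
MAX_BOXES_PER_IMAGE = 5

@dataclass
class Highlight:
    category: str  # must be one of ALLOWED_CATEGORIES
    box: tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max), normalized to [0, 1]

def validate_highlights(highlights: list[Highlight]) -> list[str]:
    """Return a list of problems; an empty list means the annotation is acceptable."""
    problems = []
    if len(highlights) > MAX_BOXES_PER_IMAGE:
        problems.append(
            f"{len(highlights)} boxes exceeds the cap of {MAX_BOXES_PER_IMAGE}; "
            "keep only the most representative elements."
        )
    for h in highlights:
        if h.category not in ALLOWED_CATEGORIES:
            problems.append(f"category {h.category!r} is not one of {sorted(ALLOWED_CATEGORIES)}")
        x0, y0, x1, y1 = h.box
        if not (0 <= x0 < x1 <= 1 and 0 <= y0 < y1 <= 1):
            problems.append(f"box {h.box} is not a valid normalized rectangle")
    return problems

# Usage: an empty result means the annotation passes both constraints.
issues = validate_highlights([Highlight("objects", (0.1, 0.2, 0.4, 0.5))])
print(issues)  # []
```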

What’s Something You Wish More People Understood About Working in AI Research?

It is not all about the models. 

I believe teachable AI will be critical to creating AI systems that work for everyone. Teachable AI systems allow users to provide their own examples to teach an AI system new concepts. No AI developer or company can fathom what the entire world needs; we need to provide infrastructure that allows communities and individuals to make AI their own. The quickly developing field of post-training is opening up a myriad of ways to inject data into models to create bespoke experiences. I look forward to these methods being extended in ways that allow users to directly shape their own AI experiences and outputs.
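As a toy illustration of the teachable idea, not anything the team has shipped: a user supplies a handful of example embeddings for a new concept, and the system classifies by nearest class centroid. All names here are hypothetical, and the embeddings would in practice come from a pretrained encoder:

```python
import numpy as np

class TeachableClassifier:
    """Minimal nearest-centroid sketch: a user teaches new concepts from a few examples."""

    def __init__(self):
        self.centroids: dict[str, np.ndarray] = {}

    def teach(self, concept: str, example_embeddings: np.ndarray) -> None:
        # Each row is the embedding of one user-provided example (e.g. from a
        # pretrained image or text encoder). The concept is summarized by the mean.
        self.centroids[concept] = example_embeddings.mean(axis=0)

    def predict(self, embedding: np.ndarray) -> str:
        # Assign to the concept whose centroid is closest in Euclidean distance.
        return min(self.centroids, key=lambda c: np.linalg.norm(embedding - self.centroids[c]))

# Usage: teach two concepts from five examples each, then classify a new item.
rng = np.random.default_rng(0)
clf = TeachableClassifier()
clf.teach("my_mug", rng.normal(0.0, 1.0, size=(5, 16)))
clf.teach("my_keys", rng.normal(3.0, 1.0, size=(5, 16)))
print(clf.predict(rng.normal(3.0, 1.0, size=16)))  # likely "my_keys"
```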

How Do You Show Up for Others in Your Work?

Critical to our work is our multidisciplinary team. We have people whose expertise comes from human-centered disciplines, such as design and human-computer interaction, as well as people trained in machine learning and engineering. With such a tightly knit team, we all show up for each other: we give each other time to ask questions and learn about concepts from other domains; we step in when someone needs a moment for their personal life; and we value each person’s growth and make space for it in our projects. It is always a pleasure to welcome interns onto the team, many of whom have never had the opportunity to work in a multidisciplinary team, and to offer them a research space that is open and collaborative.

Who or What Has Influenced Your Thinking the Most in Your Research Journey?

 

Photo taken by design student Yinyin Zhou as part of her project developing a speech computer for my son.

My family’s lived experience of disability has been one of the most significant influences on my research career. Having a very capable child with a significant disability gives me a daily reminder that we cannot serve marginalized communities “later.” When we are innovating, we should be mindful that every AI decision we make needs to keep AI inherently extensible to all people. The common engineering approach of “this is a hard problem; we’ll solve for the easy 80% and then figure out how to extend it to everyone” unfortunately leaves many underserved by technology. With the right approach and innovation focus, we can build models in ways that are extensible to everyone.

How Do You Hope Your Research Will Impact People or Society in the Next Decade?

We are currently building the foundation for large model AI. We need to build that foundation such that it is extensible to everyone. We do not want to institute an AI divide that mirrors the digital divide, causing prosperity divisions within countries and across the world. A big part of that is making sure we have robust mechanisms for evaluation. If we haven’t defined where we are going through how we evaluate, we aren’t likely to get there. I hope that our research in bringing community voice into AI measurement practices is an important part of the puzzle of making AI that reflects the diverse colors of our world. 

What Advice Would You Give to Someone Curious About Working in AI Research?

AI research is for anyone who wants a hand in shaping the world. We necessarily work across domains and skill sets and seek diversity of perspectives, which opens many doors to entry. The deeper your knowledge of people and their lived experience, the better an AI researcher you will be.

Cecily Morrison

Role: Sr. Principal Research Manager
Company: Microsoft

I am a Sr. Principal Research Manager in Equitable AI at Microsoft Research Cambridge. I co-lead the Teachable AI Experience team (TAI X), which aims to innovate new human-AI interactions that bring us toward a more inclusive society.

I believe strongly that we must innovate the machine learning techniques that we use in conjunction with designing new types of experiences. I hold a PhD in Computer Science from the University of Cambridge and an undergraduate degree in Ethnomusicology from Barnard College, Columbia University.

👉 Connect with Cecily on LinkedIn
