Day in the Life is a series featuring Microsoft researchers. From the big problems they’re tackling to the emerging AI trends shaping their work, it offers an inside look at the people building the systems that are becoming part of our daily lives.
Trista Chen
What Does a Typical Workday Look Like for You?
6am: reading research papers and deep, focused thinking
8am: online meetings with colleagues in the US time zone
12pm: workout
2pm: meetings and collaboration with colleagues in Taipei
5pm: online meetings with colleagues in the European time zone
With Your Current Research Focus, What Major Problems Are You Trying to Solve, and Why Is This Work Meaningful or Exciting to You?
AI runs on trust. I lead a research team pioneering methods to ensure the authenticity of live users, which is critical now that AI touches nearly every part of our work and personal lives. If AI can’t tell whether it’s interacting with a real person in real time, its power risks doing more harm than good.
Face recognition, liveness, and anti-spoofing are at the heart of this. Recognition alone isn’t enough: printed photos, video replays, or even AI-generated faces can potentially trick a system. As spoofing and Generative AI race ahead, we push detection and trust validation to advance even faster. Our focus is the essence of live personhood and machine–person agency: ensuring AI knows not just who it interacts with, but that the interaction is real, current, and trustworthy.
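To make that distinction concrete, here is a minimal, purely illustrative sketch (hypothetical scores, thresholds, and class names, not a production pipeline) of why the decision has to combine an identity match with a separate liveness check:

```python
# Minimal conceptual sketch (hypothetical models and thresholds): recognition
# alone is not enough, so the decision combines an identity match score with
# a separate liveness / anti-spoofing score before trusting the session.
from dataclasses import dataclass

@dataclass
class FaceCheckResult:
    identity_score: float   # similarity to the enrolled user's face template
    liveness_score: float   # probability the face is a live person, not a print or replay

def authorize(result: FaceCheckResult,
              id_threshold: float = 0.85,
              live_threshold: float = 0.90) -> bool:
    """Accept only when BOTH the identity and liveness checks pass."""
    return (result.identity_score >= id_threshold and
            result.liveness_score >= live_threshold)

# A printed photo might match the identity template well but fail liveness:
print(authorize(FaceCheckResult(identity_score=0.93, liveness_score=0.12)))  # False
print(authorize(FaceCheckResult(identity_score=0.93, liveness_score=0.97)))  # True
```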
And yes, our work area is scattered with spoofing props that double as Halloween material. With such rich material for both work and play, how could research in AI and trust be anything but exciting?

What’s Something You Wish More People Understood About Working in AI Research?
Developing a better AI model is only a small part of the work. In AI research, true success comes from clearly understanding the use scenario, designing the process pipeline, preparing high-quality data, and aligning with human values. That’s what allows a new model to truly shine. As Microsoft CEO Satya Nadella has noted, AI’s success must be measured in its impact, which encompasses all these concepts in addition to the AI model itself. When everything beyond model training is done well, the success of an AI system becomes almost inevitable.
What Emerging AI Trends Are You Most Curious About, and Why?
I’m most curious about neuro-symbolic AI—approaches that could dramatically reduce the data and compute demands of today’s deep learning models. If successful, they might capture intelligence in a form as elegant and profound as Einstein’s E = mc², which I believe is one of the most beautiful equations in the universe.
How Do You Show Up for Others in Your Work?
I strive to spark curiosity wherever I can: sharing how AI is built and applied and helping students prepare to thrive in the new AI era. I am also passionate about uplifting girls and women in tech, and about offering parents a transparent perspective, so they can cut through the noise and step into the future with confidence.

Who or What Has Influenced Your Thinking the Most in Your Research Journey?
Albert Einstein influenced me most in my research journey with his maxim: "If you can’t explain it simply, you don’t understand it well enough." NVIDIA CEO Jensen Huang brings this to life through his whiteboard practice, sketching chips, systems, and strategies in real time. By reducing complexity to boxes and arrows, he distills ideas to their essence, making them clear to anyone in the room. For Huang, as for Einstein, drawing is both a test of clarity and a proof of understanding, serving as the bedrock of true innovation.
What Advice Would You Give to Someone Curious About Working in AI Research?
Stay true to your curious heart. Research is full of failures before success, and it’s curiosity and passion that carry you through—helping you notice small things like data discrepancies, misalignments, differences in representation, or subtle shifts in order that can spark big breakthroughs. In the end, great AI research is like great art: its impact comes from passion for both the subject (domain) and the craft (algorithms).
How Do You Stay Inspired or Recharge During the Day?
Moving through the day involves keeping up with product developments, planning next steps, coordinating with colleagues, understanding customer needs, and mentoring team members, and it can get exhausting. But to most people’s surprise, I don’t recharge with a coffee break or a good meal. I recharge by quietly reading research papers without distraction. Discovering a fresh idea makes my eyes light up and gives me new energy.

Trista Chen
Role: Director, AI Research Center
Company: Microsoft
Trista Chen is an AI scientist and tech executive, currently Director, AI Research Center at Microsoft. Her work focuses on multimodal LLMs, agentic AI, and human-centered AI. She has published over 30 top-tier papers, holds more than 110 patents, and her research has been recognized internationally, including a Nature Portfolio publication, the 2023 CVPR Workshop Best Paper Award, and the USAID Intelligent Forecasting world championship. Previously, she held leadership roles at NVIDIA, Intel, and startups, and she earned her Ph.D. from Carnegie Mellon University and her M.S. and B.S. from National Tsing Hua University.
👉 Connect with Trista on LinkedIn
Flavio Griggio
What Does a Typical Workday Look Like for You?
7:00 am: My morning begins with a cup of green tea and a quick glance at my calendar. I use this time to scan my inbox for anything urgent and jot down my top priorities for the day. On Tuesdays and Wednesdays, I have weekly early morning meetings with our European counterparts.
7:30 am: I lace up for a brisk morning run—my favorite way to charge up both mind and body before the day’s adventures begin. It helps organize my thoughts for the day.
8:30 am: I hop aboard the Connector bus from Seattle to our Redmond campus—easily my favorite Microsoft perk. The commute transforms into a rolling think tank, where I devour the latest quantum circuit papers and jot down fresh insights to share with my team. Staying ahead of the curve is a must in a field that rewrites itself daily.
9:00 am: Most days, right after a cup of espresso, my first meeting is a standup with our fabrication team. We briefly sync on progress, blockers, and any discoveries. It’s energizing to hear how every member is advancing their part of the puzzle.
10:00 am–12:00 pm: This is my most productive block—deep work time. I typically code, design experiments, or assess our fabrication yields. I mute notifications and dive in, often covering my whiteboard with so many equations and sketches that colleagues joke it could qualify as modern art—or perhaps as an unsolved riddle for visiting physicists.
12:00 pm: Lunch is usually a quick affair. If the weather’s nice, I take a walk outside or share a meal with coworkers. These informal moments often spark creative conversations or lead to new ideas.
1:00 pm: Afternoons are reserved for collaboration—mentoring engineers, meeting with academic partners, or hosting brainstorming sessions. I love these interactions; they challenge my thinking and keep me inspired.
3:00 pm: I might have a 1:1 with my manager or one of my reports. This is also a good time to catch up on Teams discussions and provide feedback to colleagues across different time zones.
4:30 pm: Most days wrap up with reviewing experiment results and planning the next steps. I log what worked, what didn’t, and create a short list for the days ahead.
6:00 pm: Before signing off, I do one last sweep of urgent emails and jot down a few lines in my research journal—a practice that helps me track progress and reflect on the day’s challenges and wins.
My Research: Superconducting Circuits for Quantum Computing
Opening New Doors in Fault-Tolerant Quantum Technology
I’m working on building superconducting circuits that can spot the tiniest changes in capacitance—these shifts show us what’s happening with our topological qubits. We’re always checking out what other teams are learning about superconducting circuits, so we can stay up to date and keep improving.
What really matters is our big goal: to create a quantum machine that stays reliable even when things go wrong. For example, if the readout is noisy, slow, or miscalibrated, it can misrepresent the qubit state, leading to incorrect results or failed error correction. My team’s readout circuits are key to making this happen—they help us actually see and use the power of topological qubits. When we get this right, it could have a tremendous impact—helping research in medicine, energy, environmental science, smart materials, and much more.
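As a back-of-the-envelope illustration of why readout quality matters (the numbers below are invented, not our device parameters), a simple simulation shows how noise in a capacitance measurement turns into state-assignment errors:

```python
# Illustrative sketch with made-up numbers: two qubit states produce slightly
# different capacitance values; Gaussian readout noise blurs them, and a simple
# threshold classifier then assigns some measurement shots to the wrong state.
import numpy as np

rng = np.random.default_rng(1)
C0, C1 = 100.0, 100.5        # hypothetical capacitance (fF) for states 0 and 1
sigma = 0.3                  # readout noise (fF); larger noise -> more errors

true_state = rng.integers(0, 2, size=10_000)
measured_c = np.where(true_state == 0, C0, C1) + rng.normal(0, sigma, size=true_state.size)

threshold = (C0 + C1) / 2    # midpoint discriminator between the two states
inferred = (measured_c > threshold).astype(int)

error_rate = np.mean(inferred != true_state)
print(f"readout error rate: {error_rate:.3%}")
```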
Most importantly, Microsoft is working to build a quantum system that can scale up and help as many people as possible. It’s exciting to be part of a team that’s moving this technology forward, and I can’t wait to see how it will shape the future.

Can You Share a Specific Challenge You’ve Faced Recently in Your Research and How You Approached Solving It?
One time our team faced a real challenge was during the early development of one of our superconducting circuit designs. We noticed some performance inconsistencies that didn’t line up with our simulations. Rather than just tweaking parameters at random, we held a brainstorming session to map out which fabrication or material variables might be at play. We narrowed down our shortlist to the most likely culprits and designed quick experiments to test each one, even syncing up with colleagues in different time zones to analyze the results in real time. That methodical approach helped us zero in on a subtle manufacturing issue that, once fixed, noticeably boosted our circuit performance. It was a great reminder of how collaboration and targeted experiments can turn a vague problem into a breakthrough.
How Do You Decide Which Research Questions Are Worth Pursuing?
Here’s what guides me: Will cracking this puzzle help teams everywhere sidestep setbacks? Can we run quick, low-cost experiments for a sneak peek before diving deep? And above all, can we capture lessons that fuel real progress—not just stories traded in the coffee room?
What Emerging AI Trends Are You Most Curious About, and Why?
I'm focused on AI that boosts engineering performance and reliability by automating tasks and generating testable hypotheses. Foundation models impress me with their ability to generalize and explain outcomes, and I’m interested in designing energy-efficient AI that still delivers strong results. For example, my team tracks product performance metrics, and AI now helps correlate deviations with process variables, offering targeted parameters for engineers to review. These advancements motivate us to develop technology that's both effective and efficient.
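A toy version of that correlation step might look like the sketch below (synthetic data and hypothetical column names; real pipelines involve far more careful statistics):

```python
# Minimal sketch (hypothetical data and column names): ranking which process
# variables co-vary most strongly with deviations in a performance metric,
# so engineers get a short list of parameters to review first.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "etch_bias_nm": rng.normal(5.0, 0.3, n),
    "oxide_thickness_nm": rng.normal(20.0, 0.5, n),
    "anneal_temp_c": rng.normal(350.0, 2.0, n),
})
# Synthetic "measured" metric: mostly driven by etch bias, plus noise.
df["resonance_shift_mhz"] = 4.0 * (df["etch_bias_nm"] - 5.0) + rng.normal(0, 0.2, n)

deviation = df["resonance_shift_mhz"] - df["resonance_shift_mhz"].mean()
candidates = (
    df.drop(columns="resonance_shift_mhz")
      .apply(lambda col: col.corr(deviation))   # Pearson correlation per variable
      .abs()
      .sort_values(ascending=False)
)
print(candidates.head(3))  # top-ranked process variables to review
```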
Can You Recall a Moment in Your Career That Made You Think, “This Is Why I Do This Work”?
One time when our team was deep into testing a new component, I witnessed a moment that truly crystallized why I do this work. After weeks of fabricating, dicing, and packaging, we gathered to review the radiofrequency characteristics. As the system team pulled up the cryogenic temperature plot, the room fell silent—the performance characteristics landed precisely where our projections had predicted, a rare alignment of theory and reality. There was a collective pause, an unspoken acknowledgment that all the careful tracking of process telemetry, projections, and etched tolerances had paid off. In that quiet, each of us felt the impact of our work and the power of collaboration, as a subtle tweak in etch bias and oxide deposition proved decisive in hitting our frequency targets. It was a reminder that behind every breakthrough, there's a team willing to test, refine, and celebrate the moments when science comes alive.
How Do You Hope Your Research Will Impact People or Society in the Next Decade?
In the next decade, I want our lab work to make quantum‑scale hardware feel dependable. By making readout components more power‑efficient and our fabrication more predictable, we’ll help larger quantum systems—and the science they enable—move from fragile prototypes to reliable tools that benefit medicine, materials, and climate research.
How Do You Stay Inspired or Recharge During the Day?
When I need a burst of inspiration, I reach for my favorite sci-fi novel—nothing fires up my imagination like exploring worlds where the impossible feels close at hand. Outside the lab, group runs are my secret recharge button; the rhythm of shared strides clears my mind and sparks new ideas before I’m back at my desk. And of course, you’ll often find me fueling up during spontaneous coffee chats with colleagues—where some of our best brainstorms begin.

Flavio Griggio
Role: Research Manager
Company: Microsoft
Flavio Griggio, a Research Manager at Microsoft Quantum, specializes in bridging research and engineering in quantum computing. Originally from Italy, Flavio studied at the University of Padua and completed his PhD at Penn State. His career spans Intel, where he focused on process technology and reliability, and Microsoft, where he contributed to both Surface devices and quantum hardware. Driven by curiosity and technical excellence, Flavio connects scientific innovation with scalable engineering solutions.
Outside of work, Flavio enjoys outdoor sports, gardening, exploring Seattle’s vibrant music and food scenes, and spending quality time with his partner and son.
👉 Connect with Flavio on LinkedIn
Ahmed Awadallah
What Does a Typical Workday Look Like for You?
8:00 AM — Focus Time
I like to start and finish my day with focus time blocks. In the morning, it is more focused on hands-on work, designing, planning, etc.
10:00 AM — Team Syncs and Cross-Team Alignment
Depending on the project, this could be planning meetings or a sprint review meeting where I might push the team with various questions, helping them unblock challenges and land on the right solutions. On some days, I spend the time meeting with sister teams, product partners, compliance/legal teams to align roadmaps, discuss dependencies, ensure compliance, etc.
1:00 PM — Deep Work Discussions and/or 1:1 meetings
Reserved for more technical deep dives into different projects. Sometimes that means reviewing experiments, discussing next steps, etc. On some days, I block this time for 1:1s with my team. It’s where I can coach, listen, receive/share feedback, create clarity, etc.
4:00 PM — Focus Time
This is another time block for more focused work. At this time, I prioritize reviewing what got done, reflecting on notes from the day, updating documents, sketching out the next day’s priorities, and tying up loose ends.
With Your Current Research Focus, What Major Problems Are You Trying To Solve? Why Is This Work Meaningful or Exciting to You?
I’m currently focused on building agentic AI systems—models that can plan, execute, and adapt reliably and safely. These systems are designed to handle increasingly complex multi-step tasks, interact with their environments, and learn to improve over time, all while maintaining strong oversight, alignment, and transparency.
This work on AI agents really kicked off in 2023 with AutoGen, a project I helped lead. AutoGen is a widely adopted open-source framework for building AI agents and enabling multi-agent collaboration to solve tasks. Since its launch, AutoGen has graduated to Azure, where the team continues to evolve it into enterprise-ready agentic AI capabilities.
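For readers who have not seen it, the core pattern looks roughly like the sketch below, written against the classic pyautogen (pre-0.4) API; class names and configuration differ in newer AutoGen releases, and the model settings here are placeholders:

```python
# Rough illustration using the classic pyautogen (pre-0.4) API; newer AutoGen
# releases use different class names and configuration, and the model/key
# values below are placeholders, not working credentials.
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_KEY"}]}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                      # fully automated exchange
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The two agents converse until the task is done; the proxy runs any code
# the assistant writes and feeds the results back into the conversation.
user_proxy.initiate_chat(
    assistant,
    message="Write and run a Python script that prints the first 10 Fibonacci numbers.",
)
```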
Last year, Microsoft introduced the Phi family of small language models (SLMs), which redefined what’s possible with SLMs by achieving competitive performance with dramatically smaller footprints. Our current focus is on making these models agentic, starting with Phi-4-reasoning, which incorporates reinforcement learning and advanced synthetic data generation with multi-agent simulation to unlock stronger reasoning capabilities.
What’s Something You Wish More People Understood About Working in AI Research?
One misconception I often hear is that AI researchers spend most of their time inventing entirely new algorithms. In reality, much of the work happens elsewhere: curating data so it reflects the task accurately, designing experiments that test the right hypotheses, carefully evaluating results, and improving the infrastructure that makes training and testing more streamlined.
Another thing I wish more people understood is that the kinds of tasks AI can and cannot do don’t always line up with human intuition. For example, people often assume that if an AI system can solve a complex math problem, then it should easily handle a much “simpler” common sense reasoning task. But the reality is that what feels easy for humans can be incredibly difficult for AI, and vice versa. This also relates to the misconception that if AI can do something a human can do—say, solve math problems or translate languages—it must be doing it in the same way humans do. The outputs can look impressively human-like, but the underlying processes are very different, which is why models can still make brittle mistakes in situations where a person wouldn’t.
How Do You Decide Which Research Questions Are Worth Pursuing?
I usually start from a problem or a goal rather than from a new method or an abstract curiosity. That might be a capability we don’t yet have or a limitation in existing methods. I try not to define the problem too literally at the beginning, since the right framing often evolves as we develop more understanding—but I also avoid moving the goal posts too often, because that can hinder progress.
Another important factor is whether the question allows for incremental progress. Ideally, I want to see a path where we can build and measure tangible steps toward the bigger goal, rather than waiting for a single breakthrough at the end.
Finally, I’m often drawn to problems that are both use-inspired and fundamental—a perspective shaped in large part by several mentors I had here at Microsoft, who emphasized the value of work that advances underlying science while also connecting back to real capabilities and applications.
What Emerging AI Trends Are You Most Curious About, and Why?
Since early 2023, we’ve been enthusiastic adopters of Agentic AI and synthetic data generation, observing significant gains in improving SLMs by using large models as teachers to generate demonstrations and explanations for problem-solving (Orca). The Phi model family pioneered the use of synthetic data generation at scale for pre-training SLMs that rival much larger models in performance. More recently, with AgentInstruct, we’ve shown that multi-agent simulations can generate diverse, high-quality data by producing prompts, responses, and environments for validation. These methods have proven especially effective in unlocking superior reasoning capabilities in models like Phi-4-reasoning.
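In spirit (this is a conceptual sketch, not the AgentInstruct implementation, and the function names are hypothetical), the generate-then-validate loop looks something like this:

```python
# Conceptual sketch only: two model "roles" cooperate to produce synthetic
# training data. A generator drafts a prompt/response pair for a topic; a
# validator checks it; only validated pairs are kept for training.
from typing import Callable

def make_synthetic_dataset(
    generate: Callable[[str], dict],      # e.g., a call to a large "teacher" model
    validate: Callable[[dict], bool],     # e.g., a checker/critic model or unit test
    seed_topics: list[str],
    per_topic: int = 4,
) -> list[dict]:
    dataset = []
    for topic in seed_topics:
        for _ in range(per_topic):
            example = generate(topic)     # {"prompt": ..., "response": ...}
            if validate(example):         # discard low-quality or incorrect pairs
                dataset.append(example)
    return dataset

# Hypothetical stand-ins so the sketch runs without any model backend:
demo = make_synthetic_dataset(
    generate=lambda t: {"prompt": f"Explain {t} step by step.", "response": f"{t}: ..."},
    validate=lambda ex: len(ex["response"]) > 3,
    seed_topics=["modular arithmetic", "binary search"],
)
print(len(demo), "validated examples")
```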
We are making a lot of progress in using these methods for even more complex, multi-step tasks—helping to train agents capable of reasoning and acting to solve more problems. One of the directions I am most excited about these days is the convergence of agent-based simulation, synthetic data generation, and reinforcement learning. The goal is to build scalable simulation environments for many tasks—where models not only create tasks and data, but also receive feedback on their actions. This represents a step toward self-play systems, enabling agents to learn from the outcomes of their actions within large-scale simulated environments.
Who or What Has Influenced Your Thinking the Most in Your Research Journey?
I've been fortunate to have several mentors who shaped my approach to research, but if I were to choose only one, it would be Susan Dumais, whom I was lucky to work with over many years in different capacities: mentor, manager, and collaborator. Two lessons stand out from working with her: first, her unwavering curiosity and commitment to a rigorous, thoughtful approach to research and experimentation; second, her remarkable ability to select the right problems to work on, despite her often attributing it to luck.
On the other hand, one of the most meaningful aspects of leading my team has been how much I’ve learned from the people I work with. Each person brings a different perspective and strengths, and I’ve found myself adopting ideas and approaches from many of them. Sometimes it’s a new technical insight, sometimes it’s a creative way of framing a problem, and other times it’s the persistence and conviction when they believe strongly in a direction. Being surrounded by talented individuals working together toward an ambitious goal is one of the most rewarding aspects of my job.
How Do You Hope Your Research Will Impact People or Society in the Next Decade?
We are already witnessing significant progress in both capabilities and adoption of AI agents, and I’m eager to see continued advancements in the near future—particularly in making them more reliable and trustworthy, so they can increasingly augment human capabilities in everyday life.
I also hope that we will make progress in making AI more accessible and affordable, reaching areas and communities that typically have limited access to technology—serving as tutors, healthcare advisors, etc.
What Advice Would You Give to Someone Curious About Working in AI Research?
Working on AI right now is one of the most exciting opportunities out there and we are lucky we have the chance to participate in shaping this technology. My advice is:
Build a strong foundation, especially in math, statistics, and coding, but also think about how to bring a multidisciplinary perspective to AI
Be curious and have a learning mindset. The availability of learning materials has never been better, but the field is also moving so fast that we must always be in learning mode
Learn by doing: participate in open-source projects, replicate experiments, embed yourself in a community

Ahmed Awadallah
Role: Partner Research Manager
Company: Microsoft
Ahmed Awadallah is a Partner Research Manager at Microsoft AI Frontiers Lab, where he leads teams that drive innovation in agentic AI—e.g., AutoGen, Magentic-One, and OmniParser—as well as the development of small language models like Orca, Phi-3, and Phi-4-reasoning, and advancements in synthetic data generation and distillation. His work centers on enhancing the agentic capabilities of AI—making it more reliable and effective for real-world tasks—and improving efficiency to ensure accessibility across diverse platforms.
Previously, Ahmed led model compression and distillation efforts at Microsoft and even earlier contributed to projects in AI for productivity, Web search, and language understanding. Ahmed is also the recipient of the 2020 Karen Spärck Jones Award, recognizing significant contributions to natural language processing and information retrieval.
👉 Connect with Ahmed on LinkedIn
Jina Suh
What Does a Typical Workday Look Like for You?
My typical day varies drastically depending on the day of the week, whether school is in session, or what sports or after-school activities my daughter happens to be doing. I do drop-off and pickup for my daughter and take the motherly feeding instinct to the extreme, so I’m constantly thinking about how I can coordinate early morning and afternoon meetings with food prep. So, this is a tricky question to answer, but let's look back at a summer day when my daughter was away at summer camp.
A Jina Thursday in July: AI crisis management day; [no school drop off]
7:15-7:30 am: Wake up, wash face, get dressed, pack up
7:30-8 am: Commute to work.
8-8:30 am: Check email, missed messages, open my project tracker spreadsheet to orient myself on all my to-do's.
8:30-10 am: Working group meetings on psych influences of AI, community calls and support meeting for my intern’s upcoming talk.
10-10:30 am: AI evaluation coordination meeting to get updates on model availability for testing for psychosocial harms.
10:30-1 pm: Prep for and conduct weekly workshop with external partners on AI crisis management project.
1-2:30 pm: Synthesize findings from our recent AI evaluation exercise and build a set of recommendations for mitigating psychosocial harms in AI design.
2:30-3 pm: Meet with our Chief Scientific Officer, Eric Horvitz, to brainstorm and determine how we establish ourselves as a thought leader in the psych space.
3-4 pm: Recap the day, my outstanding action items (note: there are a lot!) and update my tracker before heading home.
4-4:30 pm: Snack and catch up on email/chat.
4:30-5:30 pm: Commute home and take call from the car to update the team on an AI crisis management project.
5:30-6 pm: Eat, preferably something extremely spicy like Sichuanese
6-10 pm: Focus time to do work where I have the space to think through items like:
Study design for AI crisis management flow evaluation and an ask for funding;
Handle new questions about content moderator wellbeing study from IRB;
Review progress on AI longitudinal dependency study.
10 pm: Pack for 3 weeks of travel to pick up my daughter from summer camp, go on a road trip with family, and visit colleagues in Microsoft NYC and Atlanta
11 pm: Sleep...finally!
With Your Current Research Focus, What Major Problems Are You Trying To Solve? Why Is This Work Meaningful or Exciting to You?
My research focus is at the intersection of technology and mental health, where I examine the role of technology in improving human wellbeing. The current set of problems I am trying to solve are (1) understanding the space of potential psychological harms or risks that AI can introduce and (2) identifying ways that AI can be designed to maximize human potential/wellbeing. We are at the early stages of understanding a phenomenon that could unfold over decades and that could impact not just individuals but society.
Success to me looks like the technology/AI industry recognizing their role in shaping and influencing people’s mental health, regardless of whether that technology is intentionally designed for mental health.

Can You Share a Specific Challenge You’ve Faced Recently in Your Research and How You Approached Solving It?
When it comes to the intersection of mental health and technology, we tend to “medicalize” the issue. It’s easy for us to assume that if we consult clinicians here and there, we’ll eventually figure it out, but the issue is quite nuanced; it requires all disciplines to engage.
It’d be great if we could find one culprit for everything, but the issue is more nuanced and complicated. We can’t place the blame solely on technology for rising mental health issues because we then ignore the societal issue that is mental health care and the social determinants of mental health. We can’t fix the issue of mental health by regulating AI psychological safety alone or by pouring billions of investments into digital mental health solutions powered by AI. We need to have a holistic approach to the problem that challenges the traditional separation between strictly mental health focused technologies and everyday technologies. We need to have transdisciplinary dialogues. This perspective paper describes my approach.
How Do You Decide Which Research Questions Are Worth Pursuing?
The first set of questions I ask are: Why should I be doing this research at Microsoft? Why can’t an academic do it in their institutions? What is something that I can only uniquely do from Microsoft’s perspective? I try to choose research topics where Microsoft’s leadership is absolutely essential in pushing the world toward the right direction. For example, we need a big player like Microsoft to establish standards for how we manage AI human infrastructure (the global ecosystem of people hired as crowd workers and vendor workers that contribute to AI safety). We need a big player like Microsoft to establish psychological safety standards for how AI should be designed and used.
How Do You Show Up for Others in Your Work?
I show up for others by being intentional about creating spaces for others and making sure their needs are met. Time is the most important currency for me, and I make sure that I give others my undivided attention when they need it.
Mentoring interns and students is a big part of my work and is a big priority for me. For interns, I start preparing for their arrival 4-6 months before they get here. 12 weeks is a short time and the intern’s time is very valuable. I make sure that the research topic is crystalized, barriers are identified ahead of time, and partners across the company can provide early input into the project direction.
When I involve others, I try to provide my time and expertise to ensure that they are successful by being as involved as I can in their research project. My involvement varies from being a facilitator of multi-institution negotiations to weekly syncs on advising PhD students.
Again, because of my “why at Microsoft” question, I make sure that the research problem I take on has actual product impact. It’s a balance between what I see as an important research question and what the product groups need to succeed in their own way. Nowadays, I have 100% alignment with product groups. For example, the Microsoft AI team is very much interested in AI safety and making sure that the consumer Copilot successfully engages their users. The question about how to handle crises or how to measure dependency came directly from their needs. I also have a very close relationship with the AI red team, not just for topical alignment (i.e., AI safety) but also for my work around AI human infrastructure wellbeing. I show up for the AI red team when they need to figure out how to evaluate the psychological safety of AI systems and how to set up AI human infrastructure while ensuring worker safety.

Who or What Has Influenced Your Thinking the Most in Your Research Journey?
Mary Czerwinski: I would not be a researcher had it not been for Mary, who invested in both my career and my passion. I was still a Research Software Development Engineer (RSDE) when Mary took me under her wing. At the time, I had a huge appetite for doing research rather than development. Before coming to Mary’s team, I was beginning to rediscover my passion for research (I had dropped out of a Physics PhD program a long time ago), and I felt as though not having a PhD prevented me from being part of research discussions and agendas. Mary encouraged me to pursue a PhD at the University of Washington and supported me in doing so while also working at MSR. I felt extremely guilty for asking for a part-time accommodation during the first year, when I had the highest course load. I remember telling Mary about this guilt. She told me that she was making an investment in me because she knew that Microsoft would reap the benefit of it in 5-6 years when I graduated. Mary treated me like a partner and a collaborator. She saw the value I brought to the research world as a developer, and I made sure that I was as productive as any other researcher. Mary pushed to have my title changed from RSDE to Researcher, even before I finished my PhD. Mary was also the one who lit the fire under me to go all in on mental health. Mary was a pioneer at Microsoft, leading research on emotional intelligence and mental health. And she created the space for me to prioritize and thrive in this work.
What Role Do You Think AI Will Play in Everyday Life in the Near Future?
Everyone thinks AI will transform people’s lives by increasing productivity and affecting jobs. I think that’s true, but I believe psychological impacts will be the key. The real transformation, which we’re only beginning to see, is in how people conceptualize the role of AI in their day-to-day lives, how the psychological influences of AI change people’s motivations and behaviors, how that impacts the wellbeing of society, how regulators react, how societal pressures push the AI industry toward different products, and so forth. As much as we want to simplify AI as a tool, it’s become more than that – for some it's a listener, a friend, a therapist, an advisor. The psychological impact will vary from person to person and context to context, as we have seen with past technological advancements. The important thing to notice here is that we (Microsoft) get to nudge the direction of this impact. For example, our recent research on AI dependency is finding that people conceptualize AI dependency as functional reliance, and that this reliance is being imposed by society and external forces (e.g., work pressure to use AI). So, my simple answer to what role AI will play is: what we shape that role to be.

Jina Suh
Role: Principal Researcher
Company: Microsoft
Jina Suh is a Principal Researcher at Microsoft Research exploring the intersection of AI, mental health, and psychological safety. Her interdisciplinary work draws from HCI, Affective Computing, and Psychology to design human-centered AI systems that promote wellbeing while anticipating and mitigating potential harms.
She studies how technology design and development practices influence mental health across clinical, workplace, and everyday contexts. Jina received her PhD in Computer Science and MS in Human-Centered Design and Engineering from the University of Washington and previously worked as a developer at Xbox.
👉 Connect with Jina on LinkedIn
Cecily Morrison
What Does a Typical Workday Look Like for You?
Every day looks different. Many days we are working together as a team or with users. Below is an example of what a day might look like when I am doing individual work from home.
6 am: Team meeting (including UK and Australia)
7 am: 1-1 with direct report
7:30 am: mentoring call
8 am: Work time (Review paper for a colleague)
10 am: Partner meeting
11 am: Intern supervision
12 pm: Lunch
12:30 pm: Maths lesson with my (blind) son
1:15 pm: Email
2 pm: Meeting with HR about new policy
3 pm: Pick up kids from school
With Your Current Research Focus, What Major Problems Are You Trying To Solve?

The world is a kaleidoscope of people, rich in history, cultural nuance, and different ways of being. My research team focuses on how we bring that plurality into AI models. AI should be the mirror of the full richness of our societies, but it is currently limited by data, architectures, and evaluation approaches.
We’ve started to imagine what the next generation of AI could look like through thinking about the key building blocks of AI: data and evaluation. While people have long focused on data, we have taken the stance that data and evaluation cannot be separated. To choose the right data to train a system on, we must also define what we want the outcome to be and find ways that we can evaluate that outcome at scale.
We have brought this idea from theory to practice in building AI data stewardship tools for marginalized communities, helping them define what “good” representation means for their community in AI media generation and then supporting that notion of “good” with data and metrics. In doing so, we make space for many community voices to shape what AI produces.
Communities know what matters to them, but they don’t always know how to express that to an AI system. A recent example of this challenge was creating highlights, often referred to as bounding boxes in the AI literature. Highlights help the AI system know what matters in an image during training. In a recent project, we asked community leads to highlight image elements they considered important for representing their community. In one example, community leads annotated general items, such as a wall clock and a coat hanger, rather than the representational aspiration of their community: that the image showed a politician of short stature using adaptive furniture. To address this mismatch between how people think and the needs of an AI system, we made several changes to the user experience. We limited annotations to two categories of community-relevant elements, ‘objects’ and ‘people/animals’, and capped each image at five bounding boxes to encourage meaningful selections. By tailoring our highlight instructions to these categories, we achieved a closer coupling of the annotation task to community-specific elements and representational aspirations, which improved the relevance of annotations over more generic labels.
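A simplified sketch of how such constraints could be enforced in an annotation tool (hypothetical schema and names, not our actual tooling) might look like this:

```python
# Minimal sketch (hypothetical schema): enforcing the annotation constraints
# described above, at most five highlights per image and only two
# community-relevant categories, before annotations enter a training set.
from dataclasses import dataclass

ALLOWED_CATEGORIES = {"objects", "people/animals"}
MAX_BOXES_PER_IMAGE = 5

@dataclass
class Highlight:
    category: str                            # one of ALLOWED_CATEGORIES
    box: tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max), normalized to [0, 1]

def validate_annotations(highlights: list[Highlight]) -> list[str]:
    """Return a list of problems; an empty list means the image passes."""
    problems = []
    if len(highlights) > MAX_BOXES_PER_IMAGE:
        problems.append(f"too many highlights: {len(highlights)} > {MAX_BOXES_PER_IMAGE}")
    for h in highlights:
        if h.category not in ALLOWED_CATEGORIES:
            problems.append(f"unexpected category: {h.category!r}")
        x0, y0, x1, y1 = h.box
        if not (0 <= x0 < x1 <= 1 and 0 <= y0 < y1 <= 1):
            problems.append(f"malformed box: {h.box}")
    return problems

print(validate_annotations([Highlight("objects", (0.1, 0.2, 0.4, 0.6))]))  # [] -> passes
```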

What’s Something You Wish More People Understood About Working in AI Research?
It is not all about the models.
What Emerging AI Trends Are You Most Curious About, and Why?
I believe teachable AI will be critical to creating AI systems that work for everyone. Teachable AI systems allow users to provide their own examples to teach an AI system new concepts. No AI developer or company can fathom what the entire world needs. We need to provide infrastructure that allows communities and individuals to make AI their own. The quickly developing field of post-training is opening up a myriad of ways that we can inject data into models to create bespoke experiences. I look forward to these methods being extended in ways that allow users to directly shape their own AI experiences and outputs.
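One way to picture the teachable idea (a toy sketch with a stand-in embedding function, not a real system) is a recognizer that learns a new concept from a handful of user-supplied examples:

```python
# Conceptual sketch (hypothetical embedding function): "teachable" personalization,
# where a user supplies a few of their own examples of a new concept and the
# system recognizes that concept by nearest-centroid matching in embedding space.
import numpy as np

def embed(item: str) -> np.ndarray:
    """Stand-in for a real image/text encoder; here, a toy hash-seeded vector."""
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    return rng.normal(size=16)

class TeachableRecognizer:
    def __init__(self):
        self.centroids: dict[str, np.ndarray] = {}

    def teach(self, concept: str, user_examples: list[str]) -> None:
        # A few user-provided examples define the concept as an embedding centroid.
        self.centroids[concept] = np.mean([embed(x) for x in user_examples], axis=0)

    def recognize(self, item: str) -> str:
        v = embed(item)
        return min(self.centroids, key=lambda c: np.linalg.norm(v - self.centroids[c]))

r = TeachableRecognizer()
r.teach("my white cane", ["cane photo 1", "cane photo 2"])
r.teach("my front door key", ["key photo 1", "key photo 2"])
print(r.recognize("cane photo 1"))  # with this toy embedding, very likely "my white cane"
```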
How Do You Show Up for Others in Your Work?

Critical to our work is our multidisciplinary team. We have people whose expertise comes from human-centered disciplines, such as design and human-computer interaction, as well as people trained in machine learning and engineering. With such a tightly knit team, we all show up for each other. We give each other time to ask questions and learn about concepts from other domains; we step in for each other when someone needs a moment for their personal life; and we value each person’s growth and make space for it in our projects. It is always a pleasure to welcome interns onto the team, many of whom have never had the opportunity to work in a multidisciplinary team, and to offer them a research space that is open and collaborative.
Who or What Has Influenced Your Thinking the Most in Your Research Journey?

Photo taken by design student Yinyin Zhou as part of her project developing a speech computer for my son.
My family’s lived experience of disability has been one of the most significant steers in my research career. Having a very capable child with significant disability gives me a daily reminder that we cannot serve our marginalized communities “later.” When we are innovating, we should be mindful that every AI decision we make needs to enable AI to be inherently extensible to all people. The common engineering approach that “this is a hard problem; we’ll solve for the easy 80% and then we’ll figure out how to extend it to everyone” unfortunately leaves many underserved by technology. With the right approach and innovation focus, we can build models in ways that are extensible to everyone.
How Do You Hope Your Research Will Impact People or Society in the Next Decade?
We are currently building the foundation for large model AI. We need to build that foundation such that it is extensible to everyone. We do not want to institute an AI divide that mirrors the digital divide, causing prosperity divisions within countries and across the world. A big part of that is making sure we have robust mechanisms for evaluation. If we haven’t defined where we are going through how we evaluate, we aren’t likely to get there. I hope that our research in bringing community voice into AI measurement practices is an important part of the puzzle of making AI that reflects the diverse colors of our world.
What Advice Would You Give to Someone Curious About Working in AI Research?
AI research is for anyone who wants a hand in shaping the world. We necessarily work across domains and skill sets, and we seek diversity of perspectives, which opens many doors to entry. The deeper your knowledge of people and their lived experience, the better AI researcher you will be.

Cecily Morrison
Role: Sr. Principal Research Manager
Company: Microsoft
I am a Sr Principal Research Manager in Equitable AI at Microsoft Research Cambridge. I co-lead the Teachable AI Experience team (TAI X) which aims to innovate new human-AI interactions that bring us to a more inclusive society.
I believe strongly that we must innovate the machine learning techniques that we use in conjunction with designing new types of experiences. I hold a PhD in Computer Science from the University of Cambridge and an undergraduate degree in Ethnomusicology from Barnard College, Columbia University.
👉 Connect with Cecily on LinkedIn






