Is Microsoft's AI Having an Impact on Earnings and Productivity?

Microsoft's recent quarterly earnings report and AI productivity study have sparked discussions about the real-world impact of AI tools in the workplace.

While the tech giant boasts impressive growth in AI services, its latest research on AI productivity raises questions about the effectiveness of current implementation strategies.

I broke down the earnings and research with Marketing AI Institute founder and CEO Paul Roetzer on Episode 108 of The Artificial Intelligence Show.

The numbers game: Microsoft's AI growth

Microsoft’s recent earnings presented a mixed picture. 

On one hand, the company beat overall revenue and earnings per share expectations. Overall revenue grew by 21% year over year. And, the company noted that AI services contributed 8 percentage points of growth to Azure and other cloud services revenue.

On the other hand, the company fell short of cloud revenue expectations ($28.5 billion vs. an anticipated $28.7 billion).

That caused the stock to drop a bit after earnings.

However, as Altimeter partner Jamin Ball pointed out, the AI piece of the business has some wildly compelling numbers.

The productivity puzzle: Microsoft's AI study

Alongside its earnings, Microsoft also released a large study on AI productivity in the workplace.

The report is called Generative AI in Real World Workplaces. It synthesizes findings from a number of recent Microsoft studies on the impact of generative AI in real-world work environments. And Microsoft calls it "the largest controlled study of productivity impacts in real-world generative AI."

Yet, while you might expect some groundbreaking results from a study of this kind…

The findings were a bit, well, underwhelming. Microsoft found things like:

  • Copilot users read 11% fewer emails
  • They spent 4% less time on emails
  • Users edited 10% more documents using the tool

Roetzer expresses skepticism about the study's approach and findings.

"Is email really the interesting use case here?" Roetzer questions. "[The research] was focusing obviously on products and capabilities that [Copilot] enables. So they looked at meeting time and emails and stuff like that. I feel like they were assessing this against features that I don't find that interesting."

As Roetzer sees daily in talks, workshops, and conversations with customers and partners, organizations today are pursuing far more innovative use cases for generative AI than the ones the study measured.

The missed opportunity

Part of the problem may lie in how Microsoft structured its research, says Roetzer. He points to three issues:

  1. Limited scope: The research focused primarily on email and meeting metrics, ignoring more innovative use cases.
  2. Lack of training: There's no mention of user education or onboarding for Copilot.
  3. Homogenous applications: The study didn't account for different roles or departments within organizations.

"If you're Microsoft and you want to show the value of Copilot, giving it to 6,000 people across 60 organizations [as one study did] and waiting to see if they sent fewer emails or if they spent less time in meetings, if that's your measurement of whether or not they got value from Copilot, then I think you've got a bigger problem on your hands as Microsoft," Roetzer argues.

The bigger picture: AI's true potential

While Microsoft's study may have fallen short, it does highlight a crucial gap in current AI implementation strategies.

As companies invest heavily in AI tools, there's a pressing need for more creative and targeted approaches to maximize their potential.

As AI tools become increasingly prevalent in the workplace, companies must look beyond surface-level metrics and generic use cases. The key to unlocking AI's true potential lies in:

  1. Tailoring AI applications to specific roles and departments
  2. Investing in comprehensive training and education programs
  3. Establishing clear benchmarks and regularly measuring impact
  4. Encouraging creative thinking about how AI can transform workflows

"Someone has the opportunity to do the largest [study] that actually customized use cases, trained people how to use the platform and benchmark [performance] before and after," Roetzer challenges.
