Microsoft's recent quarterly earnings report and AI productivity study have sparked discussions about the real-world impact of AI tools in the workplace.
While the tech giant boasts impressive growth in AI services, its latest research on AI productivity raises questions about the effectiveness of current implementation strategies.
I broke down the earnings and research with Marketing AI Institute founder and CEO Paul Roetzer on Episode 108 of The Artificial Intelligence Show.
The numbers game: Microsoft's AI growth
Microsoft’s recent earnings presented a mixed picture.
On one hand, the company beat revenue and earnings per share expectations, with overall revenue growing 21% year over year. The company also noted that AI services contributed 8 percentage points of growth to Azure and other cloud services revenue.
On the other, it fell short of cloud revenue expectations ($28.5 billion vs. an anticipated $28.7 billion).
That caused the stock to drop a bit after earnings.
However, as Altimeter partner Jamin Ball pointed out, the AI piece of the business has some wildly compelling numbers.
Some data / stats on AI related products at Microsoft. Real revenue!

Azure AI Services
- $5b run rate up 900% YoY
- 60k customers up 60% YoY
- Responsible for ~8% of overall Azure growth this Q

Developer Tools
- GitHub at $2b run rate (It was ~$1b in Sept '22)
- GitHub…

— Jamin Ball (@jaminball) July 30, 2024
The productivity puzzle: Microsoft's AI study
Now, around the same time as its earnings, Microsoft also released a large study on AI productivity in the workplace.
The report is called Generative AI in Real World Workplaces. It synthesizes findings from a number of recent Microsoft studies on the impact of generative AI in real-world work environments. And Microsoft calls it "the largest controlled study of productivity impacts in real-world generative AI."
Yet, while you might expect some groundbreaking results from a study of this kind…
The findings were a bit, well, underwhelming. Microsoft found things like:
- Copilot users read 11% fewer emails
- They spent 4% less time on emails
- Users edited 10% more documents using the tool
Roetzer expresses skepticism about the study's approach and findings.
"Is email really the interesting use case here?" Roetzer questions. “[The research] was focusing obviously on products and capabilities that [Copilot] enables. So they looked at meeting time and emails and stuff like that. I feel like they were assessing this against features that I don't find that interesting."
As Roetzer sees daily in talks, workshops, and conversations with customers and partners, organizations today have far more innovative use cases for generative AI than the ones the study measured.
The missed opportunity
Part of the problem may lie in how Microsoft chose to structure its research, says Roetzer.
- Limited scope: The research focused primarily on email and meeting metrics, ignoring more innovative use cases.
- Lack of training: There's no mention of user education or onboarding for Copilot.
- Homogeneous applications: The study didn't account for different roles or departments within organizations.
"If you're Microsoft and you want to show the value of Copilot, giving it to 6,000 people across 60 organizations [as one study did] and waiting to see if they sent fewer emails or if they spent less time in meetings, if that's your measurement of whether or not they got value from Copilot, then I think you've got a bigger problem on your hands as Microsoft," Roetzer argues.
The bigger picture: AI's true potential
While Microsoft's study may have fallen short, it does highlight a crucial gap in current AI implementation strategies.
As companies invest heavily in AI tools, there's a pressing need for more creative and targeted approaches to maximize their potential.
As AI tools become increasingly prevalent in the workplace, companies must look beyond surface-level metrics and generic use cases. The key to unlocking AI's true potential lies in:
- Tailoring AI applications to specific roles and departments
- Investing in comprehensive training and education programs
- Establishing clear benchmarks and regularly measuring impact
- Encouraging creative thinking about how AI can transform workflows
"Someone has the opportunity to do the largest [study] that actually customized use cases, trained people how to use the platform and benchmark [performance] before and after," Roetzer challenges.