Get Software, Prompts, and Content Right to Make Microsoft 365 Copilot Work
Ever since Microsoft announced Copilot for Microsoft 365 last March, I’ve spent time learning about concepts like generative AI to better understand the technology. I’ve also tracked Microsoft’s announcements to interpret their messaging about Copilot and analyzed the costs organizations face to adopt Copilot. Given the hefty licensing costs, I’ve reflected on how organizations might decide who should get Copilot. You could say that I’ve thought about the topic.
Which brings me to a Microsoft partner session delivered yesterday about preparing for Microsoft 365 Copilot. I wrote on this theme last June, so I wanted to hear the public messages Microsoft gives its partners to use in customer engagements.
Get the Right Software
Mostly, I didn’t learn anything new, but I did hear three messages receive considerable emphasis. The first is that customers need the right software to run Microsoft 365 Copilot. Tenants need:
- Microsoft 365 Apps for enterprise.
- Outlook Monarch.
- Microsoft Loop.
- Microsoft 365 Business Standard, Business Premium, E3, or E5.
Apart from mentioning the semantic index, nothing was said to explain the focus on Microsoft 365 SKUs. The semantic index preprocesses information in a tenant to make it more consumable by Copilot. For instance, the semantic index builds a custom dictionary of terms used in the organization and extracts document excerpts to help answer queries. The idea is that the semantic index helps to refine (“ground”) user queries (“prompts”) before they are processed by the LLM.
Nice as the semantic index is, there’s nothing in the selected Microsoft 365 SKUs to make those SKUs amenable to the semantic index. Microsoft has simply selected those SKUs as the ones to support Copilot. It’s a way to drive customers to upgrade from Office 365 to Microsoft 365, just like Microsoft insists that customers use Outlook Monarch instead of the traditional Outlook desktop client.
Mastering Prompts
Quite a lot of time was spent discussing the interaction between users and Copilot. Like searching with Google or Bing, the prompts given to Copilot should be as specific as possible (Figure 1).

It’s rather like assigning a task to a human assistant. Prompts are written in natural language and should:
- Be precise and detailed.
- Include context (for instance, documents that Copilot should include in its processing).
- Define what output is expected (and what format – like a presentation or document).
The aim is to avoid the need for Copilot to interpret (guess) what the user wants. A human assistant might know what their boss wants based on previous experience and insight gained over time, but Copilot needs those precise instructions to know what to do.
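To make the guidance concrete, a prompt that follows these rules might look something like this (a hypothetical example I wrote to illustrate the structure, not Microsoft wording; the document and channel names are invented):

```
Draft a two-page project status summary for the website refresh project.
Use the "Website Refresh Plan" document stored in the project's SharePoint
site and the status updates posted in the Website Refresh Teams channel
this month. Format the output as a Word document with sections for
progress, risks, and next steps, written for an executive audience.
```

The prompt names the task, points Copilot at the context to use, and defines the expected output and format, leaving as little as possible for Copilot to guess.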
Constructing good prompts is a skill that users will need to build. Given that many people today struggle with Google searches twenty years after Google became synonymous with looking for something, it’s not hard to understand how people might find it difficult to coax Copilot to do their bidding, even if Copilot is patient and willing to accept and process iterative instructions until it gets things right.
Microsoft 365 Copilot is different to other variants like those for Security and GitHub that are targeted at specific professionals. A programmer, for instance, has a good idea of the kind of assistance they want to write code and the acid test of what GitHub Copilot generates is whether the code works (or even compiles). It’s harder to apply such a black and white test for documents.
The Quality of Content
Microsoft talks about Copilot consuming “rich data sets.” This is code for the information that users store in Microsoft 365 workloads like Exchange Online, Teams, SharePoint Online, OneDrive for Business, and Loop. Essentially, if you don’t have information that Microsoft Search can find, Copilot won’t be able to use it. Documents stored on local or shared network drives are inaccessible, for instance.
All of this makes sense. Between the semantic index and Graph queries to retrieve information from workloads, Copilot has a sporting chance of being able to answer user prompts. Of course, if the information stored in SharePoint Online and other workloads is inaccurate or misleading, the results will be equally flawed. But if the information is accurate and precise, you can expect good results.
This leads me to think about the quality of information stored in Microsoft 365 workloads. I store everything in Microsoft 365 and wonder how many flaws Copilot will reveal. I look at how coworkers store information and wonder even more. Remember, Copilot can use any information it can find through Microsoft Search (including external data enabled through Graph connectors), which underlines the need to provide good guidance in the prompts given to Copilot. Letting Copilot do its own thing based on anything it can find might not be a great strategy to follow.
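The indexed content that Microsoft Search exposes can be queried today through the Microsoft Graph search API (POST to the v1.0 /search/query endpoint), which gives a feel for the kind of retrieval Copilot performs over tenant data. As a rough sketch, here is how a search request body over SharePoint and OneDrive documents is shaped; the query string is a made-up example, and actually running the query would require an app registration and a bearer token:

```python
import json

def build_search_request(query_string, entity_types=("driveItem",), size=5):
    """Build a request body for the Microsoft Graph search API
    (POST https://graph.microsoft.com/v1.0/search/query).
    entityTypes of "driveItem" covers documents in SharePoint Online
    and OneDrive for Business; Graph connector content uses
    "externalItem" instead."""
    return {
        "requests": [
            {
                "entityTypes": list(entity_types),
                "query": {"queryString": query_string},
                "from": 0,      # offset for paging through results
                "size": size,   # number of hits to return
            }
        ]
    }

# Hypothetical query a user (or Copilot) might run against tenant content.
body = build_search_request("contract renewal terms")
print(json.dumps(body, indent=2))
```

The point of the sketch is that anything retrievable this way is fair game for Copilot, which is why the quality of what sits behind the index matters so much.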
Lots Still to Learn
Microsoft 365 Copilot is still in private preview (at a stunning $100K fee charged to participating customers). Until the software gets much closer to general availability, I suspect that we’ll have more questions than answers when it comes to figuring out how to deploy, use, manage, and control Copilot in the wild. We still have lots to learn.
If you’re in Atlanta for The Experts Conference (September 19-20), be sure to attend my session on Making Generative AI Work for Microsoft 365, where I’ll debate the issues mentioned here along with others. TEC includes lots of other great sessions, including a Mary-Jo Foley keynote about “Microsoft’s Priorities vs. Customer Priorities: Will the Two Ever Meet?” TEC is always a great conference. Come along and be amused (or is that educated?).
So much change, all the time. It’s a challenge to stay abreast of all the updates Microsoft makes across Office 365. Subscribe to the Office 365 for IT Pros eBook to receive monthly insights into what happens, why it happens, and what new features and capabilities mean for your tenant.