Chatting about Purview, eDiscovery, Copilot and more with Tom O’Connor
Just a little light conversation about eDiscovery, Microsoft 365, Copilot, etc., before you head out to your holiday weekend.
Would you be interested in getting together over a Teams channel to discuss M365, eDiscovery, and other related topics with other subscribers? Perhaps even schedule some chats on occasion? (Paid subscribers, I’ve got some ideas just for you as well.) If you are interested, sign up here. If there’s enough interest, we will definitely get this going in the next few weeks.
This post will be updated throughout the month as new items are added to the tag.
Be sure to subscribe to my M365 Newsletter for more M365 expertise and news.
If you can get Copilot to drop a link into the auto-summary, it would be less suspicious than a link in an email sent from outside. That’s probably true. After all, if you trust your AI summarization tool to summarize an email instead of reading it yourself, why wouldn’t you trust any links it includes?
There is more detail in the announcement above, but the bottom line is this: you can get Defender and a range of E5 Purview tools for an additional $15 USD per user per month. With Business Premium costing $22 per user per month when paid annually, that’s a significant savings over a full E5 license if you have fewer than 300 users.
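To put rough numbers on that, here’s a minimal back-of-the-envelope sketch. The Business Premium and add-on prices come from the announcement above; the $57 per user per month E5 list price is my assumption for illustration, so check current Microsoft pricing before relying on it.

```python
# Back-of-the-envelope licensing comparison.
# Prices from the post: Business Premium $22/user/month (annual commitment)
# plus the $15/user/month add-on. The Microsoft 365 E5 list price of
# $57/user/month is an ASSUMPTION for illustration only.

BUSINESS_PREMIUM = 22.00   # $/user/month, paid annually (from the post)
ADDON = 15.00              # $/user/month add-on (from the post)
E5_LIST = 57.00            # $/user/month, assumed E5 list price

combined = BUSINESS_PREMIUM + ADDON       # $37.00/user/month
monthly_savings = E5_LIST - combined      # $20.00/user/month

for users in (50, 150, 300):  # Business Premium caps at 300 seats
    annual = monthly_savings * 12 * users
    print(f"{users:>3} users: ${combined:.2f} vs ${E5_LIST:.2f} per user/month "
          f"-> ${annual:,.0f} saved per year")
```

At the 300-seat cap, that assumed gap works out to about $72,000 per year, which is why the add-on route matters for smaller organizations.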
Here’s why this is such a big problem. Microsoft recommends blocking Copilot from accessing sensitive information in emails, meetings, documents, and related content by assigning a sensitivity label to those items and creating a DLP policy that defines the block. This bug renders that protection ineffective for the affected emails. You simply can’t ship a governance tool that doesn’t deliver the governance it claims to provide. It’s a bad look, Microsoft, and it doesn’t help build customer trust.
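If it helps to have a mental model of what that label-plus-DLP combination is supposed to guarantee, here’s a minimal conceptual sketch in Python. All names here are hypothetical; this is not Microsoft’s enforcement code, just the contract the policy promises.

```python
# Conceptual model only: NOT Microsoft's implementation. It illustrates the
# contract a "block Copilot on labeled content" DLP rule is supposed to enforce.
from dataclasses import dataclass

# Hypothetical set of sensitivity labels the DLP rule targets.
BLOCKED_LABELS = {"Highly Confidential"}

@dataclass
class Item:
    subject: str
    sensitivity_label: str | None  # None means the item is unlabeled

def copilot_may_ground_on(item: Item) -> bool:
    """Return True only if the item's label is not covered by the DLP block."""
    return item.sensitivity_label not in BLOCKED_LABELS

# The bug described above behaves as if this check returned True for certain
# labeled emails, so Copilot summarizes content the policy says it must skip.
```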
No, Copilot did not make these emails public, nor did it access private information and expose it. It accessed information in response to your prompt that it should have ignored. That creates a risk many users might assume does not exist. That is a significant issue, but it’s not equivalent to a data breach: there is still another check in place before data leaks out, namely the end user.
As I’ve said many times, Microsoft has invested too much money in AI to let users opt out of using it, even if it does ruin everything Microsoft has been known for.