Thanks for subscribing to Digital Journal's What does this really mean? newsletter.

We dig into the week's innovation and business news and try to explain what it actually means for the people running organizations. Follow us on LinkedIn and join the conversation.

We're Canadian media, so Meta's not an option. Don't get Chris started.

JP Lalonde speaks at the 2025 CIO Association of Canada Peer Forum. — Photo by Jennifer Friesen, Digital Journal

There's a pattern JP Lalonde has been watching play out, and he's seen it before.

Build something useful, make it cheap to adopt, wait until people are hooked, then monetize them. 

It worked with social media. We all love being monetized, right? 

This week, I wrote about Lalonde, a former federal government technologist who spent years building AI systems to track money laundering and disrupt terrorist financing. Now he's making the case that the same playbook is running again with AI.

This time, the platforms are after something more valuable than your attention — your institutional memory, your strategic thinking, and your organization's most sensitive work.

Microsoft has already started placing ads inside Copilot conversations, including in some paid tiers. Licensing costs went up in early 2025. The terms changed. 

Not that this should come as a surprise. We all get those "terms have been updated" emails, and I certainly don't read them all. Emails like that are why the archive shortcut key was invented in Gmail.

The adoption decision you already made

My colleague Jennifer Friesen reported this week that 80% of employees have now adopted AI, but the average focused work session has dropped to 13 minutes, and focus efficiency hit a three-year low. What, you’re not tracking your focus efficiency? 

ActivTrak tracked 443 million hours of work activity across more than 1,100 organizations and found that after AI adoption, time spent on email jumped 104% and chat and messaging climbed 145%.

More tools. More to manage. Less time to think.

"We changed location without redesigning work," says Sam Jenkins, managing partner at Edmonton-based Punchcard Systems. "With AI, we're kind of making the same mistake. We're changing tools without redesigning the work."

Jenkins calls it the slingshot effect. Pull too far ahead of your people with new tools and the whole thing snaps back. The leader who has six AI agents running by 6 a.m. and the employee who hasn't touched any of them yet are both telling you something about where the work actually lives.

Put those two things together and you get a picture that doesn't show up often enough in AI strategy conversations. Who controls the platforms. Whether the work has actually been redesigned.

Organizations are adopting tools at scale, feeding them sensitive data, embedding them into daily workflows, and doing almost none of the harder thinking about either.

What the platforms already know about you

  • Employees are using AI tools as journals, strategy aids, and institutional memory, on platforms that operate under U.S. law, with terms that keep changing.

  • Disengagement risk jumped 23% over the same period that burnout risk fell. Workers have capacity, whether it comes from AI or not, and nowhere to put it.

  • Open-source models are closing the performance gap faster than most enterprise AI roadmaps assumed. Building an exit now is cheaper and easier than waiting until you're locked in.

The Watercooler

Some light reading for when your IT team is confident but your legal team is quiet.

Why strong security programs still fail In another article, Friesen spoke with Andrew Archibald, VP of cybersecurity advisory services at Thrive, who says senior IT leadership tends to "round up their grades" on security. Controls exist on paper, access reviews get checked off, policies are documented. What happens day-to-day is a different audit. This connects directly to the data sovereignty question. You can have a sovereignty policy and still have an API key nobody rotates sitting inside a platform you don't control.

Why most robot demos don't hold up in the real world GM's head of robotics strategy makes the case that closing the gap between a polished demo and a functioning factory floor is no small feat. The problems span hardware and software at once, along with the people who have to make the whole thing run. Same issue, different domain.

Indonesian kids brace for social media ban Indonesia's ban on social media for under-16s took effect this week, following Australia's move in December. The onus falls on platforms to enforce it, and nobody has said how compliance will be monitored. Notably, this is a case of governments shifting responsibility to platforms rather than regulating users directly. That's a different model than most Western regulators have tried.

Saskatchewan pairs research funding with $4.3B infrastructure plan Canada has a habit of producing strong research and weaker pathways to scale it. Saskatchewan's budget tries to close that gap by funding both sides at once: applied research, startup incentives, and physical infrastructure. File this one as a reminder that an innovation strategy without the physical layer underneath it risks staying theoretical.

Final shots

Lalonde said something that stood out to me. Corporations shouldn't be able to own people's memories. 

He means it literally. People are pouring conversations, photographs, and personal details into AI platforms they don't control. But it applies to organizations too. The institutional knowledge your teams are building inside AI systems right now is increasingly hard to move, and the terms governing it aren't yours to set. 

One more thing. On April 15 and 16, the Digital Journal team will be in Vancouver covering the CIO Association of Canada's Peer Forum, two days with senior technology leaders from across the country working through exactly the questions this newsletter keeps circling. 

If you've got something you'd like to ask a room full of CIOs, reply to this email. I'll bring the best ones with me.

David
