Thanks for subscribing to Digital Journal's What does this really mean? newsletter.

Follow us on LinkedIn for updates throughout the week. 

Photo by Harold Mendoza on Unsplash

AI has learned how management asks for money. 

The formula is familiar to anyone who has worked in an office for more than six months.

  • The message sounds urgent, the details are vague, and the request involves money moving somewhere quickly.

  • Asking follow-up questions would technically be possible, but it would also feel like a career-limiting move.

Artificial intelligence appears to have mastered this communication style. 

My colleague Jennifer Friesen reported this week that a new survey found 72% of Canadian companies say AI-enabled fraud cost them between 1% and 5% of annual profits last year.

Let's pause for a second and read that again.

Up to 5% of annual profits, gone, because somewhere in the organization an email sounded important and everyone decided it was someone else's job to double-check.

That is profit disappearing because software learned how executives write emails.

Jennifer’s full story is here:

The awkward moment when corporate tone becomes a vulnerability

The technology involved in these scams sounds impressive. AI systems can now generate convincing emails, produce realistic invoices, and even clone an executive’s voice well enough to request a wire transfer.

I can personally confirm how convincing these can be. Our accounting team receives emails like this almost monthly. The first time I saw one, I honestly could not believe how real it sounded.

In one case, a scammer impersonating a broker for a Boston-based software company generated a large invoice along with a lengthy, sophisticated, and entirely fake email chain, spoofed from an address that looked like mine. It ran to multiple back-and-forth emails between fake me and fake them. If you worked in accounting and saw that thread, it looked legit.

The message claimed that $57,000 was owed for consulting work and was sent directly to our accounting department.

The email allegedly came from me.

It sounded like me and it looked like it came from me.

It was entirely fabricated.

The awkward part is that none of this required breaking into a system.

It required sounding like management.

According to the survey Jennifer reported on, the most common attacks involve AI-generated phishing emails and chat messages at 60%, manipulated documents at 39%, and voice-clone calls impersonating executives at 24%.

None of this is over-the-top hacking. It is often just an email that sounds legitimate, from a sender who appears senior enough that questioning it would take a certain amount of courage.
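To make that concrete: one cheap technical control is flagging lookalike sender domains before any human has to weigh how senior the message sounds. Here is a minimal sketch in Python; the trusted-domain list and the edit-distance cutoff are my own illustrative assumptions, not anything from the survey.

```python
from email.utils import parseaddr

# Hypothetical allowlist: the domains your organization actually sends from.
TRUSTED_DOMAINS = {"digitaljournal.com"}

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def flag_sender(from_header: str) -> str:
    """Classify a From: header as 'trusted', 'lookalike', or 'external'."""
    _, addr = parseaddr(from_header)
    domain = addr.rpartition("@")[2].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # A domain one or two edits away from a trusted one is the classic
    # spoofing trick: digitaljourna1.com, digtaljournal.com, and so on.
    if any(0 < levenshtein(domain, d) <= 2 for d in TRUSTED_DOMAINS):
        return "lookalike"
    return "external"
```

Real mail systems lean on SPF, DKIM, and DMARC for this; the point of the sketch is only that "looks like the boss's address" is something software can check, rather than leaving it to a nervous accountant.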

The survey shows a whopping 94% of Canadian executives say they are concerned about AI-driven attacks in the coming year, and only 26% say they have a response plan designed to deal with them.

Corporate translation: everyone agrees this is a problem and someone will definitely start a working group.

The TL;DR for your finance team...

A few details from Jennifer's article worth underlining so you can share them with your finance team:

  • Fraud used to involve breaking into systems. Now it often involves imitating someone who already works there.

  • The scams succeed because they look ordinary. A request to move money, update vendor details, or approve a document is exactly what many teams process all day.

  • Companies built their workflows around speed. Messages move quickly, approvals move quickly, and money moves quickly because the sender looks legitimate.

  • Security teams are discovering that the weakest point in the system is not always a server or a firewall. It is the moment someone thinks, “This sounds like something my boss would ask.”
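Those bullets boil down to one control: above some amount, no single message should be able to move money on its own, however senior the sender sounds. A toy sketch of that rule, where the threshold and field names are illustrative assumptions rather than anything prescribed by the survey:

```python
from dataclasses import dataclass

# Illustrative cutoff: payments above this always require confirmation
# through a separate channel (a call to a known number, never a reply
# to the email that made the request).
CALLBACK_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    requested_by: str                 # who the email claims to be from
    confirmed_out_of_band: bool = False

def can_pay(req: PaymentRequest) -> bool:
    """Apply the callback rule before releasing any funds."""
    if req.amount <= CALLBACK_THRESHOLD:
        return True
    # Above the threshold, the email alone is never enough.
    return req.confirmed_out_of_band
```

The $57,000 invoice from "me" would have died at this check the moment someone dialed my actual extension.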

The next wave of enterprise security will focus on authenticity. Verifying voices, documents, and identities may soon become as routine as verifying passwords.

Some light reading from the week in business, where the technology keeps getting smarter and the decision-making occasionally does not —

Your cloud contract just got political
Cloud infrastructure used to be an IT decision about price and reliability. Now it increasingly comes with geopolitical implications, which is not something most procurement teams expected to manage. But this is 2026. It's fine. Everything's fine.

In a concentrated banking system, Atlantic fintech looks south
Canada’s banking sector leaves little room for newcomers. Some Atlantic fintech companies are discovering the most logical place to scale is not across Canada but across the border.

The trust gap holding CIOs back in the AI era
CIOs are being told to adopt AI quickly while also ensuring nothing goes wrong. That is a delicate balance for technologies most organizations are still figuring out how to govern. If this issue is a challenge in your organization, you probably want to be at the 2026 CIO Peer Forum, taking place April 15-16 in Vancouver. We'll be there.

Final shots

For years corporate cybersecurity focused on protecting systems.

  1. Protect the servers.

  2. Protect the network.

  3. Protect the infrastructure.

AI fraud exposes a more awkward reality.

Most decisions inside companies do not happen inside secure systems. They happen inside email threads, chat messages, phone calls, and routine approvals that move ahead because the request sounds legitimate.

Artificial intelligence did not invent a new vulnerability. It simply learned how companies already talk to each other.

Until next week,

Keep Reading