AI Adoption Without Governance Is a Brand Risk

If you lead an accounting, advisory, or M&A practice, you have probably felt the shift: AI went from “interesting” to “everywhere” in about a year. That speed is the story. The risk is not that a model occasionally gets a detail wrong. The risk is that AI is quietly weaving itself into the real work: internal training, workpapers, diligence analyses, and drafts that become client deliverables. And it is doing so without a layer of governance serious enough to match.

A recent example made the point uncomfortably clear. A partner at KPMG Australia was fined A$10,000 after using an AI tool to cheat on an internal AI training test. The same reporting noted that more than two dozen staff had been caught using AI tools to cheat on internal exams since July 2025.

It is tempting to treat this as an HR footnote. Leadership cannot. In professional services, the moment AI misuse becomes public, it stops being a technology conversation and becomes a brand conversation.

When AI misuse becomes a headline

The KPMG incident is a perfect example of why governance matters more than tool selection. It involved senior professionals, and it happened inside a control environment that is supposed to model integrity. It also illustrates something many leaders are learning the hard way: policies built for a pre-AI world do not automatically translate when AI is as accessible as a browser tab.

The broader signal is this: the line between “internal” and “external” risk is thinner than it looks. An internal training shortcut can become a public narrative about ethics, controls, and culture.

If you are leading a firm through AI adoption, you should assume this: if something goes wrong, the story will not be “a new tool caused a mistake.” The story will be “a trusted firm did not have control of its own process.”

Why accounting and M&A are especially exposed

Professional services firms sit on some of the most sensitive information in the economy. In accounting and M&A, AI is touching workflows where confidentiality, judgment, and auditability are not “nice to have” values. They are the product.

Even when AI use is limited to drafting or summarizing, the risk concentrates in a few places:

    • Sensitive data entering tools that were never approved for it
    • Overreliance on outputs that sound plausible but are not verified
    • Ambiguity about who owns the work when AI contributes

The Journal of Accountancy captured the core issue crisply: without human oversight, risks include blind trust in outputs and AI inadvertently accessing or sharing sensitive information, with or without authorization.

For M&A leaders, the exposure often spikes under deadline pressure. Diligence moves fast. Teams are tired. The incentive to “just run it through a model” can outrun the discipline to ask, “Should this data be here at all, and who is accountable for the result?”

One of the most common misconceptions I see is that governance is a later stage: you adopt the tool, then you govern it. In reality, governance is what determines whether your AI adoption is defensible.

AI governance becomes urgent as AI moves into high-stakes, production use, and the common problems are exactly what professional services firms recognize: unclear ownership, rapidly evolving tools, fragmented processes, and limited auditability.

If that sounds familiar, it should. Most firms did not “roll out AI” in a clean, centralized way. People started experimenting. Then the experiments became habits. Then the habits became embedded in client work.

Governance is the work of catching up to that reality.

The cautionary tale professional services already knows: fabricated citations

If you want a preview of what “AI without verification” looks like under scrutiny, look at what has happened in the legal profession.

In May 2025, The Guardian reported that a law firm hired by Alabama to defend prison litigation used ChatGPT and submitted court filings with false legal citations, prompting a federal judge to consider sanctions and raising questions about how firms are controlling AI use in professional work.

There is a reason these stories travel. They map cleanly onto the fears clients already have: “Did anyone check this?” and “Can I trust your process?”

Accounting and M&A are not immune to the same failure mode. Replace “case citations” with “deal comps,” “tax positions,” or “diligence findings,” and you get the point quickly.

Five leadership moves that reduce AI brand risk

This is the part leaders often want: not a 40-page framework, but the moves that change outcomes fast.

1) Name an executive owner, then make it cross-functional

If AI governance lives only in IT, it will fail. Your risk profile is not an IT problem. It is a professional judgment problem, a confidentiality problem, and a quality-control problem.

An executive owner should be able to convene risk, legal, compliance, and practice leadership, and escalate issues to the same level you would for client data security or audit quality.

2) Put bright lines around the highest-risk work

Your firm does not need perfect rules for every scenario to reduce risk meaningfully. It needs clear boundaries where the cost of failure is highest.

For many accounting and M&A teams, bright lines typically include:

    • client identifiers and sensitive deal data
    • regulated personal information
    • final conclusions, opinions, or signed deliverables without documented human review

The Journal of Accountancy emphasizes “human in the loop” review and strong protocols to safeguard sensitive data. That is the baseline.

3) Require human accountability for every output

This is where governance gets real. Someone must be accountable for the final work product, and the review must be more than a quick glance.

Databricks frames this as an ownership problem: responsibility for outcomes can become fragmented across teams unless governance explicitly assigns accountable individuals or teams.

In professional services, accountability must be named, documented, and enforceable. Otherwise, “AI helped” becomes a fog that hides errors.

4) Train for judgment under pressure, not just tool use

Training that only teaches prompts is incomplete. Your people need scenario training built around the moments they are most likely to slip:

    • the late-night diligence sprint
    • the “client wants it in an hour” email
    • the temptation to paste content that “probably isn’t sensitive”

The KPMG case is a reminder that even internal assessments and internal culture need to be designed for a world where AI makes shortcuts easy.

5) Treat incidents as governance tests, not isolated failures

When something goes wrong, your response tells your organization what you value. The most mature posture is to treat incidents as governance feedback:

    • What controls failed?
    • What training did not land?
    • Where is ownership unclear?
    • What boundary was missing?

That mindset is how governance becomes an operating system, not a policy document.

The question leaders should be asking right now

Most AI conversations still start with: “Which tool should we adopt?”

A better question is: If our AI use became public tomorrow, could we defend our oversight with confidence?

The U.S. Treasury’s AI Lexicon and Financial Services AI Risk Management Framework underline where the broader market is going: toward common definitions, risk-based oversight, and integrating AI governance into existing enterprise risk programs, with control objectives mapped across stages of adoption.

You do not have to be a bank to learn from that direction of travel. The expectation is becoming clearer: AI is not a side project. It is a risk domain that needs governance equal to its reach.


FAQ

What is AI governance in a professional services firm?

AI governance is the set of policies, oversight, and controls that ensure AI use remains accountable, secure, and auditable across workflows, especially where sensitive data and professional judgment are involved.

Why is AI a reputational risk for accounting and M&A leaders?

Because AI can change how work is produced faster than controls evolve. When errors, misuse, or data mishandling become public, the story becomes about trust and oversight, not technology.

Who should own AI governance?

It should have executive ownership with cross-functional oversight, typically involving risk, compliance, legal, and practice leadership, with clear accountability for outcomes.

Innovation without oversight is exposure

AI is not going away, and neither is the pressure to move fast. The differentiator will not be which firm adopts first. It will be which firm can show, consistently, that its standards of confidentiality, judgment, and accountability did not erode when the workflow changed.

Governance is how you keep your brand intact while you scale what AI can do.
