What is an AI policy, and does your practice need one?

As AI adoption rapidly outpaces conversations around ethical, legal and risk implications, how can an AI policy help accounting businesses and their clients?

18 Jun, 2024

  • Practices that adopt AI tools without fully considering the impacts create risks for their practice and clients.
  • An appropriate AI policy can help manage risks related to confidentiality, responsible use, and ethical considerations.
  • Effective AI policies should be tailored to each practice, addressing data privacy, accuracy, employee training, compliance, and communication to ensure safe and responsible AI use.

Many organisations, large and small, are moving at breakneck speed to bring artificial intelligence (AI) initiatives on board, says Dr Niran Subramaniam, an Associate Professor in Financial Management and Systems at Henley Business School.

As a result, he says, some businesses risk not being as precise as they should be in constructing policies and frameworks in which such powerful technologies can be used.

“This introduces risk to the strategy, because they’re embracing technological toolsets and systems without having thought through the risks inherent in them,” Subramaniam says.

“Policies are there to protect organisations from such risks, including around confidentiality, responsible use of AI, ethical use of AI and so on.”

Subramaniam, who held senior finance and information systems roles in the financial services, telecommunications and higher education sectors before entering the academic sector, recently published a journal article titled Digital transformation and artificial intelligence in organisations.

The paper explores how businesses are discovering the revolutionary and transformative power of AI and proposes a framework for successful digital transformation.

An essential ingredient in the mix, Subramaniam says, is a comprehensive AI policy.

What is an AI policy?

It’s important to define “policy”, Subramaniam says, before discussing how it applies to AI and business.

A policy is a set of principles and rules that an organisation chooses to abide by. A policy draws boundaries and communicates important information to stakeholders.

“It says, this is what we are all about. This is what we stand for and care about. These are the things we watch out for,” he says. “All of those ideas packaged into one become a policy.”

In developing an AI policy, small accounting firms and the businesses that make up their client lists should first decide what their core values are. This is not specifically in relation to the use of AI, but more generally.

Values can then be broken down into principles that must be upheld, and from those principles come policies.

“For example, if we were to look at data privacy and security, a policy might be to ensure the protection of sensitive client data,” Subramaniam says. “When it comes to accounting practices, that could be about maintaining compliance with data privacy laws.”

An AI policy, then, outlines behaviours and frameworks around the use of AI and the data, information and insights it produces, driven by the business’s values and principles.

What should an AI policy cover?

The content of a specific AI policy should be as unique as the organisation itself, Subramaniam says.

Every business’s reason for using AI will be different. There is no such thing as a one-size-fits-all approach to an AI policy.

“Do we want to look at patterns in data?” Subramaniam says. “Do we want to understand trends in the market? Do we want to make forecasts and predictions based on trends? Artificial intelligence is very good at assimilating data from individual data points to synthesise that data.”

And there’s the catch if individual client data is being analysed with AI.

“Perhaps you’re trying to understand the types of accounting challenges each client might have during the year-end audit, or during the course of their business,” he says.

“When I log all that client or product data together and run an AI algorithm, it gives me a pattern that shows every client has bad debt of around two per cent, for example. In that context, it is important that accountants and IT personnel do not spill that data outside of the organisation, because that still affects data privacy.”

If you misplace a single client’s file, the risk is limited to that single client. But leak a file containing all client names and transactions – the type that is required for AI to perform – and it introduces an entirely new level of risk.

What else should an AI policy cover, besides privacy and confidentiality?

Subramaniam says other areas of focus for policies should include accuracy and integrity of data, data validation, data verification, employee training programs, and compliance and regulatory adherence.

Even seemingly unrelated departments such as marketing should be involved, he says, as AI will inevitably influence the work of everyone in the business.

“Think of customer service and client communication,” Subramaniam says. “Who is going to communicate properly, promptly and adequately with clients to ensure they are well informed about the use of AI tools, including what you’re using, how you’re using it and how personal data are being used and protected?”

Examples of AI policies

Many of AI’s shortcomings are well known, and an appropriate AI policy gives a business a structure for minimising the risk of harm.

For instance, given known issues with AI accuracy and reliability, an AI policy might disallow autonomous decision-making except in controlled circumstances, defining where and when human oversight is required.
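As a purely illustrative sketch of how such a rule might be expressed in software, the gate below escalates any AI-generated decision to a person unless it falls within narrow, low-risk bounds. All names and thresholds here are hypothetical examples, not taken from any real product or from the practices discussed in the article.

```python
# Hypothetical policy gate: block autonomous AI decisions unless they
# fall within narrowly defined, low-risk circumstances.
from dataclasses import dataclass


@dataclass
class AiDecision:
    description: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    amount: float      # monetary value affected by the decision


def requires_human_review(decision: AiDecision,
                          min_confidence: float = 0.95,
                          max_amount: float = 1000.0) -> bool:
    """Return True when the policy demands human sign-off.

    Autonomous action is allowed only when the model is highly
    confident AND the financial impact is small; everything else
    is escalated to a person.
    """
    if decision.confidence < min_confidence:
        return True
    if decision.amount > max_amount:
        return True
    return False


# Example: a low-confidence bad-debt write-off must be escalated.
write_off = AiDecision("Write off invoice #1042 as bad debt", 0.80, 250.0)
print(requires_human_review(write_off))  # True: confidence below threshold
```

The design point is that the boundary of "controlled circumstances" is written down explicitly, so it can be reviewed and audited rather than left to individual judgment.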

Concerns around data privacy might also mean setting benchmarks for how any AI tool a business engages with treats data. This might mean forbidding any client or personally identifiable data from being entered into systems that might use it to train their models.

Reviewing terms of service and choosing AI tools that comply with the policy is crucial here. For instance, the free version of ChatGPT may use conversations to train its models by default, whereas paid tiers offer settings to turn this off.
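One way a practice might operationalise such a rule is a simple pre-submission check that scans a prompt for obviously identifiable data before it leaves the firm. The sketch below is a simplified illustration only; the patterns are example regexes, not a complete or production-grade PII detector.

```python
# Illustrative pre-submission check: scan a prompt for obvious
# personally identifiable data before sending it to an external AI tool.
# The patterns below are simplified examples, not an exhaustive detector.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
    "national_insurance": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}


def find_pii(text: str) -> list[str]:
    """Return the names of PII categories detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]


def safe_to_submit(text: str) -> bool:
    """Policy rule: block any prompt containing detectable PII."""
    return not find_pii(text)


prompt = "Summarise year-end issues for client jane.doe@example.com"
print(find_pii(prompt))        # ['email']
print(safe_to_submit(prompt))  # False
```

A check like this would sit alongside, not replace, the contractual safeguards: it catches accidental leaks, while the terms-of-service review governs what the tool's vendor may do with whatever does get submitted.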

“We have to think in terms of policies and protocols because of the unknown,” Subramaniam says.

“With AI, we are dealing with something that we have not seen before and that we actually have little knowledge of. We’re not fully aware of the risk and opportunity offered by AI, so when we work with it, we have to be as careful as we can.”


Want to learn more about AI and accounting? Don’t miss Capium co-founder Tushir Patel’s session at next week’s IFA Conference 2024, entitled Is AI going too fast? Things to be mindful of when using this fast-growing tech trend.
