Auditors everywhere have been talking about ChatGPT.
Many are afraid of the implications it might have for audit reports.
If you haven’t heard of it, where have you been?
We are supposed to be aware of new emerging risks.
But okay.
No judgement from me.
Let’s talk about what it is, why auditors missed the real risks, and what to do about it.
What is ChatGPT?
ChatGPT is short for Chat Generative Pre-trained Transformer. It uses advanced machine learning techniques to understand and generate “natural language.” Trained on a vast amount of text data, it can generate human-like responses to questions, responses that many believe are indistinguishable from those written by a human.
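To make that concrete, here is a minimal sketch of how an application might call a model like ChatGPT through OpenAI’s Python library. The interface shown is the early-2023 version of the openai package, and the key, model name, and prompt are placeholders, so treat it as illustrative rather than definitive:

```python
import openai  # pip install "openai<1.0" for this interface

# Placeholder key; in practice, load it from an environment variable.
openai.api_key = "YOUR_API_KEY"

# Ask the model a question and print its natural-language answer.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "In two sentences, what should internal auditors "
                   "know about AI-generated content?",
    }],
)
print(response.choices[0].message.content)
```

That is the whole trick: a few lines of code, and your organization (or anyone else) can generate fluent text at scale. Keep that ease of use in mind as we walk through the risks.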
What were auditors’ initial ChatGPT concerns?
As mentioned previously, many auditors were concerned that ChatGPT would replace them as audit report writers. This concern made it to the pages of the Institute of Internal Auditors’ (IIA) Internal Auditor magazine.
Mike Jacka used it to create a sample audit report for Internal Auditor magazine. Frankly, the report was terrible, and Mike agreed in a follow-up article. So please don’t think this article is disparaging Mike. It is not.
Now back to the topic at hand.
If your only or most pressing audit concern with ChatGPT is that it may write better reports than you, please leave the profession. This was one of the last things to cross my mind. There are many more risks that auditors should consider when it comes to artificial intelligence (AI) and, on a bigger scale, synthetic media at their organizations. Auditors should audit the organization’s artificial intelligence processes.
WHY?
AI is advancing rapidly and changing the way we live and work. One area seeing significant growth is content creation, such as videos, images, and text. While this technology has the potential to bring many benefits, it also raises some important concerns that organizations and auditors should be aware of. Auditing artificial intelligence will be important in 2023.
So let’s talk about it.
The Real Artificial Intelligence Risks for Auditors to Consider
Auditors Must Consider Misinformation Risk from AI
AI models, like ChatGPT, produce results based on the information “entered” into them. The AI trains on the data and learns from it. As with most systems, the output is only as good as the data entered.
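To illustrate the point, here is a toy sketch with made-up numbers. It is a hypothetical fraud-flagging scenario, not anyone’s real model, and it shows that the same algorithm trained on bad labels confidently produces bad answers:

```python
from sklearn.tree import DecisionTreeClassifier

# Feature: [transaction_amount]; label: 1 = fraud, 0 = legitimate.
X = [[100], [120], [9000], [9500]]
y_correct = [0, 0, 1, 1]      # the large transactions are the fraud cases
y_mislabeled = [1, 1, 0, 0]   # the same data, labeled backwards

good_model = DecisionTreeClassifier().fit(X, y_correct)
bad_model = DecisionTreeClassifier().fit(X, y_mislabeled)

print(good_model.predict([[9200]]))  # [1] -- flags the large transaction
print(bad_model.predict([[9200]]))   # [0] -- confidently wrong
```

Garbage in, garbage out. The bad model is just as confident as the good one; the only difference is the data it was fed.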
With so many organizations producing content for social media (ad campaigns, job postings, press releases, and more), it is important to ensure the information is accurate and complete.
One of the biggest concerns with artificially generated content is the spread of misinformation. Deepfake technology, for example, allows for the creation of realistic videos that depict people doing or saying things that never actually happened. These videos can be used to spread false information and manipulate public opinion.
In 2019, a deepfake video of Mark Zuckerberg, the CEO of Facebook, began circulating on the internet. The video depicted Zuckerberg making controversial statements about the company’s control over user data. This video was quickly debunked as fake, but it raised concerns about the potential for deepfakes to be used to spread misinformation and harm reputations.
Furthermore, text generated by AI can be used to spread false information and manipulate public opinion. Articles your organization writes using AI may contain errors or omissions.
These things can lead to confusion and mistrust, and can even harm people’s physical and mental health. They can also harm an organization’s reputation. Misinformation from artificial intelligence is a risk that should concern auditors.
Auditors Must Consider Security Risks from AI
AI-generated content can also be used to impersonate individuals or organizations. In 2019, researchers discovered an AI model being used to impersonate real people in phishing attempts. The AI-generated messages were so convincing that they tricked some people into disclosing sensitive information. Whether text, images, or video, AI can be used to infiltrate your organization. This illustrates that AI-generated content can be used for malicious purposes and should be a concern for internal auditors.
Auditors Must Consider Privacy Risks from AI
Another concern with artificially generated content is the potential violation of privacy rights. AI systems can use personal information, such as images, text, and videos, to generate new content without the consent of the content owners. This can lead to privacy violations and potential legal action against your organization.
In 2019, a company called DeepNude created an app that used AI to generate realistic nude images of women. The app faced backlash and was ultimately taken down, but it highlighted the potential for AI to be used to violate privacy rights.
And while that is an extreme example, consider the bank that uses account information to verify a customer’s identity and then returns account details. Or the physician who enters symptoms into a system that returns a diagnosis.
Artificial intelligence is everywhere. Many organizations use it in some way, even for something as routine as querying customer accounts. Internal auditors should be concerned about the privacy risks of AI.
Auditors Must Consider Societal / Reputation Risks from AI
AI-generated content can also perpetuate stereotypes and biases, and can be used to influence public opinion in harmful ways. For example, Microsoft released an AI program called Tay, short for “Thinking About You.” Known as “the AI with zero chill,” Tay was released on Twitter on March 23, 2016, under the name TayTweets and the handle @TayandYou.
The program was designed to learn from interactions with Twitter users and generate responses. However, within 24 hours of its release, Tay began generating racist and sexist tweets. Those who know me know that I’m all for open communication. But I won’t post any of Tay’s responses here. If you want to see the things it said, Google it. Some of it is very shocking.
This example illustrates the potential for AI to learn and reinforce harmful biases, and the importance of organizations considering the societal impact of their AI-generated content, whether produced in-house, by their communications team, or by third-party providers. This should be on any internal auditor’s AI risk list.
Auditors Must Consider Legal Risks from AI
Finally, there may be legal implications for organizations that use or consume AI-generated content. For example, your organization can face lawsuits if it uses copyrighted images and video generated by one of the many AI content generation tools. This is another artificial intelligence risk auditors should have on their radar.
What can auditors do to identify, audit and respond to artificial intelligence (AI) risks?
Auditors should identify the risks and audit their organization’s artificial intelligence processes. They can take several steps to identify, audit, and respond to the risks associated with artificially generated content.
1. Auditors should identify artificial intelligence risks
Auditors should first identify the potential risks associated with the organization’s use of AI-generated content. This seems like a no-brainer, but since most of us are worried about AI’s report-writing capabilities, I thought I’d include it.
So where do we start?
We can start by reviewing existing policies related to the use of AI. If the organization does not have any policies, then I think this would be a great place to start discussing the risks. After all, policies set expectations at organizations.
Next, auditors can review laws and regulations related to the use of AI-generated content, especially those related to privacy and security.
Additionally, auditors can interview relevant stakeholders to discover where AI is being used in the organization.
2. Auditors should review / determine artificial intelligence controls
Once risks have been identified, auditors should review the controls in place to mitigate them. This can include reviewing the data input into AI models built by the organization and/or reviewing the process for ensuring content used by third-party AI generators is accurate and complete. This is just a start; there are many more AI control points auditors should consider.
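As one illustration, an auditor with access to a data extract could script some basic completeness checks. This is a minimal sketch: the file name, column name, and limits are hypothetical, and a real review would go much deeper:

```python
import pandas as pd

# Hypothetical extract of the records feeding an in-house AI model.
df = pd.read_csv("training_data.csv")

# Basic completeness checks an auditor might script.
print("Rows:", len(df))
print("Missing values per column:")
print(df.isnull().sum())
print("Duplicate rows:", df.duplicated().sum())

# Flag values outside an expected range (the column and limits are
# made up; use whatever your organization's data dictionary defines).
if "balance" in df.columns:
    out_of_range = df[(df["balance"] < 0) | (df["balance"] > 1_000_000)]
    print("Out-of-range balances:", len(out_of_range))
```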
3. Auditors should test artificial intelligence controls
Auditors should perform testing to ensure that the controls in place are functioning effectively. This can include testing the organization’s process to validate the accuracy of data used in its AI models and the process for obtaining consent for items used in those models. It also includes testing how the organization ensures the accuracy of the data used in AI models when it purchases services from third-party providers.
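Again, as a sketch only: sample-based consent testing might look something like this, assuming a hypothetical inventory file with a boolean consent_on_file flag maintained by the business:

```python
import pandas as pd

# Hypothetical inventory of content items used in the organization's
# AI models, with a consent_on_file flag kept by the business.
inventory = pd.read_csv("ai_content_inventory.csv")

# Pull a random sample for testing. The sample size is illustrative;
# size it to your methodology. random_state makes the pull repeatable.
sample = inventory.sample(n=min(25, len(inventory)), random_state=42)

# Exceptions: sampled items with no documented consent.
exceptions = sample[~sample["consent_on_file"].astype(bool)]
print(f"Tested {len(sample)} items; {len(exceptions)} lacked documented consent.")
```

Each exception then gets traced back to the source documentation, just like any other control test.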
4. Auditors should communicate artificial intelligence findings and recommendations
After completing the audit, auditors should communicate their findings and recommendations to the organization. This can include providing a report that outlines the risks identified, the effectiveness of existing controls, and any recommendations for improvement.
5. Auditors should follow up regularly on artificial intelligence audit results
Because AI is new(ish) and changing rapidly, auditors should follow up on control issues quickly and regularly. They should also consider periodically re-auditing the organization’s AI-generated content processes to ensure that the organization continues to effectively manage the risks associated with this technology.
As a final word, here’s what I’ll say:
- Auditors must understand current and future uses of AI. I currently use AI in a variety of ways. So should you.
- By the way, the photo used in this blog post is from my Saturday morning talk show. That episode was about AI; I show some of the things I do with it, and we discuss risks and trends. Check it out here.
- The cover photo (photo only now, people) from my previous blog post was completely generated using AI.
- Oh, and I did an entire episode of my Friday Fraudster Podcast using AI characters.
Whether we like it or not, AI is here, and it is going to have an impact on our organizations. It is important that auditors help organizations understand, identify, and test controls surrounding the use of artificial intelligence.
I welcome your thoughts on the topic.