
Shadow AI: What is it, what are the threats and how can I protect my business?

  • Writer: Callum Dunnington
  • Nov 11
  • 4 min read

In today’s tech industry, most companies are not building their own AI models. Instead, the majority are choosing either to license AI or inherit it from their vendors: a recent KPMG study found that approximately 50% of organisations license or buy pre-built generative AI solutions from vendors, with only 12% building them in-house. As you can imagine, this is causing some significant cybersecurity concerns.



Recently, I had the chance to attend the ISACA Europe conference, where the threat of third-party AI was a hot topic of many presentations. Here are some of the key points I learned during the event:


What is driving AI adoption?

Many businesses are turning to open-source AI in pursuit of cost-efficiency and quicker results - why make expensive internal hires when AI can do the job itself? The truth is that AI models can be integrated quickly, without having to secure budget for additional staffing, platforms and tools.


Yet according to one ISACA panellist, there is another key driver: FOMO, or the ‘fear of missing out’. Many organisations feel they are in a race not to be left behind by those already using AI, and in the effort to remain competitive, they are adopting it without thinking about the governance and security issues that come with it.


Whatever the reason for adoption, it is leading to a rise in AI incidents.


What is Shadow AI?

Simply put, what businesses don’t know, they can’t govern. Shadow AI is defined as the unauthorised or unvetted use of AI tools and applications within an organisation. Much like the Shadow IT trend that has emerged in recent years, it sees employees increasingly using AI applications to support processes such as hiring and accounting without the explicit approval of their bosses.


This isn’t typically malicious in itself, but it can be harmful nonetheless. In essence, these employees are trying to be more efficient, but they are opening the door to data leakages, especially once key information is shared with the models they are using. Once that happens, the organisation loses control over who owns that data, with no visibility of who it is then shared with.


Examples of third-party AI issues

This was evidenced in 2023, when a Samsung engineer accidentally leaked sensitive internal source code for a new product - along with confidential meeting notes - by uploading it to ChatGPT. The South Korean company recorded three separate incidents of employees leaking information to the AI model, which retains user input data to further train itself, leaving this sensitive information at the mercy of OpenAI. Samsung responded by banning employee use of the platform, and has begun developing its own in-house AI model to mitigate these incidents going forward.


Amazon has also been stung by its employees’ use of AI models. The company was using an AI system, trained on data submitted by applicants over a ten-year period, to find preferred candidates for roles. However, in 2015, it was discovered that the system was not rating candidates in a gender-neutral manner, because it was built on data accumulated from CVs submitted mostly by men. As a result, it began penalising any CVs that included the word ‘woman’. While the program was amended to make it neutral to the term, further issues meant the project was eventually abandoned by the company.


Incidents like these can quickly create reputational damage for businesses: if using AI means confidential information may be leaked to models and dispersed to unauthorised users, or if customers believe a company is prejudiced in any way, it can affect repeat business and hit a company’s profit margin. Ensuring issues like these don’t arise is pivotal.


How can businesses combat rogue usage of third-party AI?

It’s for these reasons that some businesses have begun to ban the use of models like ChatGPT outright. However, this doesn’t prevent employees from finding and using other AI platforms that can cause harm to operations.


What is clear is that greater visibility is needed regarding the AI tools businesses use: which employees are using them, how vendors are using them, and whether those vendors are obliged to tell you. One solution discussed was the AI inventory: a record built within the company to check which platforms are being used, track where company data is flowing, and assess how AI can be used in a way that reduces data leakages.


While this may strike fear into the hearts of C-level executives, the experts at ISACA made it clear that an inventory does not have to be an all-encompassing process. Instead, companies can start small and develop a checklist for the most important aspects of using AI tools, including data input (what information goes into the tool, and is it sensitive?) and transparency and explainability (can documentation be provided about how the AI model uses data?). Other key elements, such as bias monitoring, cyber resilience, and ethical and legal foundations - including the contractual agreements in place to procure and use third-party AI models - can all be factored into these inventories.
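
To illustrate what ‘starting small’ might look like, below is a minimal sketch, in Python, of one entry in such an inventory. The field names, the example tool and the review rule are all hypothetical, and every organisation would adapt the checklist to its own risk appetite.

    from dataclasses import dataclass

    @dataclass
    class AIToolRecord:
        """One entry in a lightweight AI inventory (illustrative only)."""
        tool_name: str                # e.g. a third-party chatbot or a vendor feature
        business_owner: str           # who in the company is accountable for it
        data_input: str               # what information goes into the tool
        handles_sensitive_data: bool  # is that information confidential or personal?
        vendor_documentation: bool    # can the vendor explain how the model uses data?
        bias_monitoring: str          # how (or whether) outputs are checked for bias
        contract_in_place: bool       # are contractual terms agreed for its use?

    # Hypothetical example entry: a drafting assistant adopted by one team
    inventory = [
        AIToolRecord(
            tool_name="Generic drafting assistant",
            business_owner="Marketing",
            data_input="Draft press releases and meeting notes",
            handles_sensitive_data=True,
            vendor_documentation=False,
            bias_monitoring="None yet",
            contract_in_place=False,
        ),
    ]

    # Flag the entries that need attention first
    for record in inventory:
        if record.handles_sensitive_data and not record.contract_in_place:
            print(f"Review needed: {record.tool_name} (owner: {record.business_owner})")

Even a simple record like this answers the two questions the panellists kept returning to: what data is going into the tool, and who is accountable for it.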


There is a perception that AI is essentially the wild west, and can’t be controlled. It’s certainly a scary topic, but in many cases, the frameworks that have protected our data for years can be adapted and tweaked to overcome issues here too.


How we use AI models

Here at Proactive PR, we understand that third-party AI models can be a useful research tool for our clients. What they lack, however, is the human touch that’s needed to create compelling storytelling.


In October 2025, Deloitte was caught using AI in a $440,000 report for the Australian government. The report was found to contain AI hallucinations, with researchers noticing some major errors, including references to fake academic research and a fabricated quote from a federal court judgment. Subsequently, the auditing and consultancy firm was forced to give the government a partial refund.


This is exactly what Proactive PR will not do: we promise our clients that even if our work is ever guided by AI tools, it will be ratified and written by humans who understand your market and how to craft the content and campaigns to reach them.


If you’d be interested in hearing more about the work we’re doing for our clients, then book in a meeting.


