AI in Internal Comms: What Not To Do
Simon Rutter
External Contributor - Award-winning Sr Communications Strategist
27 Nov 2024
Use of AI by internal comms teams is on the rise, whether it's to stimulate new ideas, create content, or automate processes. However, AI also poses many challenges for internal communicators, from ethical questions to the proliferation of generic content.
In this article, I’ll share my top ten tips for how not to use AI in internal comms. For ease, we’ve also summarized them in a handy downloadable one-pager at the bottom of the article – you’re welcome!
Let’s dive right in.
1. Forget standards
According to the Institute of Internal Communication's (IoIC) IC Index 2024, 17% of IC professionals are using AI to create content, including for CEOs and senior leaders. But with more than 50% of workers either not trusting such communications or trusting them only a little, the need for quality assurance is paramount.
IC teams can play a leading role in the ethical use of AI in their organizations by ensuring there are clear standards, policies, and principles in place for responsible AI use. By doing so, we can close the trust gap and take advantage of the efficiency gains provided by AI.
2. Copy and paste
AI content is by nature generic. While this can be useful when you're searching for an overview of a topic, in internal comms it means AI content lacks essential corporate context and doesn't meet the specific needs of your audience. Rehashing AI content means your messages will lack organizational specificity and personalization, and so won't resonate with your employees.
If you want employees to engage with your AI-generated content, you have to add your own corporate messages into it, and tailor it to your audience – just like you would normally.
AI is a shortcut, not a cop-out.
3. Perpetuate the bias
AI bias, also referred to as machine learning bias or algorithmic bias, occurs when AI systems produce skewed results that reflect and perpetuate human biases within a society, including historical and current social inequalities. Bias can be found in the initial training data, in the algorithm itself, or in the predictions the algorithm produces.
When bias goes unchecked, it impacts the employees most vulnerable to it and further erodes their trust in your internal comms and in your organization. Internal comms teams must examine AI content for potential bias, and influence their company's broader AI strategy to be diverse and inclusive.
4. Ignore cultural sensitivities
Connected to points 2 and 3, AI output is a cocktail of generic and potentially biased content. In practice, this means AI may not take account of critical cultural nuances, leading to potentially offensive, inappropriate, or non-inclusive content. The controversy sparked by Google's Gen AI tool, Gemini, is just one example.
The last thing you want is to alienate your employees with divisive or derogatory content. Here it can be a good idea to check your AI content with subject matter experts or Employee Resource Groups to avoid any unintentional controversies.
5. Fail to check your facts
For internal comms teams, receiving and being able to rely on high-quality data from within your organization is vital. The better the data you put into AI, the better your content will be. Yet some organizations struggle with their data quality, which impacts internal communicators’ ability to create accurate, verifiable content.
My advice is simple – check your facts. Not just once, but at least twice. The first time is when you receive the internal data, the second is when you get your content back from AI.
Like us humans, AI is not perfect. It makes mistakes. AI-generated internal comms must pass through strict approval processes, because if it goes out unchecked, misinformation can spread and decimate trust in your organization.
6. Sound inconsistent
In theory, organizations should speak with a consistent tone of voice, regardless of the audience. AI tools are not designed to pick up on the idiosyncrasies of your company's voice; instead, they generate formulaic, robotic content. If you don't overlay your AI content with your company's unique way of speaking to employees, it will sound inconsistent, it won't reinforce your culture and values, and your people will be able to tell from a mile away that it was written by AI.
For internal comms to be effective, employees need to hear the same message in the same tone of voice many times. So by all means use AI to help, but your content must stay consistent.
7. Neglect accessibility
Around 15% to 20% of the global population experience significant disability, and roughly the same proportion are believed to be neurodivergent. There is growing awareness and understanding of these conditions and their impact on how people consume content.
AI content is not created with accessibility in mind, and if you ignore the diverse needs of your workforce, you will alienate already marginalized groups and harm your ability to attract and retain an inclusive workforce. It is up to IC teams to build accessibility considerations into their content and channels so that everyone can receive and understand your messages.
8. Fail to be transparent about AI content
One of the main reasons trust is such a challenge in organizations is that employees do not know where content originated – with the leader themselves, the IC team, or AI?
Content generated by AI is often not labeled as such, which makes it even more difficult to identify its source.
The best way to build trust is through transparency. If IC teams badge AI-generated content as such, or make it clear how AI has been used, then employees will start to trust what you are producing, and it may even help to spark a broader, organization-wide debate on how everyone can use AI in an open way.
At the end of the day, AI is a tool we want to be using as communicators – it enables us to be more efficient, and to focus on higher-value, strategically impactful work. But we can only do that if we maintain employee trust in the content we are producing.
9. Forget about data privacy
According to a 2020 survey by the European Consumer Organization, 45% to 60% of Europeans agree that AI will lead to more abuse of personal data. And some 84% of workers who use generative AI at work said they had publicly exposed their company's data in the last three months, according to a recent 16-country study of more than 15,000 adults by the Oliver Wyman Forum.
Whether it’s employee-specific or company-related information (or both), forgetting about data privacy puts your organization at massive risk. Internal comms teams have to lead by example by not entering any confidential information into AI platforms (if in doubt – leave it out), and by educating employees on the perils of doing so.
10. Run before you can walk
In many organizations, internal comms teams are proudly leading the conversation and activity around AI. They are navigating their companies through this new era of communication with confidence and a healthy dose of experimentation, learning as they go.
As with all change programs, the watch-outs are to plan, test, and train properly, and to bring your employees along with you on the journey. This means working with departments such as Legal, HR, IT, and Compliance to agree on a strategy, and then communicating it clearly to your people. Rushing the deployment of AI systems without the supporting infrastructure in place could lead to widespread confusion, misinformation, and financial and reputational risks. Fail to prepare, prepare to fail.
Cheat sheet: AI in internal comms
AI is here to stay, and for IC teams it's potentially a powerful tool that can free up time so we can focus on strategy, not slides. But as we know from Spider-Man, with great power comes great responsibility, and we must continue to lead the way in our organizations to ensure AI use is ethical, inclusive, and trustworthy.