Principles and guidance on the use of AI within the business and with clients


Kaizo has always embraced the use of emerging technology to enhance and enable its work and welcomes the use of AI as an extension of this. Generative AI describes algorithms (such as ChatGPT) that can be used to create new content, including text, audio, images, simulations, and videos. Recent breakthroughs in the field have the potential to significantly change the way we approach content creation.

That said, it is critical that we are clear and open about our use of Generative AI specifically. With this in mind, we have developed a short policy to provide a framework in relation to:

  • Ethics and transparency
  • Confidentiality and good governance
  • Creative process and content development
  • Productivity and efficiency

As AI becomes more advanced and refined (along with related regulations), we will ensure our policy and approach evolve accordingly, reviewing them regularly to ensure compliance with changing laws and regulations.

This document sets out boundaries for the use of AI at Kaizo, from insights and research, to creative processes and content development, to enhanced productivity and collaboration. It ensures that AI will be used ethically, securely, and responsibly.

This policy applies to all Kaizo employees, contractors, and consultants who use or interact with AI in the workplace and governs the use of AI in all company operations.

The company is committed to complying with all relevant laws, regulations, and industry standards in AI usage, and our teams are expected to comply with all aspects of this policy when using AI in their work. Consultants should seek clarification whenever they are unsure about the proper use of AI.

The company will regularly review and update this policy to ensure that it remains current and relevant.

Where we reference AI in this policy, we are referring to Generative AI, i.e., algorithms that can be used to create new content, including text, audio, images, simulations, and videos, and to enhance productivity through doing so.

This policy relates to any standalone Generative AI tools such as ChatGPT, DALL-E and Bard, to AI incorporated into tools such as Copilot in Office 365 apps or in Canva, and to any content or productivity add-on. New tools will first be reviewed internally to assess risks and agree boundaries of use.


1. We absolutely believe that AI can support and enhance communications, but it is not a substitute for original human thinking, advice, and consultancy.
2. Ethical use, client confidentiality, and legal governance will always come first in deciding when and how AI might be used.
3. We will always be up front about where and how we have used AI in the development of campaigns, content, images, videos, and copy.
4. We will use AI to improve efficiency and to deliver better value to our clients within the boundaries noted above.



An overriding rule is that Kaizo will always ensure there is human oversight of any AI used to enhance operations and create outputs.

We will be transparent with our colleagues and clients where AI has been used, whether to source insights, refine content, sense check ideas or any other use. Visibility and honesty are paramount. This means that where we have used AI it will be clearly referenced.

This applies to any part of the creative, reporting, and decision-making process involving clients and influencers, including individual pieces of work in which AI played a substantive role.

Disclosures can be written or verbal, depending on the circumstances. We also expect colleagues to disclose the use of AI when sharing work internally, whether to sense check ideas, for research, or as part of the drafting or creation process.


As part of every contract and our consultancy standards, we are committed to protecting the privacy and security of client data. The use of AI will never cut across the normal legal and client confidentiality terms that we operate under.

As such we will never use AI if it risks the confidentiality of client data and information. This includes, but is not limited to, entering customer data or confidential information into an AI platform or generator. This could be information currently not in the public domain (such as company news, unreleased images, sensitive financial data), or opinions not already expressed by the client, such as insight on a specific sector, industry or topic that has not already been publicly expressed. This includes client business plans, PPTs or documents, paid analyst reports, market insights, confidential research data, and text related to sensitive internal employee communications.

Where we use information sourced through AI, we will ensure that it is factually accurate, verifiable, and does not contravene any copyright or trademark laws, including infringing on individuals’ rights or identities, or committing inadvertent plagiarism. This covers images, visuals (including video) and audio as well as copy.

As well as direct use, we will be vigilant regarding any sensitive, confidential, or proprietary information that might be accessed by apps or plugins that are used on both personal and company-owned devices.


We will continue to deliver original human thinking, creativity, advice, and consultancy. Human insights and originality will always come before AI-generated work. AI can support us, not lead us. We are confident in our abilities without AI, and will not rely on its capabilities for any piece of work, whether a written article, a research idea, or the creation of images or videos.

We will not use AI to develop original content but may use it to enhance or refresh what we do create. Examples could be rewording approved copy or creating summaries of approved articles, or iterations of imagery within brand guidelines.

We may use AI to enhance our work to ensure new content is as good as it can be. This could include improving grammar and flow in copy, or deepening external and third-party proof points. We may use it to sense check thinking or prompts, or to explore how ideas might extend, but not to replace original thinking.


We will take advantage of the productivity opportunities that AI can provide, so within the boundaries set out above we may use tools and add-ons for reporting, summarising meeting notes and actions, and similar tasks. Where we use such tools in a meeting, for example, we will always be up-front and transparent about their use.

Whilst AI can be useful for research, we will ensure, as we always do, that the information we use, present, and publish is accurate and has been fact checked.

Our teams must ensure that if they are using an AI tool, they do so in a way that complies with this policy and does not infringe upon the rights of any individual either within Kaizo or externally.


We expect that individuals using AI do so in an ethical and responsible manner that recognises its limitations and its weaknesses.

We will be alert to biases in AI-generated outputs and will review prompts and content created by generative AI tools to ensure no bias is overlooked or shared externally. The same applies to content such as imagery, likenesses, or avatars that could be discriminatory.

Generative AI will not be used as a replacement for diverse experiences, insights, or engagement.