2X AI Usage Policy

1. Guidelines for Responsible AI Usage

1.1 Overview

Generative artificial intelligence (“AI”) is a class of artificial intelligence that can generate various types of content, including human-like text, images, audio, synthetic data, and other media.

AI is an increasingly important tool in the marketing landscape. This policy is designed to outline how 2X employees can leverage AI in a secure, responsible, confidential, transparent, and ethical manner. This policy is also intended to ensure our use of AI aligns with our overall corporate values and those of our clients.

1.2 Transparency Statement

We use AI to support various functions at 2X. This includes use of AI to enhance content and creative production in terms of speed and output, as well as general efficiency gains through copilot capabilities for knowledge work. To ensure transparency, accountability, quality, and privacy, we adhere to the AI usage standards laid out in this policy. These standards help us safeguard against biases, maintain data security, and uphold our commitment to ethical marketing practices.

1.3 Tool Selection

The following AI tools have been approved for use at 2X. Employees are strongly advised to use only approved tools for company-related work. If you have tools to recommend, please reach out to the Principal, AI, or the IT Operations team. Any use of other tools for testing or experimentation should be limited to non-sensitive information.

1.4 Accountability

Employees of 2X using AI bear responsibility for the output of those programs or applications. AI should be used as an assistant, not a direct replacement for an employee's own work. In all instances of AI usage, employees should make every effort to ensure the output is reviewed for factual accuracy, brand and style considerations, and any necessary sensitivities. Where required by policy or regulation, use of AI to create content should be disclosed or tagged as stipulated. Employees must understand that AI tools, while useful, are not a substitute for human judgment and creativity.

1.5 Use Cases That Should Not Leverage AI

As AI carries potential risks relating to bias, privacy, and other unforeseen impacts, we recommend employees limit or avoid use of AI for the following:

1.6 Training

All employees involved in creating content with AI should receive appropriate training. This training should cover both the technical aspects of using AI and the ethical considerations outlined in this policy.

2. Acceptable Use

2.1 Authorized Use

AI tools and platforms may only be used for business purposes approved by the organization. Such purposes may include content generation for marketing, product development, research, or other legitimate activities in line with Statements of Work.

2.2 Compliance with Laws and Regulations

All users of AI must comply with applicable laws, regulations, and ethical guidelines governing intellectual property, privacy, data protection, and other relevant areas.

2.3 Intellectual Property Rights

Users must respect and protect intellectual property rights, both internally and externally. Unauthorized use of copyrighted material or creation of content that infringes on the intellectual property of others is strictly prohibited.

2.4 Responsible AI Usage

Users are responsible for ensuring that content generated using AI aligns with the organization’s values, ethics, and quality standards. AI must not be used to create content that is inappropriate, misleading, offensive, discriminatory, or otherwise harmful to others or to the company. Such use may result in disciplinary action.

2.5 Data Protection

3. Addressing Specific Issues in AI

3.1 Bias

AI algorithms and programs learn from the training data they are built on, which can lead to unintended biases in their output. While many AI tools include filters and features to minimize the risk of such outputs, these guardrails are not foolproof. It is therefore the responsibility of every end user of AI to ensure that anything they produce with it is reviewed for potential bias and corrected accordingly.

3.2 Privacy

AI tools may use input data to enhance their training models or features. As such, we must take additional measures to protect the privacy of company and client data. This policy recommends using only approved tools, which offer additional enterprise-level security capabilities and comply with generally accepted standards.

3.3 Security

AI systems can be targets of cyber-attacks. Please review the approved list of AI tools, and discuss with the IT Operations team any additional tools you subscribe to or use on company devices.

3.4 Ethical Considerations

AI should not be used to mislead, manipulate, or misinform. All content created using AI should be ethical and in line with 2X corporate values and best practices. AI-created content should be reviewed to check for bias, inaccuracies, and other risks.

AI can allow you to create content “in the style” of public figures. Designated employees may, with permission and review, use AI to mimic the writing style of a current 2X employee for the purposes of ghostwriting or editing content on that individual’s behalf.

4. Acceptance

By using AI in their work, employees of 2X agree to comply with this policy. Non-compliance will be taken seriously and could lead to disciplinary action.

5. Credits

The text of this policy is adapted in part from a template provided by Jasper.ai. 2X is an Agency Partner of Jasper and received access to that template via the partnership program. 

6. Policy Review

This policy will be reviewed periodically and updated as necessary to address emerging risks, technological advancements, and changes in regulations and applicable law.

7. Version Log

Version | Author | Date | Changes
1.0 | Vikram Ramachandran – Principal, AI | Dec 11, 2023 | Policy drafted and reviewed.