Acceptable Use Policy
This Acceptable Use Policy (Policy) forms part of the Enigma Standard Terms and Conditions (Agreement). Capitalised terms not defined in this Policy have the meaning given in the Agreement.
Purpose and scope
This Policy sets out mandatory rules governing use of the Services and Software, including use of artificial intelligence functionality powered by Third-Party AI Services.
This Policy is designed to:
- ensure lawful, ethical and responsible use of the Services and Software; and
- protect Enigma, Third-Party AI Providers and affected individuals from harm.
Compliance with this Policy is a material condition of the Agreement.
General use obligations
The Customer shall ensure that the Services and Software are used only:
- in accordance with applicable laws and regulations;
- for lawful, ethical and non-deceptive purposes; and
- in a manner consistent with the purpose of ethical influencing as specified in the Documentation.
The Customer shall be responsible for all acts and omissions of its Authorised Users, including all breaches of this Policy.
Prohibited uses – general
The Customer shall not, and shall not permit any Authorised User or third party to, use the Services and Software to:
- engage in unlawful, fraudulent or deceptive conduct;
- misrepresent the origin, nature or intent of AI-generated content; or
- circumvent, disable or attempt to bypass any safeguards, filters or limitations imposed by Enigma or any Third-Party AI Provider.
Ethical influencing restrictions
Given the nature of the Services and Software, the following uses are expressly prohibited. The Services and Software must not be used to:
- manipulate, coerce or deceive individuals or groups;
- exploit vulnerabilities, including psychological, emotional, economic or social vulnerabilities;
- interfere with individuals’ ability to make autonomous and informed decisions;
- deploy dark-pattern techniques or subliminal persuasion; or
- mislead audiences as to whether content is AI-generated where such disclosure is required by law or regulation.
Political, civic and democratic processes
The Services and Software must not be used for:
- political campaigning, lobbying or advocacy aimed at influencing public opinion, elections or referenda;
- targeted political persuasion of specific individuals or demographic groups;
- voter suppression, intimidation or misinformation; or
- foreign or domestic election interference or demobilisation activities.
High-risk decision making
The Services and Software must not be used to automate, determine or materially influence high-stakes decisions without meaningful human review, including decisions relating to:
- employment, recruitment or workplace monitoring;
- education or academic assessment;
- housing or access to accommodation;
- credit, lending or financial services;
- insurance;
- legal risk or obligations;
- medical or health-related decisions;
- essential government services; or
- law enforcement, national security or migration.
Safety, harms and illegal content
The Services and Software must not be used to generate, promote or facilitate:
- threats, harassment, intimidation or defamation;
- suicide, self-harm or eating disorder promotion;
- sexual violence or non-consensual content;
- terrorism or violent extremism;
- weapons development or use, including chemical, biological, radiological, nuclear or explosive (CBRNE) weapons;
- illicit activities, goods or services; or
- malicious cyber activity or intellectual property infringement.
Privacy and personal data
As stated in the Agreement, the Services and Software must not be used to process Personal Data.
Without limitation, the Services and Software must not be used for:
- facial recognition or biometric identification;
- profiling, monitoring or tracking individuals;
- inferring sensitive attributes;
- emotion recognition in workplace or educational settings;
- social scoring or behavioural classification.
Child safety
The Services and Software must not be used to:
- generate or distribute child sexual abuse material (CSAM);
- groom, exploit or sexualise minors;
- expose minors to age-inappropriate content; or
- promote harmful behaviours to minors (including dangerous challenges or unhealthy dieting).
Any apparent CSAM must be reported immediately to Enigma.
Enforcement
Enigma may monitor use of the Services and Software to assess compliance with the Agreement, including this Policy. Enigma may, on reasonable notice, audit the Customer's use of the Services and Software, including review of access logs and usage patterns.
If Enigma reasonably believes that this Policy has been breached or there is a likelihood of serious breach of this Policy, it may, without any liability to the Customer and subject to the terms of the Agreement:
- suspend or restrict access to the Services;
- require remedial action in the form specified by Enigma;
- terminate the Agreement; and/or
- report misuse to the relevant Third-Party AI Provider or authorities.
Updates
Enigma may update this Policy from time to time to reflect changes in law, regulation or Third-Party AI Provider requirements. Continued use of the Services and/or Software constitutes acceptance of the updated Policy.
Enigma Strategic Communications Ltd (Company No. 12188078) is registered in England and Wales. Registered office: Suite G04, 1 Quality Court, London, England, WC2A 1HR. VAT No. 332104751.