Anam is committed to the responsible, transparent, and secure development of artificial intelligence technologies as part of its Services (herein referred to as “AI Services”). This document outlines Anam’s approach to AI governance and compliance. Anam aligns its practices with international standards to ensure its AI Services are trustworthy, compliant, and suitable for the needs of its customers and society at large.
These policies and practices may change as the Services and industry evolve, so please check back regularly for updates. Capitalized terms used below but not defined in this policy have the meaning set forth in our Terms and Conditions.
Anam is guided by a framework of responsible artificial intelligence innovation whereby it adheres to principles of Transparency, Consent, and Real-time Safety throughout its AI supply chain. This means Anam requires clear disclosure of the AI nature of all interactions with AI Services, whether for public-facing deployments or customer-specific implementations. Anam also requires each person to consent before their voice or likeness is used to create an AI Persona, whether for one of the six Stock AI Personas available to all customers or a Custom AI Persona created by or on behalf of a specific customer.
Anam’s platform and policies incorporate a trust and safety layer designed to help prevent harmful interactions and ensure responsible use in real time. This includes a prescriptive Acceptable Use Policy, which is further operationalized by content moderation tools.
Anam is committed to respecting the rights of the organizations and businesses that use its Services by communicating transparently and empowering them with choice. The Services, and Anam’s related practices, incorporate measures designed to respect their intellectual property rights, protect their data, and maintain confidentiality. As further described below in the section titled ‘Governing Agreements,’ Customers are not responsible for Anam’s separate research and development decisions. Anam is “Stateless”: Inputs and Interactions are processed by the Anam Services on a ‘stateless’ or ‘zero data retention’ basis, meaning they are processed only transiently, for the limited period necessary to generate the Interaction. You can read more about this in our Terms and Conditions.
Anam is deeply committed to upholding the rights of individuals and to protecting the public from harmful content and misuse of AI technologies. Anam’s Acceptable Use Policy prohibits the use of the Services for activities that infringe on individual rights, such as creating defamatory, inciting, abusive, or discriminatory content. Anam enforces these restrictions to help ensure that the AI Services are used responsibly, in line with the broader goals of safeguarding privacy, promoting freedom of expression within ethical bounds, and preventing discrimination.
Anam and its customers have a shared responsibility to prevent abuse and mitigate harm. To foster clarity and initiative between the parties in this regard, Anam integrates a structured approach to defining roles and reinforcing responsibilities along the entire AI supply chain, beginning with R&D and model creation and extending through to customer deployment.
Anam’s role and responsibilities, as well as those of its customers, shift based on the stage of the AI supply chain and the applicable legal framework. When creating and pre-training its AI Services, Anam serves as the "controller" under privacy frameworks such as the GDPR and the UK GDPR, and when making these services available to customers as part of the Services, it serves as a "provider" under AI frameworks such as the EU AI Act.
However, once customers choose to use the Services, they take on the role of "deployer" under the EU AI Act, and under privacy law, they assume the responsibilities of the controller (as they determine the purposes and means of processing), while Anam transitions to the role of "processor." To the best of Anam's knowledge, its AI Services, when used as intended and in accordance with the Acceptable Use Policy, are not classified as High-Risk AI Systems under the EU AI Act.