1. Purpose
Liffery is committed to protecting children from harm and has zero tolerance for any form of Child Sexual Abuse and Exploitation (CSAE), including the use, sharing, or creation of Child Sexual Abuse Material (CSAM), whether real or AI-generated.
2. Scope
This policy applies to all users and all content hosted, shared, or generated on the Liffery platform, including private messages, public posts, and AI-generated media. It also governs Liffery’s employee conduct, technical infrastructure, and moderation processes.
3. Definitions
- Child Sexual Abuse Material (CSAM): Any visual or written content that depicts, describes, or otherwise sexually exploits a child.
- AI-Generated CSAM (AIG-CSAM): CSAM created or modified using artificial intelligence, including image- and video-generation technologies.
- Grooming: Any attempt to build trust with a minor to enable sexual abuse or exploitation.
- CSAE: Encompasses CSAM, grooming, trafficking, sextortion, and any related abusive behavior.
4. Prohibited Conduct
Users are strictly prohibited from:
- Uploading, generating, or sharing any form of CSAM or AIG-CSAM.
- Engaging in grooming, sextortion, or trafficking-related activities.
- Using Liffery for the distribution or promotion of material involving the sexual abuse or exploitation of minors.
Violations will result in immediate account termination and will be reported to law enforcement.
5. Detection and Moderation
Liffery implements layered detection systems to identify and prevent CSAE content, including:
- Industry-standard CSAM hash-matching databases (e.g., PhotoDNA); a simplified sketch follows this list.
- AI classifiers for detecting AIG-CSAM and grooming behavior.
- Manual content review by trained moderation staff.
- Red-teaming and risk assessments to stress-test moderation capabilities.
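For illustration only, the TypeScript sketch below shows the general shape a hash-matching check might take at upload time. The names (`knownCsamHashes`, `screenUpload`, `quarantineAndReport`) are hypothetical placeholders, not Liffery's actual pipeline; production systems match perceptual hashes such as PhotoDNA through a vetted vendor service rather than plain cryptographic digests.

```ts
import { createHash } from "node:crypto";

// Hypothetical store of known CSAM digests supplied by an industry database.
// Real deployments match perceptual hashes (such as PhotoDNA) via a vendor
// service rather than exact SHA-256 digests; this is a shape sketch only.
const knownCsamHashes: Set<string> = new Set();

// Compute a digest of the uploaded bytes (stand-in for a perceptual hash).
function computeDigest(fileBytes: Buffer): string {
  return createHash("sha256").update(fileBytes).digest("hex");
}

// Returns true if the upload may proceed; otherwise quarantines and reports it.
function screenUpload(fileBytes: Buffer, uploaderId: string): boolean {
  const digest = computeDigest(fileBytes);
  if (knownCsamHashes.has(digest)) {
    quarantineAndReport(digest, uploaderId); // hand off to trust & safety
    return false; // the file is never written to user-visible storage
  }
  return true;
}

// Placeholder: preserve evidence, lock the account, open an incident record.
function quarantineAndReport(digest: string, uploaderId: string): void {}
```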
6. Reporting and Escalation
- Liffery provides a clear reporting channel accessible via all user interfaces.
- All CSAE incidents are investigated immediately.
- Confirmed violations are reported to appropriate authorities, including local law enforcement and the National Center for Missing & Exploited Children (NCMEC) or international equivalents.
- User data associated with reported content may be preserved and disclosed in accordance with legal obligations.
7. Response and Enforcement
- All flagged CSAE content is removed without notice.
- Users involved are permanently banned from Liffery.
- Reports are escalated to law enforcement within 24 hours of confirmation.
- An internal audit trail is maintained for all incidents; a schematic record is sketched below.
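For illustration only, a minimal sketch of what a single audit-trail record and the 24-hour escalation check might look like. Every field and function name here is hypothetical, not Liffery's actual schema.

```ts
// Field names are illustrative placeholders, not Liffery's actual schema.
interface CsaeIncident {
  id: string;
  reportedAt: Date;           // when the user report or automated flag arrived
  confirmedAt?: Date;         // set once human review confirms the violation
  escalatedAt?: Date;         // policy requires this within 24h of confirmation
  referredTo: string[];       // e.g. ["NCMEC", "local law enforcement"]
  contentRemoved: boolean;
  accountTerminated: boolean;
  evidencePreserved: boolean; // legal-hold copy retained per Section 6
}

// Checks the 24-hour escalation deadline set out in this section.
function escalationDeadlineMet(incident: CsaeIncident): boolean {
  if (!incident.confirmedAt || !incident.escalatedAt) {
    return false;
  }
  const twentyFourHoursMs = 24 * 60 * 60 * 1000;
  return (
    incident.escalatedAt.getTime() - incident.confirmedAt.getTime() <=
    twentyFourHoursMs
  );
}
```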
8. Staff Training
All Liffery staff involved in content moderation or platform development receive mandatory CSAE training, including:
- How to identify CSAE and grooming patterns.
- Appropriate escalation and reporting protocols.
- Handling user data responsibly and within applicable legal frameworks.
9. AI Safety and Content Generation
Liffery prohibits AI models hosted on its platform from generating any sexual content involving minors. All AI-generated output is subject to CSAE detection and moderation controls.
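As a non-normative sketch of how such a gate might sit in front of an image generator: `classifyForCsae` below is a hypothetical stand-in for the detection stack described in Section 5, not a real Liffery API.

```ts
// Hypothetical stand-in for the moderation stack described in Section 5
// (hash matching plus AI classifiers); not a real Liffery API.
async function classifyForCsae(
  imageBytes: Buffer
): Promise<{ flagged: boolean }> {
  return { flagged: false }; // placeholder verdict for the sketch
}

// Gate: AI-generated media is screened before it is ever returned to a user.
async function deliverGeneratedImage(
  imageBytes: Buffer,
  requestingUserId: string
): Promise<Buffer | null> {
  const verdict = await classifyForCsae(imageBytes);
  if (verdict.flagged) {
    // Suppress the output and open an incident for requestingUserId
    // instead of delivering the media.
    return null;
  }
  return imageBytes;
}
```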
10. Legal Compliance
This policy aligns with applicable child protection laws and standards in all regions where Liffery operates, including:
- The UK Online Safety Act 2023
- The EU General Data Protection Regulation (GDPR) and related EU child protection directives
- The Children's Online Privacy Protection Act (COPPA) in the United States
- Relevant national and international CSAE prevention frameworks
11. Transparency
Liffery will publish an annual transparency report detailing CSAE-related moderation actions, law enforcement referrals, and improvements made to detection systems.