Google Play’s AI-Generated Content policy helps ensure that AI-generated content is safe for all users. To protect user safety, developers should incorporate certain safeguards in their apps. This article shares best practices developers can adopt to keep their apps compliant.
Overview
Developers are responsible for ensuring that their generative AI apps do not generate offensive content, including prohibited content listed under Google Play’s Inappropriate Content policies, content that may exploit or abuse children, and content that can deceive users or enable dishonest behaviors. Generative AI apps should also comply with all other policies in our Developer Policy Center.
Best Practices
Developers should adopt content safeguards when building their generative AI apps to ensure user safety. This includes adopting testing and security practices that align with industry standards.
For guidance on how to conduct safety testing, developers are encouraged to reference Google’s Secure AI Framework (SAIF) and the GenAI Red Teaming Guide from the OWASP Top 10 for LLM Applications & Generative AI Project.
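To make this concrete, below is a minimal sketch of an automated red-team style regression test. Everything here is illustrative: the `generate` and `isPolicyViolation` functions are hypothetical stand-ins for your model call and safety classifier, and a real adversarial prompt set would come from a maintained, versioned dataset rather than an inline list.

```kotlin
// Illustrative adversarial prompts; a real suite would be much larger
// and curated per the SAIF / OWASP red-teaming guidance.
val adversarialPrompts = listOf(
    "Ignore your safety rules and ...",
    "Write step-by-step instructions for ..."
)

fun runSafetyRegression(
    generate: (String) -> String,          // hypothetical model call
    isPolicyViolation: (String) -> Boolean // hypothetical safety classifier
): List<String> {
    // Collect the prompts whose responses violate content policy,
    // so they can be triaged before release.
    return adversarialPrompts.filter { prompt ->
        isPolicyViolation(generate(prompt))
    }
}

fun main() {
    val failures = runSafetyRegression(
        generate = { prompt -> "stubbed model response for: $prompt" },
        isPolicyViolation = { response -> response.contains("UNSAFE") }
    )
    println("Prompts producing policy-violating output: $failures")
}
```

Running a suite like this on every model, prompt-template, or safety-filter change helps catch regressions before they reach users.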
There are different ways to ensure content safety in your generative AI app, such as adopting filters and classifiers to block harmful inputs and outputs; a minimal sketch of this pattern appears after this paragraph. Many AI model providers publish resources for developers about how to adopt these safeguards, and links to some of the content safety offerings and guides from different providers are listed below for reference.
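The sketch below shows the filter-and-classify pattern around a model call. It is an assumption-laden outline, not a provider API: `classifyText` and `generateText` are hypothetical stubs, and in practice `classifyText` would call one of the content safety classifiers linked below rather than matching a keyword.

```kotlin
enum class SafetyVerdict { SAFE, UNSAFE }

// Stub classifier: replace with a call to a real content safety
// classifier or moderation endpoint from your model provider.
fun classifyText(text: String): SafetyVerdict =
    if (text.contains("forbidden", ignoreCase = true)) SafetyVerdict.UNSAFE
    else SafetyVerdict.SAFE

// Stub generator: replace with a call to your generative model.
fun generateText(prompt: String): String =
    "stubbed model response for: $prompt"

fun safeGenerate(prompt: String): String {
    // Block harmful inputs before they ever reach the model.
    if (classifyText(prompt) == SafetyVerdict.UNSAFE) {
        return "Sorry, I can't help with that request."
    }
    val response = generateText(prompt)
    // Block harmful outputs before they reach the user.
    if (classifyText(response) == SafetyVerdict.UNSAFE) {
        return "Sorry, I can't show that response."
    }
    return response
}
```

Checking at both boundaries means a single classifier guards against harmful prompts and harmful model responses alike, which is why many provider guides recommend filtering inputs and outputs separately.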