Posted by Prabhat Sharma – Director, Trust and Safety, Play, Android, and Chrome
The rapid advancements in generative AI unlock opportunities for developers to create new immersive and engaging app experiences for users everywhere. In this time of fast-paced change, we are excited to continue enabling developers to create innovative, high-quality apps while maintaining the safe and trusted experience people expect from Google Play. Our goal is to make AI helpful for everyone, enriching the app ecosystem and enhancing user experiences.
Ensuring safety for apps with generative AI features
Over the past year, we’ve expanded our review capabilities to address new complexities that come with apps with generative AI features. We’re using new technology like large language models (LLMs) to quickly analyze app submissions, including vast amounts of text to identify potential issues like sexual content or hate speech, and flag them for people on our global team to take a closer look. This combination of human expertise and increased AI efficiency helps us improve the app review experience for developers and create a safer app environment for everyone.
Additionally, we have strengthened Play’s existing policies to address emerging concerns and feedback from users and developers, and keep pace with evolving technologies like generative AI. For example, last October, we shared that all generative AI apps must give users a way to report or flag offensive content without having to leave the app.
Building apps with generative AI features in a responsible way
Google Play’s policies, which have long supported the foundation of our user safety efforts, are deeply rooted in a continuous collaboration between Play and developers. They provide a framework for responsible app development, and help ensure that Play remains a trusted platform around the world. As generative AI is still in its early stages, we have received feedback from developers seeking clarity on the requirements for apps on Play that feature AI-created content. Today we are responding to that feedback and providing guidance to help developers enhance the quality and safety of AI-powered apps, avoid potential issues or delays in app submissions, foster trust among users, and contribute to a thriving and responsible app ecosystem on Google Play:
- Review Google Play policies: Google Play’s policies help us provide a safe and high-quality experience, therefore we don’t allow apps that feature generative AI that can be inappropriate or harmful to users. Make sure you review our AI-Generated Content Policy and ensure that any of your apps meet these requirements to avoid them being rejected or removed from Google Play.
- In particular, apps that generate content using AI must:
- Give users a way to report or flag offensive content. Monitoring and prioritizing user feedback is especially important for apps with generative AI features, where user interactions directly shape the content and experience.
- Promote your app responsibly: Advertising your app is an important tool in growing your business, and it’s critical to do it in a way that’s safe and respectful of users. Ultimately you’re responsible for how your app is marketed and advertised, so review your marketing materials to ensure that your ads accurately represent your app’s capabilities, and that all ads and promotional content associated with your app, across all platforms, meet our App Promotion requirements. For example, advertising your app for an inappropriate use case may result in it being removed from Google Play.
- Rigorously test AI tools and models: You are accountable for the experience in your apps, so it’s critical for you to understand the underlying AI tools and models used to create media and to ensure that these tools are reliable and that the outputs are aligned with Google Play’s policies and respect user safety and privacy. Be sure to test your apps across various user scenarios and safeguard them against prompts that could manipulate your generative AI feature to create harmful or offensive content. For example, you can use our closed testing feature to share early versions of your app and ask for specific feedback on if your users get generated results that they expect.
- This thorough understanding and testing especially applies to generative AI, so we recommend that you start documenting this testing because we may ask to review it in the future to help us better understand how you keep your users protected.
As the AI landscape evolves, we will continue to update our policies and developer tools to address emerging needs and complexities. This includes introducing new app onboarding capabilities in the future to make the process of submitting a generative AI app to Play even more transparent and streamlined. We’ll also share best practices and resources, like our People + AI Guidebook, to support developers in building innovative and responsible apps that enrich the lives of users worldwide.
As always, we’re your partners in keeping users safe and are open to your feedback so we can build policies that help you lean into AI to scale your business on Play in ways that delight and protect our shared users.
You Might Also Like
How to Travel the World for (Almost) Free Using Miles and Points
Do you want to learn how to travel the world using miles and points? Today, I’m sharing an exciting interview...
Introducing Android XR SDK Developer Preview
Posted by Matthew McCullough – VP of Product Management, Android Developer Today, we're launching the developer preview of the Android...
Solitaire Clash Review – Is Solitaire Clash Legit?
The following Solitaire Clash Review is a sponsored partnership with Aviagames. Solitaire Clash is quickly becoming one of the most...
User-Agent Reduction on Android WebView
Posted by Mike Taylor (Privacy Sandbox), and Mihai Cîrlănaru (Web on Android) The User-Agent string has been reduced in Chrome...