“Today, we’re announcing updates to our developer policies to further elevate the quality of apps on Play and continue delivering the best experiences,” the company said. Citing its commitment to responsible AI practices, Google said the new rules are aimed at ensuring that AI-generated content is safe for people and that user feedback is incorporated.
What are the new rules?
From next year, developers will be required to give users a way to report or flag offensive AI-generated content without needing to exit the app.
“You [developers] should utilise these reports to inform content filtering and moderation in your apps – similar to the in-app reporting system required today under our User Generated Content policies,” Google said.
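To make the idea concrete, here is a minimal sketch of how an in-app report could feed content filtering, as the policy suggests. All names here (`ReportStore`, the threshold) are hypothetical and illustrative; Google's policy does not prescribe any particular implementation.

```kotlin
// Hypothetical sketch: collect in-app reports of AI-generated content and
// hide items that cross a review threshold. Not a Google API.
class ReportStore(private val threshold: Int = 3) {
    private val reports = mutableMapOf<String, Int>()

    // Called when a user flags a piece of AI-generated content.
    fun report(contentId: String) {
        reports[contentId] = (reports[contentId] ?: 0) + 1
    }

    // Content at or above the threshold is hidden pending moderation review.
    fun isHidden(contentId: String): Boolean =
        (reports[contentId] ?: 0) >= threshold
}

fun main() {
    val store = ReportStore(threshold = 2)
    store.report("gen-42")
    println(store.isHidden("gen-42")) // false: one report, still visible
    store.report("gen-42")
    println(store.isHidden("gen-42")) // true: threshold reached, hidden for review
}
```

In practice the reports would also be surfaced to human moderators, but the point of the policy is simply that user flags must influence what the app continues to show.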
In addition, apps that generate content using AI must also prohibit and prevent the generation of restricted content, such as content that facilitates the exploitation or abuse of children, and content that enables deceptive behaviour.
To strengthen users’ privacy, Google has introduced a policy under which apps may access photos and videos only for purposes directly related to app functionality. Apps with a one-time or infrequent need to access these files will be required to use a system picker, such as the Android photo picker.
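For developers, using the Android photo picker might look roughly like the sketch below: the system picker returns a URI for the user's chosen image without the app holding any broad media permission. This assumes an AndroidX activity; the activity name and callback body are illustrative.

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts

class ProfilePhotoActivity : ComponentActivity() {
    // Registers the system photo picker; no READ_MEDIA_* permission is needed,
    // because the user grants access to only the items they select.
    private val pickMedia =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                // Handle the single selected image, e.g. display or upload it.
            }
        }

    // A one-off need (such as setting a profile photo) launches the picker.
    fun choosePhoto() {
        pickMedia.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }
}
```

Because the picker runs in a system process, the app never sees the rest of the user's library, which is precisely the privacy property the policy is after.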
Lastly, Google is setting stronger boundaries around the use of full-screen intent notifications. Going forward, for apps targeting Android 14, full-screen intent notifications will be limited to high-priority use cases, such as alarms and incoming phone or video calls. For all other notifications, apps must ask for the user’s permission.
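On Android 14 (API level 34) and above, an app can check whether it may still post full-screen intents and, if not, send the user to the system settings screen where the permission is granted. A minimal sketch, assuming standard framework APIs:

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.os.Build
import android.provider.Settings

// Returns true if the app is allowed to post full-screen intent notifications.
// Below Android 14 the restriction does not apply.
fun canPostFullScreenIntent(context: Context): Boolean {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.UPSIDE_DOWN_CAKE) return true
    val nm = context.getSystemService(NotificationManager::class.java)
    return nm.canUseFullScreenIntent()
}

// Opens the system settings page where the user can grant the app
// permission to use full-screen intents (Android 14+ only).
fun requestFullScreenIntentAccess(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        context.startActivity(Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT))
    }
}
```

Apps whose core function is not alarms or calls should expect the check to return false by default and fall back to a regular high-priority notification.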