Meta is asking users to grant its AI access to their unpublished photo libraries as it tests a new feature on Facebook, as first reported by TechCrunch. The company, which also owns Instagram, has admitted to using photos and text from public posts since 2007 to train its generative AI models, raising concerns about how Meta might use expanded access to personal information.
Some Facebook app users are encountering pop-ups asking them to grant Meta AI access to their camera rolls for “cloud processing.” The apparently new feature uses AI to “restyle” photos, group images by themes like “birthdays or graduations,” and make “personalized creative ideas, like travel highlights and collages.” The notification says that Meta will periodically “select media” from users’ camera rolls based on time, location, and theme, and that the media will not be used for “ad targeting.”
In a statement to Hyperallergic, the company denied that it is using data from users’ photo libraries to train its AI models. However, Meta’s AI Terms of Service, which users must accept in order to access the service, reserve the right to use “personal information” to “improve AIs and related technology.”
“By tapping ‘Allow,’ you agree to Meta’s AI Terms,” the feature’s pop-up states when users attempt to upload a Story on the app. “Media and facial features can be analyzed by Meta AI. To create ideas, we’ll use info like date and presence of people or objects.”

Meta introduced the function with three visual examples at the top, including an apparently enhanced photo of the Eiffel Tower, a trio wearing saturated clothes, and a digital collage titled “weekend recap.”
“We’re exploring ways to make content sharing easier for people on Facebook by testing suggestions of ready-to-share and curated content from a person’s camera roll,” a Meta spokesperson told Hyperallergic in a statement. “These suggestions are opt-in only and only shown to you — unless you decide to share them — and can be turned off at any time. Camera roll media may be used to improve these suggestions, but are not used to improve AI models in this test.”
The spokesperson did not provide an on-the-record response to questions about whether the new feature would be rolled out on Meta’s other platforms, when it was introduced, or whether data collected from unpublished images would be used to train AI in the future.
In the United States, Meta was not required to notify users that it was using public posts to train its AI, according to the New York Times. There are no opt-out options for US users, either. In Europe, however, stricter laws allow those on Instagram and Facebook to opt out of Meta’s data scraping project.
Artists have long raised concerns over the broader practice of training AI on publicly available images from the internet, which they say allows artificial intelligence models to replicate their artistic styles. Some argue that the generative technology’s learned ability to mimic artists’ styles could be detrimental to their livelihoods.