AI beauty boom sparks trust, privacy and false advertising concerns
Key takeaways
- Beauty brands are rapidly adopting AI tools to personalize shopping and marketing.
- Experts warn AI-generated images and advice could mislead consumers and hurt trust.
- Human oversight is key to avoiding privacy, legal, and false advertising risks.

Beauty brands are racing to roll out AI-powered advisors and marketing tools to personalize the consumer journey, but experts warn the same technology could blur the line between aspirational beauty and deception. As AI becomes more entrenched in consumer-facing materials, brands face fresh questions about privacy, trust, and false advertising.
According to Ceren Canal Aruoba, managing director of consulting firm BRG, the beauty industry is already under scrutiny for exaggerated product claims and promoting unrealistic beauty standards.
With AI-generated visuals increasingly used across marketing channels, and AI-generated product recommendations becoming standard in many shopping journeys, she says that beauty brands face a heightened risk if the materials create expectations that products cannot meet.
“From a litigation and compliance perspective, beauty is often viewed as a category where increased use of AI in marketing can have meaningful implications for long‑term brand trust and credibility,” Canal Aruoba tells Personal Care Insights.
“AI-altered images may influence how consumers perceive product performance — such as smoother skin, fewer wrinkles, or more dramatic cosmetic effects — and, in some cases, may blur the line between what is visually compelling and what is realistically achievable with the product.”
She explains that, when AI-generated visuals suggest outcomes that consumers may not reasonably attain, potential legal concerns around deception, misleading advertising, and unsupported consumer takeaways can arise.
“Viewed in that light, AI governance in beauty marketing is not solely a creative or innovation issue — it also has implications for claims substantiation and risk management.”
Consumers increasingly use AI-powered beauty tools to personalize their shopping journey.
Canal Aruoba tells us that the future of AI in beauty may depend on human oversight and governance to ensure that personalization and creativity do not come at the expense of credibility.
More data, more personalization
In the cosmetics retail space, Ulta Beauty has announced a collaboration with Google, using the multinational tech corporation’s Gemini AI to power a personalized shopping assistant in the Ulta app.
The experience uses insights from over 46 million Ulta Beauty members to help power the recommendations, similar to Sephora’s recently announced ChatGPT-powered shopping experience that uses consumers’ Beauty Insider account data for personalized recommendations.
Canal Aruoba tells us that the trend of using AI in beauty retail indicates a change in how the industry targets its consumers.
“It suggests a shift in how AI may be positioned, from a marketing enhancement toward a more integrated customer interface, potentially pointing to a future that relies less on static search and more on ongoing, conversation‑based shopping experiences that draw on loyalty data,” she says.
She explains that using AI to reach consumers reflects a move away from purely campaign-driven engagement toward more continuous and conversational forms of clienteling. The approach is particularly relevant to the personal care industry, as guidance and reassurance are becoming key differentiators in purchase journeys.
However, Canal Aruoba notes that deepening the personalization of a consumer’s purchasing journey increasingly relies on more granular and sensitive consumer data, reinforcing the importance of establishing trust and maintaining privacy.
Aspirational beauty marketing materials that appear unattainable can hurt consumer trust.
“How brands manage those considerations [trust and privacy] may play a critical role in determining whether these tools strengthen or strain long-term customer relationships,” she says.
Balancing aspiration and attainability
Many beauty giants, including Estée Lauder and L’Oréal, are using AI to scale content production for campaigns and product pages. The technology allows them to shorten lead and production times and adapt imagery for specific markets, while predictive analytics can estimate campaign effectiveness before launch.
Canal Aruoba explains that companies should draw a line between creative AI use and content that could create unrealistic consumer expectations.
“In beauty and personal care, consumers often cannot reliably distinguish between AI-generated and human-made imagery, but that does not mean the images are processed or interpreted in the same way,” she says.
Canal Aruoba cites research suggesting that AI-generated visuals are often seen as more polished and visually refined, which can be attractive in categories like skin care and cosmetics.
“Those same characteristics may also make the images feel less realistic or less authentic to some consumers. In beauty, that distinction can be meaningful, because consumers are not only evaluating a product — they are forming expectations about appearance, performance, and results.”
To find the balance between using AI and maintaining trust, Canal Aruoba says personal care brands must look beyond whether an image is visually effective and ask whether it feels believable and aligns with outcomes consumers view as reasonably attainable.
Inaccuracies in AI-generated materials can relate directly to core product claims.
“In categories closely tied to self-presentation and personal identity, authenticity tends to play an important role in how brand messages are received,” she explains, noting that outcomes that feel aspirational but implausible could subtly shift or strain consumer expectations over time.
Human safety net
Canal Aruoba says that human oversight plays an increasingly central role when brands integrate AI into customer-facing tools. While generative AI can drive efficiency and scalability, it also has the potential to amplify inaccuracies, and in the personal care industry, she explains that those inaccuracies may relate directly to core product claims.
“AI-generated copy, chatbot responses, and personalized recommendations can, at times, unintentionally exaggerate efficacy, imply scientific support that is not well substantiated, or blur the line between aspirational branding and measurable performance.”
Even where AI-generated outputs are automated, Canal Aruoba says brands generally remain responsible for the messages conveyed — particularly in highly scrutinized categories such as skin health, acne, anti-aging, and sun protection, where misleading information can cause long-lasting damage.
“As a result, a key operational consideration is that AI should not function outside existing claims-review frameworks,” she says.
Human-in-the-loop oversight can therefore play an important role in helping ensure that AI-driven content remains accurate, appropriately substantiated, and aligned with applicable legal and regulatory obligations.
“Without that oversight, there is a risk that AI may scale not only creativity, but also legal and reputational exposure.”