Navigating legal terrain with Duane Morris: AI implementation risks in the beauty industry
19 Feb 2024 --- With the rise of personalized beauty technology and Web3 opportunities, legal issues may pose a challenge for personal care companies. From privacy concerns to legal disputes, Duane Morris sheds light on threats and offers strategies to mitigate potential legal pitfalls in AI-driven beauty solutions.
In the first of a two-part series, Personal Care Insights speaks with Agatha Liu, a partner at Duane Morris and a legal expert with an AI research background, who warns of potential risks as beauty companies integrate AI technologies. These risks stem from personalization, appearance bias and regulatory compliance.
How do you perceive the potential risks associated with integrating AI technologies to enhance customer experiences in the beauty industry?
Liu: In the beauty context, it’s important for companies to be aware of potential pitfalls in integrating AI technologies like virtual try-on technology (VTO), automated product or service applications or chatbots that act as virtual assistants and offer real-time, responsive product recommendations. These risks can include a lack of accuracy, lack of propriety (possibly giving offense), invasion of consumer privacy or possible IP infringement. What if the AI technologies are built from content owned by third parties and accessed without permission?
What major obstacles might beauty companies encounter when using AI-driven solutions to enhance the user experience from a legal standpoint?
Liu: Some fundamental challenges are the growing personalization of beauty, the industry's heavy reliance on appearance, which is generally subjective, and the regulation of beauty products and services.
Collecting more personal information than usual, whether to visualize new looks or to offer personalized product recommendations, may be unavoidable in building and using AI technologies. Such personal information can range from physical features to health data. However, the management of personally identifiable information, and whether a consumer has received sufficient information to consent to its collection, is heavily regulated by privacy laws.
A heavier reliance on images and presentations may also be unavoidable. Such images and presentations are often imprecise or incomplete, which can introduce bias when they are processed automatically by AI technologies and lead to discriminatory dealings with customers that are illegal under civil rights laws.
There may also be more room for dispute over the accuracy or propriety of AI-generated content, including instructions for actions. Unless the AI technologies are built to generate quantitative, measurable information, customers may continue to question whether the described purposes or effects are realized and bring claims under consumer protection laws.
There may be difficulties in ensuring that any generative software avoids making product claims that are false or misleading, unsubstantiated or that blur the lines between products regulated by the FDA as cosmetics versus products that are regulated as over-the-counter drugs.
Can you provide examples of legal issues within the beauty industry due to AI, and how they were addressed?
Liu: Yes, recently we’ve seen a number of class action suits against beauty companies alleging that their VTO tools collect users’ biometric facial information without their informed consent, in violation of Illinois’s Biometric Information Privacy Act (BIPA). BIPA requires companies that collect certain biometric data to obtain informed consent before collection, to inform consumers about their use, retention and destruction policies for that data, and to take steps to protect it.
In terms of the lawsuits related to privacy and specifically in violation of the BIPA, it appears that so far, the courts have found it a problem only when the collected personal information is tied to common, explicit personal identifiers, such as name or address. On the one hand, that simplifies the issues to some extent, as that has reduced the number of combinations of pieces of information that could be deemed to put customer privacy at risk. On the other hand, this appears to be an oversimplification that will need to evolve.
We’ve also seen instances where state attorneys general have exercised their enforcement authority under state privacy laws to pursue claims against beauty companies for allegedly failing to disclose the collection and sale of consumer personal information to third parties or failing to process consumer “opt-out” requests with regard to the sale of that data via privacy controls. In other contexts, we’ve seen significant litigation alleging that companies mislead young users about safety features and that their filters promote body or facial dysmorphia, appearance-related anxiety or depression, or obsessive thinking. We expect these lawsuits to continue.
What steps do you advise beauty companies to take to reduce the legal risks that could arise from implementing AI?
Liu: Very generally, companies should understand the technology they’re adopting and implementing and balance the benefits of the technology against potential legal pitfalls. That means liaising with outside counsel and privacy teams to obtain clarity and mitigate risks. The measures normally include:
- If using existing tools, study and test them: Do they avoid IP infringement, produce accurate and appropriate results and protect consumer privacy?
- If developing tools in-house, adopt guardrails by seeking legitimate, properly sourced training data, applying data-cleansing techniques and reducing algorithmic bias.
- Offer careful education and training to employees and customers on the nature of AI tools, their usage and improvement.
- Establish agreements with engineers, partners, customers and all relevant parties to define IP ownership, clarify data handling and reduce liability.
- Follow AI-related legal developments, which often have an impact across industries.
- Various technologies have been developed to combat the risks AI brings; their use should be combined with human review.
Personal Care Insights recently connected with beauty SaaS solution providers EveLab Insight and Haut.AI in a two-part report on overcoming AI beauty biases.
By Venya Patel