Artificial intelligence warning: Study flags beauty tech biases blocking inclusivity
24 Nov 2023 --- In the ever-evolving landscape of personal care, the integration of AI is revolutionizing how products are developed and personalized for consumers. However, recent research by Haut.AI and Novigo sheds light on the limits of this digital transformation, specifically addressing biases in the cosmetic skin care industry and beauty tech applications.
The research paper, titled “How Artificial Intelligence Adopts Human Biases: The Case of Cosmetic Skincare Industry,” delves into the challenges posed by biases in AI development throughout the industry. Despite the industry’s pursuit of optimal supply chains, high-quality products and personalized customer experiences, digital technologies come with inherent risks.
“AI has inherited a number of negative characteristics of the human mind. These include unjustified bias and discrimination against individuals, particularly in the context of non-conformity with global beauty standards,” flag the authors.
For instance, the authors spotlight Beauty.AI, a 2016 beauty contest held among women and judged by AI, in which almost all of the winners selected by the algorithms had white skin.
Unveiling the shadows
The paper pinpoints potential sources of AI bias in the cosmetic skin care industry across various stages of the AI lifecycle.
By meticulously dissecting the AI development process, the research aims to create awareness among industry stakeholders, from executives and managers to developers and end-users.
To comprehend the nuances of AI bias in cosmetic skin care, the authors stress it is imperative to understand its definition. It is described as the inclination or prejudice of an AI system toward or against a particular person or group, leading to socially perceived unfair decisions.
In cosmetic skin care, bias can emerge in predicting outcomes related to beauty, introducing challenges such as ethnicity, gender, age and overall physical appearance.
According to Ruha Benjamin’s book “Race After Technology,” the concept of the “New Jim Code” describes how human discrimination and biases can be reflected in technological systems, which then systematically perpetuate racial hierarchies and social injustices.
Decoding biases
With globalization, “the blurring of intercultural boundaries” has created an international beauty standard. The research paper identifies key stages where bias may infiltrate the cosmetic skin care industry.
Cultural differences in beauty perceptions, alongside the global standardization of beauty ideals, contribute to biases in AI algorithms targeting cosmetic skin care.
The multidimensionality of beauty introduces challenges, particularly in target variable selection, potentially leading to discrimination based on race, ethnicity, age, sex and gender, note the report authors.
Other sources of bias arise during data acquisition, including sampling, confounding and measurement errors. A lack of representative datasets, improper annotation and oversimplified training datasets can all result in biased AI models.
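One way to catch the dataset-representativeness problem the paper describes is to audit group shares before training. The sketch below is not from the research paper; it is a minimal illustration assuming hypothetical skin-tone labels based on the Fitzpatrick scale (types I–VI) and an arbitrary 10% minimum-share threshold.

```python
from collections import Counter

# Hypothetical skin-tone labels based on the Fitzpatrick scale (types I-VI)
GROUPS = ["I", "II", "III", "IV", "V", "VI"]

def representation_report(labels, groups=GROUPS, min_share=0.10):
    """Return each group's share of the dataset and flag groups that
    fall below a chosen minimum-representation threshold."""
    counts = Counter(labels)
    total = len(labels)
    return {
        g: {
            "share": counts.get(g, 0) / total,
            "underrepresented": counts.get(g, 0) / total < min_share,
        }
        for g in groups
    }

# A toy, deliberately skewed dataset: darker skin types V and VI are scarce
dataset = ["II"] * 50 + ["III"] * 30 + ["IV"] * 15 + ["V"] * 4 + ["VI"] * 1
report = representation_report(dataset)
```

A report like this only flags the skew; fixing it still requires collecting or reweighting data for the underrepresented groups.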
“Digital tools in the field of beauty technology and cosmetic skin care include state-of-the-art techniques for collecting data on the characteristics of skin and hair. These tools interpret complex, multidimensional datasets and provide recommendations for a product or treatment,” explain the authors.
Additionally, objective optimization bias during model training impacts the correctness of AI algorithms. The paper highlights instances where inadequate objective functions led to incorrect outcomes, such as the transfer of makeup colors in BeautyGAN, “an algorithm for transferring makeup from one face image to another by the generative adversarial network.”
For example, BeautyGAN rendered black eye shadow as blue on some faces but not others.
The authors stress that the validation step is crucial for assessing prediction effectiveness, as it can be susceptible to the same biases as the training dataset.
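Validation that surfaces the biases the authors warn about typically means disaggregating metrics by group rather than reporting a single headline number. The following sketch is a minimal illustration of that idea, not the paper’s methodology; the group labels and toy predictions are invented.

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Break accuracy down by demographic group and report the
    worst-case gap between the best- and worst-served groups."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    accuracy = {g: hits[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy validation set: the model is perfect on group "A" but poor on "B"
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
accuracy, gap = per_group_accuracy(y_true, y_pred, groups)
```

Aggregate accuracy here is 75%, which hides the fact that one group is served twice as well as the other; the per-group breakdown makes the disparity visible.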
Deployment and monitoring biases
According to the authors, post-production biases and continuous monitoring challenges arise once machine learning-based systems are integrated into workflows. Therefore, constant evaluation and adaptation are essential to identify errors, biases and societal changes that may impact the algorithm’s performance over time.
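The continuous monitoring the authors call for can be as simple as comparing each group’s outcome rates in production against the rates recorded at deployment. This sketch assumes an invented 0.05 drift tolerance and hypothetical group names; it is one possible implementation, not a method from the paper.

```python
def drift_alert(baseline_rates, current_rates, tolerance=0.05):
    """Compare each group's current positive-prediction rate against the
    rate recorded at deployment; flag groups that drift beyond tolerance."""
    return {
        group: abs(rate - baseline_rates.get(group, 0.0)) > tolerance
        for group, rate in current_rates.items()
    }

# Rates recorded when the model shipped vs. rates observed this month
baseline = {"group_a": 0.48, "group_b": 0.47}
current = {"group_a": 0.50, "group_b": 0.60}
alerts = drift_alert(baseline, current)
```

An alert of this kind does not diagnose the cause of the drift; it only signals that the model’s behavior toward a group has changed enough to warrant review.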
The research paper underscores the importance of structured approaches to AI development, classification of biases and continuous evaluation at each stage of the AI lifecycle.
Acknowledging that AI mirrors societal prejudices, the authors advocate for the responsible development and deployment of AI systems, ensuring inclusivity and treating diverse groups equitably.
Legal landscape
As AI is continually adopted across industries, the authors note countries worldwide are grappling with ethical frameworks and regulations necessary to ensure responsible AI use.
International organizations like UNESCO, G20, G7 and OECD also actively contribute to shaping the AI regulatory landscape, underscore the authors.
The EU, Australia, Japan, the UK, the US, Canada and Singapore have implemented standards for the responsible use of AI, emphasizing safety and protecting fundamental rights.
In the EU, MEPs adopted Parliament’s negotiating position on the AI Act on June 14 this year. Talks with EU countries in the Council on the final form of the law have since begun, with the aim of reaching an agreement by the end of this year.
The authors argue the EU AI Act “does not provide specific criteria for limited risk AI applications in the same detailed manner as for high-risk AI systems.”
“All AI systems developed for the cosmetic skin care industry and/or beauty tech applications that are not classified as high-risk will obviously be classified as minimal or no risk. Even though the AI Act proposes stringent regulation only for high-risk AI systems, it should not be assumed that negative impacts on low-risk applications can be ignored.”
They stress the criticality of regulating bias in AI at a corporate level guided by ethical principles and legal frameworks, “ensuring fairness, transparency and accountability.”
“However, very few national AI strategies address human rights.”
By Venya Patel