An inside look into gender bias in targeted ads

In this article, Ameya Deshpande and Amritha Arun Babu Mysore unravel the persistent challenge of gender bias in targeted ads, exploring the implications of biased ad personalization, and looking at practical solutions to promote inclusivity and ethical practices.


Targeted advertising was supposed to revolutionize marketing by increasing relevance and customer engagement. Yet far too often these ads are biased: women still find themselves served ads for laundry detergent, cooking, and home goods, while men see ads for gadgets, financial services, and career development opportunities. This persistent stereotyping reveals stubborn gender bias emerging from ad personalization algorithms.

Artificial intelligence (AI) has permeated nearly every aspect of our lives, revolutionizing industries, transforming communication, and shaping our interactions with the digital world. However, this transformative power is accompanied by a growing concern that there is a potential for AI to amplify and perpetuate societal biases, leading to discrimination, unfair treatment, and harm.

Evidence of Gender Stereotypes [Source]

Marketers often rely on gender stereotypes when targeting advertisements. Research shows that women spend significantly more time and money on appearance enhancement than men. A survey found that 78% of American women spend an hour a day on their appearance, and that the two most viewed categories by women on YouTube are appearance-related. Another study revealed that the average woman spends 10 minutes applying makeup daily, uses 16 cosmetic products before leaving home, and spends around $300,000 on facial products over her lifetime. These statistics give advertisers a substantial business case for targeting beauty and cosmetic ads at women rather than men.

Why is this an issue?

Seeing different ads based on gender isn’t always problematic in and of itself. However, the biased assumptions encoded in ad targeting perpetuate restrictive norms, and the resulting limited exposure to diverse perspectives contributes to self-fulfilling prophecies that keep preferences constrained within gendered boxes. Even when conscious beliefs shift towards more progressive attitudes around gender, embedded machine learning models preserve relics of historic structural inequality.

This bias can manifest in various forms, including:

  • Algorithmic bias: Biases embedded in the algorithms themselves, stemming from factors such as the selection and preparation of training data.
  • Data bias: Biases present in the data used to train AI models, reflecting existing societal prejudices and inequalities.
  • Human bias: Biases introduced by humans during the development and implementation of AI systems, influenced by their own personal experiences and subconscious beliefs.

Today, most ad platforms rely heavily on user segmentation and prediction powered by machine learning to forecast individual interests and preferences. These models are trained on behavioral engagement data and past purchase patterns. Herein lies the root issue: past barriers and norms mean usage behavior still skews strongly along traditional gender lines. Without thoughtful algorithm design, biased feedback loops emerge that simply recreate historical imbalances.

This issue pervades across consumer-focused ad platforms, impacting search, social networks, retail, travel and more. Employment and education advertising also demonstrate concerning bias, with far more men being shown technology career ads. Essentially any company using ML for user preference predictions at scale needs to recognize gender bias in monetization algorithms as an urgent priority.

One such example: Women browsing popular online clothing retailers will often find their feeds and recommendations dominated by styles and products that are either overly feminine or catered purely towards very slim body types, while men see a wider range of body types and aesthetic styles represented. This limits inclusiveness for women.

Example Scenario

Jenna is browsing an ecommerce fashion retailer. The recommendation models serve looks based primarily on similarity to what other women with her body type purchased, rather than on her individual style preferences gleaned from her online wardrobe and likes. As a result, she keeps seeing the same restricted set of floral dresses rather than a true mix of her preferred edgier, hipster aesthetic.


Spotting biased ad targeting: The hosting platforms themselves should continuously track the ads and search keywords shown, disaggregated by gender, quantitatively measuring skews in distribution by topic and category. Manual auditing can also reveal issues: a simple exercise is to make identical searches and browse while signed into male and female user accounts, then tally the relative numbers of career-focused ads versus domestic product ads.
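The measurement above can be sketched in a few lines. This is a minimal, illustrative audit, not a production tool: the impression logs, segment names, and category labels are all hypothetical, and a skew ratio far from 1.0 flags something worth investigating rather than proving bias on its own.

```python
from collections import Counter

def category_share(impressions):
    """Return each ad category's share of a segment's impressions."""
    counts = Counter(impressions)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def audit_skew(impressions_by_gender, category):
    """Ratio of a category's impression share between two segments.

    Values far above or below 1.0 indicate the category is
    disproportionately served to one segment.
    """
    shares = {g: category_share(imps).get(category, 0.0)
              for g, imps in impressions_by_gender.items()}
    female, male = shares["female"], shares["male"]
    return female / male if male else float("inf")

# Hypothetical audit data: ad categories served to two test accounts
logs = {
    "female": ["home", "beauty", "career", "home", "beauty", "home"],
    "male":   ["career", "tech", "finance", "career", "home", "tech"],
}
print(audit_skew(logs, "home"))    # >1: over-served to the female account
print(audit_skew(logs, "career"))  # <1: under-served to the female account
```

In practice the same computation would run over billions of logged impressions, with statistical tests to separate real skews from sampling noise.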

Solutions

  • Expand data used for modeling beyond past user purchases and engagement, looking at stated preferences and digital content consumption more holistically.
  • Actively rebalance ad keyword targeting, running controlled experiments to quantify and incrementally improve gender neutrality.
  • Radically increase transparency into the features and algorithms used for ad personalization so biases can be spotted.
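The rebalancing idea in the second bullet can be made concrete as a tunable blend between a fully personalized score and a gender-agnostic one. This is a sketch under assumed inputs (the score dictionaries and parameter names are hypothetical, not any platform's real API):

```python
def blended_score(ad, user, model_scores, global_scores, neutrality=0.3):
    """Blend a personalized ad score with a gender-agnostic score.

    neutrality=0 keeps the original model's ranking; neutrality=1
    ignores gender-correlated personalization entirely. A/B testing
    a grid of neutrality values quantifies the engagement cost of
    each increment of rebalancing, supporting controlled experiments.
    """
    personal = model_scores[(user, ad)]  # score from the personalized model
    neutral = global_scores[ad]          # population-wide score for the ad
    return (1 - neutrality) * personal + neutrality * neutral
```

A team could then pick the largest neutrality value whose measured engagement drop stays within an agreed budget, making the fairness/relevance trade-off explicit rather than implicit.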

Product leaders crafting ad optimization algorithms need to embrace inclusion and representation as a cornerstone of earning long-term community loyalty and trust. This means occasionally prioritizing the richest relevant user experiences over maximizing short-term clicks. It also warrants establishing rigorous processes to continuously measure model output for unintended biased targeting.

Developers play a key role operationalizing these measures through creating tooling for bias detection, implementing algorithmic auditing checks, and translating product requirements focused on fairness into system capabilities. Together, product and engineering leaders can thoughtfully steer our intelligent systems toward promoting empowerment rather than preserving prejudice.

Business and metrics impact

While improving inclusion and representation should be an ethical imperative in its own right, gender bias, and naive attempts to remedy it, can also directly hurt key business metrics:

  • More restricted/biased targeting can artificially limit reach and impressions, reducing potential addressable market
  • Perpetuating stale assumptions rather than capturing evolving interests reduces relevance and conversion rates
  • Turning off personalization may boost safety but can drastically slash clickthrough and other engagement-sensitive KPIs
  • Poor ad relevance risks damaging brand affinity among excluded segments

Quantifying these risks to revenue, engagement, and brand health brings urgency and helps secure resources for algorithm corrections.

As companies move to rapidly roll out fixes to biased ad targeting models, some key mitigations to consider are:

  • Gradually phase in major changes to preserve stability and allow measurement
  • Closely monitor engagement metrics across segments to catch any unforeseen skew regressions
  • Proactively communicate with impacted customers to rebuild trust and preempt backlash risks
  • Frame adjustments as continuous improvement efforts given shifting societal mores
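The segment-monitoring mitigation above can be sketched as a simple regression check run during a gradual rollout. The segment names, CTR values, and tolerance are illustrative assumptions, not real platform data:

```python
def flag_regressions(before_ctr, after_ctr, tolerance=0.05):
    """Flag segments whose CTR dropped after a model change.

    Compares per-segment clickthrough rates before and after the
    rollout and returns any segment with a relative drop larger
    than `tolerance`, so unforeseen skews are caught early.
    """
    flagged = []
    for segment, old in before_ctr.items():
        new = after_ctr.get(segment, 0.0)
        if old > 0 and (old - new) / old > tolerance:
            flagged.append(segment)
    return flagged

# Hypothetical per-segment CTRs around a targeting change
before = {"women_18_34": 0.040, "men_18_34": 0.050}
after = {"women_18_34": 0.041, "men_18_34": 0.046}
print(flag_regressions(before, after))  # men_18_34 dropped 8%, beyond 5%
```

Wiring a check like this into the rollout pipeline turns "closely monitor engagement metrics" from a manual chore into an automated gate.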

With care and cross-functional collaboration, brands can overcome the detrimental impacts of algorithmic bias while responsibly enhancing experiences. Ongoing vigilance, guided by consumer insights, helps sustain positive momentum.

Conclusion

Targeted advertising holds great potential to revolutionize customer engagement through personal relevance when executed responsibly. As the above analysis illuminates, left unchecked, algorithmic and data biases risk perpetuating harmful assumptions, limiting inclusiveness and violating user trust.

Thankfully many viable technical and process solutions exist, centered on expanding input data diversity, actively balancing model outputs, and radically increasing transparency. Executed collaboratively across product, engineering, marketing, and executive leadership committed to ethics, continuous improvements steadily compound. Guided by consumer intimacy, not operational metrics alone, our systems can empower rather than constrain possibility for all.

Disclaimer: The views and opinions expressed in this article are those of the authors solely and do not reflect the official policy or position of any institution, employer, or organization with which the authors may be affiliated.