Study Reveals Racial Bias in Facebook Ads for Education Opportunities
A 2024 research paper suggests that Facebook’s advertising algorithm has disproportionately delivered ads for for-profit colleges to Black users.
Meta, the parent company of both Facebook and Instagram, has not explained why its billions of users see certain ads that others do not. So a team of researchers from Princeton and the University of Southern California took matters into their own hands, The Intercept reported.
The group purchased ads from Facebook and tracked how they were delivered to real Facebook users, revealing “evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.”
For-profit colleges like DeVry and Grand Canyon University were the focus of the study, in part because both schools have been among those fined or sued by the Department of Education over deceptive advertising.
According to the researchers, for-profit colleges have a “long, demonstrable history of deceiving prospective students,” homing in on students of color with predatory marketing “while delivering lackluster educational outcomes and diminished job prospects” compared with other educational institutions.
To conduct the study, the group purchased ads in matched pairs: one campaign for a public institution, such as Colorado State University, and the other for a for-profit school, such as Strayer University, neither of which, the report notes, was involved in the project.
While advertisers can fine-tune campaigns on Facebook through a range of targeting options like age and location, race is no longer an option that can be selected when preparing to advertise on the social media network. The researchers, however, found a workaround using North Carolina voter registration data, which includes individuals’ races.
Through this strategy, the researchers built a sample audience that was 50% Black and 50% white, with the Black voters drawn from one part of North Carolina and the white voters from another.
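To make the mechanics concrete, here is a minimal sketch, in Python, of how such a balanced audience list could be assembled from voter-file records. The file name, the column names, and the pandas-based approach are illustrative assumptions, not the researchers’ actual pipeline; the two cities come from the paper’s inference rule quoted below.

```python
import pandas as pd

# Illustrative sketch only: the file name and column names (race, city,
# email) are assumptions, not the researchers' actual schema.
voters = pd.read_csv("nc_voter_file.csv")

# Draw Black voters from one city and white voters from another, so that
# the reported ad location can later stand in for race.
black_pool = voters[(voters["race"] == "BLACK") & (voters["city"] == "RALEIGH")]
white_pool = voters[(voters["race"] == "WHITE") & (voters["city"] == "CHARLOTTE")]

n = 10_000  # audience size per group; arbitrary for illustration
audience = pd.concat([
    black_pool.sample(n=n, random_state=0),
    white_pool.sample(n=n, random_state=0),
])

# Identifiers such as emails would then be hashed and uploaded through
# Meta's custom-audience tooling; that step is omitted here.
audience[["email"]].to_csv("custom_audience.csv", index=False)
```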
By using Facebook’s “custom audience” feature, the researchers were able to upload a roster of specific individuals to target with the ads. Facebook does not disclose the race of the users who see an ad, but, according to the paper, it does report the location in which each ad was shown.
“Whenever our ad is shown in Raleigh, we can infer it was shown to a Black person and, when it is shown in Charlotte—we can infer it was shown to a White person,” read the paper.
If the algorithm in question were genuinely unbiased, it would serve each school’s ads to an equal number of Black and white users. Instead, the experiment found a skew: Facebook’s algorithm allegedly “disproportionately showed Black users ads for colleges like DeVry and Grand Canyon University.”
Conversely, more white users saw ads geared toward state colleges, per the study.
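As a rough illustration of this inference and the 50/50 baseline, the sketch below maps each reported impression location to an inferred race and checks the delivery split against an unbiased coin flip using a binomial test. The impression counts are made-up placeholders, not the study’s data.

```python
from scipy.stats import binomtest

# The paper's inference rule: an impression in Raleigh implies a Black
# viewer, and an impression in Charlotte implies a white viewer.
CITY_TO_RACE = {"Raleigh": "Black", "Charlotte": "White"}

# Hypothetical delivery counts by reported location (placeholders only).
impressions = {
    "for-profit ad":    {"Raleigh": 640, "Charlotte": 360},
    "public-school ad": {"Raleigh": 410, "Charlotte": 590},
}

for ad, counts in impressions.items():
    by_race = {CITY_TO_RACE[city]: n for city, n in counts.items()}
    black, total = by_race["Black"], sum(by_race.values())
    # With a 50/50 audience, an unbiased delivery algorithm would make the
    # number of impressions shown to Black users ~ Binomial(total, 0.5).
    result = binomtest(black, total, p=0.5)
    print(f"{ad}: {black / total:.0%} of impressions inferred Black, "
          f"p-value = {result.pvalue:.2g}")
```

A small p-value for a given ad would indicate that the observed split is unlikely under unbiased delivery, which is the kind of disparity the study reports.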
“Addressing fairness in ads is an industry-wide challenge, and we’ve been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system,” said Meta spokesperson Daniel Roberts.
“Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race, and we are actively building technology designed to make additional progress in this area.”
In 2016, a ProPublica report revealed that Facebook allowed advertisers to “explicitly exclude users from advertising campaigns based on their race.”
Since then, the company has removed options allowing marketers to target users by race. However, even if the aforementioned for-profit programs refined their marketing efforts and “aimed for racially balanced ad targeting,” the team of Princeton and USC researchers found that “Meta’s algorithms would recreate historical racial skew in who the ads are shown to and would do so unbeknownst to the advertisers.”