BOULDER, Colo. — Everyone's social media feed is tailored to their own likes and preferences. The algorithms that make that possible also make it so that topics you don't interact with stop showing up on your feed.
With the help of a grant from the National Science Foundation, researchers at the University of Colorado Boulder (CU) want to change that.
Information science professor and department chair Robin Burke has been studying recommender systems since 1995. Recommender systems provide personalized access to items or content in large spaces where it would be difficult to look at everything. Burke said Twitter, Facebook and other social media platforms are also considered recommender systems.
"They’re ordering what you’re seeing in a personalized way and your list is different than somebody else, even if they have the same friends or same connections," he said. "Your news feed is different from my feed, things that get recommended to you on Amazon are different than things that are gonna get recommended to me."
CONCERNS WITH RECOMMENDER SYSTEMS
Systems should have some degree of personalization, but Burke said it can come at the cost of fairness, and the two need a better balance.
"If the system is recommending good jobs to men and not so good jobs to women, that’s a problem," he said.
Burke points to a study that found men and women on Facebook were getting different job recommendations despite having the same qualifications. He said these algorithms can also affect artists and content creators from underrepresented communities.
"If the system is recommending music of some kinds of artists and not other kinds of artists, they don’t get heard, they don’t get opportunities, that’s a problem too," Burke said.
In June 2020, TikTok apologized to members of the Black community after what the company said was a technical glitch that caused posts containing #BlackLivesMatter or #GeorgeFloyd to display zero views.
Unfair and biased algorithms can also affect people who rely on social media as their primary source of news.
"If it’s an algorithm deciding what’s on the front page, and that’s a different front page for everybody, then we really have this question about well, are all the viewpoints that need to be heard being heard by all the people that need to hear them," Burke added.
If you click on articles or posts that represent a particular viewpoint, the system will detect that and show you more content reflecting those viewpoints and perspectives.
"Eventually you can end up in a situation where you have very homogenous set of inputs," Burke said. "You might like that, like 'okay the world is full of people who think like me,' but it doesn’t really represent the world, and I think one of the things that happens...they somehow think that that’s representative of the world."
ADDRESSING THE ISSUE
Burke said there's no one-size-fits-all system, but the researchers will work with stakeholders to identify the fairness objectives of different organizations.
With their $930,000 grant from the National Science Foundation, Burke and colleagues from Tulane University hope to create systems that companies and nonprofits can use to build their own fairness-aware recommenders, which, they say, will have "algorithms with a built-in notion of how to optimize fairness."
They plan to work with Kiva, an international nonprofit that lets people offer loans to underserved communities, to develop tools that balance competing objectives. Kiva will help the researchers test the methods they produce.