This is interesting - earlier this month, Twitter rolled out its updated, full image display format for visuals added to tweets, ensuring that users get the full context in image previews, as opposed to an automatically cropped version.
Which is a good update, even if it does shift around your tweet timelines a little. But what's particularly interesting is the logic behind the update, which Twitter has shed some new light on in an overview posted today.
Essentially, the update came about as a result of its efforts to address algorithmic bias, not to provide an improved user experience.
The investigation into this began last October, when users started sharing examples of how Twitter's image cropping algorithm favored white people over black people in attached images.
Twitter launched its own investigation as a result of these example tweets, which then led to a broader analysis of its visual algorithms, and their suitability for this task.
And it did, indeed, find problems.
Twitter says that its previous image cropping process utilized a 'saliency algorithm' to determine image cropping.
"The saliency algorithm works by estimating what a person might want to see first within a picture so that our system could determine how to crop an image to an easily-viewable size. Saliency models are trained on how the human eye looks at a picture as a method of prioritizing what's likely to be most important to the most people. The algorithm, trained on human eye-tracking data, predicts a saliency score on all regions in the image and chooses the point with the highest score as the center of the crop."
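The core mechanism Twitter describes, picking the highest-scoring point in a saliency map as the crop center, can be sketched as follows. This is a minimal illustration, not Twitter's actual implementation: the saliency map is assumed to be supplied (in practice it would come from a model trained on eye-tracking data), and `saliency_crop` is a hypothetical helper name.

```python
import numpy as np

def saliency_crop(image: np.ndarray, saliency: np.ndarray,
                  crop_h: int, crop_w: int) -> np.ndarray:
    """Crop `image` to (crop_h, crop_w), centered on its most salient point.

    `saliency` is a per-pixel score map with the same height and width as
    `image`. In Twitter's pipeline these scores would be predicted by a
    trained saliency model; here the map is simply passed in.
    """
    h, w = saliency.shape
    # Choose the pixel with the highest saliency score as the crop center.
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Clamp the crop window so it stays fully inside the image bounds.
    top = min(max(cy - crop_h // 2, 0), h - crop_h)
    left = min(max(cx - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: a 10x10 image with a single saliency peak at row 2, column 7.
img = np.arange(100).reshape(10, 10)
sal = np.zeros((10, 10))
sal[2, 7] = 1.0
crop = saliency_crop(img, sal, 4, 4)
```

Whatever region scores highest, a face, a jersey number, or something else entirely, becomes the anchor of the crop, which is exactly why biases in the underlying scores translate directly into biased crops.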
After the user reports of potential bias within this process, Twitter conducted internal testing, which showed that its saliency algorithm did have a stronger preference for white people in images over black people, while it would also, sometimes, choose "a woman’s chest or legs as a salient feature" to determine the key elements of focus in pictures.
Both are obvious problems, and they led Twitter to update its image cropping process, with the platform since moving to full image display in the mobile app.
To be clear, Twitter's investigations found the issues to be marginal:
- In comparisons of black and white individuals, there was a 4% difference from demographic parity in favor of white individuals.
- For every 100 images of women, about three were cropped at a location other than the head.
- When images weren't cropped at the head, they were cropped to non-physical aspects of the image, such as a number on a sports jersey.
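For context on the first figure, "difference from demographic parity" can be read as how far one group's share of favorable crops deviates from an even 50/50 split. A minimal sketch of that reading, under the assumption that the comparison counts which group the algorithm favored in paired images (`parity_difference` is a hypothetical helper, not Twitter's metric code):

```python
def parity_difference(favored_a: int, favored_b: int) -> float:
    """Deviation of group A's share of favorable outcomes from parity.

    Under demographic parity, each group would be favored in 50% of
    paired comparisons; the returned value is how far group A's actual
    share deviates from that 0.5 baseline.
    """
    total = favored_a + favored_b
    return favored_a / total - 0.5

# Example: in 100 paired comparisons, white individuals favored 54 times.
gap = parity_difference(54, 46)  # 0.04, i.e. a 4% difference
```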
Even so, Twitter decided to take action, as any level of bias on these fronts is unacceptable.
Which also led Twitter to an interesting finding:
"One of our conclusions is that not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people."
Algorithms are very good at optimizing tasks and showing people more of what they like, but, as shown here, they can also reinforce existing bias and feed problematic behaviors.
It's interesting to see a major social network conclude that, maybe, in some instances, algorithms are not the way forward.
Maybe that could prompt other platforms to put more consideration into the same.
You can read Twitter's full overview of its image cropping update here.