Facebook’s ‘Not-so’ Fake News
With visual content such as memes becoming the language of the web, it’s no surprise that Facebook wants in.
Cultural artefacts such as memes carry a great deal of contextual background, so they’re often difficult for the best of us to get our heads round, let alone a computer system with no sense of humour. And while the social platform already has extensive algorithms and optical character recognition (OCR), it is impossible for humans alone to sift through the 3.2 billion images being shared every day.
So, *drum roll please* Facebook presents to you… ROSETTA! I’ll explain what this is later in the article, but since I’m describing a logical process, I should probably write in a logical order too.
So, to start with the basics: Facebook loves engagement. The current algorithm rewards content that receives engagement, and since people are more likely to comment on and share controversial or nostalgic content, this is often the type of content that goes viral. However, whilst we all love to see great content, the problem is that this paves the way for ‘fake news’.
Whilst ‘fake news’ has become a term used in everyday humorous language, in reality it’s not actually that funny at all (deep, I know). Fabricated and inaccurate content leads people to voice strong opinions, which can feed broader, skewed perspectives and, ultimately, societal divide! (Again, deep, but you get the picture.) Whether we like it or not, sharing information is an impulse we’re all hard-wired with, and it’s a way to stay connected with friends. So, generally, people do tend to believe what their friends are engaging with and sharing on social media.
Take, for example, the image below, which purports to show how MGM gets those famous shots of the lion roaring at the start of its films.
The image on the left was tweeted by Carrie Fisher (of Princess Leia fame), who was deemed a legitimate source, and rightly so considering her background as a Hollywood star. The image went viral and people believed it, yet in reality… it’s just a lion named Samson getting a scan after falling ill at an Israeli zoo.
Examples like this seem harmless, but this false portrayal of a lion being exploited by the film industry resulted in numerous accounts of online abuse targeted at specific individuals. If a user were to see this, then subsequently see a post about, say, Chanel still using fur, their response would probably be even stronger.
So, this is where Rosetta comes into play, because ultimately Facebook wants you to have the most enjoyable experience possible by showing you the real content you want to see. Previously, Facebook could only scan an image and detect text; it struggled to understand the context of those words.
Through Rosetta, however, Facebook can now detect text, place it in a bounding box, and analyse it with convolutional neural nets that try to recognise the characters and determine what they are communicating.* The system then passes this content on to fact-checkers, who determine whether or not something is ‘fake news’. In a nutshell, this artificial intelligence helps to eliminate offensive material, improves search and discovery so you get the content you want to see, and helps make more content accessible to visually impaired users.
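To make that two-step idea a little more concrete, here is a minimal Python sketch of a detect-then-recognise pipeline in the spirit of what the Rosetta paper describes (a Faster R-CNN-style detector feeding a convolutional text recogniser). This is purely illustrative and not Facebook’s actual code: the torchvision detector here is a generic object detector standing in for a text detector, the recognition step is a placeholder, and the threshold and ‘meme.jpg’ filename are assumptions.

```python
# Illustrative sketch only - not Facebook's Rosetta implementation.
# Stage 1: a detection model proposes bounding boxes around likely text.
# Stage 2: a recognition model reads the characters inside each box.

import torch
import torchvision
from torchvision.transforms import functional as F
from PIL import Image

# A generic pretrained detector stands in for Rosetta's text-region detector.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()


def detect_text_regions(image: Image.Image, score_threshold: float = 0.8):
    """Return bounding boxes [x1, y1, x2, y2] for confidently detected regions."""
    tensor = F.to_tensor(image)
    with torch.no_grad():
        output = detector([tensor])[0]
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep].tolist()


def recognise_text(image: Image.Image, box) -> str:
    """Placeholder for the recognition stage (in the paper, a convolutional
    model decodes the characters in each cropped region). Here we just crop
    the box to show the data flow."""
    x1, y1, x2, y2 = map(int, box)
    crop = image.crop((x1, y1, x2, y2))
    return f"<decoded text from {crop.size[0]}x{crop.size[1]} crop>"


def process_image(path: str):
    """Run detection, then recognition, and collect the extracted text.
    Downstream, this text could be routed to classifiers or human
    fact-checkers, as the article describes."""
    image = Image.open(path).convert("RGB")
    return [
        {"box": box, "text": recognise_text(image, box)}
        for box in detect_text_regions(image)
    ]


if __name__ == "__main__":
    for item in process_image("meme.jpg"):  # hypothetical input file
        print(item["box"], item["text"])
```

The point of the two-stage design is scale: the cheap detection pass narrows billions of images down to just the regions that actually contain text, so the heavier recognition and fact-checking work only runs where it is needed.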
Not bad Facebook, not bad!
*For more technical details of Rosetta, see http://www.kdd.org/kdd2018/accepted-papers/view/rosetta-large-scale-system-for-text-detection-and-recognition-in-images