Disinformation is false or manipulated information spread deliberately to mislead, and the practice is not new: it stretches from imperial-era war propaganda to today's digitalized world. Social media has brought its own perils, one of which is its use to spread false information at scale, with the power to shift public opinion and even change the dynamics of elections. Researchers now claim that artificial intelligence systems could efficiently detect and counter the spread of disinformation on digital platforms. The Reconnaissance of Influence Operations (RIO) program, built at MIT Lincoln Laboratory, promises to do just that: automatically detect and analyze the social media accounts used to spread disinformation across a network.
Aim and Initial Studies for the Development of the Program
The researchers’ main aim was to create a system that would automatically detect disinformation and the accounts involved in spreading it. The RIO program dates back to 2014, when the team began studying how certain groups were exploiting social media for their own purposes. In that early work they observed unusually heavy activity from accounts that were pushing pro-Russian narratives. The team then turned to the 2017 French elections to see whether similar techniques would be used to manipulate the vote. Collecting real-time social media data to search for and analyze the spread of disinformation, they compiled a whopping 28 million Twitter posts from 1 million accounts.
Achievements of the RIO System
With the RIO system, the team reports that it can detect disinformation accounts with 96% precision. The system is unusual in that it combines a whole plethora of analytical techniques to build a comprehensive view of how and from where disinformation is being spread. It also breaks from the traditional reliance on activity counts; instead, the team developed a statistical approach. The reason for this unconventional choice is that activity counts alone do not reveal how much of a role an account plays in the social network as a whole. The new approach instead measures how much an account causes the network to change opinion and how far its messages spread onward, giving a bird's-eye view of the entire account, as illustrated by the sketch below.
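As a rough illustration of the idea (not the RIO statistic itself), the hypothetical Python snippet below contrasts raw post counts with a simple reach measure computed over a made-up retweet graph; the account names, edges, and post counts are all invented for demonstration.

```python
# Minimal sketch of a network-impact measure of the kind described above.
# This is NOT the RIO statistic; it only contrasts raw activity counts with
# a reach-based measure over a hypothetical retweet graph.
import networkx as nx

# Directed edges point from the original poster to the account that reshared them,
# so reachability approximates how far a message can propagate.
retweet_graph = nx.DiGraph([
    ("acct_A", "acct_B"), ("acct_A", "acct_C"),
    ("acct_C", "acct_D"), ("acct_E", "acct_A"),
])

post_counts = {"acct_A": 3, "acct_B": 40, "acct_C": 2, "acct_D": 1, "acct_E": 5}

for account in retweet_graph.nodes:
    reach = len(nx.descendants(retweet_graph, account))  # accounts downstream of this one
    activity = post_counts.get(account, 0)                # raw post volume
    print(f"{account}: posts={activity}, downstream reach={reach}")
```

The point of the toy example is that the busiest account is not necessarily the one whose messages travel furthest through the network, which is exactly the distinction a network-impact measure is meant to capture.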
Most automated systems in use today can detect only accounts run by bots, but the RIO program is claimed to also characterize the behavior of accounts operated by humans. Beyond detection, the system is said to answer which countermeasures might work and how successful they would be at stopping a given campaign; one crude way to reason about such countermeasures is sketched below.
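The source does not describe how RIO evaluates countermeasures, so the following is only a naive illustrative sketch: remove a suspected amplifier account from a toy interaction graph and compare how far a narrative could spread before and after. The graph, the seed account, and the amplifier names are all assumptions.

```python
# Hedged sketch: gauge a countermeasure by removing a suspected amplifier
# from the interaction graph and comparing reach before and after.
import networkx as nx

graph = nx.DiGraph([
    ("seed", "amp_1"), ("amp_1", "user_1"), ("amp_1", "user_2"),
    ("seed", "amp_2"), ("amp_2", "user_3"),
])

def total_reach(g, source):
    """Number of accounts reachable from the source via reshares."""
    return len(nx.descendants(g, source)) if source in g else 0

baseline = total_reach(graph, "seed")

# Simulate a countermeasure: suspend the most active amplifier.
pruned = graph.copy()
pruned.remove_node("amp_1")
after = total_reach(pruned, "seed")

print(f"reach before: {baseline}, after removing amp_1: {after}")
```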
The Machine Learning Approach
A machine learning approach is one of the primary features of this program: the claim is that accounts can be classified by looking at their data-related behaviors, ranging from how they interact with foreign media to the languages they use to communicate. This makes it easier to understand the aim of these accounts and how, where, and in which domains disinformation is being spread. A toy sketch of this kind of behavioral classification follows.
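The paper's actual features and model are not spelled out here, so the snippet below is only a toy sketch of behavior-based account classification: the feature set (share of interactions with foreign media, language mix, posting cadence, retweet share), the synthetic data, and the random-forest classifier are all assumptions for illustration.

```python
# Illustrative sketch of classifying accounts from behavioral features,
# in the spirit described above; not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy features per account:
#   [share of interactions with foreign media, share of posts in a second language,
#    posts per day, fraction of posts that are retweets]
X = np.vstack([
    rng.normal([0.7, 0.6, 40.0, 0.9], 0.1, size=(50, 4)),  # synthetic "influence op" accounts
    rng.normal([0.1, 0.1, 5.0, 0.3], 0.1, size=(50, 4)),   # synthetic ordinary accounts
])
y = np.array([1] * 50 + [0] * 50)  # 1 = suspected influence account

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_account = [[0.65, 0.55, 35.0, 0.85]]
print(clf.predict(new_account), clf.predict_proba(new_account))
```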
The team at MIT wanted to create an effective system not only for national security but also for protecting democracy. If the RIO program proves as successful as claimed, it could be used by governments and industry. The team also envisions going beyond social media to traditional media such as newspapers and television. A follow-on program has already been launched to study the cognitive aspects of these disinformation campaigns and how they manage to influence the behavior of individuals.
Paper: https://www.pnas.org/content/118/4/e2011216118
Source: https://news.mit.edu/2021/artificial-intelligence-system-could-help-counter-spread-disinformation-0527