Reno Reporter

Saturday, September 21, 2024

University of Nevada Reno professor develops model for filtering out online propaganda

Online propaganda is an ever-worsening danger to national security, democratic institutions and public health. | Photo by Christin Hume on Unsplash

A linguistic and game theory approach can help identify and explain online propaganda, says a Nevada professor of information systems. 

According to a news release shared by the University of Nevada Reno, online propaganda is an ever-worsening danger to national security, democratic institutions and public health.

The two most recent notable cases of published misinformation concern alleged voter fraud that led to the Jan. 6, 2021, insurrection at the U.S. Capitol and anti-vaccination propaganda, which continues to undermine scientific efforts to turn the tide of COVID-19, the release states.

“The real-time graphical explanation of the propaganda score could in part enhance user trust in the judgments and make the models accepted at large,” Arash Barfar, assistant professor of information systems at the University of Nevada Reno's business school, said in the release.

Barfar went into more detail regarding his model. 

“In addition to its superior predictive performance, the final predictive model outran the baseline models, which is especially important for timely detection of computational propaganda that exploits bots and algorithms for audience targeting,” Barfar said. “It only takes a few seconds on a desktop computer to build the propaganda detection model from 205,000 news articles, each with nearly 100 linguistic features. Specifically, as complex models achieve state-of-the-art predictive performance, interpretation and explanation of their decisions become more difficult.

“The inability to explain why a complex machine learning model makes a certain prediction can potentially lower the user’s trust in the model regardless of its accuracy,” he added.
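The release does not describe how Barfar's explanations are computed, but the general idea of a real-time, per-article explanation can be illustrated with a simple linear classifier, where each linguistic feature's contribution to the propaganda score is its learned weight times its value in the article. The sketch below is illustrative only, assuming scikit-learn and made-up feature names; it is not the model from the study.

```python
# Illustrative only: explain a linear "propaganda score" as per-feature contributions.
# Feature names and data are invented; this is not the model from Barfar's study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["exclamation_rate", "second_person_pronouns", "hedging_terms",
                 "emotional_words", "avg_sentence_length"]

# Synthetic training data standing in for articles x linguistic features.
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + X[:, 3] - X[:, 4] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one new article: each feature's contribution to its propaganda score.
article = rng.normal(size=len(feature_names))
score = model.decision_function(article.reshape(1, -1))[0]
contributions = model.coef_[0] * article

print(f"propaganda score (log-odds): {score:.2f}")
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda t: -abs(t[1])):
    print(f"{name:>24}: {contrib:+.2f}")
```

Ranking contributions by magnitude in this way is the kind of information a graphical, per-article explanation like the one Barfar describes could present to a reader.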

Further, in an article published recently in Expert Systems with Applications, Barfar advanced and tested a model for the automated detection and explanation of propagandistic materials on the Internet. Barfar compiled a dataset of almost 205,000 articles from 39 propagandistic and 30 trusted news sources, calculated 92 linguistic features for each article, and then constructed predictive models to detect online propaganda.
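The release gives only the broad shape of that pipeline: a labeled corpus drawn from propagandistic and trusted sources, 92 linguistic features per article, and a model fast enough to train in seconds. A minimal sketch of that kind of setup, using synthetic data of the same dimensions and scikit-learn rather than the study's actual features, sources or model, might look like this:

```python
# Minimal sketch of a propaganda classifier trained on article-level linguistic
# features. The data are synthetic placeholders with the dimensions quoted in
# the release (about 205,000 articles, 92 features); the study's real features,
# labels and model choice are not reproduced here.
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_articles, n_features = 205_000, 92

X = rng.normal(size=(n_articles, n_features))  # linguistic features per article
weights = rng.normal(size=n_features)
# Label 1 = article from a propagandistic source (synthetic stand-in).
y = (X @ weights + rng.normal(scale=2.0, size=n_articles) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

start = time.perf_counter()
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
elapsed = time.perf_counter() - start

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"trained on {len(X_train):,} articles in {elapsed:.1f}s, test AUC {auc:.3f}")
```

On a typical desktop, a linear model of this size does train in seconds, which is consistent with the speed Barfar highlights for timely detection.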
