Lies spread faster than the truth

There is worldwide concern over false news and the possibility that it can influence political, economic, and social well-being. To understand how false news spreads, Vosoughi et al. used a data set of rumor cascades on Twitter from 2006 to 2017. About 126,000 rumors were spread by ∼3 million people. False news reached more people than the truth; the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people. Falsehood also diffused faster than the truth. The degree of novelty and the emotional reactions of recipients may be responsible for the differences observed.

Science, this issue p. 1146

Abstract
We investigated the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017. The data comprise ~126,000 stories tweeted by ~3 million people more than 4.5 million times. We classified news as true or false using information from six independent fact-checking organizations that exhibited 95 to 98% agreement on the classifications. Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information. We found that false news was more novel than true news, which suggests that people were more likely to share novel information. Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.


Foundational theories of decision-making (1–3), cooperation (4), communication (5), and markets (6) all view some conceptualization of truth or accuracy as central to the functioning of nearly every human endeavor. Yet, both true and false information spreads rapidly through online media. Defining what is true and false has become a common political strategy, replacing debates based on a mutually agreed-on set of facts. Our economies are not immune to the spread of falsity either. False rumors have affected stock prices and the motivation for large-scale investments, for example, wiping out $130 billion in stock value after a false tweet claimed that Barack Obama was injured in an explosion (7). Indeed, our responses to everything from natural disasters (8, 9) to terrorist attacks (10) have been disrupted by the spread of false news online.

New social technologies, which facilitate rapid information sharing and large-scale information cascades, can enable the spread of misinformation (i.e., information that is inaccurate or misleading). But although more and more of our access to information and news is guided by these new technologies (11), we know little about their contribution to the spread of falsity online. Though considerable attention has been paid to anecdotal analyses of the spread of false news by the media (12), there are few large-scale empirical investigations of the diffusion of misinformation or its social origins. Studies of the spread of misinformation are currently limited to analyses of small, ad hoc samples that ignore two of the most important scientific questions: How do truth and falsity diffuse differently, and what factors of human judgment explain these differences?

Current work analyzes the spread of single rumors, like the discovery of the Higgs boson (13) or the Haitian earthquake of 2010 (14), and multiple rumors from a single disaster event, like the Boston Marathon bombing of 2013 (10), or it develops theoretical models of rumor diffusion (15), methods for rumor detection (16), credibility evaluation (17, 18), or interventions to curtail the spread of rumors (19). But almost no studies comprehensively evaluate differences in the spread of truth and falsity across topics or examine why false news may spread differently than the truth. For example, although Del Vicario et al. (20) and Bessi et al. (21) studied the spread of scientific and conspiracy-theory stories, they did not evaluate their veracity. Scientific and conspiracy-theory stories can both be either true or false, and they differ on stylistic dimensions that are important to their spread but orthogonal to their veracity. To understand the spread of false news, it is necessary to examine diffusion after differentiating true and false scientific stories and true and false conspiracy-theory stories and controlling for the topical and stylistic differences between the categories themselves. The only study to date that segments rumors by veracity is that of Friggeri et al. (19), who analyzed ~4000 rumors spreading on Facebook and focused more on how fact checking affects rumor propagation than on how falsity diffuses differently than the truth (22).

In our current political climate and in the academic literature, a fluid terminology has arisen around “fake news,” foreign interventions in U.S. politics through social media, and our understanding of what constitutes news, fake news, false news, rumors, rumor cascades, and other related terms. Although, at one time, it may have been appropriate to think of fake news as referring to the veracity of a news story, we now believe that this phrase has been irredeemably polarized in our current political and media climate. As politicians have implemented a political strategy of labeling news sources that do not support their positions as unreliable or fake news, whereas sources that support their positions are labeled reliable or not fake, the term has lost all connection to the actual veracity of the information presented, rendering it meaningless for use in academic classification. We have therefore explicitly avoided the term fake news throughout this paper and instead use the more objectively verifiable terms “true” or “false” news. Although the terms fake news and misinformation also imply a willful distortion of the truth, we do not make any claims about the intent of the purveyors of the information in our analyses. We instead focus our attention on veracity and stories that have been verified as true or false.

We also purposefully adopt a broad definition of the term news. Rather than defining what constitutes news on the basis of the institutional source of the assertions in a story, we refer to any asserted claim made on Twitter as news (we defend this decision in the supplementary materials section on “reliable sources,” section S1.2). We define news as any story or claim with an assertion in it and a rumor as the social phenomena of a news story or claim spreading or diffusing through the Twitter network. That is, rumors are inherently social and involve the sharing of claims between people. News, on the other hand, is an assertion with claims, whether it is shared or not.

A rumor cascade begins on Twitter when a user makes an assertion about a topic in a tweet, which could include written text, photos, or links to articles online. Others then propagate the rumor by retweeting it. A rumor’s diffusion process can be characterized as having one or more cascades, which we define as instances of a rumor-spreading pattern that exhibit an unbroken retweet chain with a common, singular origin. For example, an individual could start a rumor cascade by tweeting a story or claim with an assertion in it, and another individual could independently start a second cascade of the same rumor, completely independent of the first except that it pertains to the same story or claim. If they remain independent, they represent two cascades of the same rumor. Cascades can be as small as size one (meaning no one retweeted the original tweet). The number of cascades that make up a rumor is equal to the number of times the story or claim was independently tweeted by a user (not retweeted). So, if a rumor “A” is tweeted by 10 people separately, but not retweeted, it would have 10 cascades, each of size one. Conversely, if a second rumor “B” is independently tweeted by two people and each of those two tweets is retweeted 100 times, the rumor would consist of two cascades, each of size 100.
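The cascade-counting logic above can be sketched in a few lines. This is an illustrative reimplementation, not the authors' pipeline; the record format and the `cascade_sizes` helper are hypothetical, and it assumes each retweet record names the tweet it retweeted, with parents appearing before their retweets.

```python
from collections import defaultdict

# Hypothetical records: (tweet_id, parent_id, rumor_id); parent_id is None
# for an original tweet, which starts a new cascade of that rumor.
tweets = [
    ("t1", None, "A"), ("t2", None, "A"),                     # rumor A: two cascades
    ("t3", None, "B"), ("t4", "t3", "B"), ("t5", "t4", "B"),  # rumor B: one cascade
]

def cascade_sizes(records):
    """Map each rumor to the sorted sizes of its cascades.

    A cascade is rooted at an original (non-retweet) tweet; every retweet
    joins the cascade of the tweet it retweets."""
    root_of = {}                                   # tweet_id -> cascade root
    sizes = defaultdict(lambda: defaultdict(int))  # rumor -> root -> size
    for tid, parent, rumor in records:
        root = tid if parent is None else root_of[parent]
        root_of[tid] = root
        sizes[rumor][root] += 1
    return {r: sorted(c.values()) for r, c in sizes.items()}

print(cascade_sizes(tweets))  # {'A': [1, 1], 'B': [3]}
```

Rumor "A" here matches the ten-cascades-of-size-one pattern in miniature (two independent tweets, no retweets), while rumor "B" is a single three-node cascade.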

Here we investigate the differential diffusion of true, false, and mixed (partially true, partially false) news stories using a comprehensive data set of all of the fact-checked rumor cascades that spread on Twitter from its inception in 2006 to 2017. The data include ~126,000 rumor cascades spread by ~3 million people more than 4.5 million times. We sampled all rumor cascades investigated by six independent fact-checking organizations (snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com, and urbanlegends.about.com) by parsing the title, body, and verdict (true, false, or mixed) of each rumor investigation reported on their websites and automatically collecting the cascades corresponding to those rumors on Twitter. The result was a sample of rumor cascades whose veracity had been agreed on by these organizations between 95 and 98% of the time. We cataloged the diffusion of the rumor cascades by collecting all English-language replies to tweets that contained a link to any of the aforementioned websites from 2006 to 2017 and used optical character recognition to extract text from images where needed. For each reply tweet, we extracted the original tweet being replied to and all the retweets of the original tweet. Each retweet cascade represents a rumor propagating on Twitter that has been verified as true or false by the fact-checking organizations (see the supplementary materials for more details on cascade construction). 
We then quantified the cascades’ depth (the number of retweet hops from the origin tweet over time, where a hop is a retweet by a new unique user), size (the number of users involved in the cascade over time), maximum breadth (the maximum number of users involved in the cascade at any depth), and structural virality (23) (a measure that interpolates between content spread through a single, large broadcast and that which spreads through multiple generations, with any one individual directly responsible for only a fraction of the total spread) (see the supplementary materials for more detail on the measurement of rumor diffusion).
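These four measures can be computed directly from a cascade's retweet tree. The sketch below is a simplified illustration (the `cascade_metrics` function and edge format are assumptions, not the paper's code); structural virality is computed literally as the mean shortest-path distance over all node pairs, which is fine for small cascades but quadratic in cascade size.

```python
from collections import defaultdict, deque
from itertools import combinations

def cascade_metrics(parent):
    """Depth, size, max breadth, and structural virality of one cascade,
    given child -> parent retweet edges (a tree)."""
    children = defaultdict(list)
    for c, p in parent.items():
        children[p].append(c)
    root = (set(parent.values()) - set(parent)).pop()  # origin tweet

    depth = {root: 0}                     # hop distance from the origin
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children[u]:
            depth[v] = depth[u] + 1
            queue.append(v)

    per_depth = defaultdict(int)          # breadth = node count at each depth
    for d in depth.values():
        per_depth[d] += 1

    def tree_dist(u, v):                  # shortest path between two nodes
        anc, x, d = {}, u, 0
        while True:                       # record u's ancestors and distances
            anc[x] = d
            if x == root:
                break
            x, d = parent[x], d + 1
        x, d = v, 0                       # climb from v until hitting them
        while x not in anc:
            x, d = parent[x], d + 1
        return d + anc[x]

    pairs = list(combinations(depth, 2))
    virality = sum(tree_dist(u, v) for u, v in pairs) / len(pairs)
    return {"depth": max(depth.values()), "size": len(depth),
            "max_breadth": max(per_depth.values()),
            "structural_virality": virality}

# Hypothetical six-node cascade: root -> {b, c}, b -> {d, e}, d -> {f}
edges = {"b": "root", "c": "root", "d": "b", "e": "b", "f": "d"}
print(cascade_metrics(edges))
```

A pure broadcast (one root, many direct retweets) has structural virality near 2, whereas a long chain of person-to-person retweets scores much higher; that is the interpolation the measure captures.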

As a rumor is retweeted, the depth, size, maximum breadth, and structural virality of the cascade increase (Fig. 1A). A greater fraction of false rumors experienced between 1 and 1000 cascades, whereas a greater fraction of true rumors experienced more than 1000 cascades (Fig. 1B); this was also true for rumors based on political news (Fig. 1D). The total number of false rumors peaked at the end of both 2013 and 2015 and again at the end of 2016, corresponding to the last U.S. presidential election (Fig. 1C). The data also show clear increases in the total number of false political rumors during the 2012 and 2016 U.S. presidential elections (Fig. 1E) and a spike in rumors that contained partially true and partially false information during the Russian annexation of Crimea in 2014 (Fig. 1E). Politics was the largest rumor category in our data, with ~45,000 cascades, followed by urban legends, business, terrorism, science, entertainment, and natural disasters (Fig. 1F).

Fig. 1 Rumor cascades.
(A) An example rumor cascade collected by our method as well as its depth, size, maximum breadth, and structural virality over time. “Nodes” are users. (B) The complementary cumulative distribution functions (CCDFs) of true, false, and mixed (partially true and partially false) cascades, measuring the fraction of rumors that exhibit a given number of cascades. (C) Quarterly counts of all true, false, and mixed rumor cascades that diffused on Twitter between 2006 and 2017, annotated with example rumors in each category. (D) The CCDFs of true, false, and mixed political cascades. (E) Quarterly counts of all true, false, and mixed political rumor cascades that diffused on Twitter between 2006 and 2017, annotated with example rumors in each category. (F) A histogram of the total number of rumor cascades in our data across the seven most frequent topical categories.


" data-icon-position="" data-hide-link-title="0" style="box-sizing: inherit; color: rgb(55, 88, 138); font-weight: bold;">

Fig. 1 Rumor cascades.
(A) An example rumor cascade collected by our method as well as its depth, size, maximum breadth, and structural virality over time. “Nodes” are users. (B) The complementary cumulative distribution functions (CCDFs) of true, false, and mixed (partially true and partially false) cascades, measuring the fraction of rumors that exhibit a given number of cascades. (C) Quarterly counts of all true, false, and mixed rumor cascades that diffused on Twitter between 2006 and 2017, annotated with example rumors in each category. (D) The CCDFs of true, false, and mixed political cascades. (E) Quarterly counts of all true, false, and mixed political rumor cascades that diffused on Twitter between 2006 and 2017, annotated with example rumors in each category. (F) A histogram of the total number of rumor cascades in our data across the seven most frequent topical categories.


When we analyzed the diffusion dynamics of true and false rumors, we found that falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information [Kolmogorov-Smirnov (K-S) tests are reported in tables S3 to S10]. A significantly greater fraction of false cascades than true cascades exceeded a depth of 10, and the top 0.01% of false cascades diffused eight hops deeper into the Twittersphere than the truth, diffusing to depths greater than 19 hops from the origin tweet (Fig. 2A). Falsehood also reached far more people than the truth. Whereas the truth rarely diffused to more than 1000 people, the top 1% of false-news cascades routinely diffused to between 1000 and 100,000 people (Fig. 2B). Falsehood reached more people at every depth of a cascade than the truth, meaning that many more people retweeted falsehood than they did the truth (Fig. 2C). The spread of falsehood was aided by its virality, meaning that falsehood did not simply spread through broadcast dynamics but rather through peer-to-peer diffusion characterized by a viral branching process (Fig. 2D).
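The CCDF comparisons and K-S tests used throughout this analysis can be reproduced in miniature on synthetic data. The heavy-tailed Pareto samples below are purely illustrative stand-ins for cascade sizes, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical cascade sizes: the "false" sample is drawn with a heavier
# tail, mimicking the qualitative pattern in Fig. 2B.
true_sizes = rng.pareto(2.5, 1000) + 1
false_sizes = rng.pareto(1.5, 1000) + 1

def ccdf(values):
    """Empirical CCDF: fraction of observations >= each sorted value."""
    x = np.sort(values)
    y = 1.0 - np.arange(len(x)) / len(x)
    return x, y

x_false, y_false = ccdf(false_sizes)   # plot x vs. y on log-log axes

# Tail comparison and a two-sample Kolmogorov-Smirnov test.
print(f"P(size >= 10): false = {np.mean(false_sizes >= 10):.3f}, "
      f"true = {np.mean(true_sizes >= 10):.3f}")
ks = stats.ks_2samp(false_sizes, true_sizes)
print(f"K-S statistic = {ks.statistic:.3f}, p = {ks.pvalue:.1e}")
```

The K-S statistic is the maximum vertical gap between the two empirical distribution functions, which is why it pairs naturally with the CCDF plots in Figs. 2 and 3.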

Fig. 2 Complementary cumulative distribution functions (CCDFs) of true and false rumor cascades.
(A) Depth. (B) Size. (C) Maximum breadth. (D) Structural virality. (E and F) The number of minutes it takes for true and false rumor cascades to reach any (E) depth and (F) number of unique Twitter users. (G) The number of unique Twitter users reached at every depth and (H) the mean breadth of true and false rumor cascades at every depth. In (H), plot is lognormal. Standard errors were clustered at the rumor level (i.e., cascades belonging to the same rumor were clustered together; see supplementary materials for additional details).


" data-icon-position="" data-hide-link-title="0" style="box-sizing: inherit; color: rgb(55, 88, 138); font-weight: bold;">

Fig. 2 Complementary cumulative distribution functions (CCDFs) of true and false rumor cascades.
(A) Depth. (B) Size. (C) Maximum breadth. (D) Structural virality. (E and F) The number of minutes it takes for true and false rumor cascades to reach any (E) depth and (F) number of unique Twitter users. (G) The number of unique Twitter users reached at every depth and (H) the mean breadth of true and false rumor cascades at every depth. In (H), plot is lognormal. Standard errors were clustered at the rumor level (i.e., cascades belonging to the same rumor were clustered together; see supplementary materials for additional details).


It took the truth about six times as long as falsehood to reach 1500 people (Fig. 2F) and 20 times as long as falsehood to reach a cascade depth of 10 (Fig. 2E). As the truth never diffused beyond a depth of 10, we saw that falsehood reached a depth of 19 nearly 10 times faster than the truth reached a depth of 10 (Fig. 2E). Falsehood also diffused significantly more broadly (Fig. 2H) and was retweeted by more unique users than the truth at every cascade depth (Fig. 2G).

False political news (Fig. 1D) traveled deeper (Fig. 3A) and more broadly (Fig. 3C), reached more people (Fig. 3B), and was more viral than any other category of false information (Fig. 3D). False political news also diffused deeper more quickly (Fig. 3E) and reached more than 20,000 people nearly three times faster than all other types of false news reached 10,000 people (Fig. 3F). Although the other categories of false news reached about the same number of unique users at depths between 1 and 10, false political news routinely reached the most unique users at depths greater than 10 (Fig. 3G). Although all other categories of false news traveled slightly more broadly at shallower depths, false political news traveled more broadly at greater depths, indicating that more-popular false political news items exhibited broader and more-accelerated diffusion dynamics (Fig. 3H). Analysis of all news categories showed that news about politics, urban legends, and science spread to the most people, whereas news about politics and urban legends spread the fastest and were the most viral in terms of their structural virality (see fig. S11 for detailed comparisons across all topics).

Fig. 3 Complementary cumulative distribution functions (CCDFs) of false political and other types of rumor cascades.
(A) Depth. (B) Size. (C) Maximum breadth. (D) Structural virality. (E and F) The number of minutes it takes for false political and other false news cascades to reach any (E) depth and (F) number of unique Twitter users. (G) The number of unique Twitter users reached at every depth and (H) the mean breadth of these false rumor cascades at every depth. In (H), plot is lognormal. Standard errors were clustered at the rumor level.


" data-icon-position="" data-hide-link-title="0" style="box-sizing: inherit; color: rgb(55, 88, 138); font-weight: bold;">

Fig. 3 Complementary cumulative distribution functions (CCDFs) of false political and other types of rumor cascades.
(A) Depth. (B) Size. (C) Maximum breadth. (D) Structural virality. (E and F) The number of minutes it takes for false political and other false news cascades to reach any (E) depth and (F) number of unique Twitter users. (G) The number of unique Twitter users reached at every depth and (H) the mean breadth of these false rumor cascades at every depth. In (H), plot is lognormal. Standard errors were clustered at the rumor level.


One might suspect that structural elements of the network or individual characteristics of the users involved in the cascades explain why falsity travels with greater velocity than the truth. Perhaps those who spread falsity “followed” more people, had more followers, tweeted more often, were more often “verified” users, or had been on Twitter longer. But when we compared users involved in true and false rumor cascades, we found that the opposite was true in every case. Users who spread false news had significantly fewer followers (K-S test = 0.104, P ~ 0.0), followed significantly fewer people (K-S test = 0.136, P ~ 0.0), were significantly less active on Twitter (K-S test = 0.054, P ~ 0.0), were verified significantly less often (K-S test = 0.004, P < 0.001), and had been on Twitter for significantly less time (K-S test = 0.125, P ~ 0.0) (Fig. 4A). Falsehood diffused farther and faster than the truth despite these differences, not because of them.

Fig. 4 Models estimating correlates of news diffusion, the novelty of true and false news, and the emotional content of replies to news.
(A) Descriptive statistics on users who participated in true and false rumor cascades as well as K-S tests of the differences in the distributions of these measures across true and false rumor cascades. (B) Results of a logistic regression model estimating users’ likelihood of retweeting a rumor as a function of variables shown at the left. coeff, logit coefficient; z, z score. (C) Differences in the information uniqueness (IU), scaled Bhattacharyya distance (BD), and K-L divergence (KL) of true (green) and false (red) rumor tweets compared to the corpus of prior tweets the user was exposed to in the 60 days before retweeting the rumor tweet. (D) The emotional content of replies to true (green) and false (red) rumor tweets across seven dimensions categorized by the NRC. (E) Mean and variance of the IU, KL, and BD of true and false rumor tweets compared to the corpus of prior tweets the user has seen in the 60 days before seeing the rumor tweet as well as K-S tests of their differences across true and false rumors. (F) Mean and variance of the emotional content of replies to true and false rumor tweets across seven dimensions categorized by the NRC as well as K-S tests of their differences across true and false rumors. All standard errors are clustered at the rumor level, and all models are estimated with cluster-robust standard errors at the rumor level.


" data-icon-position="" data-hide-link-title="0" style="box-sizing: inherit; color: rgb(55, 88, 138); font-weight: bold;">

Fig. 4 Models estimating correlates of news diffusion, the novelty of true and false news, and the emotional content of replies to news.
(A) Descriptive statistics on users who participated in true and false rumor cascades as well as K-S tests of the differences in the distributions of these measures across true and false rumor cascades. (B) Results of a logistic regression model estimating users’ likelihood of retweeting a rumor as a function of variables shown at the left. coeff, logit coefficient; z, z score. (C) Differences in the information uniqueness (IU), scaled Bhattacharyya distance (BD), and K-L divergence (KL) of true (green) and false (red) rumor tweets compared to the corpus of prior tweets the user was exposed to in the 60 days before retweeting the rumor tweet. (D) The emotional content of replies to true (green) and false (red) rumor tweets across seven dimensions categorized by the NRC. (E) Mean and variance of the IU, KL, and BD of true and false rumor tweets compared to the corpus of prior tweets the user has seen in the 60 days before seeing the rumor tweet as well as K-S tests of their differences across true and false rumors. (F) Mean and variance of the emotional content of replies to true and false rumor tweets across seven dimensions categorized by the NRC as well as K-S tests of their differences across true and false rumors. All standard errors are clustered at the rumor level, and all models are estimated with cluster-robust standard errors at the rumor level.


When we estimated a model of the likelihood of retweeting, we found that falsehoods were 70% more likely to be retweeted than the truth (Wald chi-square test, P ~ 0.0), even when controlling for the account age, activity level, and number of followers and followees of the original tweeter, as well as whether the original tweeter was a verified user (Fig. 4B). Because user characteristics and network structure could not explain the differential diffusion of truth and falsity, we sought alternative explanations for the differences in their diffusion dynamics.

One alternative explanation emerges from information theory and Bayesian decision theory. Novelty attracts human attention (24), contributes to productive decision-making (25), and encourages information sharing (26) because novelty updates our understanding of the world. When information is novel, it is not only surprising, but also more valuable, both from an information theoretic perspective [in that it provides the greatest aid to decision-making (25)] and from a social perspective [in that it confers social status on one who is “in the know” or has access to unique “inside” information (26)]. We therefore tested whether falsity was more novel than the truth and whether Twitter users were more likely to retweet information that was more novel.

To assess novelty, we randomly selected ~5000 users who propagated true and false rumors and extracted a random sample of ~25,000 tweets that they were exposed to in the 60 days prior to their decision to retweet a rumor. We then specified a latent Dirichlet allocation (LDA) topic model (27), with 200 topics and trained on 10 million English-language tweets, to calculate the information distance between the rumor tweets and all the prior tweets that users were exposed to before retweeting the rumor tweets. This generated a probability distribution over the 200 topics for each tweet in our data set. We then measured how novel the information in the true and false rumors was by comparing the topic distributions of the rumor tweets with the topic distributions of the tweets to which users were exposed in the 60 days before their retweet. We found that false rumors were significantly more novel than the truth across all novelty metrics, displaying significantly higher information uniqueness (K-S test = 0.457, P ~ 0.0) (28), Kullback-Leibler (K-L) divergence (K-S test = 0.433, P ~ 0.0) (29), and Bhattacharyya distance (K-S test = 0.415, P ~ 0.0) (which is similar to the Hellinger distance) (30). The last two metrics measure differences between probability distributions representing the topical content of the incoming tweet and the corpus of previous tweets to which users were exposed.
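The two distributional novelty metrics are straightforward to compute once each tweet has a topic distribution. The sketch below applies them to hypothetical four-topic mixtures (the study's model used 200 LDA topics); the example vectors are invented for illustration:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between topic distributions.

    A small epsilon guards against zero-probability topics."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def bhattacharyya(p, q):
    """Bhattacharyya distance; 0 for identical distributions."""
    return float(-np.log(np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))))

# Hypothetical topic mixtures: the user's recent "background" tweets vs.
# a familiar rumor tweet and a novel one that concentrates probability on
# topics the user has rarely seen.
background = np.array([0.50, 0.30, 0.15, 0.05])
familiar = np.array([0.45, 0.35, 0.15, 0.05])
novel = np.array([0.05, 0.05, 0.10, 0.80])

print(kl_divergence(familiar, background), kl_divergence(novel, background))
print(bhattacharyya(familiar, background), bhattacharyya(novel, background))
```

Both metrics assign a much larger distance to the `novel` tweet than to the `familiar` one, which is the sense in which false rumors scored as "more novel" relative to each user's prior 60 days of exposure.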

Although false rumors were measurably more novel than true rumors, users may not have perceived them as such. We therefore assessed users’ perceptions of the information contained in true and false rumors by comparing the emotional content of replies to true and false rumors. We categorized the emotion in the replies by using the leading lexicon curated by the National Research Council Canada (NRC), which provides a comprehensive list of ~140,000 English words and their associations with eight emotions based on Plutchik’s (31) work on basic emotion—anger, fear, anticipation, trust, surprise, sadness, joy, and disgust (32)—and a list of ~32,000 Twitter hashtags and their weighted associations with the same emotions (33). We removed stop words and URLs from the reply tweets and calculated the fraction of words in the tweets that related to each of the eight emotions, creating a vector of emotion weights for each reply that summed to one across the emotions. We found that false rumors inspired replies expressing greater surprise (K-S test = 0.205, P ~ 0.0), corroborating the novelty hypothesis, and greater disgust (K-S test = 0.102, P ~ 0.0), whereas the truth inspired replies that expressed greater sadness (K-S test = 0.037, P ~ 0.0), anticipation (K-S test = 0.038, P ~ 0.0), joy (K-S test = 0.061, P ~ 0.0), and trust (K-S test = 0.060, P ~ 0.0) (Fig. 4, D and F). The emotions expressed in reply to falsehoods may illuminate additional factors, beyond novelty, that inspire people to share false news. Although we cannot claim that novelty causes retweets or that novelty is the only reason why false news is retweeted more often, we do find that false news is more novel and that novel information is more likely to be retweeted.
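The reply-scoring procedure described here — strip stop words and URLs, look up each word's emotion associations, and normalize to a unit-sum vector — can be sketched with a toy stand-in for the NRC lexicon. The lexicon entries and stop-word list below are invented for illustration (the real lexicon covers ~140,000 words):

```python
import re
from collections import Counter

EMOTIONS = ("anger", "fear", "anticipation", "trust",
            "surprise", "sadness", "joy", "disgust")

# Tiny illustrative lexicon: word -> set of associated emotions.
LEXICON = {
    "shocking": {"surprise"}, "unbelievable": {"surprise"},
    "gross": {"disgust"}, "awful": {"disgust", "anger"},
    "hope": {"anticipation", "joy"}, "reliable": {"trust"},
}
STOPWORDS = {"this", "is", "so", "a", "the", "and", "just"}

def emotion_vector(reply):
    """Fraction of emotion associations per emotion; sums to 1 (or is
    all-zero when the reply contains no emotion-bearing words)."""
    words = [w for w in re.findall(r"[a-z']+", reply.lower())
             if w not in STOPWORDS]
    counts = Counter(e for w in words for e in LEXICON.get(w, ()))
    total = sum(counts.values())
    if total == 0:
        return {e: 0.0 for e in EMOTIONS}
    return {e: counts[e] / total for e in EMOTIONS}

vec = emotion_vector("This is shocking and so gross, just unbelievable")
print(vec)  # surprise-heavy, with some disgust
```

Averaging such vectors over all replies to false versus true rumor tweets yields the per-emotion distributions compared in Fig. 4, D and F.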

Numerous diagnostic statistics and manipulation checks validated our results and confirmed their robustness. First, as there were multiple cascades for every true and false rumor, the variance of and error terms associated with cascades corresponding to the same rumor will be correlated. We therefore specified cluster-robust standard errors and calculated all variance statistics clustered at the rumor level. We tested the robustness of our findings to this specification by comparing analyses with and without clustered errors and found that, although clustering reduced the precision of our estimates as expected, the directions, magnitudes, and significance of our results did not change, and chi-square (P ~ 0.0) and deviance (d) goodness-of-fit tests (d = 3.4649 × 10^-6, P ~ 1.0) indicate that the models are well specified (see supplementary materials for more detail).

Second, a selection bias may arise from the restriction of our sample to tweets fact checked by the six organizations we relied on. Fact checking may select certain types of rumors or draw additional attention to them. To validate the robustness of our analysis to this selection and the generalizability of our results to all true and false rumor cascades, we independently verified a second sample of rumor cascades that were not verified by any fact-checking organization. These rumors were fact checked by three undergraduate students at Massachusetts Institute of Technology (MIT) and Wellesley College. We trained the students to detect and investigate rumors with our automated rumor-detection algorithm running on 3 million English-language tweets from 2016 (34). The undergraduate annotators investigated the veracity of the detected rumors using simple search queries on the web. We asked them to label the rumors as true, false, or mixed on the basis of their research and to discard all rumors previously investigated by one of the fact-checking organizations. The annotators, who worked independently and were not aware of one another, agreed on the veracity of 90% of the 13,240 rumor cascades that they investigated and achieved a Fleiss’ kappa of 0.88. When we compared the diffusion dynamics of the true and false rumors that the annotators agreed on, we found results nearly identical to those estimated with our main data set (see fig. S17). False rumors in the robustness data set had greater depth (K-S test = 0.139, P ~ 0.0), size (K-S test = 0.131, P ~ 0.0), maximum breadth (K-S test = 0.139, P ~ 0.0), structural virality (K-S test = 0.066, P ~ 0.0), and speed (fig. S17) and a greater number of unique users at each depth (fig. S17). When we broadened the analysis to include majority-rule labeling, rather than unanimity, we again found the same results (see supplementary materials for results using majority-rule labeling).
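Inter-annotator agreement of the kind reported here is conventionally summarized with Fleiss' kappa, which can be computed as follows. This is a generic implementation with invented annotation data, not the authors' code:

```python
from collections import Counter

def fleiss_kappa(ratings):
    """Fleiss' kappa for N items, each labeled by the same number of raters.

    `ratings` is a list of per-item label lists, e.g. three annotators
    each assigning 'true', 'false', or 'mixed'."""
    n = len(ratings[0])                   # raters per item
    categories = sorted({c for row in ratings for c in row})
    p_i, totals = [], Counter()           # per-item agreement; label totals
    for row in ratings:
        counts = Counter(row)
        totals.update(counts)
        p_i.append((sum(c * c for c in counts.values()) - n) / (n * (n - 1)))
    p_bar = sum(p_i) / len(ratings)       # mean observed agreement
    n_total = n * len(ratings)
    p_e = sum((totals[c] / n_total) ** 2 for c in categories)  # chance
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical annotations: three raters, ten rumors, mostly agreeing.
rows = ([["false"] * 3] * 6 + [["true"] * 3] * 3
        + [["true", "false", "mixed"]])
print(round(fleiss_kappa(rows), 3))
```

Kappa corrects raw percentage agreement for the agreement expected by chance, which is why a 90% raw agreement rate and a kappa of 0.88 are reported separately in the text.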

Third, although the differential diffusion of truth and falsity is interesting with or without robot, or bot, activity, one may worry that our conclusions about human judgment may be biased by the presence of bots in our analysis. We therefore used a sophisticated bot-detection algorithm (35) to identify and remove all bots before running the analysis. When we added bot traffic back into the analysis, we found that none of our main conclusions changed—false news still spread farther, faster, deeper, and more broadly than the truth in all categories of information. The results remained the same when we removed all tweet cascades started by bots, including human retweets of original bot tweets (see supplementary materials, section S8.3) and when we used a second, independent bot-detection algorithm (see supplementary materials, section S8.3.5) and varied the algorithm’s sensitivity threshold to verify the robustness of our analysis (see supplementary materials, section S8.3.4). Although the inclusion of bots, as measured by the two state-of-the-art bot-detection algorithms we used in our analysis, accelerated the spread of both true and false news, it affected their spread roughly equally. This suggests that false news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it.

Finally, more research on the behavioral explanations of differences in the diffusion of true and false news is clearly warranted. In particular, more robust identification of the factors of human judgment that drive the spread of true and false news online requires more direct interaction with users through interviews, surveys, lab experiments, and even neuroimaging. We encourage these and other approaches to the investigation of the factors of human judgment that drive the spread of true and false news in future work.

False news can drive the misallocation of resources during terror attacks and natural disasters, the misalignment of business investments, and misinformed elections. Unfortunately, although the amount of false news online is clearly increasing (Fig. 1, C and E), the scientific understanding of how and why false news spreads is currently based on ad hoc rather than large-scale systematic analyses. Our analysis of all the verified true and false rumors that spread on Twitter confirms that false news spreads more pervasively than the truth online. It also overturns conventional wisdom about how false news spreads. Though one might expect network structure and individual characteristics of spreaders to favor and promote false news, the opposite is true. The greater likelihood of people to retweet falsity more than the truth is what drives the spread of false news, despite network and individual factors that favor the truth. Furthermore, although recent testimony before congressional committees on misinformation in the United States has focused on the role of bots in spreading false news (36), we conclude that human behavior contributes more to the differential spread of falsity and truth than automated robots do. This implies that misinformation-containment policies should also emphasize behavioral interventions, like labeling and incentives to dissuade the spread of misinformation, rather than focusing exclusively on curtailing bots. Understanding how false news spreads is the first step toward containing it. We hope our work inspires more large-scale research into the causes and consequences of the spread of false news as well as its potential cures.
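For concreteness, the diffusion measures used throughout ("deeper" = cascade depth, "farther" = size, "more broadly" = maximum breadth, plus Goel et al.'s structural virality, the mean pairwise distance between nodes in the cascade tree) can be sketched on a toy retweet cascade; the data structure below is an assumption for illustration, not the paper's code:

```python
from collections import defaultdict
from itertools import combinations

def analyze_cascade(parent):
    """Diffusion metrics for a retweet cascade given as {node: parent},
    with the root tweet mapped to None."""
    depth_of = {}
    def depth(node):
        if node not in depth_of:
            p = parent[node]
            depth_of[node] = 0 if p is None else depth(p) + 1
        return depth_of[node]
    for node in parent:
        depth(node)

    by_level = defaultdict(int)          # breadth = nodes per depth level
    for d in depth_of.values():
        by_level[d] += 1

    def ancestors(node):
        path = set()
        while node is not None:
            path.add(node)
            node = parent[node]
        return path

    # structural virality: mean shortest-path distance over all node pairs,
    # via the lowest (deepest) common ancestor in the tree
    pairs = list(combinations(parent, 2))
    total = 0
    for a, b in pairs:
        lca = max(ancestors(a) & ancestors(b), key=depth_of.get)
        total += depth_of[a] + depth_of[b] - 2 * depth_of[lca]

    return {
        "depth": max(depth_of.values()),
        "size": len(parent),
        "max_breadth": max(by_level.values()),
        "structural_virality": total / len(pairs),
    }

# a tiny cascade: root tweet A retweeted by B and C; D retweets B
cascade = {"A": None, "B": "A", "C": "A", "D": "B"}
print(analyze_cascade(cascade))
```

A pure broadcast (everyone retweets the root) has low structural virality; a long person-to-person chain has high structural virality, which is why the measure distinguishes the two diffusion patterns.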

Supplementary Materials
www.sciencemag.org/content/359/6380/1146/suppl/DC1

Materials and Methods

Figs. S1 to S20

Tables S1 to S39

References (37–75)

http://www.sciencemag.org/about/science-licenses-journal-article-reuse
This is an article distributed under the terms of the Science Journals Default License.

References and Notes
  1. L. J. Savage, The theory of statistical decision. J. Am. Stat. Assoc. 46, 55–67 (1951). doi:10.1080/01621459.1951.10500768
  2. H. A. Simon, The New Science of Management Decision (Harper & Brothers, New York, 1960).
  3. R. Wedgwood, The aim of belief. Noûs 36, 267–297 (2002). doi:10.1111/1468-0068.36.s16.10
  4. E. Fehr, U. Fischbacher, The nature of human altruism. Nature 425, 785–791 (2003). doi:10.1038/nature02043
  5. C. E. Shannon, A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423 (1948). doi:10.1002/j.1538-7305.1948.tb01338.x
  6. S. Bikhchandani, D. Hirshleifer, I. Welch, A theory of fads, fashion, custom, and cultural change as informational cascades. J. Polit. Econ. 100, 992–1026 (1992). doi:10.1086/261849
  7. K. Rapoza, “Can ‘fake news’ impact the stock market?” Forbes, 26 February 2017; www.forbes.com/sites/kenrapoza/2017/02/26/can-fake-news-impact-the-stock-market/.
  8. M. Mendoza, B. Poblete, C. Castillo, “Twitter under crisis: Can we trust what we RT?” in Proceedings of the First Workshop on Social Media Analytics (ACM, 2010), pp. 71–79.
  9. A. Gupta, H. Lamba, P. Kumaraguru, A. Joshi, “Faking Sandy: Characterizing and identifying fake images on Twitter during Hurricane Sandy,” in Proceedings of the 22nd International Conference on World Wide Web (ACM, 2013), pp. 729–736.
  10. K. Starbird, J. Maddock, M. Orand, P. Achterman, R. M. Mason, “Rumors, false flags, and digital vigilantes: Misinformation on Twitter after the 2013 Boston Marathon bombing,” in iConference 2014 Proceedings (iSchools, 2014).
  11. J. Gottfried, E. Shearer, “News use across social media platforms,” Pew Research Center, 26 May 2016; www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/.
  12. C. Silverman, “This analysis shows how viral fake election news stories outperformed real news on Facebook,” BuzzFeed News, 16 November 2016; www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook/.
  13. M. De Domenico, A. Lima, P. Mougel, M. Musolesi, The anatomy of a scientific rumor. Sci. Rep. 3, 2980 (2013). doi:10.1038/srep02980
  14. O. Oh, K. H. Kwon, H. R. Rao, “An exploration of social media in extreme events: Rumor theory and Twitter during the Haiti earthquake 2010,” in Proceedings of the International Conference on Information Systems (ICIS, paper 231, 2010).
  15. M. Tambuscio, G. Ruffo, A. Flammini, F. Menczer, “Fact-checking effect on viral hoaxes: A model of misinformation spread in social networks,” in Proceedings of the 24th International Conference on World Wide Web (ACM, 2015), pp. 977–982.
  16. Z. Zhao, P. Resnick, Q. Mei, “Enquiring minds: Early detection of rumors in social media from enquiry posts,” in Proceedings of the 24th International Conference on World Wide Web (ACM, 2015), pp. 1395–1405.
  17. M. Gupta, P. Zhao, J. Han, “Evaluating event credibility on Twitter,” in Proceedings of the 2012 SIAM International Conference on Data Mining (SIAM, 2012), pp. 153–164.
  18. G. L. Ciampaglia, P. Shiralkar, L. M. Rocha, J. Bollen, F. Menczer, A. Flammini, Computational fact checking from knowledge networks. PLOS ONE 10, e0128193 (2015). doi:10.1371/journal.pone.0128193
  19. A. Friggeri, L. A. Adamic, D. Eckles, J. Cheng, “Rumor cascades,” in Proceedings of the International Conference on Weblogs and Social Media (AAAI, 2014).
  20. M. Del Vicario, A. Bessi, F. Zollo, F. Petroni, A. Scala, G. Caldarelli, H. E. Stanley, W. Quattrociocchi, The spreading of misinformation online. Proc. Natl. Acad. Sci. U.S.A. 113, 554–559 (2016). doi:10.1073/pnas.1517441113
  21. A. Bessi, M. Coletto, G. A. Davidescu, A. Scala, G. Caldarelli, W. Quattrociocchi, Science vs conspiracy: Collective narratives in the age of misinformation. PLOS ONE 10, e0118093 (2015). doi:10.1371/journal.pone.0118093
  22. Friggeri et al. (19) do evaluate two metrics of diffusion: depth, which shows little difference between true and false rumors, and shares per rumor, which is higher for true rumors than it is for false rumors. Although these results are important, they are not definitive owing to the smaller sample size of the study; the early timing of the sample, which misses the rise of false news after 2013; and the fact that more shares per rumor do not necessarily equate to deeper, broader, or more rapid diffusion.
  23. S. Goel, A. Anderson, J. Hofman, D. J. Watts, The structural virality of online diffusion. Manage. Sci. 62, 180–196 (2015).
  24. L. Itti, P. Baldi, Bayesian surprise attracts human attention. Vision Res. 49, 1295–1306 (2009). doi:10.1016/j.visres.2008.09.007
  25. S. Aral, M. Van Alstyne, The diversity-bandwidth trade-off. Am. J. Sociol. 117, 90–171 (2011). doi:10.1086/661238
  26. J. Berger, K. L. Milkman, What makes online content viral? J. Mark. Res. 49, 192–205 (2012). doi:10.1509/jmr.10.0353
  27. D. M. Blei, A. Y. Ng, M. I. Jordan, Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003).
  28. S. Aral, P. Dhillon, “Unpacking novelty: The anatomy of vision advantages,” Working paper, MIT–Sloan School of Management, Cambridge, MA, 22 June 2016; https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2388254.
  29. T. M. Cover, J. A. Thomas, Elements of Information Theory (Wiley, 2012).
  30. T. Kailath, The divergence and Bhattacharyya distance measures in signal selection. IEEE Trans. Commun. Technol. 15, 52–60 (1967). doi:10.1109/TCOM.1967.1089532
  31. R. Plutchik, The nature of emotions. Am. Sci. 89, 344–350 (2001). doi:10.1511/2001.4.344
  32. S. M. Mohammad, P. D. Turney, Crowdsourcing a word-emotion association lexicon. Comput. Intell. 29, 436–465 (2013). doi:10.1111/j.1467-8640.2012.00460.x
  33. S. M. Mohammad, S. Kiritchenko, Using hashtags to capture fine emotion categories from tweets. Comput. Intell. 31, 301–326 (2015). doi:10.1111/coin.12024
  34. S. Vosoughi, D. Roy, “A semi-automatic method for efficient detection of stories on social media,” in Proceedings of the 10th International AAAI Conference on Weblogs and Social Media (AAAI, 2016), pp. 707–710.
  35. C. A. Davis, O. Varol, E. Ferrara, A. Flammini, F. Menczer, “BotOrNot: A system to evaluate social bots,” in Proceedings of the 25th International Conference Companion on World Wide Web (ACM, 2016), pp. 273–274.
  36. For example, this is an argument made in recent testimony by Clint Watts—Robert A. Fox Fellow at the Foreign Policy Research Institute and Senior Fellow at the Center for Cyber and Homeland Security at George Washington University—given during the U.S. Senate Select Committee on Intelligence hearing on “Disinformation: A Primer in Russian Active Measures and Influence Campaigns” on 30 March 2017; www.intelligence.senate.gov/sites/default/files/documents/os-cwatts-033017.pdf.
  37. D. Trpevski, W. K. Tang, L. Kocarev, Model for rumor spreading over networks. Phys. Rev. E 81, 056102 (2010). doi:10.1103/PhysRevE.81.056102
  38. B. Doerr, M. Fouz, T. Friedrich, Why rumors spread so quickly in social networks. Commun. ACM 55, 70–75 (2012). doi:10.1145/2184319.2184338
  39. F. Jin, E. Dougherty, P. Saraf, Y. Cao, N. Ramakrishnan, “Epidemiological modeling of news and rumors on Twitter,” in Proceedings of the 7th Workshop on Social Network Mining and Analysis (ACM, 2013).
  40. J. Cheng, L. A. Adamic, J. M. Kleinberg, J. Leskovec, “Do cascades recur?” in Proceedings of the 25th International Conference on World Wide Web (ACM, 2016).
  41. V. Qazvinian, E. Rosengren, D. R. Radev, Q. Mei, “Rumor has it: Identifying misinformation in microblogs,” in Proceedings of the Conference on Empirical Methods in Natural Language Processing (ACL, 2011).
  42. S. Vosoughi, M. Mohsenvand, D. Roy, Rumor gauge: Predicting the veracity of rumors on Twitter. ACM Trans. Knowl. Discov. Data 11, 50 (2017). doi:10.1145/3070644
  43. W. Xu, H. Chen, “Scalable rumor source detection under independent cascade model in online social networks,” in 2015 11th International Conference on Mobile Ad-hoc and Sensor Networks (MSN) (IEEE, 2015).
  44. T. Takahashi, N. Igata, “Rumor detection on Twitter,” in 2012 Joint 6th International Conference on Soft Computing and Intelligent Systems (SCIS) and 13th International Symposium on Advanced Intelligent Systems (ISIS) (IEEE, 2012).
  45. C. Castillo, M. Mendoza, B. Poblete, “Information credibility on Twitter,” in Proceedings of the 20th International Conference on World Wide Web (ACM, 2011).
  46. R. M. Tripathy, A. Bagchi, S. Mehta, “A study of rumor control strategies on social networks,” in Proceedings of the 19th ACM International Conference on Information and Knowledge Management (ACM, 2010).
  47. J. Shin, L. Jian, K. Driscoll, F. Bar, Political rumoring on Twitter during the 2012 U.S. presidential election: Rumor diffusion and correction. New Media Soc. 19, 1214–1235 (2017).
  48. P. Ozturk, H. Li, Y. Sakamoto, “Combating rumor spread on social media: The effectiveness of refutation and warning,” in 2015 48th Hawaii International Conference on System Sciences (HICSS) (IEEE, 2015).
  49. A. Bessi, F. Petroni, M. Del Vicario, F. Zollo, A. Anagnostopoulos, A. Scala, G. Caldarelli, W. Quattrociocchi, Homophily and polarization in the age of misinformation. Eur. Phys. J. Spec. Top. 225, 2047–2059 (2016). doi:10.1140/epjst/e2015-50319-0
  50. A. Bessi, A. Scala, L. Rossi, Q. Zhang, W. Quattrociocchi, The economy of attention in the age of (mis)information. J. Trust Manage. 1, 12 (2014).
  51. A. Mitchell, J. Gottfried, J. Kiley, K. E. Matsa, “Political polarization & media habits,” Pew Research Center; www.journalism.org/2014/10/21/political-polarization-media-habits/.
  52. J. L. Fleiss, Measuring nominal scale agreement among many raters. Psychol. Bull. 76, 378–382 (1971). doi:10.1037/h0031619
  53. Q. Le, T. Mikolov, “Distributed representations of sentences and documents,” in Proceedings of the 31st International Conference on Machine Learning (ICML-14) (Journal of Machine Learning Research, 2014).
  54. S. Vosoughi, P. Vijayaraghavan, D. Roy, “Tweet2vec: Learning tweet embeddings using character-level CNN-LSTM encoder-decoder,” in Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM, 2016).
  55. C. A. Davis, O. Varol, E. Ferrara, A. Flammini, F. Menczer, “BotOrNot: A system to evaluate social bots,” in Proceedings of the 25th International Conference Companion on World Wide Web (ACM, 2016).
  56. J. Maddock, K. Starbird, R. M. Mason, “Using historical Twitter data for research: Ethical challenges of tweet deletions,” in CSCW 2015 Workshop on Ethics for Studying Sociotechnical Systems in a Big Data World (ACM, 2015).
  57. S. Goel, D. J. Watts, D. G. Goldstein, “The structure of online diffusion networks,” in Proceedings of the 13th ACM Conference on Electronic Commerce (ACM, 2012).
  58. J. M. Wooldridge, Cluster-sample methods in applied econometrics. Am. Econ. Rev. 93, 133–138 (2003). doi:10.1257/000282803321946930
  59. A. C. Cameron, D. L. Miller, A practitioner’s guide to cluster-robust inference. J. Hum. Resour. 50, 317–372 (2015). doi:10.3368/jhr.50.2.317
  60. P. Vijayaraghavan, S. Vosoughi, D. Roy, “Twitter demographic classification using deep multi-modal multi-task learning,” in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (ACL, 2017).
  61. A. Gupta, H. Lamba, P. Kumaraguru, A. Joshi, “Faking Sandy: Characterizing and identifying fake images on Twitter during Hurricane Sandy,” in Proceedings of the 22nd International Conference on World Wide Web (ACM, 2013).
  62. S. M. Mohammad, P. D. Turney, “Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon,” in Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text (ACL, 2010).
  63. S. M. Mohammad, “#Emotional tweets,” in Proceedings of the First Joint Conference on Lexical and Computational Semantics (ACL, 2012).
  64. S. Bird, E. Klein, E. Loper, Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit (O’Reilly Media, ed. 1, 2009).
  65. J. W. Pennebaker, M. E. Francis, R. J. Booth, Linguistic Inquiry and Word Count: LIWC 2001 (Lawrence Erlbaum Associates, Mahwah, NJ, 2001).
  66. M. Mendoza, B. Poblete, C. Castillo, “Twitter under crisis: Can we trust what we RT?” in Proceedings of the First Workshop on Social Media Analytics (ACM, 2010).
  67. L. Zeng, K. Starbird, E. S. Spiro, “Rumors at the speed of light? Modeling the rate of rumor transmission during crisis,” in 2016 49th Hawaii International Conference on System Sciences (HICSS) (IEEE, 2016).
  68. W. X. Zhao, J. Jiang, J. Weng, J. He, E.-P. Lim, H. Yan, X. Li, “Comparing Twitter and traditional media using topic models,” in European Conference on Information Retrieval (ECIR, 2011).
  69. S. Aral, P. Dhillon, “Unpacking novelty: The anatomy of vision advantages,” Working paper, MIT–Sloan School of Management, Cambridge, MA, 22 June 2016.
  70. T. M. Cover, J. A. Thomas, Elements of Information Theory (Wiley, ed. 2, 2012).
  71. S. Kullback, R. A. Leibler, On information and sufficiency. Ann. Math. Stat. 22, 79–86 (1951). doi:10.1214/aoms/1177729694
  72. V. D. Blondel, J.-L. Guillaume, R. Lambiotte, E. Lefebvre, Fast unfolding of communities in large networks. J. Stat. Mech. 2008, P10008 (2008). doi:10.1088/1742-5468/2008/10/P10008
  73. S. Vosoughi, D. Roy, “A semi-automatic method for efficient detection of stories on social media,” in 10th International AAAI Conference on Web and Social Media (AAAI, 2016).
  74. E. Ferrara, O. Varol, C. Davis, F. Menczer, A. Flammini, The rise of social bots. Commun. ACM 59, 96–104 (2016). doi:10.1145/2818717
  75. A. Almaatouq, E. Shmueli, M. Nouh, A. Alabdulkareem, V. K. Singh, M. Alsaleh, A. Alarifi, A. Alfaris, A. Pentland, If it looks like a spammer and behaves like a spammer, it must be a spammer: Analysis and detection of microblogging spam accounts. Int. J. Inf. Secur. 15, 475–491 (2016). doi:10.1007/s10207-016-0321-5
Acknowledgments: We are indebted to Twitter for providing funding and access to the data. We are also grateful to members of the MIT research community for invaluable discussions. The research was approved by the MIT institutional review board. The analysis code is freely available at https://goo.gl/forms/AKIlZujpexhN7fY33. The entire data set is also available, from the same link, upon signing an access agreement stating that (i) you shall only use the data set for the purpose of validating the results of the MIT study and for no other purpose; (ii) you shall not attempt to identify, reidentify, or otherwise deanonymize the data set; and (iii) you shall not further share, distribute, publish, or otherwise disseminate the data set. Those who wish to use the data for any other purposes can contact and make a separate agreement with Twitter.
 

耶书仑

Insight into worldly affairs is all learning
Staff member
Joined
2008-09-27
Messages
10,527
Reaction score
6,759
Points
393
Interesting topic
 

春风吹

Peach Blossom Immortal
Joined
2017-02-24
Messages
3,277
Reaction score
1,456
Points
223
Lies spread faster than the truth.
Take Twitter as an example.
The cause is not bots.
The cause is people.
 

无耻的校长

Senior Member
Joined
2017-10-19
Messages
2,860
Reaction score
328
Points
83
Top 10 scams targeting Canadians: do you know them all?
2018-03-08 Source: 温哥华港湾 Category: Top News

The Better Business Bureau (BBB) today released its list of the top 10 scams that targeted Canadians in 2017. Last year the riskiest scams involved online shopping, with losses of more than 13 million Canadian dollars.

Evan Kelly, senior communications adviser for the BBB serving Mainland BC, told the Richmond News that online shopping scams were not only the number one scam in the Lower Mainland but are also becoming the most prevalent scam in North America.

"The reason behind this is the spread of the internet," he said. "People now routinely give out large amounts of personal information online, and it has become easier for scammers to set up a fake website." Kelly noted that it is important to check an online store's credibility.

"It's best to go to an online store with a good reputation," he said. "Also, make sure to read customer reviews and complaints before you buy."

Last year the BBB received more than 47,000 scam reports and analyzed them by exposure, susceptibility, and financial loss.

Compared with 2016, tax scams fell by 60%, and home improvement scams, the top scam of 2016, dropped to sixth place in 2017.

The BBB also noted that although seniors' financial losses increased, susceptibility declines with age. Melissa Trumpower, director of programs and operations at the BBB Institute, said in a news release: "The percentage of reports in which people actually lost money in a scam fell from 18.8% in 2016 to 15.8% in 2017."

The BBB reported that most online shopping scams involve pets, clothing, cosmetics, electronics, and cars, and they usually start with a free trial.

Travel and vacation scams entered the top 10 last year; the most common destinations in these "travel packages" were Orlando, Disney, Mexico, and the Bahamas.

The BBB also noted that the most common tactic scammers use is to promise a "big deal" that demands an immediate response, relying on intimidation and isolation. They added that scammers try to mask their identities to appear "kind and personable," which puts targets at greater risk.

Here is the list of the top 10 scams of 2017:

Online shopping scams

Investment scams

Employment scams

Advance-fee loan scams

Fake cheque scams

Home improvement scams

Tech support scams

Travel/vacation scams

Family/friend emergency scams

Government grant scams

To learn more about scams, get tips, or report a scam, visit the BBB ScamTips or ScamTracker websites.

 

cvictor

Site Veteran
VIP
Joined
2005-06-12
Messages
17,327
Reaction score
1,076
Points
373
Good news never leaves the house; bad news travels a thousand miles.
The writing is so meticulous; this spirit of research is worth learning from.
 