Journalist and Media 2017

I have written about journalism and media trends a few years ago, so it is time for an update. What is the state of journalism and news publishing in 2017? NiemanLab’s predictions for 2017 are a good place to start thinking about what lies ahead for journalism. There, Matt Waite puts us in our place straight away by telling us that the people running the media are the problem.

There have been changes in tech publishing. In January 2017, International Data Group, the owner of PCWorld magazine and market researcher IDC, said it was being acquired by China Oceanwide Holdings Group and IDG Capital, the investment management firm run by IDG China executive Hugo Shong. In 2016, Arrow bought EE Times, EDN, TechOnline and many other properties from UBM.


Here are some article links and information bits on journalists and media in 2017:

Soothsayers’ guides to journalism in 2017 article take a look at journalism predictions and the value of this year’s predictions.

What Journalism Needs To Do Post-Election article tells that faced with the growing recognition that the electorate was uninformed or, at minimum, deeply in the thrall of fake news, far too many journalists are responding not with calls for change but by digging in deeper to exactly the kinds of practices that got us here in the first place.

Fake News Is About to Get Even Scarier than You Ever Dreamed article says that what we saw in the 2016 election is nothing compared to what we need to prepare for in 2020 as incipient technologies appear likely to soon obliterate the line between real and fake.

YouTube’s ex-CEO and co-founder Chad Hurley sees a massive amount of information overload as the problem, one that will lead to a backlash from people.

Headlines matter article tells that in 2017, headlines will matter more than ever and journalists will need to wrest control of headline writing from social-optimization teams. People get their news from headlines now in a way they never did in the past.

Why new journalism grads are optimistic about 2017 article tells that since today’s college journalism students have been in school, the forecasts for their futures have been filled with words like “layoffs,” “cutbacks,” “buyouts” and “freelance.” Still, many are optimistic about the future because the main motivation for being a journalist is often “to make a difference.”

Updating a social media account can be a serious job: Zuckerberg has 12+ Facebook employees helping him with posts and comments on his Facebook page, and professional photographers to snap personal moments.

Wikipedia Is Being Ripped Apart By a Witch Hunt For Secretly Paid Editors article tells that with undisclosed paid editing on the rise, Wikipedians and the Wikimedia Foundation are working together to stop the practice without discouraging user participation. Paid editing is permissible under the Wikimedia Foundation’s terms of use as long as editors disclose these conflicts of interest on their user pages, but not all paid editors make these disclosures.

Big Internet giants are working on how to make content better for mobile devices. Instant Articles is a new way for any publisher to create fast, interactive articles on Facebook. Google’s AMP (Accelerated Mobile Pages) is a project that aims to accelerate content on mobile devices. Both of those systems have their advantages and problems.

Clearing Out the App Stores: Government Censorship Made Easier article tells that there’s a new form of digital censorship sweeping the globe, and it could be the start of something devastating. The centralization of the internet via app stores has made government censorship easier. If the app isn’t in a country’s app store, it effectively doesn’t exist. For more than a decade, we users of digital devices have actively championed an online infrastructure that now looks uniquely vulnerable to the sanctions of despots and others who seek to control information.

2,357 Comments

  1. Tomi Engdahl says:

    Amazon launches a Polly WordPress plugin that turns blog posts into audio, including podcasts
    https://techcrunch.com/2018/02/08/amazon-launches-a-polly-wordpress-plugin-that-turns-blog-posts-into-audio-including-podcasts/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook

    Amazon today is launching a new Amazon Polly WordPress plugin that gives your blog a voice by creating audio versions of your posts. The resulting audio can be played from within the blog post itself, or accessed in podcast form using a feature called Amazon Pollycast, the company says.

    The plugin itself was jointly designed by Amazon’s AWS team and managed WordPress platform provider WP Engine, and takes advantage of Amazon’s text-to-speech service, Polly.

    The Polly speech engine launched with 47 male and female voices and support for 24 languages.
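    As a rough illustration of the kind of call such a plugin makes (this is a sketch, not the plugin’s actual code; the function names, default voice, and output path here are assumptions), converting a post’s text to audio with Polly via the AWS SDK for Python looks roughly like this:

    ```python
    def build_polly_request(text, voice_id="Joanna", output_format="mp3"):
        """Assemble the parameters for Polly's SynthesizeSpeech API call."""
        return {"Text": text, "VoiceId": voice_id, "OutputFormat": output_format}

    def synthesize_post(text, out_path="post.mp3"):
        """Send a blog post's text to Amazon Polly and save the returned audio.

        Requires boto3 installed and AWS credentials configured.
        """
        import boto3  # AWS SDK for Python; only needed for the actual API call
        polly = boto3.client("polly")
        response = polly.synthesize_speech(**build_polly_request(text))
        with open(out_path, "wb") as f:
            f.write(response["AudioStream"].read())

    # The request itself is just plain key/value parameters:
    print(build_polly_request("Hello, readers!"))
    # {'Text': 'Hello, readers!', 'VoiceId': 'Joanna', 'OutputFormat': 'mp3'}
    ```

    Long posts would in practice need to be split into chunks, since SynthesizeSpeech limits how much text one request can carry.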

    Reply
  2. Tomi Engdahl says:

    ‘Our minds can be hijacked’: the tech insiders who fear a smartphone dystopia
    https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia

    Google, Twitter and Facebook workers who helped make technology so addictive are disconnecting themselves from the internet. Paul Lewis reports on the Silicon Valley refuseniks alarmed by a race for human attention

    Reply
  3. Tomi Engdahl says:

    Here’s how Google Chrome’s new ad blocker works
    Published: 2018-02-02
    https://www.ctrl.blog/entry/chrome-adblocker

    Google Chrome will begin blocking ads on some websites by default on 15th February 2018. I took a look at Chromium source-code to find out a bit more about how this new ad blocker will work.

    At the end of 2017, Google Chrome had nearly 55% of the web browser market share across all devices worldwide, according to StatCounter.

    Reply
  4. Tomi Engdahl says:

    Wired:
    A behind-the-scenes look at two tumultuous years at Facebook as it battled with fake news, its impact on the election, global affairs, and users’ minds — ONE DAY IN late February of 2016, Mark Zuckerberg sent a memo to all of Facebook’s employees to address some troubling behavior in the ranks.

    Inside the Two Years that Shook Facebook—and the World
    https://www.wired.com/story/inside-facebook-mark-zuckerberg-2-years-of-hell

    How a confused, defensive social media giant steered itself into a disaster, and how Mark Zuckerberg is trying to fix it all.

    Reply
  5. Tomi Engdahl says:

    Benjamin Mullin / Wall Street Journal:
    Google launches dev preview of AMP stories for publishers today with Snapchat-like swipeable text, photos, and videos, considers integrating stories into search

    Google’s New AMP Stories Bring Snapchat-Like Content to Mobile Web
    The format doesn’t support advertising yet, which could slow its adoption among publishers
    https://www.wsj.com/articles/googles-new-amp-stories-bring-snapchat-like-content-to-the-mobile-web-1518499801

    Alphabet Inc.’s Google unveiled new technology that lets publishers create visual-oriented stories in a mobile-friendly format similar to the style popularized by Snapchat and Instagram.

    Starting Tuesday, publishers will be able to try out a developer preview of AMP stories, which feature swipeable slides of text, photos, graphics and videos, Google announced in a blog post.

    Publishers including Vox Media, Condé Nast, Meredith Corp. and Time Warner Inc.’s CNN were involved in the early development of the technology and have already begun creating such stories for the mobile web.

    Reply
  6. Tomi Engdahl says:

    Josh Constine / TechCrunch:
    Facebook to allow publisher paywalls in its iOS app starting 3/1; users get five articles before being asked to pay, publishers get 100% subscription revenue — Apple is allowing Facebook to bend the subscription rules. Starting March 1st, news publishers will be able to use their paywalls inside Facebook’s iOS app.

    Facebook works it out with Apple to test news paywalls on iOS
    https://techcrunch.com/2018/02/12/facebook-paywall/

    Apple is allowing Facebook to bend the subscription rules. Starting March 1st, news publishers will be able to use their paywalls inside Facebook’s iOS app. Facebook started testing paywalls on Android in October, but at the time it couldn’t come to an agreement with Apple about how subscription revenue would be taxed. TechCrunch has confirmed publishers will get 100 percent of subscription revenue on iOS, too.

    Reply
  7. Tomi Engdahl says:

    Kurt Wagner / Recode:
    Facebook hasn’t figured out how to measure “meaningful social interactions” after big News Feed update, wants to build section inside Watch tab for news video — “The metric is definitely evolving.” — When Facebook made its big News Feed update last month to prioritize posts …

    Facebook wants News Feed to create more ‘meaningful social interactions.’ It’s still trying to figure out what that means.
    https://www.recode.net/2018/2/12/17006362/facebook-news-feed-change-meaningful-interactions-definition-measure

    “The metric is definitely evolving.”

    When Facebook made its big News Feed update last month to prioritize posts from users’ friends and family — and thus cut down on posts from brands and publishers — it did so with the explanation that it was optimizing its service to create more “meaningful social interactions.”

    “I’m changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions,” CEO Mark Zuckerberg said at the time. He used the same wording on the company’s Q4 earnings call in February to explain why Facebook cut down the reach of some viral videos on the service.

    The only problem: No one really knows how to measure “meaningful social interactions” — including Facebook.

    “We’re trying to figure out how to best measure and understand that,” Adam Mosseri, the Facebook exec in charge of News Feed, said Monday at Recode’s Code Media conference in Huntington Beach, Calif. “The key components are any interactions between two people. So it’s about people-to-people, not people-to-publisher or people-to-business or people-to-page.”

    Reply
  8. Tomi Engdahl says:

    Unilever warns social media to clean up “toxic” content
    https://techcrunch.com/2018/02/12/unilever-warns-social-media-to-clean-up-toxic-content/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Posted yesterday by Natasha Lomas (@riptari)

    Consumer goods giant Unilever, a maker of branded soaps, foodstuffs and personal care items and also one of the world’s biggest online advertisers, has fired a warning shot across the bows of social media giants by threatening to pull ads from digital platforms if they don’t do more to mitigate the spread of what it dubs “toxic” online content — be it fake news, terrorism or child exploitation.

    “It is critical that our brands remain not only in a safe environment, but a suitable one,”

    Reply
  9. Tomi Engdahl says:

    Salon’s Monero mining project might be crazy like a fox
    https://techcrunch.com/2018/02/13/salon-coinhive-cryptocurrency-mining/?utm_source=tcfbpage&sr_share=facebook

    In the age of altcoins, at least one news site is taking a novel approach to making ends meet. Salon announced today that it would give readers a choice between turning off ad-blocking software or “allowing Salon to use your unused computing power” in order to access their content. If you say yes to the latter deal, Salon will then invite you to install Coinhive, a software plugin that mines the cryptocurrency known as Monero.

    The offering is a clever if controversial way to recoup lost ad revenue. It’s no secret that digital media companies are hurting, and crowdsourcing the process that generates some virtual currencies is certainly an innovative solution, though definitely an experimental one.

    Still, running software like this, which is often inserted onto unsuspecting machines via malware, is a big ask from readers

    https://www.salon.com/about/faq-what-happens-when-i-choose-to-suppress-ads-on-salon/

    Reply
  10. Tomi Engdahl says:

    INTRODUCING AMP STORIES, A WHOLE NEW WAY TO READ WIRED
    https://www.wired.com/story/introducing-amp-stories/

    AMP Stories are an entirely new beast. They’re WIRED stories, hosted on WIRED, but while you can read them on a laptop or desktop, they’re made with mobile consumption in mind. In some ways, they’re not unlike Instagram Stories or our Snapchat Discover editions—you tap through them, rather than scrolling—except they don’t disappear, and they’re searchable.

    Reply
  11. Tomi Engdahl says:

    Google takes AMP beyond basic posts with its new story format
    https://techcrunch.com/2018/02/13/google-takes-amp-beyond-basic-posts-with-its-new-story-format/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    This new format allows publishers to build image-, video- and animation-heavy stories for mobile that you can easily swipe through. “It’s a mobile-focused format for creating visually rich stories,”

    To launch this format, Google partnered with CNN, Conde Nast, Hearst, Mashable, Meredith, Mic, Vox Media and The Washington Post. Like all of AMP, this is an open-source project and publishers can extend it as needed.

    Reply
  12. Tomi Engdahl says:

    Google’s Chrome ad blocking arrives tomorrow and this is how it works
    https://www.theverge.com/2018/2/14/17011266/google-chrome-ad-blocker-features

    Google is enabling its built-in ad blocker for Chrome tomorrow (February 15th). Chrome’s ad filtering is designed to weed out some of the web’s most annoying ads, and push website owners to stop using them. Google is not planning to wipe out all ads from Chrome, just ones that are considered bad using standards from the Coalition for Better Ads. Full page ads, ads with autoplaying sound and video, and flashing ads will be targeted by Chrome’s ad filtering, which will hopefully result in fewer of these annoying ads on the web.

    Initial Better Ads Standards: Least preferred ad experiences for desktop web and mobile web
    https://www.betterads.org/standards

    Reply
  13. Tomi Engdahl says:

    How Chrome’s built-in ad blocker will work when it goes live tomorrow
    https://techcrunch.com/2018/02/14/how-chromes-built-in-ad-blocker-will-work-when-it-goes-live-tomorrow/

    Chrome’s built-in ad blocker will go live tomorrow. It’s the first time Google will automatically block some ads in Chrome, but while quite a few online publishers are fretting about this move, as a regular user, you may not even notice it.

    The most important thing to know is that this is not an alternative to AdBlock Plus or uBlock Origin. Instead, it’s Google’s effort to ban the most annoying ads from your browser. So it won’t block all ads — just those that don’t conform to the Coalition for Better Ads guidelines.
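    The mechanism described above can be sketched in a few lines: a site that violates the Better Ads guidelines goes on a blacklist, and every ad request on that site gets filtered, while compliant sites are left alone. This is a conceptual illustration only, not Chromium’s actual implementation, and the site name is hypothetical:

    ```python
    # Conceptual sketch of Chrome-style ad filtering: blacklist whole sites
    # that violate the Better Ads guidelines, then filter ALL ad requests on
    # those sites; compliant sites keep their ads untouched.
    VIOLATING_SITES = {"annoying-ads.example"}  # hypothetical blacklist entry

    def should_block_ad(page_host: str) -> bool:
        """Return True if ad requests on this page's host should be filtered."""
        return page_host in VIOLATING_SITES

    print(should_block_ad("annoying-ads.example"))  # True: site is blacklisted
    print(should_block_ad("news.example"))          # False: compliant site keeps its ads
    ```

    The notable design choice, per the articles above, is that the unit of enforcement is the site, not the individual ad: a single compliant ad on a blacklisted site is still filtered.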

    Reply
  14. Tomi Engdahl says:

    We Become What We Behold
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1332942

    In this first column of 2018, the publisher of ASPENCORE reflects on our journey the last three years and glimpses ASPENCORE’s future.

    Reply
  15. Tomi Engdahl says:

    Bogus Paper Based On Star Trek Is Published In “Scientific” Journal
    http://www.iflscience.com/editors-blog/bogus-paper-based-on-star-trek-is-published-in-scientific-journal/

    That’s basically the plot of a Star Trek episode. (Star Trek: Voyager “Threshold”, to be precise.)

    But the real story here is how this – clearly fake – study came to be published in a “science” journal and accepted by a further three. While it is undeniably hilarious, it is clearly concerning and reveals serious flaws in the paper selection process, specifically that of predatory journals.

    Predatory journals (like the American Research Journal of Biosciences) exist to make money from less-established scientists who pay to have their work published and seen, but don’t provide a thorough scientific review of the studies they receive.

    paid $50 to have his dodgy paper published in the journal

    Predatory journals “are essentially counterfeit journals, mimicking the look and feel of legitimate online journals, but with the singular goal of making easy money,”

    Reply
  16. Tomi Engdahl says:

    Klint Finley / Wired:
    Google says 42% of sites notified made preemptive changes ahead of Chrome ad blocker launch; fewer than 1% of most popular sites violate suggested ad guidelines

    Google’s New Ad Blocker Changed the Web Before It Even Switched On
    https://www.wired.com/story/google-chrome-ad-blocker-change-web

    You might see fewer ads on the web from now on. But you probably won’t.

    On Thursday, Google Chrome, the most popular browser by a wide margin, began rolling out a feature that will block ads on sites that engage in particularly annoying behavior, such as automatically playing sound, or displaying ads that can’t be dismissed until a certain amount of time has passed. Google is essentially blacklisting sites that violate specific guidelines, and then trying to filter all ads that appear on those sites, not just the particularly annoying ones.

    Despite the advance hype, the number of sites Chrome will actually block ads on turns out to be quite small.

    But even if Chrome never blocks ads on a page you visit, Google’s move has already affected the web. The company notified sites in advance that they would be subject to the filtering, and 42 percent made preemptive changes

    A survey published by the industry group Interactive Advertising Bureau in 2016 found that about 26 percent of web users had installed ad-blockers on their computers, and about 15 percent had ad-blockers on their smartphones.

    The new Chrome ad-filtering feature doesn’t directly address privacy or page speed. Instead, it focuses only on blocking ads that violate guidelines published by the Coalition for Better Advertising

    Reply
  17. Tomi Engdahl says:

    Thomas Grove / Wall Street Journal:
    Profile of Yevgeny Prigozhin, alleged IRA owner, who, sources say, first hired internet trolls to bury complaints about food supplied to Moscow schools in 2011

    Kremlin Caterer Accused in U.S. Election Meddling Has History of Dishing Dark Arts
    https://www.wsj.com/articles/kremlin-caterer-accused-in-u-s-election-meddling-has-history-of-dishing-dark-arts-1518823765

    An online army accused of sowing discord among American voters in the 2016 election emerged from a corner of the business empire of Yevgeny Prigozhin, the Kremlin’s favorite restaurateur.

    Yevgeny Prigozhin, the Kremlin’s favorite restaurateur, won a lucrative government contract to deliver school lunches across Moscow in 2011. Parents were soon up in arms. Their children wouldn’t eat the food, saying it smelled rotten.

    As the bad publicity mounted, Mr. Prigozhin’s company, Concord Catering, launched a counterattack…

    Reply
  18. Tomi Engdahl says:

    Shawn Musgrave / Politico:
    How 4chan trolls organized to spread misinformation about Florida shooter’s ties to a white supremacist group; ADL, ABC, AP, and others spread the false story

    How white nationalists fooled the media about Florida shooter
    https://www.politico.com/story/2018/02/16/florida-shooting-white-nationalists-415672

    ABC, AP and others ran with false information on shooter’s ties to extremist groups.

    Following misrepresentations by a white nationalist leader and coordinated efforts by internet trolls, numerous researchers and media outlets spread a seemingly false claim that the man charged with killing more than a dozen people at a Florida high school belonged to an extremist group.

    Law enforcement agencies say they have no evidence so far to support this claim, and the rumor appears to have been perpetrated by white nationalist trolls themselves.

    Donovan called this an instance of “source hacking,” a tactic by which fringe groups coordinate to feed false information to authoritative sources such as ADL researchers. These experts, in turn, disseminate the information to reporters, and it reaches thousands of readers before it can be debunked.

    “It’s a very effective way of getting duped,” Donovan said.

    The ADL traced its original tip to posts on 4chan, where researchers found “self-described ROF members” claiming that Cruz was a brother-in-arms. But many of those posts seem to have been written specifically to deceive reporters and researchers.

    “Prime trolling opportunity,”

    “Say you are scared to tell her in case you get blamed, it will get her excited you know something big.”

    Members swapped links to articles

    “All it takes is a single article,” the first user wrote back. “And everyone else picks up the story.”

    As the story spread even further, one Discord user posted a tweet from the AP: “BREAKING: Leader of white nationalist group has confirmed suspect in Florida school shooting was member of his organization.” It had been retweeted more than 35,000 times at that point.

    The group crowed at such a quantifiable achievement.

    “Those 35 thousand people aren’t going to change their minds,” wrote one member, mocking those who would read the press coverage he and his friends concocted.

    “They’re lemmings. … They will go to the grave convinced that the shooter was a white nationalist.”

    By Thursday evening, 4chan users were celebrating their efforts

    “[T]hey are so hungry for a story that they’ll just believe anything as long as its corroborated by a few people and seems legit,” wrote the creator of one 4chan thread.

    Donovan, the disinformation researcher, said reporters need to be more vigilant against these kinds of campaigns, which are going to get only more common and more sophisticated.

    Reply
  19. Tomi Engdahl says:

    Explaining or exploiting? A mass shooting raises questions about media coverage.
    https://www.washingtonpost.com/lifestyle/style/explaining-or-exploiting-a-mass-shooting-raises-questions-about-media-coverage/2018/02/15/18a00da6-1274-11e8-9065-e55346f6de81_story.html?utm_term=.551a33ce8b88

    News organizations constantly wrestle with how much to tell the public about a grisly event.

    Newspapers and TV stations regularly interview the friends and relatives of murder victims. Doing so serves not only to inform the public but pays homage by presenting a full, human portrait of the deceased. It can also spark public interest in the case, perhaps generating tips helpful to law enforcement. For these reasons, survivors typically appreciate the opportunity to speak to the news media amid personal tragedy.

    But interviewing such eyewitnesses raises a secondary issue: Can a teenager, especially one who has so recently experienced trauma, really give informed consent to be interviewed?

    Under certain circumstances, yes

    “As journalists, we have conflicting ethical obligations here,” he said. “A core obligation is to tell the story as thoroughly and accurately as possible, and that may mean talking to witnesses.”

    But he adds: “I think reporters need to recognize that ideally, this should be a family decision. If a teenager is going to be subjected to the stress of an interview, you want to know that the family supports her in that choice.”

    News organizations typically have unspoken rules about how to approach such situations.

    The Society of Professional Journalists’ code of ethics gives only vague guidance: “Balance the public’s need for information against potential harm or discomfort. Pursuit of the news is not a license for arrogance or undue intrusiveness.”

    Of course, that leaves a lot of gray area.

    Reply
  20. Tomi Engdahl says:

    Special counsel Robert Mueller indicts Russian bot farm for election meddling
    https://techcrunch.com/2018/02/16/mueller-indictment-internet-research-agency-russia/?utm_source=tcfbpage&sr_share=facebook

    The indictment names the Internet Research Agency, a bot farm and disinformation operation based out of St. Petersburg, as one of the sources of the fake accounts meant to create divisions in American society. Those accounts were active on Facebook, Twitter and Instagram,

    Reply
  21. Tomi Engdahl says:

    Only the EU can break Facebook and Google’s dominance
    George Soros
    https://www.theguardian.com/business/2018/feb/15/eu-facebook-google-dominance-george-soros?CMP=share_btn_fb

    Social media giants have left the US government impotent – Europe must lead the way

    The current moment in world history is a painful one. Open societies are in crisis, and forms of dictatorships and mafia states, exemplified by Vladimir Putin’s Russia, are on the rise. In the United States, President Donald Trump would like to establish his own mafia-style state but cannot, because the constitution, other institutions, and a vibrant civil society won’t allow it.

    These companies have often played an innovative and liberating role. But as Facebook and Google have grown ever more powerful, they have become obstacles to innovation, and have caused a variety of problems of which we are only now beginning to become aware.

    social media companies exploit the social environment. This is particularly nefarious, because these companies influence how people think and behave without them even being aware of it. This interferes with the functioning of democracy and the integrity of elections.

    Because internet platform companies are networks, they enjoy rising marginal returns, which accounts for their phenomenal growth. The network effect is truly unprecedented and transformative, but it is also unsustainable. It took Facebook eight and a half years to reach a billion users, and half that time to reach the second billion. At this rate, Facebook will run out of people to convert in less than three years.

    Facebook and Google effectively control over half of all digital advertising revenue.

    providing users with a convenient platform

    Moreover, because content providers cannot avoid using the platforms and must accept whatever terms they are offered, they, too, contribute to the profits of social media companies.

    Reply
  22. Tomi Engdahl says:

    Fake news is not the real problem
    https://techcrunch.com/2018/02/18/fake-news-is-not-the-real-problem/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    It’s the Internet’s fault, we’re told. Brexiters and Remainers, Republicans and Democrats — every side of every political dispute now lives in its own separate reality, bellowing “fake news!” at every attempt to breach their borders of belief. The fragmentation of the media, coupled with the filter-bubble effect and the dominance of Facebook and Google, means that we no longer share any consensus view of reality.

    …But I saw The Post this week, and it struck me: we never did. We used to have an imposed view of reality, not a consensus one.

    Iraq was not particularly different from Vietnam: in both cases, the White House (and, this time, Downing Street) lied through their teeth to the people; the media accepted and promoted those lies

    It was assumed that people had an engineering mindset, where one’s worldview can and will be adjusted by new evidence. That mindset, that willingness to allow contrary evidence to adjust what you believe, is why science and engineering work. It is arguably why democracy works, too.

    And it would work in a world of fake news. Again, falsified evidence is not new. The US government falsified (by omission) the evidence about Vietnam for a very long time. Politicized “yellow journalism” dates to at least the nineteenth century.

    Fake news is a problem that could and would be fixed by a genuine, widespread, good-faith desire for true news.

    The real problem isn’t fake news; it’s that people have given up on that search for truth. The real problem is that the engineer’s mindset, wherein one weighs the available evidence, and accepts and incorporates new evidence even if it contradicts what one previously believed, has never been more rare.

    The engineer’s mindset has been replaced by the lawyer’s mindset, wherein you pick a side in advance of getting any evidence, and then do absolutely everything you can to belittle, dismiss, and ignore any opposing data, while trumping up every scrap that might support your own side as if it were written on stone tablets brought down from the mountain by Moses.

    court doesn’t exist in a democracy, or, rather, the democracy is the court … and so, in order for democracy to work, it requires the engineer’s mindset.

    So there’s a certain irony in blaming the tech industry for this, when tech is, for all its many flaws and blind spots, perhaps the last remaining bastion where the engineer’s mindset is (at least in theory) celebrated.

    let’s consider the distinct possibility that the so-called scourge of “fake news” is merely a symptom, not the problem.

    Reply
  23. Tomi Engdahl says:

    Fake news is an existential crisis for social media
    https://techcrunch.com/2018/02/18/fake-news-is-an-existential-crisis-for-social-media/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook

    The funny thing about fake news is how mind-numbingly boring it can be.

    Not the fakes themselves — they’re constructed to be catnip clickbait to stoke the fires of rage of their intended targets. Be they gun owners. People of color. Racists. Republican voters. And so on.

    The claim and counter claim that spread out around ‘fake news’ like an amorphous cloud of meta-fakery, as reams of additional ‘information’ — some of it equally polarizing but a lot of it more subtle in its attempts to mislead

    This bottomless follow-up fodder generates yet more FUD in the fake news debate. Which is ironic, as well as boring, of course.

    Truly fake news is the inception layer cake that never stops being baked. Because pouring FUD onto an already polarized debate

    Why would social media platforms want to participate in this FUDing? Because it’s in their business interests not to be identified as the primary conduit for democracy-damaging disinformation.

    And because they’re terrified of being regulated on account of the content they serve. They absolutely do not want to be treated as the digital equivalents to traditional media outlets.

    And by failing to be pro-active about the existential threat posed by digitally accelerated disinformation, social media platforms have unwittingly made the case for external regulation of their global information-shaping and distribution platforms louder and more compelling than ever.

    Every gun outrage in America is now routinely followed by a flood of Russian-linked Twitter bot activity.

    Russian digital meddling connected to the UK’s 2016 Brexit referendum, which we now know for sure existed

    But more than a year and a half after the vote itself, many, many questions remain.

    Clearly, there’s no such thing as ‘bad propaganda’ if you’re a Kremlin disinformation node.

    Even a study decrying Russian election meddling presents an opportunity for respinning and generating yet more FUD — in this instance by calling 89up biased

    Fake news thrives on shamelessness, clearly.

    It also very clearly thrives in the limbo of fuzzy accountability where politicians and journalists essentially have to scream at social media firms until blue in the face to get even partial answers to perfectly reasonable questions.

    Frankly, this situation is looking increasingly unsustainable.

    The user-bases of Facebook, Twitter and YouTube are global. Their businesses generate revenue globally. And the societal impacts from maliciously minded content distributed on their platforms can be very keenly felt outside the US too.

    One problem is fake news. The other problem is the lack of incentive for social media companies to robustly investigate fake news.

    The partial data about Russia’s Brexit dis-ops
    is unhelpful exactly because it cannot clear the matter up either way. It just introduces more FUD, more fuzz, more opportunities for purveyors of fake news

    The UK, like the US, has become a very visibly divided society since the narrow 52:48 vote to leave the EU.

    it doesn’t matter whether 89up’s study is accurate or overblown; what really matters is that no one except the Kremlin and the social media firms themselves is in a position to judge

    But social media firms also cannot be trusted to truth tell on this topic, because their business interests have demonstrably guided their actions towards equivocation and obfuscation.

    Self interest also compellingly explains how poorly they have handled this problem to date

    A game of ‘uncertain claim vs self-interested counter claim’, as competing interests duke it out

    It’s just more FUD for the fake news mill.

    Especially as this stuff really isn’t rocket science. Human nature is human nature. And disinformation has been shown to have a more potent influencing impact than truthful information when the two are presented side by side.

    Yes, some of the platforms in the disinformation firing line have taken some preventative actions since this issue blew up so spectacularly, back in 2016. Often by shifting the burden of identification to unpaid third parties (fact checkers).

    Publishers have their own biases too, of course, but those biases tend to be writ large — vs social media platforms’ faux claims of neutrality when in fact their profit-seeking algorithms have been repeatedly caught preferring (and thus amplifying) dis- and misinformation over and above truthful but less clickable content.

    Indeed, human nature actively works against critical thinking. Fakes are more compelling, more clickable than the real thing. And thanks to technology’s increasing potency, fakes are getting more sophisticated, which means they will be increasingly plausible — and get even more difficult to distinguish from the truth.

    So, no, education can’t fix this on its own. And for Facebook to try to imply it can is yet more misdirection and blame shifting.

    you’ll very likely find the content compelling because the message is crafted with your specific likes and dislikes in mind

    That’s what makes this incarnation of propaganda so potent and insidious vs other forms of malicious disinformation (of course propaganda has a very long history — but never in human history have we had such powerful media distribution platforms

    Russia is still operating ranks of bots on social media which are actively working to divide public opinion

    All of this is why fake news is an existential problem for social media.

    And why Zuckerberg’s 2018 yearly challenge will be his toughest ever.

    Little wonder, then, that these firms are now so fixed on trying to narrow the debate and concern to focus specifically on political advertising. Rather than malicious content in general.

    The threat posed by info-cyberwarfare on tech platforms that straddle entire societies and have become attention-sapping powerhouses

    Facebook’s user base is a staggering two billion+ at this point

    What does this seismic shift in media distribution and consumption mean for societies and democracies?

    Regulating such massive, global platforms would clearly not be easy. In some countries Facebook is so dominant it essentially is the Internet.

    Reply
  24. Tomi Engdahl says:

    Why nobody writes about your press releases
    https://sanfrancisco.fi/why-nobody-writes-about-your-press-releases/

    The humble press release has been an important tool for sharing information with the media – and the public at large – for more than a century. Throughout that time, its structure, format, and intended use have remained remarkably intact. Even so, is the tool still fit for the job?

    Every day, literally thousands of press releases are issued across the globe – the vast majority of which do not receive a single piece of media coverage. This seems to indicate there’s a problem with the press release itself. Wrong.

    As with most tools, the problem isn’t with the tool itself – rather with how it’s being used. The press release is still the most efficient way of imparting stories to others in a format that’s easy to digest and repurpose. A great press release will get people talking, boost your SEO, and build relationships between your organization and the public.

    Reply
  25. Tomi Engdahl says:

    Snap stock sinks as Kylie Jenner says she doesn’t use Snapchat anymore
    https://techcrunch.com/2018/02/22/snap-stock-sinks-as-kylie-jenner-says-she-doesnt-use-snapchat-anymore/?utm_source=tcfbpage&sr_share=facebook

    The words were significant not only because of who was saying it, but because it fits a pretty clear narrative that Snap is slowly losing its bread-and-butter users to Instagram and that its controversial redesign has alienated the core who were still holding on.

    Kylie Jenner is just about as influential a celebrity as they come; many of the product advances of social media companies like Snap have been borne on the backs of influencers like Jenner who have hundreds of millions of followers across these platforms

    “Live by the influencer, die by the influencer.”

    Reply
  26. Tomi Engdahl says:

    In One Tweet, Kylie Jenner Wiped Out $1.3 Billion of Snap’s Market Value
    https://www.bloomberg.com/news/articles/2018-02-22/snap-royalty-kylie-jenner-erased-a-billion-dollars-in-one-tweet

    Kylie Jenner tweets she hasn’t been using the app lately
    Users pile on with feedback that echoes Wall Street concerns

    Reply
  27. Tomi Engdahl says:

    LittleThings blames its shutdown on Facebook algorithm change
    https://techcrunch.com/2018/02/28/littlethings-shutdown/?utm_source=tcfbpage&sr_share=facebook

    A recent Facebook algorithm change seems to have claimed a high-profile casualty: LittleThings, a digital publisher focused on inspirational and how-to content for women, which shut down yesterday.

    Reply
  28. Tomi Engdahl says:

    Facebook launches a local news accelerator for publishers
    https://beta.techcrunch.com/2018/02/27/facebook-launches-a-local-news-accelerator-for-publishers/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook

    Facebook is trying to play extra nice with local news publishers by putting $3 million behind the launch of the Local News Subscriptions Accelerator. The three-month pilot program will help 10 to 15 U.S.-based metropolitan news organizations gain more digital subscribers both on and off Facebook.

    “We often talk to publishers about what the future of journalism looks like and local news publishers tell us that digital subscribers are critical to the long-term sustainability of their business,” Facebook Head of News Partnerships Campbell Brown wrote in a blog post. “We know Facebook is one part of the strategy to engage readers and ultimately drive paid subscriptions.”

    This comes about a month after Facebook announced it would make it easier for people to see local news and other information that is relevant to where people live.

    Reply
  29. Tomi Engdahl says:

    BuzzFeed:
    Many news reports blaming Russian Twitter bots, some of which are not Russian and others not bots, are based on a single source and are likely overblown

    Stop Blaming Russian Bots For Everything
    https://www.buzzfeed.com/miriamelder/stop-blaming-russian-bots-for-everything?utm_term=.fhdJVWQoB#.kiPXg0Lod

    “I’m not convinced on this bot thing,” said one of the men behind the Russian bot thing.

    By now you know the drill: massive news event happens, journalists scramble to figure out what’s going on, and within a couple hours the culprit is found — Russian bots.

    Russian bots were blamed for driving attention to the Nunes memo, a Republican-authored document on the Trump-Russia probe. They were blamed for pushing for Roy Moore to win in Alabama’s special election. And here they are wading into the gun debate following the Parkland shooting. “[T]he messages from these automated accounts, or bots, were designed to widen the divide and make compromise even more difficult,” wrote the New York Times in a story following the shooting, citing little more than “Twitter accounts suspected of having links to Russia.”

    This is, not to mince words, total bullshit.

    The thing is, nearly every time you see a story blaming Russian bots for something, you can be pretty sure that the story can be traced back to a single source: the Hamilton 68 dashboard, founded by a group of respected researchers, including Clint Watts and JM Berger, and currently run under the auspices of the German Marshall Fund.

    But even some of the people who popularized that metric now acknowledge it’s become totally overblown.

    So that’s strike one: In what other world would we rely on a single source tool of anonymous provenance?

    And then there’s strike two. Let’s say, despite that, you still really want to put your faith in those conclusions about Russian influence. Why would you do that?

    And here we get to strike three. One of the hardest things to do — either with the accounts “linked to Russian influence efforts online,” whatever that means, or with the Internet Research Agency trolls who spent many months boosting Donald Trump and denigrating Hillary Clinton — is to measure how effective they really were. Did Russian troll efforts influence any votes? How do we even qualify or quantify that? Did tweets from “influencers” actually affect the gun debate in the United States, already so toxic and partisan before “bot” was a household word?

    Even Watts thinks the “blame the bots” shtick has gotten out of control.

    “When Julian Assange says something, Russian influence networks always repeat it,” Watts said.

    Further complicating the Russian bot narrative? The notion that plenty of automated social media influence campaigns are orchestrated right here in the United States. As BuzzFeed News has reported, MicroChip, “a notorious pro-Trump Twitter ringleader,” has orchestrated, and continues to orchestrate, automated networks of Twitter accounts to help push trending topics and advance pro-Trump narratives

    One of the most disorienting parts of today’s geopolitical information warfare is that all sides feel and act as if they’re winning, and it’s hard to know who or what has the most influence.

    The Great Bot Panic, for instance, poses a series of contradictions. It is true that bots are a serious problem. It is also true that the bot problem is exaggerated. It is true that Russian bots are a conspiracy theory that provides a tidy explanation for complicated developments. It is also true that Russian influence efforts may be happening before our eyes without us really knowing the full scope in the moment.

    Reply
  30. Tomi Engdahl says:

    Paul Mozur / New York Times:
    China is prosecuting a citizen who shared a critical message on WhatsApp; message was likely obtained by hacking a phone or via spy in a WhatsApp group

    China Presses Its Internet Censorship Efforts Across the Globe
    https://www.nytimes.com/2018/03/02/technology/china-technology-censorship-borders-expansion.html

    Within its digital borders, China has long censored what its people read and say online. Now, it is increasingly going beyond its own online realms to police what people and companies are saying about it all over the world.

    For years, China has exerted digital control with a system of internet filters known as the Great Firewall, which allows authorities to limit what people see online. To broaden its censorship efforts, Beijing is venturing outside the Great Firewall and paying more attention to what its citizens are saying on non-Chinese apps and services.

    As part of that shift, Beijing has at times pressured foreign companies like Google and Facebook, which are both blocked in China, to take down certain content. At other times, it has bypassed foreign companies entirely and instead directly pushed users of global social media to encourage self-censorship.

    This effort is accelerating as President Xi Jinping consolidates his power.

    Reply
  31. Tomi Engdahl says:

    Craig Silverman / BuzzFeed:
    Some sites change domain names, a practice known as domain hopping, to avoid ad agency blacklists and losing traffic as a result of Facebook’s algorithm changes — Domain hopping is one way publishers are combatting declining traffic from Facebook while also staying one step ahead of advertising blacklists.

    Publishers Are Switching Domain Names To Try And Stay Ahead Of Facebook’s Algorithm Changes

    Domain hopping is one way publishers are combatting declining traffic from Facebook while also staying one step ahead of advertising blacklists. How much longer will it work?
    https://www.buzzfeed.com/craigsilverman/publishers-are-switching-domain-names-to-try-and-stay-ahead?utm_term=.yp2eNMX57#.ay6GyVDQO

    “Once any site is found on a blacklist due to fraud, hate, fake news, or a variety of reasons and see monetization impacted, they drop the domain, acquire a new [one] and start over using a lot of the same content,” Marc Goldberg, the CEO of TrustMetrics, a company that evaluates online publishers and apps for quality, told BuzzFeed News. “This is a common practice among all sites, not just limited to news. Sometimes they keep the domain dormant for a period of time and then use it to try and get back into the ad ecosystem with another trick.”

    “Facebook has recently made changes that negatively affect publishers and we are testing different ways to try and combat those changes,” said the statement from the company, which was founded by former Dartmouth College students Joshua Riddle and David Rufful.

    “Publishers have found that their domains are often penalized by Facebook’s algorithm over policy changes that they don’t announce publicly. As a result you can see your traffic go from 300K people a day to 100K overnight,” he told BuzzFeed News. “Then, after setting up a new domain the traffic almost instantly returns to exactly where it was.”

    The spokesperson also pointed to the company’s recent announcement of an effort to identify “trusted sources” of information that will see their content prioritized in the News Feed.

    Other domain hoppers

    Conservative sites aren’t the only ones shifting domain names. Occupy Democrats, the most influential liberal partisan Facebook page according to a BuzzFeed News analysis last year, hasn’t updated occupydemocrats.com since October.

    Though domain hopping is taking hold among some publishers, it’s possible that it may already be past its prime.

    Reply
  32. Tomi Engdahl says:

    Joshua Benton / Nieman Lab:
    With Smarticles, The Guardian tests a format where algorithms provide new information based on the reader’s last visit and the importance of new developments — Like Circa before it, The Guardian aims to atomize a big breaking story into its individual parts

    What The Guardian has learned trying to build a more intelligent story format — one that knows what you know
    http://www.niemanlab.org/2018/02/what-the-guardian-has-learned-trying-to-build-a-more-intelligent-story-format-one-that-knows-what-you-know/

    Like Circa before it, The Guardian aims to atomize a big breaking story into its individual parts — and then be smart about showing you the right ones at the right time.

    Reply
  33. Tomi Engdahl says:

    American Press Institute:
    Survey of 4,113 subscribers to 90 local newspapers: subscriptions are triggered by promotions, quality and accuracy matter, print and digital subscribers differ — This research was conducted by the Media Insight Project — an initiative of the American Press Institute and the Associated Press-NORC Center for Public Affairs Research

    Paths to Subscription: Why recent subscribers chose to pay for news
    Published 02/27/18 8:00 am
    https://www.americanpressinstitute.org/publications/reports/survey-research/paths-to-subscription/

    This research was conducted by the Media Insight Project — an initiative of the American Press Institute and the Associated Press-NORC Center for Public Affairs Research

    Reply
  34. Tomi Engdahl says:

    Paul Blumenthal / HuffPost:
    A look at parallels between Internet Research Agency’s tactics and Mic.com’s: using emotional, divisive content that increases clicks on social media

    Russian Trolls Used This One Weird Trick To Infiltrate Our Democracy. You’ll Never Believe Where They Learned It.
    https://www.huffingtonpost.com/entry/russian-trolls-internet-research-agency_us_5a96f8cae4b07dffeb6f3434

    What Russia’s dirty tricks campaign had in common with online media.

    The troll operation liked divisive, emotionally charged content because the social media platforms liked divisive, emotionally charged content. Culture-war stuff — issues of race and gender and identity in general, issues that got a rise out of people. Facebook and Twitter were and still are optimized to send traffic to posts about these subjects, and so the troll outfit optimized its content accordingly.

    This was the story of Mic.com, one of the first online media startups to capitalize on left-wing millennial outrage culture. It went hard on social-justice issues and directed ire at everyone from manspreaders to Nazis. Clicky, empurpled headlines oversold conflict and outrage.

    Reply
  35. Tomi Engdahl says:

    Eric Johnson / Recode:
    Interview with Katie Couric on her time at Yahoo: though it spent money on big-name reporters, it didn’t know how to distribute and scale its quality content

    Why Katie Couric left Yahoo
    “They hired some big names, and yet they were in the witness protection program.”
    https://www.recode.net/2018/2/28/17059678/katie-couric-yahoo-journalism-marketing-verizon-marissa-mayer-tim-armstrong-kara-swisher-podcast

    Reply
  36. Tomi Engdahl says:

    Ricardo Bilton / Nieman Lab:
    BuzzSumo study of 100M articles posted in 2017 shows social sharing halved since 2015 and Google sites drove 2x more referrals to publishers than social media — Publishers may be getting dinged — and in some cases destroyed — by Facebook’s move to decrease the amount of publisher content …

    New data shows just how much social sharing has decreased since 2015 (and News Feed tweaks are just one factor)
    http://www.niemanlab.org/2018/03/new-data-shows-just-how-much-social-sharing-has-decreased-since-2015-and-news-feed-tweaks-are-just-one-factor/

    Publishers may be getting dinged — and in some cases destroyed — by Facebook’s move to decrease the amount of publisher content in the News Feed, but the declines in social sharing have long been in motion.

    — Multiple factors are at hand here, according to BuzzSumo. One is that there’s just more competition among publisher content overall, particularly in popular topics like bitcoin, which causes a reduction in average shares as the number of published articles climbs. Private sharing via email and Slack is also on the rise, reducing public sharing further. And then there’s Facebook, which has repeatedly tweaked its algorithm to reduce the spread of viral stories.

    — Search’s share of referral traffic continues to climb. Google sites are now driving twice as many referrals to publishers as social media, as previous data from Parse.ly and Shareaholic has shown. Time is a flat circle, etc.

    Reply
  37. Tomi Engdahl says:

    EU Warns Tech Giants To Remove Terror Content in 1 Hour — or Else
    https://tech.slashdot.org/story/18/03/01/1825224/eu-warns-tech-giants-to-remove-terror-content-in-1-hour—-or-else

    The European Union issued internet giants an ultimatum to remove illegal online terrorist content within an hour, or risk facing new EU-wide laws.

    The European Commission on Thursday issued a set of recommendations for companies and EU nations that apply to all forms of illegal internet material, “from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement. Considering that terrorist content is most harmful in the first hours of its appearance online, all companies should remove such content within one hour from its referral as a general rule.” The commission last year called upon social media companies, including Facebook, Twitter and Google owner Alphabet, to develop a common set of tools to detect, block and remove terrorist propaganda and hate speech. Thursday’s recommendations aim to “further step up” the work already done by governments and push firms to “redouble their efforts to take illegal content off the web more quickly and efficiently.”

    EU Warns Tech Giants to Remove Terror Content in 1 Hour—or Else
    https://www.bloomberg.com/news/articles/2018-03-01/remove-terror-content-in-1-hour-or-else-eu-warns-tech-giants

    EU lays out rules for firms to fight online illegal material
    Facebook, Google among tech firms under pressure to react

    Too Short

    One hour to take down terrorist content is too short, the Computer & Communications Industry Association, which speaks for companies like Google and Facebook, said in a statement that criticized the EU’s plans as harming the bloc’s technology economy.

    “Such a tight time limit does not take due account of all actual constraints linked to content removal and will strongly incentivize hosting services providers to simply take down all reported content,” the group said in a statement.

    The EU stressed that its recommendations send a clear signal to internet companies that the voluntary approach remains the watchdog’s favorite approach for now and that the firms “have a key role to play.”

    Reply
  38. Tomi Engdahl says:

    InfoWars Blacklisted By Major Advertisers After CNN’s Guerrilla Activism
    https://www.zerohedge.com/news/2018-03-04/infowars-blacklisted-major-advertisers-after-cnns-guerrilla-activism

    CNN has taken it upon themselves to contact companies whose advertisements have been played on the InfoWars-linked Alex Jones Channel on YouTube, resulting in a flood of major brands blacklisting the channels for future ad spending.

    Companies often don’t know where their ads are displayed, however they can use exclusion filters to black list channels or content they don’t wish to advertise with.

    Scores of conservative and “conspiracy” YouTube channels have been hit with strikes and bans over the last several weeks following the Parkland, Florida, school shooting. In particular, videos suggesting that survivor David Hogg was being coached or is a paid crisis actor have been struck from the platform and measures taken against uploaders.

    YouTube admitted it had been overly aggressive in its recent enforcement, blaming “newer members” of its 10,000-strong team of moderators.

    Reply
  39. Tomi Engdahl says:

    Fake news spreads faster than true news on Twitter—thanks to people, not bots
    http://www.sciencemag.org/news/2018/03/fake-news-spreads-faster-true-news-twitter-thanks-people-not-bots

    From Russian “bots” to charges of fake news, headlines are awash in stories about dubious information going viral. You might think that bots—automated systems that can share information online—are to blame. But a new study shows that people are the prime culprits when it comes to the propagation of misinformation through social networks. And they’re good at it, too: Tweets containing falsehoods reach 1500 people on Twitter six times faster than truthful tweets, the research reveals.

    We generally think that bots distort the types of information that reaches the public, but—in this study at least—they don’t seem to be skewing the headlines toward false news, he notes. They propagated true and false news roughly equally.

    As it turned out, tweets containing false information were more novel—they contained new information that a Twitter user hadn’t seen before—than those containing true information. And they elicited different emotional reactions, with people expressing greater surprise and disgust. That novelty and emotional charge seem to be what’s generating more retweets. “If something sounds crazy stupid you wouldn’t think it would get that much traction,” says Alex Kasprak, a fact-checking journalist at Snopes in Pasadena, California. “But those are the ones that go massively viral.”

    Reply
  40. Tomi Engdahl says:

    Google promises publishers an alternative to AMP
    https://beta.techcrunch.com/2018/03/08/google-promises-publishers-an-alternative-to-amp/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    Google’s AMP project is not uncontroversial. Users often love it because it makes mobile sites load almost instantly. Publishers often hate it because they feel like they are giving Google too much control in return for better placement on its search pages. Now Google proposes to bring some of the lessons it learned from AMP to the web as a whole. Ideally, this means that users will profit from Google’s efforts and see faster non-AMP sites across the web (and not just in their search engines).

    Reply
  41. Tomi Engdahl says:

    The Largest Ever Study On “Fake News” Was Just Published, And The Results Are Truly Horrifying
    http://www.iflscience.com/brain/on-twitter-lies-spread-faster-than-truth/

    “A lie can travel halfway around the world before the truth can get its boots on.” Appropriately enough, this quote, and others like it, have been wrongly attributed to Mark Twain, Winston Churchill, Benjamin Franklin, and many others, although the sentiment dates back at least to Jonathan Swift. It is doubtful any of them scientifically tested the claim, but in the social media age, we can.

    The Internet has certainly accelerated the rate at which stories can travel the world. For lies, truth, and everything in between

    “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information,” the study authors report in Science. “Whereas the truth rarely diffused to more than 1,000 people, the top 1 percent of false-news cascades routinely diffused to between 1,000 and 100,000 people.”

    All this was despite the fact that the people who routinely spread false rumors had significantly fewer followers than those who mostly spoke the truth

    the problem mainly lies with human tweeters.

    the false ones inspired greater surprise and disgust, while the true ones were more likely to be met with sadness or anticipation. The authors suspect the novelty value of false news, indicated by the surprised response, encourages its spread, but it also appears disgust motivates retweeting more than sadness.

    Reply
  42. Tomi Engdahl says:

    Some hard truths about Twitter’s health crisis
    https://beta.techcrunch.com/2018/03/10/some-hard-truths-about-twitters-health-crisis/?ncid=rss&utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook

    It’s a testament to quite how control freaky and hermetically sealed to criticism the tech industry is that Twitter’s CEO Jack Dorsey went unscripted in front of his own brand livestreaming service this week, inviting users to lob awkward questions at him for the first time ever.

    It’s also a testament to how much trouble social media is in. As I’ve written before, ‘fake news’ is an existential crisis for platforms whose business model requires them to fence vast quantities of unverified content uploaded by, at best, poorly verified users.

    No content, no dice, as it were. But things get a whole lot more complicated when you have to consider what the content actually is; who wrote it; whether it’s genuine or not; and what its messaging might be doing to your users, to others and to society at large.

    As a major MIT study looking at a decade’s worth of tweets — and also published this week — underlines: Information does not spread equally.

    More specifically, fact-checked information that has been rated true seems to be less sharable than fact-checked information that has been rated false. Or to put it more plainly: Novel/outrageous content is more viral.

    Reply
  43. Tomi Engdahl says:

    Columbia Journalism Review:
    Study: mainstream news outlets that embedded IRA tweets were generally using the tweets to reflect provocative, partisan public opinion on social issues

    Most major outlets have used Russian tweets as sources for partisan opinion: study
    https://www.cjr.org/analysis/tweets-russia-news.php

    The New York Times’s Bari Weiss was in the news again yesterday, this time for citing a hoax Twitter account as an example of liberal intolerance. Just how often do such Twitter accounts make it into mainstream media, as @OfficialAntifa did in Weiss’s column?

    While it is well-established that Russians have imitated US citizens on social media, and that they bought thousands of dollars’ worth of social media advertising, the impact of those attempts is not well understood. Special Counsel Robert Mueller’s indictment of 13 Russian agents last month suggests that the agents saw themselves as conducting “information warfare” against the United States to delegitimize the American political process and “sow discord” online.

    Twitter accounts from the Internet Research Agency—a St. Petersburg-based organization directed by individuals with close ties to Vladimir Putin, and subject to Mueller’s scrutiny—successfully made their way from social media into respected journalistic media.

    We found at least one tweet from an IRA account embedded in 32 of the 33 outlets—a total of 116 articles—including in articles published by institutions with longstanding reputations, like The Washington Post, NPR, and the Detroit Free Press, as well as in more recent, digitally native outlets such as BuzzFeed, Salon, and Mic (the outlet without IRA-linked tweets was Vice).

    This past fall, Recode, with the aid of the media-intelligence firm Meltwater, found that stories with IRA tweets appeared even in mainstream news.

    IRA accounts typically made their way into articles when news outlets wanted to illustrate “current opinion” by quoting tweets: Nearly 70 percent of the time, the information contained in the reproduced tweets was opinion about ongoing events.

    IRA tweets were especially prominent in stories about Twitter users’ views or reactions, a growing subgenre of online reporting.

    News and fake news

    Though more often quoted for their expressive opinions, IRA-linked handles did occasionally function as sources of information. In most of these instances, the information provided was real. Only a small fraction (6 percent) of articles contained IRA-linked tweets with empirically false claims, or “fake news.” Instead, real stories were used to stoke outrage—a finding consistent with recent work describing IRA tweets’ use of news links.

    The IRA-linked accounts that made it into news media often espoused stereotypically partisan sentiments on both the right and left.

  44. Tomi Engdahl says:

    Tim Berners-Lee says regulation of the Web may be needed
    https://www.theregister.co.uk/2018/03/12/tim_bernerslee_says_regulation_of_the_web_may_be_needed/

    Social networks have too much power, says Web Daddy, and their profit motive means they won’t act for the good of all

    Sir Timothy Berners-Lee has used the 29th anniversary of the publication of his proposal for an “information management” system that became the World Wide Web to warn his creation is in peril.

    “The web that many connected to years ago is not what new users will find today,” Berners-Lee wrote in his regular birthday letter. “What was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.”

    “These dominant platforms are able to lock in their position by creating barriers for competitors. They acquire startup challengers, buy up new innovations and hire the industry’s top talent. Add to this the competitive advantage that their user data gives them and we can expect the next 20 years to be far less innovative than the last.”

    “What’s more, the fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.”

  45. Tomi Engdahl says:

    Zeynep Tufekci / New York Times:
    YouTube could be one of the most powerful radicalizing tools of this century, given its billion users and algorithms that recommend ever more extreme videos — At one point during the 2016 presidential election campaign, I watched a bunch of videos of Donald Trump rallies on YouTube.

    YouTube, the Great Radicalizer
    https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

    It seems as if you are never “hard core” enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.

    This is not because a cabal of YouTube engineers is plotting to drive the world off a cliff. A more likely explanation has to do with the nexus of artificial intelligence and Google’s business model. (YouTube is owned by Google.) For all its lofty rhetoric, Google is an advertising broker, selling our attention to companies that will pay for it. The longer people stay on YouTube, the more money Google makes.

    What keeps people glued to YouTube? Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general.

    Is this suspicion correct? Good data is hard to come by; Google is loath to share information with independent researchers. But we now have the first inklings of confirmation.

    The Wall Street Journal conducted an investigation of YouTube content with the help of Mr. Chaslot. It found that YouTube often “fed far-right or far-left videos to users who watched relatively mainstream news sources,” and that such extremist tendencies were evident with a wide variety of material. If you searched for information on the flu vaccine, you were recommended anti-vaccination conspiracy videos.

    It is also possible that YouTube’s recommender algorithm has a bias toward inflammatory content.

    YouTube has recently come under fire for recommending videos promoting the conspiracy theory that the outspoken survivors of the school shooting in Parkland, Fla., are “crisis actors” masquerading as victims.

    What we are witnessing is the computational exploitation of a natural human desire: to look “behind the curtain,” to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.

    So too our natural curiosity about the unknown can lead us astray on a website that leads us too much in the direction of lies, hoaxes and misinformation.

    In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal.

    This situation is especially dangerous given how many people — especially young people — turn to YouTube for information.

  46. Tomi Engdahl says:

    Tech giants should take the rap for enabling fake news, boffins tell EU
    Yes, yeesss, give us lots of new things to do, say academics
    https://www.theregister.co.uk/2018/03/12/tech_giants_are_fuelling_fake_news_eu_told/

    Giant US tech platforms are spreading misinformation and deliberately hiding their algorithms to evade scrutiny, according to a report for the European Commission.

    The expert working group on fake news and disinformation headed by law professor Dr Madeleine de Cock Buning warned there’s no quick fix to the problem – and avoided quantifying how big the problem is. She noted the phrase “fake news” has been “appropriated and used misleadingly by powerful actors to dismiss coverage that is simply found disagreeable” – whoever that may be.

    The commission cited a pan-EU survey from last month in which 83 per cent of respondents thought misinformation was a threat to democracy, although closer inspection revealed enormous differences in perception across the member states.

    Final report of the High Level Expert Group on Fake News and Online Disinformation
    https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

    The HLEG advises the Commission against simplistic solutions. Any form of censorship either public or private should clearly be avoided. The HLEG’s recommendations aim instead to provide short-term responses to the most pressing problems, longer-term responses to increase societal resilience to disinformation, and a framework for ensuring that the effectiveness of these responses is continuously evaluated, while new evidence-based responses are developed.

    The multi-dimensional approach recommended by the HLEG is based on a number of interconnected and mutually reinforcing responses.
    These responses rest on five pillars designed to:

    enhance transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
    promote media and information literacy to counter disinformation and help users navigate the digital media environment;
    develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
    safeguard the diversity and sustainability of the European news media ecosystem, and
    promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.

  47. Tomi Engdahl says:

    Tom Miles / Reuters:
    UN human rights experts investigating a possible genocide in Myanmar say that Facebook has played a role in spreading hate speech in the country — GENEVA (Reuters) – U.N. human rights experts investigating a possible genocide in Myanmar said on Monday that Facebook had played a role in spreading hate speech there.

    U.N. investigators cite Facebook role in Myanmar crisis
    https://www.reuters.com/article/us-myanmar-rohingya-facebook/u-n-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2PN

    Facebook had no immediate comment on the criticism on Monday, although in the past the company has said that it was working to remove hate speech in Myanmar and to ban users who consistently shared such content.

    More than 650,000 Rohingya Muslims have fled Myanmar’s Rakhine state into Bangladesh since insurgent attacks sparked a security crackdown last August. Many have provided harrowing testimonies of executions and rapes by Myanmar security forces.

    Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar, told reporters that social media had played a “determining role” in Myanmar.

    “It has … substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media,” he said.

  48. Tomi Engdahl says:

    Foo Yun Chee / Reuters:
    EU panel calls for voluntary code of conduct for platforms to fight disinformation, but consumer group says panel ignored a core cause: ad-based business models

    EU experts’ fake news report draws false conclusions: consumer group
    https://www.reuters.com/article/us-eu-internet-fakenews/eu-experts-fake-news-report-draws-false-conclusions-consumer-group-idUSKCN1GO2FO

    BRUSSELS (Reuters) – A consumer group on Monday said experts appointed by the European Commission to advise on tackling fake news had ignored the business model that gave the likes of Google (GOOGL.O) and Facebook (FB.O) a motive for disseminating it.

    The EU executive will publish a non-binding policy paper on the subject in the spring.

    The group of 39 experts it appointed, including representatives from social media, news media and the public, called for a code of principles for online platforms and social networks.

    The experts also said authorities should let the companies regulate themselves.

    “The burden for de-bunking fake news should not rest on people,” the consumer group’s representative said, adding that the non-binding code of principles could turn out to be toothless.

