Journalism and Media 2017

I wrote about journalism and media trends a few years ago, so it is time for an update. What is the state of journalism and news publishing in 2017? NiemanLab’s predictions for 2017 are a good place to start thinking about what lies ahead for journalism. There, Matt Waite puts us in our place straight away by telling us that the people running the media are the problem.

There have been changes in tech publishing. In January 2017, International Data Group, the owner of PCWorld magazine and market researcher IDC, said it was being acquired by China Oceanwide Holdings Group and IDG Capital, the investment management firm run by IDG China executive Hugo Shong. In 2016 Arrow bought EE Times, EDN, TechOnline and lots more from UBM.

 

Here are some article links and bits of information on journalists and media in 2017:

Soothsayers’ guides to journalism in 2017 article takes a look at journalism predictions and the value of this year’s predictions.

What Journalism Needs To Do Post-Election article tells that faced with the growing recognition that the electorate was uninformed or, at minimum, deeply in the thrall of fake news, far too many journalists are responding not with calls for change but by digging in deeper to exactly the kinds of practices that got us here in the first place.

Fake News Is About to Get Even Scarier than You Ever Dreamed article says that what we saw in the 2016 election is nothing compared to what we need to prepare for in 2020 as incipient technologies appear likely to soon obliterate the line between real and fake.

YouTube’s ex-CEO and co-founder Chad Hurley sees a massive amount of information on the problem, which will lead to a backlash from people.

Headlines matter article tells that in 2017, headlines will matter more than ever and journalists will need to wrest control of headline writing from social-optimization teams. People get their news from headlines now in a way they never did in the past.

Why new journalism grads are optimistic about 2017 article tells that since today’s college journalism students have been in school, the forecasts for their futures have been filled with words like “layoffs,” “cutbacks,” “buyouts” and “freelance.” Still, many people are optimistic about the future because the main motivation for being a journalist is often “to make a difference.”

Updating a social media account can be a serious job. Zuckerberg has 12+ Facebook employees helping him with posts and comments on his Facebook page and professional photographers to snap personal moments.

Wikipedia Is Being Ripped Apart By a Witch Hunt For Secretly Paid Editors article tells that with undisclosed paid editing on the rise, Wikipedians and the Wikimedia Foundation are working together to stop the practice without discouraging user participation. Paid editing is permissible under the Wikimedia Foundation’s terms of use as long as editors disclose these conflicts of interest on their user pages, but not all paid editors make these disclosures.

Big Internet giants are working on how to make content work better on mobile devices. Instant Articles is a new way for any publisher to create fast, interactive articles on Facebook. Google’s AMP (Accelerated Mobile Pages) is a project that aims to accelerate content on mobile devices. Both of those systems have their advantages and problems.
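
As a small illustration of how AMP discovery works in practice, canonical article pages advertise their AMP version through a link tag with rel="amphtml". The sketch below shows how a crawler might find that link; it assumes the third-party requests package and a placeholder URL, and it is only a minimal example of the convention, not a full AMP validator.

```python
# Minimal sketch (not a full AMP validator): find the AMP version of an article,
# if the canonical page advertises one via <link rel="amphtml">.
# Assumes the third-party `requests` package; the URL below is only a placeholder.
from html.parser import HTMLParser
import requests

class AmpLinkFinder(HTMLParser):
    """Collects the href of a <link rel="amphtml"> tag, if present."""
    def __init__(self):
        super().__init__()
        self.amp_url = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            attrs = dict(attrs)
            if (attrs.get("rel") or "").lower() == "amphtml":
                self.amp_url = attrs.get("href")

def find_amp_url(article_url):
    parser = AmpLinkFinder()
    parser.feed(requests.get(article_url, timeout=10).text)
    return parser.amp_url

if __name__ == "__main__":
    print(find_amp_url("https://example.com/some-article"))  # placeholder URL
```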

Clearing Out the App Stores: Government Censorship Made Easier article tells that there’s a new form of digital censorship sweeping the globe, and it could be the start of something devastating. The centralization of the internet via app stores has made government censorship easier. If the app isn’t in a country’s app store, it effectively doesn’t exist. For more than a decade, we users of digital devices have actively championed an online infrastructure that now looks uniquely vulnerable to the sanctions of despots and others who seek to control information.

2,357 Comments

  1. Tomi Engdahl says:

    More anti-immigrant posts = more violence

    Study ties Facebook engagement to attacks on refugees
    https://techcrunch.com/2018/08/21/study-ties-facebook-engagement-to-attacks-on-refugees/?utm_source=tcfbpage&sr_share=facebook

    More anti-immigrant posts, more violence

    A study of the circumstances and demographics attendant on attacks against refugees and immigrants in Germany has shown that Facebook use appears to be deeply linked with the frequency of violent acts. Far from being mere trolling or isolated expressions of controversial political opinions, spikes in anti-refugee posts were predictive of violent crimes against those groups.

    Now, if the anti-refugee rhetoric spreads via social media, then we can expect more crimes to occur in areas where there is more social media use, right? And specifically, areas where there is more activity among anti-refugee groups would see the most.

    A quick estimate on their part suggests that the social media activity may have increased attacks by 13 percent or so

    Correlation versus causation
    No doubt many readers will be skeptical of any study like this one; after all, these are very complex issues with many moving parts, and correlations may appear between things regardless of whether those things directly cause or affect one another.

    As the researchers say, Facebook isn’t just plain causing violence to happen. The places where it happens are often historically right-wing places that have had higher incidence of violence and hate crimes in the past. But it seems inescapable that Facebook is nevertheless an important way that refugee-related hatred and vitriol in particular is spread, as evidenced by the lack of increases in violence when the social network is unavailable.

    Reply
  2. Tomi Engdahl says:

    Out-of-control censorship machines removed my article warning of out-of-control censorship machines
    https://juliareda.eu/2018/08/censorship-machines-gonna-censor/

    A few days ago, about a dozen articles and campaign sites criticising EU plans for copyright censorship machines silently vanished from the world’s most popular search engine. Proving their point in the most blatant possible way, the sites were removed by exactly what they were warning of: Copyright censorship machines.

    Among the websites that were made impossible to find: A blog post of mine in which I inform Europeans about where their governments stand on online censorship in the name of copyright and a campaign site warning of copyright law that favors corporations over free speech.

    Reply
  3. Tomi Engdahl says:

    Sara Fischer / Axios:
    Google says it removed 39 YouTube channels, 6 Blogger blogs and 13 Google+ accounts tied to a misinformation campaign by Iran state media with help from FireEye — Google has uncovered a misinformation attack across several of its properties that it has connected to the Islamic Republic of Iran Broadcasting …

    Google finds evidence of attack linked to Iran state media
    https://www.axios.com/google-finds-evidence-of-attack-linked-to-iran-backed-media-1535046370-573d2b45-06eb-4499-802f-dcd8e8076bf7.html

    Google has uncovered a misinformation attack across several of its properties that it has connected to the Islamic Republic of Iran Broadcasting (IRIB), the company said in a statement Thursday.

    Why it matters: Facebook and Twitter announced earlier this week that they have uncovered coordinated misinformation campaigns linked to Iran, but this is the first time any tech firm has said it found a direct link between Iran state media and the attack.

    The company was aided by FireEye, the same cybersecurity firm that helped Facebook uncover misinformation attacks earlier this week, to identify three of the Youtube channels connected to Iran.

    What’s next? Google warns that phishing attacks, attempts by bad actors to trick users into hacking their devices and accounts, remain a threat to all email users and it is reminding Gmail users to be vigilant.

    Reply
  4. Tomi Engdahl says:

    New York Times:
    A look at how FireEye helped Facebook identify Iran-linked fake accounts, after working on the DNC hack in 2016 — SAN FRANCISCO — FireEye, a cybersecurity company that has been involved in a number of prominent investigations, including the 2016 attack on the Democratic National Committee …

    How FireEye Helped Facebook Spot a Disinformation Campaign
    https://www.nytimes.com/2018/08/23/technology/fireeye-facebook-disinformation.html

    FireEye, a cybersecurity company that has been involved in a number of prominent investigations, including the 2016 attack on the Democratic National Committee, alerted Facebook in July that it had a problem.

    Security analysts at the company noticed a cluster of inauthentic accounts and pages on Facebook that were sharing content from a site called Liberty Front Press. It looked like a news site, but most of its content was stolen from outlets like Politico and CNN. The small amount of original material was written in choppy English.

    FireEye’s tip eventually led Facebook to remove 652 fake accounts and pages. And Liberty Front Press, the common thread among much of that sham activity, was linked to state media in Iran, Facebook said on Tuesday.

    Facebook’s latest purge of disinformation from its platforms highlighted the key role that cybersecurity outfits are playing in policing the pages of giant social media platforms. For all of their wealth and well-staffed security teams, companies like Facebook often rely on outside firms and researchers for their expertise.

    The discovery of the disinformation campaign also represented a shift in the bad behavior that independent security companies are on the lookout for. Long in the business of discovering and fending off hacking attempts and all sorts of malware, security companies have expanded their focus to the disinformation campaigns that have plagued Facebook and other social media for the past few years.

    Attributing attacks to Iran has been tricky. Security experts who have studied Iranian hackers said many take part in attacks, or disinformation campaigns, while they are still in college. They are often recruited for government work, but may also float in and out of government-backed contracts.

    Reply
  5. Tomi Engdahl says:

    Issie Lapowsky / Wired:
    A look at SurfSafe, a browser plug-in that cross-references images with 100 trusted news sites and fact-checking organizations to determine if images are fake
    https://www.wired.com/story/surfsafe-browser-extension-save-you-from-fake-photos
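
    The excerpt above does not describe SurfSafe’s internals, but checking an image against a set of reference images is commonly done with perceptual hashing. The sketch below is a hypothetical illustration of that general idea using the Pillow and imagehash packages; the file paths and the distance threshold are invented and this is not SurfSafe’s actual implementation.

```python
# Hypothetical illustration of image cross-referencing via perceptual hashing.
# This is NOT SurfSafe's actual implementation; file paths and the distance
# threshold are invented. Requires the Pillow and imagehash packages.
from PIL import Image
import imagehash

def nearest_reference(candidate_path, reference_paths, max_distance=8):
    """Return (reference_path, hamming_distance) of the closest reference image,
    or None if every reference differs by more than max_distance hash bits."""
    candidate = imagehash.phash(Image.open(candidate_path))
    best = None
    for path in reference_paths:
        distance = candidate - imagehash.phash(Image.open(path))
        if best is None or distance < best[1]:
            best = (path, distance)
    return best if best and best[1] <= max_distance else None

match = nearest_reference("seen_on_social_media.jpg",
                          ["trusted/reuters_photo.jpg", "trusted/ap_photo.jpg"])
print(match or "no close match among trusted images")
```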

    Reply
  6. Tomi Engdahl says:

    Rowan Atkinson on England and Freedom of Speech
    https://www.youtube.com/watch?v=h3UeUnRxE0E

    Nailed it.

    Britain has got to the point where even Mr Bean is trying to knock some sense back into the place

    mr.bean speaks more sense in one clip than most politicians in one life.

    years later, Rowan’s message only gets more and more relevant

    The irony of it is, under the guise of promoting tolerance, we’re losing our right to freedom of speech…which in itself is intolerant.

    Reply
  7. Tomi Engdahl says:

    Poor service easily leads to a social media storm that is hard for an entrepreneur to protect against – insults have to be tolerated in the name of freedom of speech
    https://yle.fi/uutiset/3-10360600

    Reply
  8. Tomi Engdahl says:

    Jacqueline Howard / CNN:
    Study: IRA-backed trolls flooded social media to amplify online debates about vaccines, often linking messages around vaccines to racial or class disparities — – Russian trolls and Twitter bots amplified online vaccine debates between 2014 and 2017, a new study suggests

    Why Russian trolls stoked US vaccine debates
    https://edition.cnn.com/2018/08/23/health/russia-trolls-vaccine-debate-study/

    Russia’s meddling online went beyond the 2016 US presidential election and into public health, amplifying online debates about vaccines, according to a new study.
    The recent research project was intended to study how social media and survey data can be used to better understand people’s decision-making process around vaccines. It ended up unmasking some unexpected key players in the vaccination debate: Russian trolls.

    The researchers started examining Russian troll accounts as part of their study after NBC News published its database of more than 200,000 tweets tied to Russian-linked accounts this year. They noticed vaccine-related tweets among the Russian troll accounts, and some tweets even used the hashtag #VaccinateUS.

    These known Russian troll accounts were tied to the Internet Research Agency, a company backed by the Russian government that specializes in online influence operations.
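
    As a side note on method, the researchers started from a published dump of troll-account tweets and filtered for vaccine-related content. The snippet below is a hypothetical sketch of that kind of filtering with pandas; the file name and column name are assumptions, not the actual NBC News schema.

```python
# Hypothetical sketch of the filtering step described above: pull vaccine-related
# tweets out of a dump of troll-account tweets. The file name and column name are
# assumptions, not the actual NBC News schema. Requires pandas.
import pandas as pd

tweets = pd.read_csv("troll_tweets.csv")        # assumed: one row per tweet, "text" column
vaccine_terms = ["vaccine", "vaccination", "#vaccinateus"]
mask = tweets["text"].str.lower().str.contains("|".join(vaccine_terms), na=False)
vaccine_tweets = tweets[mask]

print(len(vaccine_tweets), "vaccine-related tweets found")
print(vaccine_tweets["text"].head())
```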

    “One of the things about them that was weird was that they tried to — or they seemed to try to — relate vaccines to issues in American discourse, like racial disparities or class disparities that are not traditionally associated with vaccination,” Broniatowski said.

    The consensus among doctors is that vaccines are safe, effective and important for public health, as they help reduce the spread of preventable disease and illness. A Pew Research Center study found last year that the vast majority of Americans support vaccine requirements.

    Why trolls tweet about vaccines

    “This is consistent with a strategy of promoting discord across a range of controversial topics — a known tactic employed by Russian troll accounts. Such strategies may undermine the public health: normalizing these debates may lead the public to question long-standing scientific consensus regarding vaccine efficacy,”

    “The Internet Research Agency has been known to engage in certain behaviors. There’s the one everybody knows about, which is the election. They also tend to engage in other topics that promote discord in American society,” Broniatowski said.

    “It’s basically the hot-button political issues of the day. They’re happy to grab onto whatever is salient,” he said. “I think that they want us focused on our own problems so that we don’t focus on them.

    “If most of our energies are focused internally with divisions inside of the United States — or divisions between the United States and, say, Europe — that leaves a window open for Russia to expand its sphere of influence.”

    In the 1980s, there was a Soviet campaign to spread false news about the AIDS epidemic in the US.

    Russian trolls could have amplified online vaccine debates in other countries as well

    DiResta pointed to how Italy’s Five Star movement and its coalition partner, the far-right League party, both have voiced their opposition to compulsory vaccinations.

    “Both real people and trolls are capitalizing on that mistrust to push conspiracy theories out to vulnerable audiences,”

    “This isn’t just happening on Twitter. This is happening on Facebook, and this is happening on YouTube, where searching for vaccine information on social media returns a majority of anti-vaccine propaganda,”

    Anti-vaccine sentiment has taken root in some European countries. Cases of measles have reached a record high in Europe this year,

    “There are messages being put out there that aren’t scientifically sound,” Allem said.
    “This has the potential to drown out scientifically sound messages from health care providers

    Reply
  9. Tomi Engdahl says:

    Jeff Horwitz / Associated Press:
    Sources: Enquirer used a safe for documents about payments and stories it killed as part of ties with Trump; Enquirer first promoted Trump for president in 2010 — WASHINGTON (AP) — The National Enquirer kept a safe containing documents on hush money payments and other damaging stories …

    AP: National Enquirer hid damaging Trump stories in a safe
    https://apnews.com/amp/143be3c52d4746af8546ca6772754407

    The National Enquirer kept a safe containing documents on hush money payments and other damaging stories it killed as part of its cozy relationship with Donald Trump leading up to the 2016 presidential election, people familiar with the arrangement told The Associated Press.

    Reply
  10. Tomi Engdahl says:

    Paul Farhi / Washington Post:
    A look at the harm from a spike in newsprint costs driven by Trump tariffs; the US International Trade Commission will consider the tariffs on Tuesday — Print isn’t dead. But the soaring cost of newsprint is contributing to the slow death of America’s newspapers.
    http://www.washingtonpost.com/lifestyle/style/how-a-trump-tariff-is-strangling-american-newspapers/2018/08/23/2ecc7fb6-a62a-11e8-8fac-12e98c13528d_story.html

    Reply
  11. Tomi Engdahl says:

    Motherboard:
    Leaked docs and sources describe Facebook’s content moderation apparatus and the logistics of trying to moderate billions of posts a week in 100+ languages

    The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People
    https://motherboard.vice.com/en_us/article/xwk9zd/how-facebook-content-moderation-works

    Moderating billions of posts a week in more than a hundred languages has become Facebook’s biggest challenge. Leaked documents and nearly two dozen interviews show how the company hopes to solve it.

    This spring, Facebook reached out to a few dozen leading social media academics with an invitation: Would they like to have a casual dinner with Mark Zuckerberg to discuss Facebook’s problems?

    According to five people who attended the series of off-the-record dinners at Zuckerberg’s home in Palo Alto, California, the conversation largely centered around the most important problem plaguing the company: content moderation.

    Reply
  12. Tomi Engdahl says:

    Issie Lapowsky / Wired:
    A look at SurfSafe, a browser plug-in that cross-references images with 100 trusted news sites and fact-checking organizations to determine if images are fake

    https://www.wired.com/story/surfsafe-browser-extension-save-you-from-fake-photos/

    Reply
  13. Tomi Engdahl says:

    Felix Salmon / Slate:
    A New York Times article on a preliminary study of Facebook use and hate crimes in Germany overstated the complicated study’s conclusions

    The New York Times Shouldn’t Have Built Its Facebook and Anti-Refugee Violence Story Around That One Study
    https://slate.com/technology/2018/08/the-nytimes-shouldnt-have-relied-so-heavily-on-that-facebook-and-anti-refugee-study.html

    It’s an interesting bit of research, but it’s preliminary and mostly focused on methodology.

    In December 2017, a pair of Warwick University post-docs, Karsten Müller and Carlo Schwarz, published an intriguing and clever attempt to measure whether there was any correlation between hate crime and social media usage in Germany.

    The media first picked up on this SSRN paper in January 2018, when the Economist published a “Daily chart” under the headline “In Germany, online hate speech has real-world consequences.” The short article, citing the paper, explained that the research had shown that “for every four additional Facebook posts critical of refugees, there was one additional anti-refugee incident,” while taking pains to note that “this correlation is of course no guarantee of causation.” Notably, the Economist article only addressed the volume of right-wing anti-refugee posts, rather than the amount of Facebook use broadly.

    The New York Times, by contrast, took a very different approach
    The headline, too, was suitably hedged—“Facebook Fueled Anti-Refugee Attacks in Germany, New Research Suggests.”

    Immediately, journalists and economists around the world started downloading the paper to try and work out whether it was reliable, and whether it said what the Times said that it said. Neither task is easy: The paper is technically complicated, difficult to read and understand. Its results are hard to judge without replicating a lot of hard statistical work. And to make matters worse, my search for the breathtaking statistic revealed that it was never explicitly stated in the paper.

    researchers updated the paper on the SSRN site the very day the Times article ran.

    It makes intuitive sense that in areas of the country where Facebook usage is high, people will be more likely to see anti-refugee sentiment on Facebook. It arguably makes sense to then assume that these people would also be more likely to act on that sentiment by attacking refugees. What the Warwick paper suggests is that this intuition is empirically true.

    But in reality, the breathtaking statistic didn’t come directly from the paper. Rather, it came from long phone conversations in which the paper’s authors walked the newspaper’s journalists through the data, and the methodology, and the results.

    What is abundantly clear, however, is that the authors of the paper are more interested in presenting a methodology for trying to estimate these effects than they are in presenting the actual results.

    That’s all OK. The white paper was written by a pair of post-docs without any peer review, and there’s no particular reason why it should have been ready for the social-media klieg lights that suddenly got trained on it.

    This means that even if there is extremely strong correlation between anti-refugee sentiment on Facebook and attacks in the real world, this study isn’t designed to assess if one is causing the other.
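
    For readers who want to see concretely what a correlation between posts and attacks means, and why it cannot settle causation, the toy sketch below computes a Pearson correlation on made-up per-municipality numbers. It is purely illustrative and has nothing to do with the Warwick paper’s actual data or methodology.

```python
# Purely illustrative: a correlation between per-municipality "anti-refugee posts"
# and "incidents" computed on made-up numbers. This is the kind of association the
# study reports; by itself it says nothing about causation. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
posts = rng.poisson(lam=20, size=400)            # invented per-municipality post counts
incidents = rng.poisson(lam=1 + 0.05 * posts)    # invented incident counts, loosely tied to posts

r, p_value = stats.pearsonr(posts, incidents)
print(f"Pearson r = {r:.2f}, p = {p_value:.3g}")
```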

    Reply
  14. Tomi Engdahl says:

    Lucia Moses / Digiday:
    Chartbeat study: only a third of 159 publishers that adopted Google’s AMP in 2017 saw clear statistical evidence of a traffic increase

    Google AMP beat Facebook Instant Articles, but publishers start to question AMP’s benefits
    https://digiday.com/media/google-amp-beat-facebook-instant-articles-publishers-start-question-amps-benefits/

    Google unveiled its open-source Accelerated Mobile Pages format in 2016 to improve the mobile web by making pages load faster (and match Facebook‘s own fast Instant Articles format). While Instant Articles has fallen out of use with publishers, AMP has contributed to Google overtaking Facebook as a traffic referral source. But despite AMP becoming a growing share of web traffic, some are calling its benefits into question.

    A new study by Chartbeat published today shows that only a third of publishers actually see clear evidence of a traffic increase from AMP.

    If AMP was delivering a lot more traffic, that could make up for the revenue shortfall publishers often complain about.

    In fact, AMP represents just a sliver of the revenue publishers make from their distributed content, according to a report from Digital Content Next. Other complaints have been that AMP limits publishers’ ability to promote other products, use elaborate editorial formats and get data back on reader behavior.

    “AMP had a lot of hype and promise,” said Chris Breaux, director of data science at Chartbeat. “It’s really good for users in providing a consistent experience in terms of page-load time. The real question is, do you see more traffic than you would have if you didn’t do the implementation? The answer for two-thirds of publishers is, no.”

    Nathan Kontny, CTO of development firm Rockstar Coders, said after he enabled AMP on the company’s site, conversions dropped 70 percent, leading him to disable it. “Conversions are the lifeblood of our business,” he said.

    An earlier Chartbeat study this year found that AMP was delivering a traffic and engagement boost. The new study found that there was a 22 percent increase in Google mobile traffic on average across publishers but that individual publisher effects varied widely.
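
    As an aside, “clear statistical evidence of a traffic increase” for an individual publisher boils down to asking whether post-AMP traffic differs from the pre-AMP baseline by more than day-to-day noise. The sketch below shows one simple way to pose that question with a two-sample t-test on invented daily pageview numbers; it is not Chartbeat’s methodology.

```python
# Sketch of one way to ask whether a single publisher's traffic change is
# distinguishable from day-to-day noise: compare daily Google mobile pageviews
# before and after AMP adoption with a two-sample t-test. The numbers are
# invented and this is not Chartbeat's methodology. Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.normal(loc=100_000, scale=12_000, size=90)   # 90 days before adopting AMP
after = rng.normal(loc=112_000, scale=12_000, size=90)    # 90 days after

t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
change = after.mean() / before.mean() - 1
print(f"average change: {change:+.1%}, p = {p_value:.3g}")
```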

    In the end, the benefit seems to come down to how well the publisher implements AMP

    “The message for publishers is, it takes a fair amount of work to support a new platform,” Breaux said. “There are also challenges on the ad side.”

    Most publishers that have adopted AMP still use it.

    ”A lot of publishers are nervous that if they don’t adopt it, either through overt priority or consumer preferences, they’ll be harmed,”

    Reply
  15. Tomi Engdahl says:

    The consequences of indecency
    https://techcrunch.com/2018/08/23/the-consequences-of-indecency/

    Ron Wyden (D-OR) has served in the United States Senate since 1996. He previously served in the United States House of Representatives from 1981 to 1996.
    I wrote the law that allows sites to be unfettered free speech marketplaces. I wrote that same law, Section 230 of the Communications Decency Act, to provide vital protections to sites that didn’t want to host the most unsavory forms of expression. The goal was to protect the unique ability of the internet to be the proverbial marketplace of ideas while ensuring that mainstream sites could reflect the ethics of society as a whole.

    In general, this has been a success — with one glaring exception. I never expected that internet CEOs would fail to understand one simple principle: that an individual endorsing (or denying) the extermination of millions of people, or attacking the victims of horrific crimes or the parents of murdered children, is far more indecent than an individual posting pornography.

    Social media cannot exist without the legal protections of Section 230. That protection is not constitutional, it’s statutory. Failure by the companies to properly understand the premise of the law is the beginning of the end of the protections it provides. I say this because their failures are making it increasingly difficult for me to protect Section 230 in Congress. Members across the spectrum, including far-right House and Senate leaders, are agitating for government regulation of internet platforms. Even if government doesn’t take the dangerous step of regulating speech, just eliminating the 230 protections is enough to have a dramatic, chilling effect on expression across the internet.

    Reply
  16. Tomi Engdahl says:

    Skim reading is the new normal. The effect on society is profound
    https://www.theguardian.com/commentisfree/2018/aug/25/skim-reading-new-normal-maryanne-wolf

    When the reading brain skims texts, we don’t have time to grasp complexity, to understand another’s feelings or to perceive beauty. We need a new literacy for the digital age

    Look around on your next plane trip. The iPad is the new pacifier for babies and toddlers. Younger school-aged children read stories on smartphones; older boys don’t read at all, but hunch over video games. Parents and other passengers read on Kindles or skim a flotilla of email and news feeds. Unbeknownst to most of us, an invisible, game-changing transformation links everyone in this picture: the neuronal circuit that underlies the brain’s ability to read is subtly, rapidly changing – a change with implications for everyone from the pre-reading toddler to the expert adult.

    the present reading brain enables the development of some of our most important intellectual and affective processes: internalized knowledge, analogical reasoning, and inference; perspective-taking and empathy; critical analysis and the generation of insight. Research surfacing in many parts of the world now cautions that each of these essential “deep reading” processes may be under threat as we move into digital-based modes of reading.

    This is not a simple, binary issue of print vs digital reading and technological innovation.

    As MIT scholar Sherry Turkle has written, we do not err as a society when we innovate, but when we ignore what we disrupt or diminish while innovating.

    the result is that less attention and time will be allocated to slower, time-demanding deep reading processes, like inference, critical analysis and empathy, all of which are indispensable to learning at any age.

    Increasing reports from educators and from researchers in psychology and the humanities bear this out.

    students actively avoid the classic literature of the 19th and 20th centuries because they no longer have the patience to read longer, denser, more difficult texts. We should be less concerned with students’ “cognitive impatience,” however, than by what may underlie it: the potential inability of large numbers of students to read with a level of critical analysis sufficient to comprehend the complexity of thought and argument found in more demanding texts, whether in literature and science in college, or in wills, contracts and the deliberately confusing public referendum questions citizens encounter in the voting booth.

    Multiple studies show that digital screen use may be causing a variety of troubling downstream effects on reading comprehension in older high school and college students.

    Ziming Liu from San Jose State University has conducted a series of studies which indicate that the “new norm” in reading is skimming, with word-spotting and browsing through the text. Many readers now use an F or Z pattern when reading in which they sample the first line and then word-spot through the rest of the text. When the reading brain skims like this, it reduces time allocated to deep reading processes. In other words, we don’t have time to grasp complexity, to understand another’s feelings, to perceive beauty, and to create thoughts of the reader’s own.

    Katzir’s research has found that the negative effects of screen reading can appear as early as fourth and fifth grade – with implications not only for comprehension, but also on the growth of empathy.

    The possibility that critical analysis, empathy and other deep reading processes could become the unintended “collateral damage” of our digital culture is not a simple binary issue about print vs digital reading. It is about how we all have begun to read on any medium and how that changes not only what we read, but also the purposes for why we read.

    There’s an old rule in neuroscience that does not alter with age: use it or lose it.

    If we work to understand exactly what we will lose, alongside the extraordinary new capacities that the digital world has brought us, there is as much reason for excitement as caution.

    We need to cultivate a new kind of brain: a “bi-literate” reading brain capable of the deepest forms of thought in either digital or traditional mediums.

    Reply
  17. Tomi Engdahl says:

    The cure for Facebook’s fake news infection? It might be these women
    At Facebook, where men outnumber women almost two to one, the future of news is female.
    https://www.cnet.com/features/the-cure-for-facebooks-fake-news-infection-it-might-be-these-women/

    To put something in context, she prefaces sentences with “In a world…”

    “In a world where people with different viewpoints and opinions cannot come together around a shared set of facts,” she tells me in an interview, “that’s a very dangerous place to be.”

    But make no mistake, Hardiman and her colleagues are on the kind of high-stakes mission that’s ripe for cinematic retelling. They’re trying to wipe aside fake news from Facebook’s massive social network, a critical source of information to 2.23 billion people, while also fostering a support system for more legitimate reporting. Their success or failure will affect the health of the news industry and the well-being of democracy worldwide.

    And at Facebook, where men outnumber women nearly two to one, the commanders of this mission are women.

    Facebook’s two dedicated news groups — Hardiman’s news products team and a news partnerships team run by former CNN and NBC anchor Campbell Brown — are led by women. A majority of the managers on both teams are women. And the phalanx of Facebook’s News Feed employees that handles issues like disinformation and hoaxes has five product managers; three are women.

    “They are fearless. They are fierce,” says Hardiman of her female colleagues. “It’s because, when you think about how to spend your time, for many of us, there’s no greater thing that we can try to do than to solve these problems as best as we can.”

    And today, society is re-examining Silicon Valley’s norms — disruptive, witlessly idealistic and, yes, male-dominated — for the nightmares they’ve created. Twitter, Google’s YouTube and others share in this reckoning with big tech’s flaws, but no company encapsulates it more than Facebook.

    If a departure from the norm is what Facebook needs, it makes sense to put the fate of news there into the hands of women

    “Trying to do this at scale is hard,” says Brown, who speaks with the kind of self-composed polish that persists in the off-camera mannerisms of TV journalists. But, she says, fixing Facebook’s news problems isn’t impossible. “Because [stamp] we have the resources [stamp]. It’s a huge [stamp] priority for us, not only for Facebook but for our country.” Stamp.

    Mark always says move faster, but I worry we can’t move fast enough, as fast as we need to.

    Brown’s news partnerships team focuses on Facebook’s relationships with news outlets. Hardiman’s news products team develops site features for news content, like a red “breaking news” label on a story about an earthquake that just struck. They both contribute to the Facebook Journalism Project, a collection of programs providing tools for publishers and building news products in collaboration with them.

    “Last year was very much about trying to reduce the amount of false news on the platform, reduce the bad,” Brown says. This year, the news teams have begun to focus on “elevating the good,” she says. They’re focusing now on the programs and features to help legitimate news thrive on — and off — Facebook.

    “Elevating the good” may seem less thorny than dealing with malevolent hoaxes, but, really, all of Facebook’s interaction with news is fraught.

    Generally speaking, “elevating the good” includes identifying trusted, informative outlets and prioritizing them in News Feed. Hardiman’s team also espouses “collaborative product design.” It works directly with news organizations as they build, so the results actually suit publishers’ needs.

    The progress has been slow.

    “The only area I’ve seen positive light — and I don’t know just because it’s so early — is local,”

    As far as “reducing the bad,” Facebook’s fumbles with Alex Jones underscore the company’s weaknesses reining in fake news itself.

    Jones’ Infowars assumes the guise of a news organization to prolifically deliver conspiracy theories.

    News Feed ranking is Facebook’s weapon of choice against fake news. Facebook doggedly resists removing disinformation outright unless it also violates community standards like those against harassment or hate speech, a la Alex Jones’ Infowars. Its preference is to “downrank” disinformation — effectively burying it at the bottom of News Feed.

    Downranking reduces the spread of false news, while still “staying true to what we are, which is a platform for expression and connection,”
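
    Facebook’s actual ranking system is not public, but mechanically, “downranking” just means a flagged post keeps its place on the platform while its ranking score is penalized so it sinks in the feed. The toy sketch below illustrates that idea; the scores, penalty factor, and data structure are invented.

```python
# Toy illustration of "downranking": a flagged post is not removed, its ranking
# score is just multiplied by a penalty so it sinks in the feed. Scores, the
# penalty factor, and the data structure are all invented for illustration.
FLAG_PENALTY = 0.2   # hypothetical: flagged posts keep only 20% of their score

posts = [
    {"id": 1, "base_score": 9.1, "flagged_by_fact_checkers": False},
    {"id": 2, "base_score": 9.8, "flagged_by_fact_checkers": True},
    {"id": 3, "base_score": 7.4, "flagged_by_fact_checkers": False},
]

def ranking_score(post):
    score = post["base_score"]
    if post["flagged_by_fact_checkers"]:
        score *= FLAG_PENALTY
    return score

feed = sorted(posts, key=ranking_score, reverse=True)
print([p["id"] for p in feed])   # flagged post 2 stays in the feed but drops last: [1, 3, 2]
```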

    For years, Facebook was loath to admit it was a media company, even as nearly half of US adults said they get their news from Facebook. So it also refused the responsibilities that come with media power, like having editorial standards.

    “That control-without-liability thinking permeates so much of their rhetoric,”

    Between the Cambridge Analytica scandal and Facebook’s struggles to contain fake news, the company continues to draw scrutiny from lawmakers and consumers — and bitterness from publishers.

    So Facebook has an excess of catastrophes and a shortage of women. But by addressing its gender shortcomings in the area of news, it may stand a better chance of heading off more disasters.

    Why? Data shows women in the workplace get more stuff done.

    Reply
  18. Tomi Engdahl says:

    Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

    WHAT WE NOW KNOW ABOUT IRAN’S GLOBAL PROPAGANDA CAMPAIGN
    https://www.wired.com/story/iran-global-propaganda-fireeye

    THEY SET UP phony news sites with stories ripped from other sources, backing up their state-sponsored agenda. They stole photos for their social media profiles and made up names to catfish unsuspecting victims. They formed an incestuous web of promotion across Facebook, Twitter, YouTube, Google+, Reddit, and other platforms. They seemed to have a thing for Bernie Sanders. And then they got caught.

    Yes, that’s the story of the infamous Russian trolls who spread divisive content throughout the 2016 presidential campaign season. But it just as easily applies to the recently discovered propaganda network that Facebook and Google have linked to Iran’s state media corporation, Islamic Republic of Iran Broadcasting. They and Twitter have since deleted hundreds of accounts between them, thanks to a tipoff from vigilant researchers at the cybersecurity firm FireEye.

    Reply
  19. Tomi Engdahl says:

    Issie Lapowsky / Wired:
    FireEye: Iran-linked propaganda campaign unearthed on Facebook, Twitter, and Google was mostly about promoting Iranian interests rather than dividing US voters — THEY SET UP phony news sites with stories ripped from other sources, backing up their state-sponsored agenda.
    http://www.wired.com/story/iran-global-propaganda-fireeye

    Reply
  20. Tomi Engdahl says:

    Daniel Funke / Poynter:
    How Brazilian fact-checking project Aos Fatos used technology from British charity Full Fact to transcribe and fact-check a presidential debate in real time

    These fact-checkers teamed up across the Atlantic to cover a presidential debate in real time
    https://www.poynter.org/news/these-fact-checkers-teamed-across-atlantic-cover-presidential-debate-real-time

    Aos Fatos has a friend in Full Fact.

    The Brazilian fact-checking project used technology from the latter during a presidential debate in São Paulo last Friday to make its job a little easier. Instead of only relying on what they could hear on TV, fact-checkers picked out claims from a live transcript on their computers.

    “It kept us from (doing) four to five hours of work,” said Tai Nalon, director of Aos Fatos. “We still had to review everything of course — we have to edit what the tool transcribes — but it’s way easier to fact-check and make the live transcription and put everything on there by the time the debate is over.”

    That system was made possible thanks to a platform called Live, which Full Fact has spent the past couple of years developing. It works by automatically scanning BBC and Parliament transcripts for fact-checkable claims, which it then matches against an existing database of fact checks.

    The British fact-checking charity gave Aos Fatos its own login to Live, which they used to view a real-time transcript of the debate. Powered by Speechmatics, an automatic speech recognition tool, Live pulled audio from a YouTube video of the debate and then generated the transcript.
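
    The article does not say how Live matches transcript sentences to existing fact checks, so the sketch below stands in with a plain TF-IDF cosine-similarity comparison using scikit-learn. The example claims, sentences, and threshold are all invented; it only illustrates the matching step, not Full Fact’s actual system.

```python
# Stand-in sketch for matching transcript sentences to previously published fact
# checks. The article does not say how Live does its matching, so plain TF-IDF
# cosine similarity is used here as an illustration only. The claims, sentences,
# and threshold are all invented. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checks = [
    "Claim that unemployment doubled last year: false, it actually fell.",
    "Claim that the deficit is the largest in history: misleading, it is the third largest.",
]
transcript_sentences = [
    "Under this government unemployment has doubled in a single year.",
    "We will build new schools in every region of the country.",
]

vectorizer = TfidfVectorizer().fit(fact_checks + transcript_sentences)
similarities = cosine_similarity(vectorizer.transform(transcript_sentences),
                                 vectorizer.transform(fact_checks))

for sentence, scores in zip(transcript_sentences, similarities):
    best = scores.argmax()
    if scores[best] > 0.3:   # arbitrary threshold, for illustration only
        print("possible match:", sentence, "->", fact_checks[best])
```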

    “We know what it’s like to live fact-check a debate without a transcript — not easy — so we worked overtime to get the system up and running for them,”

    Reply
  21. Tomi Engdahl says:

    Online Jokes Are No Laughing Matter in Russia
    https://www.hrw.org/news/2018/08/21/online-jokes-are-no-laughing-matter-russia

    Russians Increasingly Prosecuted Under Extremism Legislation for Social Media Posts

    Russian authorities have registered 762 “extremist crimes” so far this year, many of them consisting of social media posts. In some cases, the authorities make little effort to conceal that they are using the country’s broad and vague anti-extremism legislation to silence free expression.

    Not everyone in government agrees with this crackdown; the Ministry of Communications recently supported a proposed law change to eliminate criminal liability for reposting content online. But with arrests on the rise, it’s clear free speech online is under threat.

    Reply
  22. Tomi Engdahl says:

    Twitter Bots And Russian Government Trolls Are Stoking Vaccine Wars
    https://www.iflscience.com/health-and-medicine/twitter-bots-and-russian-government-trolls-are-stoking-vaccine-wars/

    Some Twitter accounts spreading vaccination myths are actually bots and others are trolls who don’t believe their message, a new analysis has found. Malware operators and spammers have seized on anti-vaccination messages to promote links. Meanwhile, something even stranger is emerging from the famed Russian bot farms, who are pushing both sides of the vaccine conflict on social media.

    Reply
  23. Tomi Engdahl says:

    Ad blocking with permission – a bold experiment in Finland

    For web sites funded by advertising, the use of browser ad blockers is poison. The computer magazine MikroBitti has renewed its site and is approaching the problem from a new angle.

    The MikroBitti magazine’s web service was renewed last week. From now on, all the stories of the printed magazine are also available in digital format. Some of the articles on the site are paid content, so they are only readable by subscribers.

    The reform also tackled the ad blocking problem – many visitors to the MikroBitti site block ads in their browser. The new site recognizes them and directs them to subscribe. Subscribers can use an ad blocker if they so wish.

    “Quality content production requires either paying users or advertising sales. We have received a lot of contacts where users say they are willing to pay for content if they do not have to look at ads. Now it is possible for them,” says MikroBitti’s Editor-in-Chief Mikko Torikka.

    Source: https://www.tivi.fi/Kaikki_uutiset/adblockausta-ihan-luvan-kanssa-suomessa-rohkea-kokeilu-6738369

    Reply
  24. Tomi Engdahl says:

    Allie Volpe / The Verge:
    A look at how memes can help people correct behaviors framed as unsavory or distasteful by tapping into more widely relatable emotions or experiences

    It’s not all Pepes and trollfaces — memes can be a force for good
    https://www.theverge.com/2018/8/27/17760170/memes-good-behavioral-science-nazi-pepe

    How the ‘emotional contagion’ of memes makes them the internet’s moral conscience

    the most successful memes strike a cultural chord and can guide and even influence behavior

    Memes are incredibly efficient at guiding viewers toward socially acceptable group behavior

    minor behaviors benign enough to not warrant a larger conversation, but annoying enough to justify an online joke

    Memes help normalize mental health struggles more concretely than simply offering momentary comfort

    Reply
  25. Tomi Engdahl says:

    Barry Schwartz / Search Engine Land:
    Microsoft launches Bing spotlight, a news aggregator for developing stories in Bing search results compiled by AI and human editors, available in the US

    ‘Bing spotlight’ offers a news hub for information on evolving stories, powered by AI and human editors
    https://searchengineland.com/bing-spotlight-the-latest-news-powered-by-ai-and-human-editors-304433

    Bing is tackling news with a new feature that showcases a timeline of how a story has evolved, different perspectives from news sources and related social media posts on a topic.

    Reply
  26. Tomi Engdahl says:

    Trump claims Google is suppressing positive news about him and ‘will be addressed’
    The president said that Google had “rigged” its search results to suppress positive coverage
    https://www.theverge.com/2018/8/28/17790164/president-trump-google-left-wing-bias-claims

    The perceived leftwing bias of big tech companies has become a major talking point for conservatives. And this morning on Twitter, President Trump escalated their claims.

    Trump suggested that not only was there a tendency for tech companies to suppress right-wing voices, but that Google has “rigged” its search results to only show negative reporting about him. The president added that Google and others were “hiding information and news that is good,” and said that this was “very serious situation” that “will be addressed.”

    Trump included in his tweets a dubious statistic: that 96 percent of search results for the term “Trump News” came from “National Left-Wing Media.” This claim seems to have originated on the right-wing PJ Media before spreading to other outlets, including Fox News.

    It’s also difficult to analyze the idea of bias in Google’s algorithms. The company uses a number of measures to weight its search rankings, and while the exact formula is not known, factors like an outlet’s longevity, its reputation, and its ability to fill stories with relevant keywords all play a part.

    Reply
  27. Tomi Engdahl says:

    The Logan Paul vs. KSI fight exposed an ugliness that’s older than YouTube
    An expertly orchestrated circus of hurt machismo
    https://www.theverge.com/2018/8/27/17785762/logan-paul-ksi-fight-winner-result-youtube

    Serving up bloody violence as a $10 pay-per-view on YouTube and charging as much as $200 for floor seats in the arena, this was an affirmation of the enduring bankability of toxic masculinity.

    When I questioned KSI about the sensationalism surrounding this fight, he disowned it. “It’s because of our fan bases,” he claimed. “Have you seen my [YouTube subscriber] numbers? Have you seen his numbers?” (Logan Paul’s YouTube channel currently has more than 18 million followers, while KSI’s has in excess of 19 million.) KSI would have us believe that the hype is all self-generated from an avid fan base, an assertion directly contradicted by the pair’s matching diss tracks; KSI’s features him rapping over Logan Paul’s head on a platter with an apple stuffed in its mouth.

    Serious mainstream media outlets treated this like a serious boxing contest

    This fight proved one thing: when it comes to making money, there’s no difference between fame and infamy

    Ultimately, this weekend’s hugely lucrative event was a reminder of some unpleasant, yet still central aspects of our broader culture.

    I can’t hate the Pauls or Olatunjis for winning at the viral-fame game. Their shameless boorishness, materialism, and sexism are popular, and the blame for that lies more with us, the audience, than with them. They embody and embrace the simple reality that, when it comes to getting paid, there’s no longer a difference between fame and infamy.

    YouTube was supposed to democratize access to global stardom — and it has — but the implicit promise of greater freedom of expression hasn’t materialized. Instead, we’ve got people demeaning and diminishing themselves, appealing to our basest desires for unsavory spectacle. I wish I could rise above it

    Reply
  28. Tomi Engdahl says:

    Trump warns Google, Facebook and Twitter in row over bias
    https://www.bbc.com/news/technology-45331210

    US President Donald Trump has warned Google, Twitter and Facebook they are “treading on troubled territory” amid a row over perceived bias.

    He said they had to be “very careful”, after earlier accusing Google of rigging the search results for the phrase “Trump news”.

    An aide said the administration was “looking into” the issue of regulation.

    Google said its search engine set no political agenda and was not biased towards any political ideology.

    Speaking to reporters at the White House, Mr Trump said Google had “really taken a lot of advantage of a lot of people, it’s a very serious thing”.

    Adding the names of Facebook and Twitter, he said: “They better be careful, because you can’t do that to people… we have literally thousands of complaints coming in.”

    However, when asked earlier about Google, Mr Trump’s top economic adviser, Larry Kudlow, said the administration was “taking a look” at whether it should be regulated and would do “some investigation and some analysis”.

    ‘Suppressed’

    Analysts say there is little to back up Mr Trump’s claim and that it is unclear how he could take action.

    Some said attempts to alter search engine results could violate the First Amendment, although his administration could make it difficult for Google by looking into its dominance of the market.

    In an earlier tweet, Mr Trump accused Google of prioritising negative news stories from what he described as the “national left-wing media”.

    He said most of the stories that appeared on the results page were negative and that conservative reporting was being “suppressed”.

    Last week he said social media were “totally discriminating against Republican/Conservative voices” and that he would “not let that happen”.

    Google denied using political viewpoints to shape its search results.

    “Google’s news algorithm is optimised for actuality and proximity of an event but it is generally not optimised to look for political orientation,” she said.

    “However, it has a tendency to rank web pages higher that a lot of people link to.

    Reply
  29. Tomi Engdahl says:

    Ellen Pao / Wired:
    CEOs of social media companies should not hide behind “naiveté” and “free speech” as an excuse for letting fake news and harassment flourish on their platforms

    LET’S STOP PRETENDING FACEBOOK AND TWITTER’S CEOS CAN’T FIX THIS MESS
    https://www.wired.com/story/ellen-pao-facebook-twitter-ceos-can-fix-abuse-mark-zuckerberg-jack-dorsey/

    IT’S NOT OFTEN you hear some of the richest, most powerful men in the world described as naive, but it’s becoming pretty commonplace. Mark Zuckerberg, after a dark period for Facebook, has been called naive more than once. Jack Dorsey, meanwhile, has admitted having to rethink fundamental aspects of Twitter.

    But I struggle to believe that these brilliant product CEOs, who have created social media services used by millions of people worldwide, are actually naive. It’s a lot more likely that they simply don’t care. I think they don’t care about their users and how their platforms work to harm many, and so they don’t bother to understand the interactions and amplification that result.

    They’ve been trained to not care.

    The core problem is that these CEOs are actually making totally logical decisions every step of the way. Capitalism—which drives the markets, investors, venture capitalists, and board members—demands a certain approach to growth and expansion, one that values particular metrics. So social media companies and the leaders who run them are rewarded for focusing on reach and engagement, not for positive impact or for protecting subsets of users from harm. They’re rewarded for keeping costs down, which encourages the free-for-all, anything-goes approach misnomered “free speech.” If they don’t need to monitor their platforms, they don’t need to come up with real policies—and avoid paying for all the people and tools required to implement them.

    Reply
  30. Tomi Engdahl says:

    Twitter suspends 486 more accounts for ‘coordinated manipulation’
    Some of the accounts were sharing divisive social commentaries.
    https://www.cnet.com/news/twitter-suspends-486-more-accounts-for-coordinated-manipulation/#ftag=CAD590a51e

    Twitter is still going after fake accounts that have engaged in “coordinated manipulation.”

    The social media platform on Tuesday said it suspended 486 more accounts that violated its policies, bringing the total of suspended accounts to 770.

    Since our initial suspensions last Tuesday, we have continued our investigation, further building our understanding of these networks. In addition, we suspended an additional 486 accounts for violating the policies outlined last week. This brings the total suspended to 770.

    This comes after Russian interference in the 2016 US presidential elections, which surfaced numerous fake accounts used to stir conflict and spread misinformation across social media.

    Reply
  31. Tomi Engdahl says:

    BuzzFeed News:
    Trump tweets video claiming Google didn’t promote his State of the Union address on its homepage; Google says it did, confirmed by screenshot posted on Reddit — “Google promoted president Obama’s State of the Union on its homepage. When President Trump took office, Google stopped,” a video shared by Trump falsely claimed.

    Trump Claims Google Didn’t Promote His State Of The Union. Google And This Screenshot Say Otherwise.
    https://www.buzzfeednews.com/article/janelytvynenko/trump-claims-google-didnt-promote-his-state-of-the-union

    “Google promoted president Obama’s State of the Union on its homepage. When President Trump took office, Google stopped,” a video shared by Trump falsely claimed.

    Reply
  32. Tomi Engdahl says:

    Kara Swisher / New York Times:
    Trump’s accusations against Google, Facebook, and Twitter about limiting conservatives have no merit because those platforms have always amplified his messages — The idea that Google and Twitter are rigging their platforms against him is patently false. — Ms. Swisher covers technology and is a contributing opinion writer.

    Trump’s Ludicrous Attack on Big Tech
    https://www.nytimes.com/2018/08/29/opinion/trump-bias-google-twitter.html

    The idea that Google and Twitter are rigging their platforms against him is patently false.

    Reply
  33. Tomi Engdahl says:

    David Shepardson / Reuters:
    Trump’s Twitter account unblocks 41 more users after US court ruling in May that said government officials’ accounts were public forums, but some remain blocked — WASHINGTON (Reuters) – U.S. President Donald Trump on Tuesday unblocked some additional Twitter users after a federal judge …

    Trump unblocks more Twitter users after U.S. court ruling
    https://www.reuters.com/article/us-usa-trump-twitter/trump-unblocks-more-twitter-users-after-u-s-court-ruling-idUSKCN1LE08Q

    U.S. President Donald Trump on Tuesday unblocked some additional Twitter users after a federal judge in May said preventing people from following him violated individuals’ constitutional rights.

    U.S. District Judge Naomi Reice Buchwald in Manhattan ruled on May 23 that comments on the president’s account, and those of other government officials, were public forums and that blocking Twitter Inc users for their views violated their right to free speech under the First Amendment of the U.S. Constitution.

    Reply
  34. Tomi Engdahl says:

    The president’s attacks on social media are incoherent and depressing
    Is all of this complaining actually leading to anything?
    https://www.theverge.com/2018/8/30/17798264/trump-social-media-censorship-twitter-facebook-google

    Start with grievance No. 1: the president said he lost social media followers because of anti-conservative censorship

    Grievance No. 2: the president tweeted a video suggesting that Google discriminated against him by not putting a link to his first address to Congress on its homepage. He included the hashtag #StopTheBias. Google explained that it historically has never promoted the president’s first address to Congress, which is not an official State of the Union address, and that the president’s complaint was groundless.

    Grievance No. 3, which took place on Tuesday but continued to reverberate through the take-o-sphere Wednesday, is that a lot of Google News results about Trump contain articles about things he actually did, which paint him in a negative light.

    It all felt sort of unbelievable that we were even talking about any of this.

    Many people told the president to be quiet.

    Reply
  35. Tomi Engdahl says:

    Trump’s latest misleading attack on Google, explained
    The president falsely claimed Google did not link to State of the Union addresses
    https://www.theverge.com/2018/8/29/17798118/president-donald-trump-google-state-of-the-union-address-liberal-bias

    President Donald Trump intensified his criticism of Google today, posting a native video of unknown origin to his Twitter account this afternoon claiming the search giant stopped promoting the State of the Union (SOTU) address on its homepage after he took office. It turns out the video he posted is not only misleading, but also contains what appears to be a fake screenshot of the Google homepage on the day in question. It has since been viewed more than 1.5 million times.

    As Gizmodo notes, however, whoever made the video could have simply pulled the first screenshot of the Google homepage for those dates from the Wayback Machine, which doesn’t accurately reflect changes to the interactive portions of the page, like Google Doodles and links to YouTube Live.

    With regards to the 2018 SOTU, Google says it did in fact promote it on its homepage. “On January 30th 2018, we highlighted the livestream of President Trump’s State of the Union on the google.com homepage,” reads Google’s statement. “We have historically not promoted the first address to Congress by a new President, which is not a State of the Union address.”

    Trump’s posting of the video to his official Twitter account is just the latest attack in a series of heightened criticism against tech companies this week, with Trump singling out Google after watching a Fox News segment about a report that claimed 96 percent of search results were from “national left-wing media.” (The author of the report has since distanced herself from the claims, calling them “not scientific” and “based on only a small sample size” of 100 results.)

    Reply
  36. Tomi Engdahl says:

    Civil, the blockchain journalism startup, has partnered with one of the oldest names in media
    https://techcrunch.com/2018/08/28/civil-the-blockchain-journalism-startup-has-partnered-with-one-of-the-oldest-names-in-media/?sr_share=facebook&utm_source=tcfbpage

    Civil, the two-year-old crypto startup that wants to save the journalism industry by leveraging the blockchain and cryptoeconomics, has partnered with the 172-year-old Associated Press to help the wire service stop bad actors from stealing its content.

    Civil, using its blockchain-enabled licensing mechanism, which is still in development, will help the AP track where its content is going and whether it’s licensed correctly.

    Civil plans to make the licensing tool available to all the newsrooms in its ecosystem once it’s up and running.

    “We have a problem now of not even just dealing with literal fake news, but dealing with the social aspects of people not really knowing what to trust anymore because people are throwing around allegations,” Iles told TechCrunch. “We think [Civil] is going to create far better signals for consumers to really know if a news organization is trusting and credible, despite whatever powerful people might be saying.”

    If all goes well, the AP will rake in more revenue as a result of the partnership and Civil will have a nice use case of its blockchain-enabled licensing tool to show off.
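
    The article does not say how Civil’s licensing mechanism actually works (it is still in development), so the sketch below is only a hypothetical illustration of the general idea — registering a cryptographic fingerprint of an article together with its license terms, so that a copy found elsewhere can be checked against the registry. None of the names below come from Civil or the AP, and a real system would need fuzzy matching of rewritten text and an actual blockchain rather than an in-memory dictionary.

    # Hypothetical sketch only — not Civil's protocol or API.
    import hashlib
    import time

    class LicenseRegistry:
        """Toy append-only registry mapping content fingerprints to license records."""

        def __init__(self):
            self._entries = {}

        @staticmethod
        def fingerprint(text: str) -> str:
            # A stable fingerprint of the exact article text.
            return hashlib.sha256(text.encode("utf-8")).hexdigest()

        def register(self, text: str, publisher: str, terms: str) -> str:
            fp = self.fingerprint(text)
            self._entries[fp] = {
                "publisher": publisher,
                "terms": terms,
                "registered_at": time.time(),
            }
            return fp

        def check(self, text: str):
            # Returns the license record if this exact text was registered, else None.
            return self._entries.get(self.fingerprint(text))

    if __name__ == "__main__":
        registry = LicenseRegistry()
        article = "Full text of an AP wire story..."
        registry.register(article, publisher="Associated Press", terms="syndication only")
        print(registry.check(article))            # licensed copy -> license record
        print(registry.check("scraped rewrite"))  # unlicensed copy -> None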

    Reply
  37. Tomi Engdahl says:

    AI sucks at stopping online trolls spewing toxic comments
    It’s easy for hate speech to slip past dumb machines
    https://www.theregister.co.uk/2018/08/31/ai_toxic_comments/

    New research has shown just how bad AI is at dealing with online trolls.

    Such systems struggle to automatically flag nudity and violence, don’t understand text well enough to shoot down fake news and aren’t effective at detecting abusive comments from trolls hiding behind their keyboards.

    A group of researchers from Aalto University and the University of Padua found this out when they tested seven state-of-the-art models used to detect hate speech. All of them failed to recognize foul language when subtle changes were made, according to a paper [PDF] on arXiv.

    Adversarial examples can be created automatically by using algorithms to misspell certain words, swap characters for numbers, add random spaces between words, or attach innocuous words such as ‘love’ to sentences.

    The models failed to pick up on the adversarial examples, which successfully evaded detection. These tricks wouldn’t fool humans, but machine learning models are easily blindsided.

    https://arxiv.org/pdf/1808.09115.pdf
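
    To make the trick concrete, here is a small sketch (my own illustration, not code from the paper) of the perturbations described above: misspelling by swapping adjacent characters, replacing letters with look-alike digits, inserting spaces, and appending an innocuous word such as “love”. A keyword- or n-gram-based toxicity model trained on clean text will often miss the perturbed sentence, while a human still reads it as abusive.

    # Illustrative sketch of the perturbations described in the paper; not the authors' code.
    import random

    LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

    def misspell(word: str) -> str:
        # Swap two adjacent characters to produce a readable misspelling.
        if len(word) < 3:
            return word
        i = random.randrange(len(word) - 1)
        chars = list(word)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return "".join(chars)

    def leetify(word: str) -> str:
        # Replace letters with visually similar digits ("idiot" -> "1d10t").
        return "".join(LEET.get(c, c) for c in word.lower())

    def space_out(word: str) -> str:
        # Insert spaces so the token no longer matches the training vocabulary.
        return " ".join(word)

    def perturb(sentence: str) -> str:
        # Apply one random trick per word, then append an innocuous word.
        tricks = (misspell, leetify, space_out)
        words = [random.choice(tricks)(w) for w in sentence.split()]
        return " ".join(words) + " love"

    if __name__ == "__main__":
        random.seed(1)
        print(perturb("you are a worthless idiot"))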

    Reply
  38. Tomi Engdahl says:

    Emergence of Global Legislation Against ‘Fake News’ May Present Regulatory Risks
    https://www.flashpoint-intel.com/blog/anti-fake-news-legislation-may-present-regulatory-risks/

    In response to fake news becoming an increasingly pervasive issue affecting the global political climate, many countries have implemented, or are in the process of implementing, legislation to combat the online spread of false information. While it’s difficult to reach uniform conclusions about these different legislative acts, organizations with an online presence in countries with anti-fake news laws may be subjected to increased government scrutiny, as well as potential fines or sanctions.

    The following countries have passed legislation to combat the spread of fake news:
    Qatar
    Malaysia
    Kenya
    France
    Egypt
    Russia

    Assessment

    Laws intended to combat fake news introduce a variety of regulatory risks for businesses, especially in countries that adopt legislation broadly worded enough to hold online platforms accountable, not only for the content they publish, but also for the content shared or created by users. As such, companies operating media platforms or social networks with international user bases should monitor the global regulatory landscape for legislation that may present liabilities and adjust their operations accordingly.

    Reply
  39. Tomi Engdahl says:

    Out-of-control censorship machines removed my article warning of out-of-control censorship machines
    https://juliareda.eu/2018/08/censorship-machines-gonna-censor/

    A few days ago, about a dozen articles and campaign sites criticising EU plans for copyright censorship machines silently vanished from the world’s most popular search engine. Proving their point in the most blatant possible way, the sites were removed by exactly what they were warning of: Copyright censorship machines.

    Among the websites that were made impossible to find: a blog post of mine in which I inform Europeans about where their governments stand on online censorship in the name of copyright.

    Reply
  40. Tomi Engdahl says:

    AI-Human Partnerships Tackle “Fake News”
    https://spectrum.ieee.org/computing/software/aihuman-partnerships-tackle-fake-news

    “They’re all using AI because they need to scale,” says Claire Wardle, who leads the misinformation-fighting project First Draft, based in Harvard University’s John F. Kennedy School of Government. AI can speed up time-consuming steps, she says, such as going through the vast amount of content published online every day and flagging material that might be false.

    But Wardle says AI can’t make the final judgment calls. “For machines, how do you code for ‘misleading’?” she says. “Even humans struggle with defining it. Life is messy and complicated and nuanced, and AI is still a long way from understanding that.”

    Reply
  41. Tomi Engdahl says:

    Paul Tadich / Canadaland:
    Inside Canadian network Global’s cost-cutting “multi-market content” model, where local news for 12+ markets is produced in Toronto using green screens — Global’s late-night local newscasts, from Saskatoon to Halifax, are all produced in a Toronto green-screen studio.

    Inside The Bunker Where Global Produces Local Newscasts For The Entire Country
    http://www.canadalandshow.com/inside-globals-local-news-bunker/

    Global’s late-night local newscasts, from Saskatoon to Halifax, are all produced in a Toronto green-screen studio. A former producer describes life in a “news sweatshop.”

    Reply
  42. Tomi Engdahl says:

    Renee DiResta / Wired:
    As politicians accuse search and social media firms of censorship, the real focus should be on opaque algorithms and reckless amplification of harmful content — THE ALGORITHMS THAT govern how we find information online are once again in the news—but you have to squint to find them.

    Free Speech Is Not the Same As Free Reach
    https://www.wired.com/story/free-speech-is-not-the-same-as-free-reach

    The algorithms that govern how we find information online are once again in the news—but you have to squint to find them.

    “Trump Accuses Google of Burying Conservative News in Search Results,” reads an August 28 New York Times headline. The piece features a bombastic president, a string of bitter tweets, and accusations of censorship. “Algorithms” are mentioned, but not until the twelfth paragraph.

    Trump—like so many other politicians and pundits—has found search and social media companies to be convenient targets in the debate over free speech and censorship online. “They have it RIGGED, for me & others, so that almost all stories & news is BAD,” the president recently tweeted. He added: “They are controlling what we can & cannot see. This is a very serious situation—will be addressed!”

    Trump is partly right: They are controlling what we can and cannot see. But “they” aren’t the executives leading Google, Facebook, and other technology companies. “They” are the opaque, influential algorithms that determine what content billions of internet users read, watch, and share next.

    Indeed, YouTube’s video-recommendation algorithm inspires 700,000,000 hours of watch time per day—and can spread misinformation, disrupt elections, and incite violence. Algorithms like this need fixing.

    The social internet is mediated by algorithms: recommendation engines, search, trending, autocomplete, and other mechanisms that predict what we want to see next. The algorithms don’t understand what is propaganda and what isn’t, or what is “fake news” and what is fact-checked. Their job is to surface relevant content (relevant to the user, of course), and they do it exceedingly well. So well, in fact, that the engineers who built these algorithms are sometimes baffled: “Even the creators don’t always understand why it recommends one video instead of another,” says Guillaume Chaslot, an ex-YouTube engineer who worked on the site’s algorithm.
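
    As a toy illustration of that point (my own, not any platform’s real ranking code): a recommender that simply orders items by predicted engagement has no term in its objective for whether the content is true, so an outrage-driven conspiracy video can outrank a sober fact-check purely because more people click it and watch longer.

    # Toy illustration only — no platform's actual ranking code.
    from dataclasses import dataclass

    @dataclass
    class Item:
        title: str
        click_rate: float  # predicted probability the user clicks
        watch_time: float  # predicted minutes watched if clicked

    def engagement_score(item: Item) -> float:
        # Expected watch time is the only signal being optimized;
        # the accuracy of the content never enters the objective.
        return item.click_rate * item.watch_time

    def recommend(items, k=3):
        return sorted(items, key=engagement_score, reverse=True)[:k]

    if __name__ == "__main__":
        feed = [
            Item("Measured fact-check of the claim", click_rate=0.04, watch_time=6.0),
            Item("Outrage-bait conspiracy video", click_rate=0.21, watch_time=11.0),
            Item("Local council meeting recap", click_rate=0.02, watch_time=3.0),
        ]
        for item in recommend(feed):
            print(f"{engagement_score(item):5.2f}  {item.title}")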

    Reply
  43. Tomi Engdahl says:

    This Is How Russian Propaganda Actually Works In The 21st Century
    https://www.buzzfeednews.com/article/holgerroonemaa/russia-propaganda-baltics-baltnews

    Skype logs and other documents obtained by BuzzFeed News offer a rare glimpse into the inner workings of the Kremlin’s propaganda machine.

    TALLINN, Estonia — The Russian government discreetly funded a group of seemingly independent news websites in Eastern Europe to pump out stories dictated to them by the Kremlin, BuzzFeed News and its reporting partners can reveal.

    Russian state media created secret companies in order to bankroll websites in the Baltic states — a key battleground between Russia and the West — and elsewhere in Eastern Europe and Central Asia.

    The scheme has only come to light through Skype chats and documents obtained by BuzzFeed News, Estonian newspaper Postimees, and investigative journalism outlet Re:Baltica via freedom of information laws, as part of a criminal probe into the individual who was Moscow’s man on the ground in Estonia.

    The websites, all called Baltnews, presented themselves as independent news outlets, but in fact, editorial lines were dictated directly by Moscow.

    “In other words — information warfare.”

    “The pressure to turn [Estonia] from facing the West to facing the East has grown.”

    The revelations about the websites in the Baltic states provide a rare and detailed inside look into how such disinformation campaigns work, and the lengths to which Moscow is willing to go to obscure its involvement in such schemes.

    In the Baltics, Russia directly borders the European Union, and NATO has a big military presence, but perhaps most importantly the region is home to hundreds of thousands of ethnic Russians, mostly in Estonia and Latvia.

    Reply
  44. Tomi Engdahl says:

    Benjamin Plackett / Engadget:
    Ten Reddit moderators talk about the abuse they receive, including death and rape threats, burnout under pressure, and how Reddit fails to tackle the issues

    Unpaid and abused: Moderators speak out against Reddit
    https://www.engadget.com/2018/08/31/reddit-moderators-speak-out/

    Keeping Reddit free of racism, sexism and spam comes with a mental health risk.

    Somewhere out there, a man wants to rape Emily. She knows this because he was painfully clear in typing out his threat. In fact, he’s just one of a group of people who wish her harm.

    For the past four years, Emily has volunteered to moderate the content on several sizable subreddits — large online discussion forums — including r/news, with 16.3 million subscribers, and r/london, with 114,000 subscribers. But Reddit users don’t like to be moderated.

    In a joint investigation, Engadget and Point spoke to 10 Reddit moderators, and all of them complained that Reddit is systematically failing to tackle the abuse they suffer.

    Reply
  45. Tomi Engdahl says:

    Paul Sawers / VentureBeat:
    Google releases Content Safety API, an AI-powered tool to identify online child sexual abuse material and reduce human reviewers’ exposure to the content

    Google releases AI-powered Content Safety API to identify more child abuse images
    https://venturebeat.com/2018/09/03/google-releases-ai-powered-content-safety-api-to-identify-more-child-abuse-images/

    Reply
  46. Tomi Engdahl says:

    Wall Street Journal:
    Sources: Jack Dorsey has personally weighed in on content decisions at Twitter and overruled staff to keep Alex Jones and reinstate Richard Spencer’s account — Twitter CEO Jack Dorsey has personally weighed in on high-profile decisions, frustrating some employees

    Inside Twitter’s Long, Slow Struggle to Police Bad Actors
    https://www.wsj.com/articles/inside-twitters-long-slow-struggle-to-police-bad-actors-1535972402?mod=e2tw

    Twitter CEO Jack Dorsey has personally weighed in on high-profile decisions, frustrating some employees

    Understanding Mr. Dorsey’s role in making content decisions is crucial as Twitter tries to become more transparent to its 335 million users, as well as to lawmakers, about how it polices toxic content on its site.

    Reply
  47. Tomi Engdahl says:

    Kevin Roose / New York Times:
    Experts say Facebook’s public cleanup is pushing toxic content into private Facebook groups, WhatsApp, and Messenger, making it harder to monitor and moderate

    Fringe Figures Find Refuge in Facebook’s Private Groups
    https://www.nytimes.com/2018/09/03/technology/facebook-private-groups-alex-jones.html

    Much of the empire built by Alex Jones, the Infowars founder and social media shock jock, vanished this summer when Facebook suspended Mr. Jones for 30 days and took down four of his pages for repeatedly violating its rules against bullying and hate speech. YouTube, Apple and other companies also took action against Mr. Jones. But a private Infowars Facebook group with more than 110,000 members, which had survived the crackdown, remained a hive of activity.

    In Mr. Jones’s absence, the group continued to fill with news stories, Infowars videos and rants about social media censorship. Users also posted the sort of content — hateful attacks against Muslims, transgender people and other vulnerable groups — that got Mr. Jones suspended. And last week, when Mr. Jones’s suspension expired, he returned to the group triumphantly.

    Mr. Jones built his Facebook audience on pages — the big public megaphones he used to blast links, memes and videos to millions of his followers. In recent months, though, he and other large-scale purveyors of inflammatory speech have found refuge in private groups, where they can speak more openly with less fear of being punished for incendiary posts.

    Several private Facebook groups devoted to QAnon, a sprawling pro-Trump conspiracy theory, have thousands of members.

    Facebook’s fight against disinformation and hate speech will be a topic of discussion on Capitol Hill on Wednesday, when Sheryl Sandberg, the company’s chief operating officer, will join Jack Dorsey, Twitter’s chief executive, to testify in front of the Senate Intelligence Committee.

    Facebook has taken many steps to clean up its platform, including hiring thousands of additional moderators, developing new artificial-intelligence tools and breaking up coordinated influence operations ahead of the midterm elections.

    But when it comes to more private forms of communication through the company’s services — like Facebook groups, or the messaging apps WhatsApp and Facebook Messenger — the social network’s progress is less clear. Some experts worry that Facebook’s public cleanup may be pushing more toxic content into these private channels, where it is harder to monitor and moderate.

    Misinformation is not against Facebook’s policies unless it leads to violence. But many of the private groups reviewed by The New York Times contained content and behavior that appeared to violate other Facebook rules, such as rules against targeted harassment and hate speech.

    After The Times sent screenshots to Facebook of activity taking place inside these groups, Facebook removed several comments, saying they violated the company’s policies on hate speech. The groups themselves, however, remain active.

    A Facebook spokeswoman said the company used automated tools, including machine learning algorithms, to detect potentially harmful content inside private groups and flag it for human reviewers, who make the final decisions about whether or not to take it down. The company is developing additional ways, she said, to determine if an entire group violates the company’s policies and should be taken down, rather than just its individual posts or members.

    Facebook groups are self-regulated by members who act as administrators and moderators, with the authority to remove posts and oust unruly members.

    One type of private Facebook group, known as a “closed” group, can be found through searches. Another type, known as a “secret” group, is invisible to all but those who receive an invitation from a current member to join. In both cases, only members can see posts made inside the group.

    Since the public Infowars page was taken down, the private group has functioned as a makeshift home for fans of the site.

    Reply
  48. Tomi Engdahl says:

    Natasha Lomas / TechCrunch:
    Wikimedia urges European Parliament to weigh effects of its copyright vote next week, which could set a copyright on “snippets” and shift liability to platforms

    Wikimedia warns EU copyright reform threatens the ‘vibrant free web’
    https://techcrunch.com/2018/09/04/wikimedia-warns-eu-copyright-reform-threatens-the-vibrant-free-web/

    The Wikimedia Foundation has sounded a stark warning against a copyright reform proposal in Europe that’s due to be voted on by the European Parliament next week. (With the mild irony that it’s done so with a blog post on the commercial Medium platform.)

    In the post, also emailed to TechCrunch, María Sefidari Huici, chair of the Wikimedia Foundation, writes: “Next week, the European Parliament will decide how information online is shared in a vote that will significantly affect how we interact in our increasingly connected, digital world. We are in the last few moments of what could be our last opportunity to define what the Internet looks like in the future.”

    Reply
  49. Tomi Engdahl says:

    This Group Posed As Russian Trolls And Bought Political Ads On Google. It Was Easy.
    https://www.buzzfeednews.com/article/charliewarzel/researchers-posed-as-trolls-bought-google-ads

    Google says it’s securing its ad platform against foreign meddlers, but for just $35 researchers posing as Russian trolls were able to run political ads without any hurdles.

    In the summer of 2018, after months of public and legislator outcry over election interference, you might think it would be difficult for a Russian troll farm to purchase — with Russian currency, from a Russian zip code — racially and politically divisive ads through Google. And you might reasonably assume that if such a troll farm were able to do this, Google — which has said that “no amount of interference” is acceptable — would prevent it from successfully targeting those ads at thousands of Americans on major news sites and YouTube channels.

    But you’d be wrong.

    Researchers from the advocacy group the Campaign for Accountability — which has frequently targeted Google with its “transparency project” investigations and has received funding from Google competitor Oracle — posed as Kremlin-linked trolls and successfully purchased divisive online ads using Google’s ad platform and targeted them toward Americans.

    Reply
  50. Tomi Engdahl says:

    Davey Alba / BuzzFeed News:
    A look at the human cost of Facebook’s rise in the Philippines: truth matters less, propaganda is ubiquitous, lives are wrecked, and some have died as a result

    How Duterte Used Facebook To Fuel the Philippine Drug War
    https://www.buzzfeednews.com/article/daveyalba/facebook-philippines-dutertes-drug-war

    “We were seduced, we were lured, we were hooked, and then, when we became captive audiences, we were manipulated to see what other people — people with vested interests and evil motives of power and domination — wanted us to see.”

    In August 2016, a handful of crude images began circulating widely throughout Facebook’s Filipino community: a middle-aged man and woman having clumsy sex atop a tacky floral bedspread. The man’s face, obscured by shadows, was impossible to make out. The woman’s was not. She appeared to be Sen. Leila de Lima — a fierce critic of Philippine President Rodrigo Duterte and his bloody war on drugs.

    But the woman was not de Lima.

    The senator issued a strong public denial (“That’s not me. I don’t understand”) and internet sleuths subsequently tracked the provenance of the images to a porn site. Still, the doctored photos very quickly became part of a narrative propagated by Duterte, who had accused de Lima of accepting bribes from drug pushers.

    De Lima was soon beset by disparaging fake news reports that spread quickly across Facebook: She had pole-danced for a convict; she’d used government funds to buy a $6 million mansion in New York; the Queen of England had congratulated the Philippine Senate for ousting her. Six months later, her reputation fouled, de Lima was arrested and detained on drug charges, though she vehemently disputes them. She has now been in jail for over a year, despite outcry from international human rights groups over what they consider a politically motivated detention.

    For all the recent hand-wringing in the United States over Facebook’s monopolistic power, the mega-platform’s grip on the Philippines is something else entirely. Thanks to a social media–hungry populace and heavy subsidies that keep Facebook free to use on mobile phones, Facebook has completely saturated the country.

    Reply
