Nov 30, 2020
Everyone who uses Facebook, Google, and Twitter has probably noticed the disappearance of posts and the appearance of labels, especially during the 2020 election season. In this episode, hear the highlights from six recent House and Senate hearings where executives from the social media giants and experts on social media testified about the recent changes. The incoming 117th Congress is promising to make new laws that will affect our social media experiences; these conversations are where the new laws are being conceived.
Use your bank’s online bill pay function to mail contributions to:
Please make checks payable to Congressional Dish
Thank you for supporting truly independent media!
CD196: The Mueller Report
CD186: National Endowment for Democracy
30:50 Jack Dorsey: We were called here today because of an enforcement decision we made against the New York Post, based on a policy we created in 2018 to prevent Twitter from being used to spread hacked materials. This resulted in us blocking people from sharing a New York Post article, publicly or privately. We made a quick interpretation, using no other evidence, that the materials in the article were obtained through hacking, and according to our policy, we blocked them from being spread. Upon further consideration, we admitted this action was wrong and corrected it within 24 hours. We informed the New York Post of our error and policy update and how to unlock their account by deleting the original violating tweet, which freed them to tweet the exact same content and news article again. They chose not to, instead insisting we reverse our enforcement action. We did not have a practice around retroactively overturning prior enforcements. This incident demonstrated that we needed one, and so we created one we believe is fair and appropriate.
35:13 Mark Zuckerberg: At Facebook, we took our responsibility to protect the integrity of this election very seriously. In 2016, we began to face new kinds of threats, and after years of preparation, we were ready to defend against them. We built sophisticated systems to protect against election interference that combined artificial intelligence, significant human review, and partnerships with the intelligence community, law enforcement and other tech platforms. We've taken down more than 100 networks of bad actors who were trying to coordinate and interfere globally. We established a network of independent fact checkers that covers more than 60 languages. We made political advertising more transparent on Facebook than anywhere else, including TV, radio and email. And we introduced new policies to combat voter suppression and misinformation. Still, the pandemic created new challenges: how to handle misinformation about COVID and voting by mail, how to prepare people for the reality that the results would take time, and how to handle it if someone prematurely declared victory or refused to accept the result. So in September, we updated our policies again to reflect these realities of voting in 2020 and make sure that we were taking precautions given these unique circumstances. We worked with local election officials to remove false claims about polling conditions that might lead to voter suppression. We partnered with Reuters and the National Election Pool to provide reliable information about results. We attached voting information to posts by candidates on both sides and additional context to posts trying to delegitimize the outcome. We blocked new political ads in the week before the election to prevent misleading claims from spreading when they couldn't be rebutted. We strengthened our enforcement against militias and conspiracy networks like QAnon to prevent them from using our platforms to organize violence or civil unrest.
I believe this was the largest election integrity effort by any private company in recent times.
40:50 Jack Dorsey: We have transparency around our policies, but we do not have transparency around how we operate content moderation, the rationale behind it, the reasoning. And as we look forward, we have more and more of our decisions and operations moving to algorithms, which have a difficult time explaining why they make decisions or bringing transparency around those decisions. And that is why we believe that we should have more choice in how these algorithms are applied to our content, including whether we use them at all, so we can turn them on or off and have clarity around the outcomes that they're projecting and how they affect our experience.
45:39 Mark Zuckerberg: We work with a number of independent organizations that are accredited by the Poynter Institute. In the United States, they include Reuters, the Associated Press, Agence France-Presse, USA Today, FactCheck.org, Science Feedback, PolitiFact, Check Your Fact, Lead Stories and the Dispatch.
48:54 Sen. Lindsey Graham (SC): Do both of you support change to 230? Reform of Section 230? Mark Zuckerberg: Senator, I do. Sen. Lindsey Graham (SC): Mr. Dorsey? Jack Dorsey: Yes. Sen. Lindsey Graham (SC): Thank you.
54:10 Sen. Richard Blumenthal (CT): How many times is Steve Bannon allowed to call for the murder of government officials before Facebook suspends his account? Mark Zuckerberg: Senator, as you say, the content in question did violate our policies and we took it down. Having a content violation does not automatically mean your account gets taken down, and the number of strikes varies depending on the amount and type of offense. So if people are posting terrorist content or child exploitation content, then the first time they do it, we will take down their account. For other things, it's multiple. I'd be happy to follow up afterwards. We try not to disclose these... Sorry, I didn't hear that. Sen. Richard Blumenthal (CT): Will you commit to taking down that account, Steve Bannon's? Mark Zuckerberg: Senator, no, that's not what our policies would suggest that we should do in this case.
1:07:05 Jack Dorsey: What we saw and what the market told us was that people would not put up with abuse, harassment and misleading information that would cause offline harm, and they would leave our service because of it. So our intention is to create clear policy, clear enforcement that enables people to feel that they can express themselves on our service, and ultimately trust it. Sen. John Cornyn (TX): So it was a business decision. Jack Dorsey: It was a business decision.
2:56:34 Mark Zuckerberg: We do coordinate on and share signals on security related topics. So for example, if there is signal around a terrorist attack or around child exploitation imagery or around a foreign government, creating an influence operation, that is an area where the companies do share signals about what they see. But I think it's important to be very clear that that is distinct from the content moderation policies that we or the other companies have, where once we share intelligence or signals between the companies, each company makes its own assessment of the right way to address and deal with that information.
3:59:10 Sen. Mazie Hirono (HI): What are both of you prepared to do regarding Donald Trump's use of your platforms after he stops being president? Will he still be deemed newsworthy? And will he still get to use your platform to spread this misinformation? Mark Zuckerberg: Senator, let me clarify my last answer. We are also having academics study the effect of all of our election measures, and they'll be publishing those results publicly. In terms of President Trump and moving forward: there are a small number of policies where we have exceptions for politicians, under the principle that people should be able to hear what their elected officials and candidates for office are saying. But by and large, the vast majority of our policies have no newsworthiness or political exception. So if the President or anyone else is spreading hate speech, or inciting violence, or posting content that delegitimizes the election or valid forms of voting, those will receive the same treatment as anyone else saying those things, and that will continue to be the case. Sen. Mazie Hirono (HI): Remains to be seen. Jack Dorsey: We do have a policy around public interest, where for global leaders we make exceptions: if a tweet violates our terms of service, we leave it up behind an interstitial, and people are not allowed to share it more broadly. A lot of the sharing is disabled, with the exception of quoting it so that you can add your own conversation on top of it. So if an account suddenly is not a world leader anymore, that particular policy goes away.
4:29:35 Sen. Marsha Blackburn (TN): Do you believe it's Facebook's duty to comply with state sponsored censorship so it can keep operating doing business and selling ads in that country? Mark Zuckerberg: Senator in general, we try to comply with the laws in every country where we operate and do business.
10:10 Sen. Roger Wicker (MS): ...in policing conservative sites than its own YouTube platform for the same types of offensive and outrageous claims.
45:50 Jack Dorsey: The goal of our labeling is to provide more context to connect the dots so that people can have more information so they can make decisions for themselves.
46:20 Sen. Roger Wicker (MS): I have a tweet here from Mr. Ajit Pai. Mr. Ajit Pai is the chairman of the Federal Communications Commission, and he recounts four tweets by the Iranian dictator, Ayatollah Ali Khamenei, which Twitter did not place a public label on. All four of them glorify violence. The first tweet says this, and I quote: 'The Zionist regime is a deadly cancerous growth and a detriment to the region, it will undoubtedly be uprooted and destroyed.' That's the first tweet. The second tweet: 'The only remedy until the removal of the Zionist regime is firm armed resistance,' again, left up without comment by Twitter. The third: 'The struggle to free Palestine is jihad in the way of God.' I quote that in part for the sake of time. And number four: 'We will support and assist any nation or any group anywhere who opposes and fights the Zionist regime.' I would simply point out that these tweets are still up, Mr. Dorsey. How is it that they are acceptable to be there? And I'll ask unanimous consent to enter this tweet from Ajit Pai in the record at this point. That will be done, without objection. How, Mr. Dorsey, is that acceptable based on your policies at Twitter? Jack Dorsey: We believe it's important for everyone to hear from global leaders, and we have policies around world leaders. We want to make sure that we are respecting their right to speak and to publish what they need. But if there's a violation of our terms of service, we want to label it and... Sen. Roger Wicker (MS): They're still up. Did they violate your terms of service, Mr. Dorsey? Jack Dorsey: We did not find those to violate our terms of service, because we consider them saber rattling, which is part of the speech of world leaders in concert with other countries. Speech against one's own people, or a country's own citizens, we believe is different and can cause more immediate harm.
59:20 Jack Dorsey: We don't have a blanket policy against misinformation. We have a policy against misinformation in three categories: manipulated media; public health, specifically COVID; and civic integrity, meaning election interference and voter suppression.
1:39:05 Sen. Brian Schatz (HI): What we are seeing today is an attempt to bully the CEOs of private companies into carrying out a hit job on a presidential candidate, by making sure that they push out foreign and domestic misinformation meant to influence the election. To our witnesses today: you and other tech leaders need to stand up to this immoral behavior. The truth is that because some of my colleagues accuse you, your companies and your employees of being biased or liberal, you have institutionally bent over backwards and overcompensated. You've hired Republican operatives, hosted private dinners with Republican leaders, and, in contravention of your terms of service, given special dispensation to right-wing voices and even throttled progressive journalism. Simply put, the Republicans have been successful in this play.
1:47:15 Jack Dorsey: This one is a tough one to actually bring transparency to. Explainability in AI is a field of research but is far out. And I think a better opportunity is giving people more choice around the algorithms they use, including to turn off the algorithms completely which is what we're attempting to do.
2:15:00 Sen. Jerry Moran (KS): Whatever the numbers are, you indicate that they are significant. It's an enormous amount of money and an enormous amount of employee time and contract labor time in dealing with moderation of content. These efforts are expensive. And I would highlight for my colleagues on the committee that they will not be any less expensive, perhaps less in scale, but not less in cost, for startups and small businesses. And as we develop our policies in regard to this topic, I want to make certain that entrepreneurship, startup businesses and small businesses are considered in what it would cost in their efforts to meet the kind of standards to operate in this sphere.
2:20:40 Sen. Ed Markey (MA): The issue is not that the companies before us today are taking too many posts down. The issue is that they're leaving too many dangerous posts up. In fact, they're amplifying harmful content so that it spreads like wildfire and torches our democracy.
3:04:00 Sen. Mike Lee (UT): ...between the censorship of conservative and liberal points of view, and it's an enormous disparity. Now, you have the right, I want to be very clear about this, you have every single right to set your own terms of service and to interpret them and to make decisions about violations. But given the disparate impact of who gets censored on your platforms, it seems that you're either, one, not enforcing your terms of service equally, or, alternative to that, you're writing your standards to target conservative viewpoints.
3:15:30 Sen. Ron Johnson (WI): Okay, for both Mr. Zuckerberg and Mr. Dorsey, who censored New York Post stories or throttled them back: did either one of you have any evidence that the New York Post story is part of Russian disinformation, or that those emails aren't authentic? Did anybody have any information whatsoever that they're not authentic or that they are Russian disinformation? Mr. Dorsey? Jack Dorsey: We don't. Sen. Ron Johnson (WI): So why would you censor it? Why did you prevent that from being disseminated on your platform, which is supposed to be for the free expression of ideas, and particularly true ideas... Jack Dorsey: We believed it fell afoul of our hacked materials policy. We judged... Sen. Ron Johnson (WI): They weren't hacked. Jack Dorsey: We judged in the moment that it looked like it was hacked material. Sen. Ron Johnson (WI): You were wrong. Jack Dorsey: And we updated our policy and our enforcement within 24 hours. Sen. Ron Johnson (WI): Mr. Zuckerberg? Mark Zuckerberg: Senator, as I testified before, we relied heavily on the FBI's intelligence and alert status, both through their public testimony and private briefings. Sen. Ron Johnson (WI): Did the FBI contact you, sir, to say that story was false? Mark Zuckerberg: Senator, not about that story specifically. Sen. Ron Johnson (WI): Why did you throttle it back? Mark Zuckerberg: They alerted us to be on heightened alert around a risk of hack and leak operations around a release of information. And to be clear on this, we didn't censor the content. We flagged it for fact checkers to review, and pending that review, we temporarily constrained its distribution to make sure that it didn't spread wildly while it was being reviewed. But it's not up to us to determine whether it's Russian interference, nor whether it's true. We rely on the fact checkers to do that.
3:29:30 Sen. Rick Scott (FL): It's becoming obvious that your companies are unfairly targeting conservatives. That's clearly the perception today. Facebook is actively targeting ads by conservative groups ahead of the election, either removing the ads completely or adding their own disclosure if they claim the ads didn't pass their fact check system.
3:32:40 Sen. Rick Scott (FL): You can't just pick and choose which viewpoints are allowed on your platform and expect to keep the immunity granted by Section 230.
41:30 Rep. Jim Himes (CT): And I should acknowledge that we're pretty careful. We understand that we shouldn't be in the business of fighting misinformation; that's probably inconsistent with the First Amendment. So what do we do? We ask that it be outsourced to people that we otherwise are pretty critical of, like Mark Zuckerberg and Jack Dorsey. We say, you do it, which strikes me as a pretty lame way to address what may or may not be a problem.
42:00 Rep. Jim Himes (CT): Ms. Jankowicz said that misinformation is dismantling democracy. I'm skeptical of that, and that will be my question. What evidence is out there that this is dismantling democracy? I don't mean that millions of people see QAnon; I actually want to see the evidence that people are seeing this information and are, in a meaningful way, in a material way, dismantling our democracy through violence or through political organizations, because if we're going to go down that path, I need something more than eyeballs. So I need some evidence for how this is dismantling our democracy. And secondly, if you persuade me that we're dismantling our democracy, how do we get in the business of figuring out who should define what misinformation or disinformation is? Nina Jankowicz: To address your first question, related to evidence of the dismantling of democracy, there are two news stories that I think point to this from the last couple of weeks alone. The first is related to the kidnapping plot against Michigan Governor Gretchen Whitmer. The social media platforms played a huge role in allowing that group to organize; they seeded the information that led them to organize. And frankly, as a woman online who has been getting harassed a lot lately, with sexualized and gendered disinformation, I am very acutely aware of how those threats that are online can transfer on to real world violence. And that, make no mistake, is meant to keep women and minorities from not only participating in the democratic process by exercising our votes, but also keeping us from public life. So that's one big example. But there was another example just recently, from a Channel 4 documentary in the UK, that looked at how the Trump campaign used Cambridge Analytica data to selectively target black voters with voter suppression ads during the 2016 election. Again, it's affecting people's participation.
It's not just about fake news stories on the internet. In fact, a lot of the best disinformation is grounded in a kernel of truth. And in my written testimony, I go through a couple of other examples of how online action has led to real world action. This isn't something that is just staying on the internet; it is increasingly in real life. Rep. Jim Himes (CT): I don't have a lot of time. Do you think that both examples that you offered up, the plot to kidnap the governor of Michigan and your other example, pass the but-for test? I mean, this country probably got into the Spanish-American War over 130 years ago because of the good works of William Randolph Hearst. So how do we... we've had misinformation and yellow journalism and terrible media and voter suppression forever. And I understand that these media platforms have scale that William Randolph Hearst didn't have. But are you sure that both of those examples pass the but-for test, that they wouldn't have happened without the social media misinformation? Nina Jankowicz: I believe they do, because they allow the organization of these groups without any oversight, and they allow the targeting of these messages to the groups and people that are going to be the most vulnerable and most likely to take action against them. And that's what our foreign adversaries do. And increasingly, it's what people within our own country are using to organize violence against the democratic participation of many of our fellow citizens. Rep. Jim Himes (CT): Okay, well, I'm out of time. I would love to continue this conversation and pursue what you mean by groups being formed, quote, 'without oversight'; that's language I'd like to better understand. But I'm out of time, and I would like to continue this conversation into, well, if this is the problem that you say it is, what do we actually do about it?
55:15 Adam Cohen: Congresswoman, we use a combination of automated tools. We can recognize copyrighted material that creators upload, instantaneously discover it, and keep it from being seen on our platforms.
1:16:00 Rep. David Cicilline (RI): Do you use consumer data to favor Amazon products? Before you answer that: analysts estimate that between 80 and 90% of sales go through the Amazon Buy Box. So you collect all this data about the most popular products and where they're selling, and you're saying you don't use that in any way to change an algorithm to support the sale of Amazon-branded products? Nate Sutton: Our algorithms, such as the Buy Box, are aimed at predicting what customers want to buy, and we apply the same criteria whether you're a third-party seller or Amazon, because we want customers to make the right purchase, regardless of whether it's a seller or Amazon. Rep. David Cicilline (RI): But the best purchase to you is an Amazon product. Nate Sutton: No, that's not true. Rep. David Cicilline (RI): So you're telling us, under oath, that Amazon does not use any of that data collected with respect to what is selling, and where, to inform the decisions you make, or to change algorithms to direct people to Amazon products and prioritize Amazon and deprioritize competitors? Nate Sutton: The algorithms are optimized to predict what customers want to buy, regardless of the seller. We apply the same criteria, and with respect to popularity, that's public data; on each product page, we provide the ranking of each product.
3:22:50 Dr. Fiona Scott Morton: As is detailed in the report that I submitted as my testimony, there are a number of characteristics of platforms that tend to drive them toward concentrated markets, above all very large economies of scale. Consumers exacerbate this with their behavioral biases: we don't scroll down to the second page, we accept defaults, we follow the framing the platform gives us instead of searching independently. And what that does is it makes it very hard for small companies to grow and for new ones to get traction against the dominant platform. And without the threat of entry from entrepreneurs and growth from existing competitors, the dominant platform doesn't have to compete as hard. If it's not competing as hard, then there are several harms that follow from that. One is higher prices for advertisers; many of these platforms are advertising supported. Then there are higher prices to consumers, who may think that they're getting a good deal by paying a price of zero, but the competitive price might well be negative: consumers might well be able to be paid for using these platforms in a competitive market. Other harms include low quality in the form of less privacy, more advertising and more exploitative content that consumers can't avoid because, as Tim just said, there isn't anywhere else to go. And lastly, without competitive pressure, innovation is lessened, and in particular, it's channeled in the direction the dominant firm prefers, rather than being creatively spread across directions chosen by entrants. And this is what we learned from AT&T and IBM and Microsoft: when the dominant firm ceases to control innovation, there's a flowering, and it's very creative and market driven. So the solution to this problem of insufficient competition is complementary steps forward in both antitrust and regulation. Antitrust must recalibrate the balance it strikes between the risk of over-enforcement and under-enforcement.
The evidence now shows we've been under-enforcing for years and consumers have been harmed.
3:22:50 Stacy Mitchell: I hope the committee will consider several policy tools as part of this investigation. In particular, we very much endorse the approach that Congress took with regard to the railroads, that if you operate essential infrastructure, you can't also compete with the businesses that rely on that infrastructure.
3:45:00 Morgan Reed: Here on the table, I have a copy of OmniPage Pro. This was software you bought if you needed to scan documents and turn them into text you could edit in a word processor. I've also got this great review from PC World; they loved it back in 2005. But the important fact here in this review is that it says the street price of this software in 2005 was $450. Now, right here, I've got an app from a company called Readdle that is nearly the same product, has a bunch of features that this one doesn't, and it's $6. Basically, consumers now pay less than 1% of what they used to pay for some of the same capability. And what's even better, even though I love the product from Readdle, there are dozens of competitors in the app space. So when you look at it from that perspective, consumers are getting a huge win. How have platforms made this radical drop in price possible? Simply put, they've provided three things: a trusted space, reduced overhead, and nearly instant access for my developers to a global marketplace with billions of customers. Before the platforms, to get your software onto a retail store shelf, companies had to spend years and thousands of dollars to get to the point where a distributor would handle their product. Then you'd agree to a cut of sales revenue, write a check for upfront marketing, agree to refund the distributor the cost of any unsold boxes, and then spend tens of thousands of dollars to buy an end cap. Digging in a little bit on this: I don't know how many of you are aware that the products you see on your store shelf or in the Sunday flyer aren't there because the manager thought it was a cool product. Those products are displayed at the end of an aisle, or end cap, because the software developer or consumer goods company literally pays for the shelf space.
In fact, for many retailers, the sale of floor space and flyers makes up a huge chunk of their store's profitability. And none of this takes into consideration printing boxes, manuals and CDs, dealing with credit cards if you go direct, or translation services and customs authorities if you want to sell abroad. In the 1990s, it cost a million dollars to start up a software company. Now it's $100,000 in sweat equity. And thanks to these changes, the average cost of consumer software has dropped from $50 to $3. For developers, our cost to market has dropped enormously and the size of our market has expanded globally.
3:48:55 Stacy Mitchell: I've spent a lot of time interviewing and talking with independent retailers and manufacturers of all sizes. Many of them are very much afraid of speaking out publicly because they fear retaliation. But what we consistently hear is that Amazon is the biggest threat to their businesses. We just did a survey of about 550 independent retailers nationally, and Amazon ranked number one in terms of what they said was the biggest threat to their business, above rising healthcare costs, access to capital, government red tape, anything else you can name. Among those who are actually selling on the platform, only 7% reported that it was actually helping their bottom line. Amazon has a kind of godlike view of a growing share of our commerce, and it uses the data that it gathers to advantage its own business and its own business interests in lots of ways. A lot of this, as I said, comes from its ability to leverage the interplay between these different business lines to maximize its advantage, whether that's promoting its own product because that's lucrative, or using its manufacture of a product to squeeze a seller or vendor into giving it bigger discounts.
3:53:15 Rep. Kelly Armstrong (ND): I come from a very rural area; the closest thing to what you would consider a big box store is in Minneapolis or Denver. So when we're talking about competition and all of this, I also think we've got to remember: at no point in time from my house in Dickinson, North Dakota have I had more access to more diverse and cheap consumer products. I mean, things that often would require a plane ticket or a nine-hour car ride to buy can now be brought to our house. So I think when we're talking about consumers, we need to remember that side of it, too.
19:16 Nathaniel Gleicher: Facebook has made significant investments to help protect the integrity of elections. We now have more than 35,000 people working on safety and security across the company, with nearly 40 teams focused specifically on elections and election integrity. We're also partnering with federal and state governments, other tech companies, researchers and civil society groups to share information and stop malicious actors. Over the past three years, we've worked to protect more than 200 elections around the world. We've learned lessons from each of these, and we're applying these lessons to protect the 2020 election in November.
21:58 Nathaniel Gleicher: We've also been proactively hunting for bad actors trying to interfere with the important discussions about injustice and inequality happening around our nation. As part of this effort, we've removed isolated accounts seeking to impersonate activists, and two networks of accounts tied to organized hate groups that we've previously banned from our platforms.
26:05 Nick Pickles: Firstly, Twitter shouldn't determine the truthfulness of tweets. And secondly, Twitter should provide context to help people make up their own minds in cases where the substance of a tweet is disputed.
26:15 Nick Pickles: We prioritize interventions regarding misinformation based on the highest potential for harm, and we're currently focused on three main areas of content: synthetic and manipulated media, elections and civic integrity, and COVID-19.
26:30 Nick Pickles: Where content does not break our rules and warrant removal in these three areas, we may label tweets to help people come to their own views by providing additional context. These labels may link to a curated set of tweets posted by people on Twitter. These include factual statements, counterpoint opinions and perspectives, and the ongoing public conversation around the issue. To date, we've applied these labels to thousands of tweets around the world across these three policy areas.
31:10 Richard Salgado: In search, ranking algorithms are an important tool in our fight against disinformation. Ranking elevates information that our algorithms determine is the most authoritative, above information that may be less reliable. Similarly, our work on YouTube focuses on identifying and removing content that violates our policies and elevating authoritative content when users search for breaking news. At the same time, we find and limit the spread of borderline content that comes close but just stops short of violating our policies.
53:28 Rep. Jackie Speier (CA): Mr. Gleicher, you may or may not know that Facebook is headquartered in my congressional district. I've had many conversations with Sheryl Sandberg, and I'm still puzzled by the fact that Facebook does not consider itself a media platform. Are you still espousing that kind of position? Nathaniel Gleicher: Congresswoman, we're first and foremost a technology company. Rep. Jackie Speier (CA): You may be a technology company, but your technology company is being used as a media platform. Do you not recognize that? Nathaniel Gleicher: Congresswoman, we're a place for ideas across the spectrum. We know that there are people who use our platforms to engage, and in fact that is the goal of the platforms: to encourage and enable people to discuss the key issues of the day and to talk to family and friends.
54:30 Rep. Jackie Speier (CA): Maybe I should ask this: when there was a video of Speaker Pelosi that had been tampered with - slowed down to make her look like she was drunk - YouTube took it down almost immediately. What did Facebook do, and what went into your thinking to keep it up? Nathaniel Gleicher: Congresswoman, for a piece of content like that, we work with a network of more than 60 third-party fact-checkers around the world. If one of them determines that a piece of content like that is false, we will down-rank it, and we will put an interstitial on it so that anyone who would look at it would first see a label over it saying that there's additional information and that it's false. That's what we did in this context. When we down-rank something like that, we see the shares of that video radically drop. Rep. Jackie Speier (CA): But you won't take it down when you know it's false. Nathaniel Gleicher: Congresswoman, you're highlighting a really difficult balance, and we've talked about this amongst ourselves quite a bit. What I would say is, if we simply take a piece of content like this down, it doesn't go away. It will exist elsewhere on the internet. People who were looking for it will still find it. Rep. Jackie Speier (CA): But, you know, there will always be bad actors in the world. That doesn't mean that you don't do your level best to show the greatest deal of credibility. I mean, if YouTube took it down, I don't understand how you couldn't have taken it down, but I'll leave that where it lays.
1:40:10 Nathaniel Gleicher: Congressman, the collaboration within industry and with government is much, much better than it was in 2016. I think we have found the FBI, for example, to be forward-leaning and ready to share information with us when they see it. We share information with them whenever we see indications of foreign interference targeting our election. The best case study for this was the 2018 midterms, where you saw industry, government, and civil society all come together, sharing information to tackle these threats. We had a case on literally the eve of the vote, where the FBI gave us a tip about a network of accounts where they identified subtle links to Russian actors. We were able to investigate those and take action on them within a matter of hours.
1:43:10 Rep. Jim Himes (CT): I tend to be kind of a First Amendment absolutist. I really don't want Facebook telling me what's true and what's not true mainly because most statements are some combination of both.
1:44:20 Nathaniel Gleicher: Certainly people are drawn to clickbait. They're drawn to explosive content. I mean, it is the nature of clickbait to make people want to click on it. But what we found is that if you separate it out from the particular content, people don't want a platform or experience that's just clickbait. They will click it if they see it, but they don't want it prioritized; they don't want their time to be drawn into that emotional frailty. And so we are trying to build an environment where that isn't the focus, where they have the conversations they want to have. But I agree with you: a core piece of this challenge is that people seek out that type of content wherever it is. I should note that as we're thinking about how we prioritize this, one of the key factors is who your friends are, the pages and accounts that you follow, and the assets that you engage with. That's the most important factor in what you see, and so people have direct control over that, because they are choosing the people they want to engage with.
55:30 David Chavern: Platforms' and news organizations' mutual reliance would not be a problem if not for the fact that concentration among the platforms means a small number of companies now exercise an extreme level of control over the news. In fact, a couple of dominant firms act as regulators of the news industry, only these regulators are not constrained by legislative or democratic oversight. The result has been to siphon revenue away from news publishers. This trend is clear if you compare the growth in Google's total advertising revenue to the decline in the news industry's ad revenue. In 2000, Google's U.S. revenue was $2.1 billion, while the newspaper industry accounted for $48 billion in advertising revenue. By 2017, in contrast, Google's U.S. revenue had increased over 25 times to $52.4 billion, while the newspaper industry's ad revenue had fallen 65% to $16.4 billion.
56:26 David Chavern: The effect of this revenue decline on publishers has been terrible, and they've been forced to cut back on their investments in journalism. That is a reason why newsroom employment has fallen nearly a quarter over the last decade. One question that might be asked is: if the platforms are, on balance, having such a negative impact on the news media, then why don't publishers do something about it? The answer is they cannot, at least under the existing antitrust laws. News publishers face a collective action problem. No publisher on its own can stand up to the tech giants; the risk of demotion or exclusion from the platforms is simply too great, and the antitrust laws prevent news organizations from acting collectively. So the result is that publishers are forced to accept whatever terms or restrictions are imposed on them.
1:06:20 Sally Hubbard: Facebook has repeatedly acquired rivals, including Instagram and WhatsApp. And Google's acquisitions cemented its market power throughout the ad ecosystem as it bought up the digital ad market spoke by spoke, including Applied Semantics, AdMob, and DoubleClick. Together, Facebook and Google have bought 150 companies in just the last six years. Google alone has bought nearly 250 companies.
1:14:17 David Pitofsky: Unfortunately, in the news business, free riding by dominant online platforms, which aggregate and then re-serve our content, has led to the lion's share of online advertising dollars generated off the back of news going to the platforms. Many in Silicon Valley dismiss the press as old media failing to evolve in the face of online competition, but this is wrong. We're not losing business to an innovator who has found a better or more efficient way to report and investigate the news. We're losing business because the dominant platforms deploy our news content to target our audiences, to then turn around and sell that audience to the same advertisers we're trying to serve.
1:15:04 David Pitofsky: The erosion of advertising revenue undercuts our ability to invest in high quality journalism. Meanwhile, the platforms have little if any commitment to accuracy or reliability. For them, a news article is valuable if viral, not if verified.
1:16:12 David Pitofsky: News publishers have no good options to respond to these challenges. Any publisher that tried to withhold its content from a platform as part of a negotiating strategy would starve itself of reader traffic. In contrast, losing one publisher would not harm the platforms at all, since they would have ample alternative sources for news content.
1:36:56 Rep. Pramila Jayapal (WA): So Ms. Hubbard, let me start with you. You were an Assistant Attorney General in New York State's antitrust division, and you've also worked as a journalist. Which online platforms would you say are most impacting the public's access to trustworthy sources of journalism, and why? Sally Hubbard: Thank you for the question, Congresswoman. I think in terms of disinformation, the platforms that are having the most impact are Facebook and YouTube, and that's because of their business models, which are to prioritize engaging content. Because of human nature - you know, survival instinct - we tend to tune into things that make us fearful or angry. So by prioritizing engagement, these platforms are actually prioritizing disinformation as well. It serves their profit motives to keep people on the platforms as long as possible, to show them ads and collect their data. And because they don't have any competition, they're free to pursue these destructive business models without any competitive constraint. They've also lacked regulation. Normally, corporations are not permitted to just pursue profits without regard to the consequences.
1:38:10 Rep. Pramila Jayapal (WA): The Federal Trade Commission has repeatedly declined to interfere as Facebook and Google have acquired would-be competitors. Since 2007, Google has acquired Applied Semantics, DoubleClick, and AdMob. And since 2011, Facebook has acquired Instagram and WhatsApp. What do these acquisitions mean for consumers of news and information? I think sometimes antitrust and regulation are seen as something that's out there, but this has very direct impact for consumers. Can you explain what that means as these companies have acquired more and more? Sally Hubbard: Sure. In my view, all of the acquisitions that you just mentioned were illegal under the Clayton Act, which prohibits mergers that may lessen competition. Looking back, it's clear that all of those mergers did lessen competition. And when you lessen competition, the harms to consumers are not just high prices, which are harder to see in the digital age, but also loss of innovation, loss of choice, and loss of control. So when we approve anti-competitive mergers, consumers are harmed.
1:55:48 Rep. Matt Gaetz (FL): Section 230, as I understand it - and I'm happy to be corrected by others - would say that if a technology platform is a neutral public platform, they enjoy certain liability protections that newspapers don't enjoy, that News Corp doesn't enjoy with its assets. And so does it make the anti-competitive posture of technology platforms more pronounced that they have access to this special liability protection that the people you represent don't have access to? David Chavern: Oh, absolutely. There's a huge disparity. Frankly, when our content is delivered through these platforms, we get the liability and they get the money. So that's a good deal from their end. We are responsible for what we publish; we publishers can and do get sued. The platforms, on the other hand, are allowed to deliver and monetize this content with a complete lack of responsibility.
9:00 Senator Dianne Feinstein (CA): We know that Russia orchestrated a sustained and coordinated attack that interfered in our last presidential election. And we also know that there’s a serious threat of more attacks in our future elections, including this November. As the United States Intelligence Community unanimously concluded, the Russian government’s interference in our election—and I quote—“blended covert intelligence operations, such as cyber activity, with overt efforts by the Russian government agencies, state-funded media, third-party intermediaries, and paid social-media users or trolls.” Over the course of the past year and a half, we’ve come to better understand how pernicious these attacks were. Particularly unsettling is that we were so unaware. We were unaware that Russia was sowing division through mass propaganda, cyber warfare, and working with malicious actors to tip the scales of the election. Thirteen Russian nationals and three organizations, including the Russian-backed Internet Research Agency, have now been indicted for their role in Russia’s vast conspiracy to defraud the United States.
2:33:07 Clint Watts: Lastly, I admire those social-media companies that have begun working to fact-check news articles in the wake of last year’s elections. These efforts should continue, but they will be completely inadequate. Stopping false information, the artillery barrage landing on social-media users, comes only when those outlets distributing bogus stories are silenced. Silence the guns, and the barrage will end. I propose the equivalent of nutrition labels for information outlets: a rating icon for news-producing outlets, displayed next to their news links in social-media feeds and search engines. The icon provides users an assessment of the news outlet’s ratio of fact versus fiction and opinion versus reporting. The rating system would be opt-in. It would not infringe on freedom of speech or freedom of the press. It should not be part of the U.S. government, and should sit separate from the social-media companies but be utilized by them. Users wanting to consume information from outlets with a poor rating wouldn’t be prohibited from doing so; if they are misled about the truth, they have only themselves to blame.
Design by Only Child Imaginations