POWER PLAYS' Ayden Férdeline asks Clara Tsao who is spreading misinformation and disinformation, and what steps can be taken to stop it.
Clara Tsao is a fellow with the Atlantic Council. She previously served as the Senior Advisor for Emerging Technology at the United States Department of Homeland Security.
Intro [00:00:04] You're listening to POWER PLAYS, the podcast charting how important decisions about the Internet, its infrastructure and its institutions have been made. Here's your host, Ayden Férdeline.
Ayden Férdeline [00:00:28] Welcome to POWER PLAYS, I'm Ayden Férdeline. Today on POWER PLAYS we are joined by Clara Tsao. Clara Tsao is a fellow with the Atlantic Council and a civic entrepreneur who has recently launched the Trust and Safety Professional Association. But her career has spanned both the public and private sectors. She previously served as Senior Advisor for Emerging Technologies at the US Department of Homeland Security and as Chief Technology Officer of the U.S. Government's inter-agency Countering Violent Extremism and Countering Foreign Influence Task Force. In addition, she has had stints with Apple, HP, Microsoft, Mozilla and Google. Clara Tsao, thank you so much for your time. It's a pretty impressive resume. How did you get involved in the technology sector?
Clara Tsao [00:01:15] Yeah, you know, a little bit about my background: I grew up in the San Francisco Bay Area, where a lot of what I saw around me was this massive growth of startups in the early stages growing into huge technology companies like a Facebook. And I've always found the intersection between policy and technology challenges to be very interesting, because technology as a sector has grown so quickly that it's been really hard for a lot of governments and policymakers around the world to figure out what to do. And as a byproduct, it's also been very hard for everyday consumers and users of these technologies to really think about the safety considerations, the implications the information they find online has for everyday things like figuring out how to vote. It has huge implications for the way people see the world, as people increasingly use these technologies and they become a part of their life, not just an optional accessory. So that's a little bit about me. I've always been drawn to technology and the Internet and the interesting things that people choose to do on it.
Ayden Férdeline [00:02:21] How did you wind up in Washington then?
Clara Tsao [00:02:22] I first entered the policy space when I was living in Los Angeles. I started an organization there called Hack for LA. And this was a time when the Los Angeles government was trying to open up its data for more transparency to everyday people. And I saw a lot of challenges in getting that data in place so people could share it and build really interesting tools that the public could use. And that was the first turning point in my career, seeing how difficult policy challenges were: even if you have the best, smartest people at the table, if you don't have policymakers understanding these challenges, it's really hard to effect change. So that took me to D.C., where I was a technology policy fellow with Google, and I did a lot of briefings with members of Congress on emerging technology topics like blockchain and NSA surveillance. And I saw how big of a divide there was between D.C. and Silicon Valley, just in the types of language people use. A lot of people that are making laws and policy in the US do not use these technologies day to day. In fact, they have assistants use their Blackberries or print out their emails. So when they're thinking about decisions like how to ensure the safety of users when there are issues with particular platforms, and how to keep those platforms accountable, it's very challenging to even speak the same language and to think about what thoughtful policymaking looks like. And vice versa: I think a lot of companies don't really have an understanding of where policymakers are coming from. After I spent some time in D.C., I spent a few years at Microsoft on the public sector team there. And I saw very few people with any experience in government.
And that was a really interesting point, to ask, 'how are we working to ensure that people in these different public sector services, from government to education to health care, are getting the best technology we can provide them', when in fact so few of us have firsthand, on-the-ground experience there?
Ayden Férdeline [00:04:32] Thanks, Clara. That's a really interesting comment that you just made about specific expertise that is sometimes lacking in those who work in government. And so I'm curious about what projects you worked on at Microsoft and what expertise you later brought into the U.S. government from Microsoft when you joined the U.S. government at the highest level. I know that within government you focused your work on the weaponization of misinformation. Was there a project at Microsoft that you led or that you contributed to that allowed you to develop very specific expertise in misinformation or disinformation?
Clara Tsao [00:05:08] Yeah, absolutely, thank you for bringing that up. I was first exposed to the costs and consequences of disinformation while leading a project in Myanmar that we were doing with the University of Washington Information School alongside the Gates Foundation.
Clara Tsao [00:05:26] And we were looking at societies in transition, and specifically how to help Myanmar, a country that had been under military dictatorship for a very long time, have a fair and democratic election. And a lot of this came down to digital literacy. Facebook at the time, in early 2013, 2014, had just entered the market, and Myanmar had been without the Internet for a very long time. So most people who were seeing the Internet for the first time skipped the desktop generation of Internet access and really started seeing things straight from their phones; they started to see Facebook as the Internet. And so, in fact, when Facebook entered the market, it came to define how people even speak. There are tons of different local dialects and languages in Myanmar, but Facebook came and picked one, and that is how people speak today in the country. So part of my work was working with a number of civil society and activist groups in Myanmar to understand how we could design digital literacy programs that allowed people to learn information about different candidates. There was obviously no bias there. But to say, you know, this is how you can have much fairer access to information and learn about the upcoming elections. And that was incredibly challenging, because with the spread of Facebook entering the country, there was so much fake news online. People were spreading stories about particular ethnic groups, stories of people getting raped that went viral even though a lot of them were not true, and that led to actual physical riots. And I remember, during this exposure, thinking in my head, I cannot believe this is happening. I cannot believe that nothing is being done. I reached out to my contacts at Facebook and they responded, but it did not seem like they had enough executive support to do much. They did not have anyone who spoke Burmese or that particular local dialect who could even moderate what was happening.
So the fake news just kept going until it reached a crisis moment, where it was so bad that they had to step in and do something. But what made me really upset was that these local voices, these organizations, had been raising this for so long and they got zero support. And this was a country that I felt was treated almost like a second-class citizen, because if this had happened in the United States or in another Western democracy, there would have been immediate action to curb it. So for me, that was really my wake-up call to some of the offline consequences that disinformation could have. And you can argue it may not have been the fault of Facebook, because they had no idea things would turn that way. But at the same time, even if they had, for example, a lot of budget to put in, they wouldn't even know where to start, where to find the right local language experts, how to think about better moderation practices. So a lot of these roles during crisis moments fall to Trust and Safety teams at companies, teams that have now slowly grown and that tackle a number of issues, from terrorist content to child exploitation all the way to issues like this, where it's a matter of election security and the outcome of elections. So for me, that was just a big wake-up call: there are a lot of issues that are not going to be solved by advocacy alone. We must work with government, but we also must work with companies to ensure that they have the right tools, the right resources, the right support to be able to tackle this problem effectively. And it's not just Facebook; there are other platforms that enter different markets and face very similar challenges. And when you have a company that's built initially by engineers who don't come from public policy or social science backgrounds, it's really hard for them to even see what could go wrong. What could go very wrong.
Ayden Férdeline [00:09:53] That is such an interesting case study that you referenced there, Clara, and it's making me wonder from a media literacy perspective, what is easier? Is it easier to teach someone who has never used social media before how to identify misinformation? Or is it easier to teach someone that is already familiar with social media how to change their practices so that they are aware that they are being manipulated or exposed to fake news? Do you have any insights there?
Clara Tsao [00:10:22] Yeah, I think that's a great question, and at the end of the day, when it comes to fake news or misinformation or disinformation, it comes down to the rate of spread, right. Someone who is not as digitally literate could, for example, be very influential in their town. So I'll use a more rural example: they could be very influential in their town, and those people don't really surf the Internet or read news online. And they see an article, they don't know how to evaluate it, and they could spread by word of mouth something that is not true because they quickly read it. And if you are at a company, you would not be able to quickly see that this person might actually be really influential in offline environments, able to sway decisions on something. Right. And then on the flip side, you also have people who are very literate, very savvy. They might be social media influencers with thousands of followers. And when they accidentally share a piece of fake news that they don't think critically about, that could have secondary consequences, spreading very quickly to people who believe everything they say. And those people may or may not use their own critical judgment, because they think this person is influential enough not to spread fake news. So I think it's really important to think about this problem in terms of the rate of spread and how things spread, as opposed to as something black and white, because there is just as much damage that can be done by somebody who is not as digitally literate but who is well networked in an offline environment. They could be the mayor of a local city, right, talking about this and changing political opinions based on news that is not correct.
Ayden Férdeline [00:12:07] How, and by whom, should this be addressed, Clara? If I summarize what I'm hearing, you're saying humans have a tendency to be receptive to information that supports our preconceived biases. So do platforms have a responsibility here to do no harm? Why are they facilitating, even unintentionally, the spread of disinformation and misinformation?
Clara Tsao [00:12:30] Yeah, well, from a product standpoint, as companies are thinking about developing products, a lot of companies in the US measure success by growth. So if you're a company that's raising seed money or a Series A and you're looking toward a Series B, and you're not actually making revenue, investors are looking at user growth numbers, so these platforms are just trying to grow as fast as possible. And the way you get people addicted to your platform, the way you actually encourage growth, is by using recommendation engines to say, I'm going to feed you content that you like. Because we could have a bunch of users and static content that isn't based off of AI and recommendations, and you would get bored very easily. You would say, oh, this is very unengaging, why am I on here? So it's a byproduct of what I mentioned earlier: the business model of how most startups, especially in Silicon Valley, are set up, alongside the need to show consistent, large, and long-term growth to hit that next milestone for funding. In addition, from a revenue standpoint, a lot of companies today are looking at advertising. And so in order to keep their advertising content relevant, a lot of these recommendation engines are also built to keep users on the platform for as long as possible, unwilling to leave. And sometimes that leads to good, right. If you care a lot about saving endangered species, you could quickly find a network of great people online through these engines. So there's a lot of good from that.
But when you don't think about the unintended consequences, for example, if you end up accidentally learning about how to join ISIS or some kind of ISIS propaganda magazine, you're going to be quickly surrounded by a network of other people who will find you and lead you down a darker rabbit hole of bad things. That could be around conspiracy theories, right, the anti-vax movement. There's a lot of people who end up in these communities and just find a lot more people to support their perspectives. So I think it's a very challenging problem. And I think there's a strong element of people with not-so-great digital literacy entering these communities, but there are also people with very proficient digital literacy who end up in them, because they don't realize that algorithmic biases are actually at work every time they click on something. So when they suddenly see ten videos that represent their point of view, they think the actual support behind it is much bigger than it really is.
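The feedback loop described here can be sketched in a few lines. This is a deliberately naive, hypothetical illustration, not any platform's actual algorithm: it ranks unseen items purely by how often their topic already appears in the user's click history, so a single click is enough to tilt the whole feed toward that topic.

```python
from collections import Counter

def recommend(history, catalog, k=3):
    """Naive engagement-based recommender: rank unseen items by how
    often their topic already appears in the user's click history."""
    topic_counts = Counter(item["topic"] for item in history)
    unseen = [item for item in catalog if item not in history]
    # Stable sort: items whose topic the user already clicked rise to the top.
    ranked = sorted(unseen, key=lambda item: topic_counts[item["topic"]], reverse=True)
    return ranked[:k]

catalog = [
    {"id": 1, "topic": "conspiracy"},
    {"id": 2, "topic": "conspiracy"},
    {"id": 3, "topic": "news"},
    {"id": 4, "topic": "sports"},
]

# One click on a conspiracy item is enough to push more of it into the feed.
history = [{"id": 1, "topic": "conspiracy"}]
feed = recommend(history, catalog)
print([item["topic"] for item in feed])  # ['conspiracy', 'news', 'sports']
```

Real systems use far richer signals (watch time, collaborative filtering, learned embeddings), but the loop is the same: engagement in, more of the same out.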
Ayden Férdeline [00:15:21] That's really interesting, isn't it, Clara? Some context: we are, of course, in the age of Covid-19, and health misinformation is spreading. I was reading a study showing that unlike political misinformation, where people really do believe what supports their preconceived biases, when it comes to health information in the context of Covid-19, people are surprisingly open to having their biases challenged and to readjusting their point of view as new information comes to light. And after I read this study, it made me wonder, as we try to learn some of the lessons from Covid-19, the good and the bad, whether there will be some kind of insight into how, once a piece of misinformation is lodged in the brain, we can remove it. One question I have for you, though, is: say I'm an individual user of a platform. I want to fight misinformation. I don't want to be complicit. What are some small, everyday things that I can do to help? Can I help? Is there anything that I can do as an individual user, or is this really just a platform's responsibility to sort out?
Clara Tsao [00:16:27] Yeah, I think that's a great question, and I think there are a couple of things. And I want to go back to your health misinformation comment, because I do think there's more to expand on there. Specifically around responsibility: a lot of people don't realize how much moderation happens because users flag content, and that enough users flagging content actually triggers action. And it depends on the design of the platform, because on certain platforms you can't even flag content, so it just stays up.
Clara Tsao [00:17:01] But if we're talking about a social media network like Facebook, there are a lot of people who flag content, and that makes Facebook realize things are going very wrong. And I'll use a funnier example. If I break up with someone I'm seeing and I have photos of them on Facebook, and I've deleted all the photos, but Facebook has a memory feature that says, ten years ago, do you remember your time with Person X? That could stir up a lot of feelings for me. And it's not to say that's good or bad, but those are examples of products being built with unintended consequences based on design, where you're seeing content that you may not want to see; even though you might have removed it, Facebook in their database doesn't realize that and sometimes resurfaces it. So that's one example. And that has actually happened as well with those kinds of memory features, where people end up resurfacing ISIS content that had already been taken down. So there are a couple of examples of that.
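The user-flagging mechanism described here, where enough distinct users flagging a post triggers review, can be sketched as a toy pipeline. The threshold, names, and data are all hypothetical; real platforms weight flags by reporter reputation, content category, and many other signals rather than a simple count.

```python
from collections import defaultdict

FLAG_THRESHOLD = 5  # hypothetical: distinct flags needed before human review

flags = defaultdict(set)  # post_id -> set of user ids who flagged it
review_queue = []

def flag(post_id, user_id):
    """Record a user flag; queue the post for human review once enough
    distinct users have flagged it (each post is queued at most once)."""
    flags[post_id].add(user_id)
    if len(flags[post_id]) >= FLAG_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)

# Seven distinct users flag the same post; it crosses the threshold once.
for user in range(7):
    flag("post-42", f"user-{user}")

print(review_queue)  # ['post-42']
```

Note the design point this illustrates: a platform that omits the flag button entirely, as in the example above, never populates the review queue at all.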
Clara Tsao [00:18:04] So number one, it's so important for users to speak up and for users to flag. But one thing that's also very interesting is the huge role that advertisers have played. A simple example of this is YouTube, which for a while did not realize there was a lot of content where people were pushing out ISIS propaganda and major brands like Coca-Cola and Audi had their advertisements on top of it. And those brands did not want to be associated with a terrorist organization. So it was actually advertisers threatening to pull out of their contracts with these large companies that made companies like YouTube start to take stronger stances on figuring out how to remove terrorist content. So, you know, it's interesting when you talk about health misinformation. One thing that has been fascinating to watch evolve over time is the fact that there is certain content that is universally agreed to be bad. We can say child pornography is bad. That was very easy; it is very black and white. Underage kids and their photos should not be circulated, and there's actually really stringent regulation, at least in the US and also in Europe, to quickly remove that. Terrorist content is also universally considered bad, I think, by most people. But definitions of what counts as a terrorist group vary, right. Some people might not consider white supremacist extremist content to be terrorist content, and so sometimes that content stays up online and sometimes it doesn't. So there are murkier categories like that. But when it comes to health misinformation it is even murkier, because even with new viruses like Covid-19, it's very hard to tell what is right and what is wrong. You have medical experts providing their expertise, but there's still a lot that is unknown out there.
There are also a number of problems with fake news that has been spread, like the claims about drinking bleach. Drinking bleach is not illegal by any means; there's no law that says you cannot drink bleach. But obviously, if you drink bleach, there are very detrimental health consequences. So when there's disinformation around Covid-19 claiming that drinking bleach could help you avoid getting the virus, a lot of companies have stepped up.
Ayden Férdeline [00:20:31] When we think of something like that going viral, drinking bleach as a Covid-19 cure, do people really believe it, or are people sharing it because it appeals to emotion, or it's funny, or it's so illogical that people share it? Not so much because it's true, but because, as humans, we have a tendency to share things that are sort of sensational or outrageous.
Clara Tsao [00:20:53] Yeah, I think that's a great question. I wish I had an answer. I think people, when they are online, just behave in very strange ways. It's very interesting; I used to joke that, Ayden, if I wanted to really get to know you, I would look at your browser history and see what you search for. I don't know why people think certain things are more interesting than others, but if you want a similar parallel, a few years ago there was an Internet challenge called the Tide Pod Challenge, where people thought it was funny to just swallow Tide Pods, which are the detergent pods you use to do laundry. They pose a huge choking hazard; you're not supposed to eat them. And people chose to share it. There's been a lot of viral content that puzzles me as to why it goes viral, and there's other content that I think is hilarious and should go viral. But everyone has their own point of view. And I think around health disinformation, it's probably along similar lines. Some people might genuinely just want Covid-19 to be over, because it's had such severe detrimental consequences on economies, on jobs. They want a solution and they want it now. And there are others who might think it's funny, or there are a lot of people who have written articles, well, not well-substantiated ones, that they believe justify why bleach is a credible solution. And, you know, it's really hard sometimes for medical experts to debunk this, because some of these articles are written in a very convincing way. While very biased, they're written in a very convincing way for people who read them without other information to compare against what's true. So I think the Covid-19 question has also surfaced a number of other issues the general public never had to think about, issues that also fall under trust and safety and the way users behave online.
One of these examples is price gouging. Price gouging is not misinformation, but it does lead to a lot of secondary consequences, with people not being able to access masks, equipment, and medical supplies when e-commerce platforms don't enforce against it. So a lot of e-commerce platforms today increasingly see the need to police not just content itself but other types of bad behavior that people take advantage of, and to work with local law enforcement and different officials to ensure that there are enough supplies for everyone. So that's one example. And then another example that has been interesting around Covid is that there are a number of platforms that don't necessarily have any content to moderate but have to think about the safety of their users offline. This includes delivery companies, right? If someone is a driver for a particular company and they've tested positive for Covid-19 but need money and want to continue to work, they could actually be asymptomatic. There are some ethical questions there, like how do you know whether they are still driving? How do you ensure the safety of food delivery at food delivery startups? Right. So it's very fascinating. A lot of people think about disinformation purely in the realm of content, but there's also so much misrepresentation: how people misrepresent Airbnb listings, how people misrepresent their health situation when you're in the middle of a pandemic. So these are really fascinating questions that companies are suddenly starting to grapple with. They have no idea what to do. They don't have medical experts on staff, and they suddenly have to think about policies and procedures for how to fix this. And a lot of governments don't even know where to start, because there's not really any regulation or policy they can enforce against this either.
Ayden Férdeline [00:24:47] These are really challenging and intimidating questions, because you're right: even if a government does legislate, that doesn't necessarily solve anything. Regulation is not always the panacea. I wanted to jump ahead a bit. After your time at Microsoft, this is when you joined the U.S. government at the highest level, serving in various national security roles as a senior adviser and as a chief technology officer. And I'm curious, thinking back to our conversation so far about misinformation and disinformation, based upon what you have seen within government, who is spreading misinformation? Because I think a lot of the narrative in the media is that it is propagated by bots or coming from Russia, and I suspect that is not a complete picture; some nuance has been lost in these conversations. What do you think?
Clara Tsao [00:25:38] I think that's a great question. I don't know if there has been any study large enough to really evaluate, you know, who is spreading the most misinformation. But I do think the digital literacy angle definitely has a huge impact. A lot of seniors online aren't quite sure where to look for credible news, and when they see something, they'll spread it. There have been some studies around the 2018 midterm elections, for example, showing seniors spreading a lot of fake news by accident. A lot of it also comes down to the line of what is considered fake news and what isn't. Right. Which is a thorny question, because a lot of people will argue that something is real news when other people see it as fake. I think the big challenge today comes back to platforms not really having a stringent bar for journalism. That happened a lot in my earlier example of Myanmar, where people said they were journalists and Facebook had no idea how to verify that. The journalistic standard of who gets to write and edit online really declined, and companies would then put up a news section and allow publications that don't have as stringent a vetting criteria to be there too. So everyday consumers looking at news will get recommended articles that may not have the best journalistic standards for how information is vetted and sourced. I think the other thing to note about who spreads fake news is that people are most likely to spread news that is exciting, that is clickbait. There's actually a chart that Facebook's research team published a while ago that shows the line of acceptable policy: content that sits on the fine line between being allowed and not being allowed. It could be a mass shooting, a 17-minute video of a terrorist shooting up a synagogue.
As an example, when that content gets removed, people are curious and they want to find it, right. So as content gets closer to that line of acceptable policy, it becomes more likely that people will want to share and find it. And I think that is very, very challenging, because a lot of fake news is sensationalist content that is really interesting, with clickbait titles. And the question is, how do you then write news in a way that is unbiased, without overrepresenting or exaggerating the situation? It's hard, because today a lot of very credible journalists are struggling with this, competing against other media empires out there that only push sensationalist content. And platforms are seeing that; they're seeing a lot of users really liking the clickbait stuff because it's just exciting. A lot of content online has also regressed to very short-form content that is only a few minutes long, and it's sometimes hard to fully represent an issue in shorter-form content. Then you add issues like deepfakes on top of that, which lead people to spread and share things that may not be well vetted or might be highly biased. So that's a little bit of the challenge of who shares and who doesn't. When it comes to sensationalist content, I would argue everyone shares it. They think it's interesting, they think it's entertaining, but it also leads to some of the worst stuff out there spreading really quickly. Sometimes it's misrepresentation, right? You could have tons of truth in an article and just a few things that are misrepresented or exaggerated, and people will think the whole article is true, even though there are misrepresentations inside it. And that's really hard to catch, right, because everything else seems so credible. So I think, at the end of the day, everyone is susceptible to it.
There are people who might share it more in certain circumstances because of digital literacy barriers, but everyone plays a role. I try to turn off any kind of recommendation engine every time I search on YouTube, and make sure they're not tracking me, for that same reason: because I don't want to accidentally spread something that I haven't spent enough time looking at.
Ayden Férdeline [00:30:10] That's really interesting, Clara. Where I'm conflicted on this issue is that I feel like misinformation is not new. It's always been out there. The National Enquirer used to print its misinformation every week on newsstands, and there were tabloids in other countries as well. I don't know if anyone ever believed the content in those publications, or who bought them. But they sold, they were visible, and millions and millions of people were exposed to them. Maybe we all just knew they were false, or that there were only some grains of truth within those particular publications. I just know that part of me thinks this problem is not new. It might be exacerbated, it might be more visible, and I don't know what the solution is. These are really deep and difficult questions, and I'm so glad that there are people like you trying to answer them. Can you tell us what you got up to after your time in government, Clara?
Clara Tsao [00:31:12] From there, I left the US government and joined Mozilla as a fellow, together with you, Ayden. And that gave me a lot of exposure to the human rights and Internet advocates who are on the front lines every day, fighting to make sure that everyone has equal access to the Internet and that their privacy is protected. So that was a really incredible time for me to see other people involved in this ecosystem. In my time at Mozilla, I was looking at different tools that could be used to counter disinformation. I was also evaluating a number of platform policies across different companies, and how they were thinking about detection, enforcement, and also education in the aftermath. So that was a lot of my time. But I also spent much of my fellowship laying the early bricks of something that I'm launching quite soon: the Trust and Safety Professional Association.
Ayden Férdeline [00:32:19] So, a listener note: we are recording this interview in May 2020; by the time this interview is published in July 2020, your organization will be live. And I'm so proud of you, Clara. Roughly a year ago, we were in Tunis, in Tunisia, at RightsCon, a gathering of human rights defenders organized by the truly inspiring nonprofit Access Now. And I remember at RightsCon, you were everywhere, Clara. You were on calls, you were hustling, you were networking with different stakeholders, and you were sharing the idea for a new organization that you were founding and, very successfully, getting buy-in from funders and other stakeholders. Proof of your success is that you've secured two million dollars from Cognizant and other funding from Omidyar, and you have everyone from Airbnb to Facebook and Google signing on as founding members. Can you introduce us to the idea behind your organization? What will it be doing, and why did you decide there was a need for a new organization in this space?
Clara Tsao [00:33:26] Yeah, no, absolutely. What a journey down memory lane, because with Covid-19 everything from last year seems like it was much longer ago. So you are correct. Last year at RightsCon, I was talking with a number of different stakeholders about a new organization that I've been building for a little over a year now, called the Trust and Safety Professional Association. And the reason why I really wanted to pursue building something like this was because I realized, through my previous experiences, that there was really no centralized place for professionals working on trust and safety issues to get together and talk about tactical, operational practices. These issues range from the online disinformation that we just talked about, to terrorist content, to election security, to other thorny problems like health disinformation and the other behaviors malicious actors have engaged in online.
Clara Tsao [00:34:24] Today, a lot of operational practices are done in silos, with companies trying to figure out best practices themselves. Other times, people informally ask for advice around emerging challenges from contacts they already know through previous jobs at different companies where they've worked together. I first got exposed to this community of professionals through my involvement in the Content Moderation at Scale conference series. The first conference took place at Santa Clara Law and brought together academics and people at companies to really talk about content moderation. And there was such great energy that we scratched the surface on tactics and techniques, but we didn't quite go deep, because the conference was completely volunteer-driven. There were three more conferences after that, in different cities from Brussels to New York to DC. And all of these conferences were really incredible because there was just such a great community of professionals looking to connect and to share. They had so many challenges that they wanted to talk about with each other but couldn't, because trust and safety as a field hasn't necessarily been well organized by a lot of companies. Many of them have different internal names for these teams, so if I'm trying to find the point of contact at Amazon, for example, who does this work, it is sometimes very hard to get to the right person. So why did I want to go into trust and safety? I spent a few years, as you mentioned earlier, working in government, bringing together a number of people working in this space, and I saw a lot of information gaps and information-sharing gaps, but training gaps as well.
Clara Tsao [00:36:25] And I also developed the US government's first training that was pushed out to all companies on terrorist content online, going through the history of how terrorist content spread in an offline environment and how that evolved in the online environment.
Clara Tsao [00:36:38] So a lot of these experiences really came together to show me that we need to build something to bring this community together. And I brought on my co-founder, Adelin, who has worked in trust and safety roles since the very start of her career. She's had to figure out how to best support her team against, for example, doxxing online, when certain decisions put employees' lives at risk. She's had to think about wellness, right? When you're viewing and moderating a lot of content online, some of it very graphic, how do we think about that? How do we think about finding the information needed to make adequate content policy decisions? So there are a number of very, very difficult challenges that these professionals, whom I call the policymakers of the Internet, have to think about. And it's not just the people running the actual acceptable-content policies. You also see people looking at advertising review: when you are a company and you want to advertise on a platform, what is acceptable advertising? That has had consequences in places like elections, when anyone can pretend to be a campaign official. There's a quote from JFK that I love: "We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win." JFK said this when he committed the US to the ambitious goal of landing on the moon.
And I believe that the field of trust and safety is just as important right now, because policymakers are struggling to figure out how to protect society, trust and safety teams are struggling to have enough information to set acceptable policies on their platforms, and, at the end of the day, you have everyday users of these platforms who are completely helpless, as I've seen in the case of Myanmar, in being able to decipher what is truth versus misinformation or disinformation. So there are huge consequences for how the world is seen and how the world is organized, based on the decisions of trust and safety teams and their ability to be successful. I really do think this is one of the most important fields today. And a lot of governments around the world, unfortunately, have felt so helpless in working with companies that they've resorted to complete censorship of their Internet when they feel a platform is unable to comply with or accept something they're trying to enforce. And there are extremes to that where, like I said earlier, there are gaps between how policymakers understand what the solutions can be and what the nuanced details are, because there are a lot of incredibly smart, very, very thoughtful people on trust and safety teams at platforms who are trying to think through every single scenario out there that will make every single type of user of their services happy. And unfortunately, they will always make someone unhappy with every single one of their decisions. So that's a little bit of background on why I care so much about this space.
Ayden Férdeline [00:40:16] And just one last question, Clara, before we wrap up. You mentioned earlier that trust and safety teams are often mopping up issues after the fact. You said that, particularly for smaller platforms, the right incentives are not always in place for them to prioritize building out their trust and safety teams in the earliest stages. I'm wondering what you think about the oversight board that Facebook has announced and has now staffed up, with members appointed to it. Do you think this will in some way be a solution, or a model for other platforms to adopt? If there are more of these neutral, and I use "neutral" there in air quotes, independent bodies able to make decisions about what content is taken down or removed, will this leave trust and safety teams with more of a buffer, or space, to do their work proactively rather than retroactively?
Clara Tsao [00:41:11] I think the Facebook oversight board is fascinating. I think they are looking for other companies to come in and adopt that same framework, even though Facebook has led the effort to fund a large portion of it. I think it's great that they are having a set of diverse experts come in to weigh in on decisions that are very hard to make. I think the missed opportunity, among the people announced to date as part of the oversight board, is that there are a lot of lawyers, a lot of academics, a lot of people with very, very strong backgrounds in different areas.
Clara Tsao [00:41:51] But there are very limited to zero individuals serving on it that has worked at a company and has been interested in safety. And so the question then becomes, you can have a lot of people make decisions. But in terms of implementing longer term decisions in a realistic way, I think there's a lot of gaps that is misserved without that. And so the organization that I've been building, you know, we really are we hope that at some point there is increased collaboration with groups like the oversight board that we can do to help them have a better sense of the day to day operations that take place at companies of all sizes. I think it's one thing to pass policy. I think we've all seen, Ayden, both working in tech policy, there's a lot of policies that have passed that do nothing right. And so that operational layer has to be there, of how you implement policy and how you execute on it in a well designed way for things that you're trying to fix to actually get fixed. And with the oversight board, I do think its a missed opportunity they had nobody with actual strong industry expertise in the trust and safety that has done the work and understands the challenges from the inside serving on it. So I think that's an area of improvement.
Ayden Férdeline [00:43:10] Clara Tsao, thank you for your time.
Clara Tsao [00:43:13] Thank you for having me. I'm so excited to be on your podcast and I can't wait to hear more episodes from you.
Ayden Férdeline [00:43:20] I'm Ayden Férdeline, and that concludes our interview with Clara Tsao, a fellow with the Atlantic Council. Next time on POWER PLAYS we speak with Marilyn Cade who, before she retired, was AT&T's chief lobbyist on Internet issues. She takes us back to the time that she started ICANN, the body that manages the Internet's domain name system, with her corporate credit card.
Outro [00:43:42] This has been POWER PLAYS, the podcast that takes you inside the rooms and into the minds of the decision-makers responsible for some of the most instrumental decisions that help shape the Internet, which we all use today. If you'd like to help us spread the word, please give us a five-star review and tell your friends to subscribe. We're available on every major listening app as well as at POWERPLAYS.XYZ.