Season 2, Episode 2

Richard Whitt on human autonomy and making the Web more trustworthy

Interview by Ayden Férdeline
Broadcast on July 13, 2021

In this episode...

Richard Whitt - an 11-year veteran of Google's policy team, and now Fellow in Residence with the Mozilla Foundation - speaks with Ayden Férdeline about how we can create a future where the artificial intelligence lurking behind our various digital interfaces doesn't automatically defer to an institution's priorities and incentives over a person's well-being.

Richard Whitt

Richard Whitt is a technology policy attorney, the President of the GLIA Foundation, and a Fellow in Residence with the Mozilla Foundation.

Transcript

INTRO [00:00:07] You're listening to POWER PLAYS, the podcast hosting conversations between policymakers, engineers, business leaders and others who are influencing the Internet's infrastructure and institutions in ways that impact all of us today. Here's your host, Ayden Férdeline.  

Ayden Férdeline [00:00:29] Welcome to POWER PLAYS, presented by Grant for the Web. I'm Ayden Férdeline. Today on the show, we have technology policy attorney Richard Whitt. He is an 11-year veteran of Google, led public policy globally for Motorola, and is now a Fellow in Residence with the Mozilla Foundation. And aside from being an all-around nice guy, Richard Whitt is a strategic thinker with some unique and pragmatic takes on privacy, cybersecurity, intellectual property, Internet governance and free expression. Richard Whitt, a very warm welcome today to POWER PLAYS.

Richard Whitt [00:01:02] Thank you, Ayden. It's great to be here.

Ayden Férdeline [00:01:04] Now, a question I always ask guests on POWER PLAYS is an icebreaker. And that is, what is a contrarian thought that you have about business or culture that others might disagree with you on?  

Richard Whitt [00:01:13] Well, on business, I guess I would say there are a fair number of people out in the world who believe that the optimal way of looking at a marketplace is as a free and open space where there's minimal regulation. Right. There's minimal government involvement. Laws and regulations largely don't apply. And you just sort of, in Adam Smith style, you know, let human nature take over, and supply and demand comes into balance based on mutual sharing and value exchange. And that's basically bunk as far as I'm concerned. Markets definitely do channel human nature, but human nature, as we know, has good and bad aspects. More importantly, there really is no such thing as an organic market, right? Every marketplace requires rules of the road. And when we talk about global markets, or even just the United States market, for example, they couldn't exist absent contract law, absent property law, right. So the recognition that one person's property is in their possession, their ownership, and can then be acquired by somebody else, that requires legal backing, framing, and enforcement. Without that, we're basically back to the days of carrying clubs around and taking what we want. So markets definitely need intervention from governments, even more so in the case of global market spaces, and I think a fairly healthy involvement is warranted. That's not a bad thing. There are a number of people who would disagree, who think government is always providing negative feedback into market spaces, and I just don't think that is right.

Ayden Férdeline [00:02:52] There are definitely market failures that government can address because when functioning properly, government is able to help make free markets possible, as you said, by enforcing contracts, resolving disputes. But I guess some would also say that there are government failures, too, in some parts of the world.  

Richard Whitt [00:03:13] I would totally agree with that. I'm not suggesting that government is always right either, or that it necessarily has some sort of monopoly on veracity and common sense. But what I think is overblown, oftentimes, when you talk to a lot of business people, is this notion that somehow government is intruding on their space and preventing them from doing what they want to do. I mean, for the most part, if they want to do something that is lawful, they have the freedom to do that, at least in the US.

Ayden Férdeline [00:03:44] I want to turn now to an article that you published in the first quarter of this year in the Colorado Technology Law Journal. A note for listeners: there's a link in the show notes, so you can download a copy of the paper. It's open access and you can follow along. You are a very prolific writer, Richard.

Richard Whitt [00:04:02] I think I should say thank you, I'm not sure.

Ayden Férdeline [00:04:08] It's certainly intended as a compliment. This is one of your more recent pieces, but you have a lot of writing out there. And in this article, you explained why we need a new web paradigm and how we can get there. And I think the need for this new paradigm is obvious. We've had a number of people on POWER PLAYS like Richard Hill and Dominique Lazanski talk about shortcomings in the Internet's architecture and in how the web is engineered. We've spoken about these topics as though they are irreconcilable issues because of technology or because of political interference or because of business models. Yet when I read your article, Richard, you introduce a concept that many of our listeners may be familiar with at a high level, but haven't had a handy term at their fingertips with which to label it. And that term, as you've coined it, is the SEAMS paradigm. What is it?

Richard Whitt [00:05:06] So first off, I have to tip my hat to Shoshana Zuboff, right, and "The Age of Surveillance Capitalism". She coined a terrific phrase. Right. And that's one that resonates with so many people. And it's a great book, I highly recommend it. But in some sense, what she focuses on actually, in some ways, underplays what's going on here. And so SEAMS is attempting to capture the four major functions in what I see as this feedback loop that has been generated over roughly the last two decades on the Web by companies, but also by governments. So this is not just, for example, a capitalism thing. This is something that governments get involved with as well. So SEAMS is an acronym. It stands for surveillance, the first part. So surveilling using the sensors and technologies in our environment, on our bodies, et cetera. The E is extraction. So that's the actual data that's being pulled from us. Oftentimes we're surrendering it freely, quote unquote, in exchange for those free cat videos and such online. But the extraction part, and I like that term because it sort of suggests this notion of data mining, right, with all of the industrial overtones of that. The third part is the analysis, and that's the A: the computational systems, the algorithms that are using the data essentially as the raw fodder to come up with insights about the surveilled individual. And the last part is M for manipulation, which to some people is a harsh term. But I think it's actually, in this case, quite apt, because it's not just about trying to take your information to know more about you. There are active attempts to influence your behavior, to modify the way you think, the way you act. And it's not just about trying to sell you jeans you want; they're trying to sell you jeans you may not want, but they want to convince you that it's something that you want. Or it's to change your vote in an upcoming national election. There are tons of ways that your agency and autonomy, as I talk about it in the paper, are being narrowed and channeled to serve their particular needs. And the capitalism side is about companies who want to make money. On the government side, it's about power and control over citizens or the people they want to exert that over. So the SEAMS paradigm, to me, captures the notion that there are these four functions, but it is a feedback loop. The manipulation itself constantly yields additional surveillance opportunities, which then yield the other aspects of the cycle. So I thought that was hopefully a useful way and a nice coining of a phrase to convey to people that this is an end-to-end sort of operation that has been increasingly finessed, made more sophisticated, and made much deeper in our lives over time.
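To make that loop concrete for readers who think in code, here is a minimal, purely illustrative Python sketch of the feedback dynamic Whitt describes. Every function and field name below is invented for illustration; nothing here is drawn from his paper. The point the code makes is the last one in his answer: each pass through surveillance, extraction, analysis, and manipulation generates fresh signals for the next pass.

```python
# Purely illustrative sketch of the SEAMS feedback loop; every name
# below is invented for illustration, not taken from Whitt's paper.

def surveil(person):
    # S: collect raw signals from sensors, browsers, devices, etc.
    return person["signals"]

def extract(signals):
    # E: pull the usable data points out of the raw signals.
    return [s for s in signals if s.get("usable")]

def analyze(data):
    # A: stand-in for the algorithms that turn data into insights.
    return {"insight_count": len(data)}

def manipulate(insights, person):
    # M: nudge behavior; the nudge itself creates fresh signals,
    # which is what closes the loop Whitt describes.
    person["signals"].append({"usable": True, "source": "nudge"})
    return person

person = {"signals": [{"usable": True, "source": "browsing"}]}
for _ in range(3):                       # each pass tightens the loop
    insights = analyze(extract(surveil(person)))
    person = manipulate(insights, person)
print(len(person["signals"]))            # 4: surveillance begets more surveillance
```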

Ayden Férdeline [00:07:54] There's another concept that you introduce in the paper, the HAACS paradigm. Basically, you've concluded that the Web needs a more humanistic or anthropocentric ethos to ensure that new technologies and ecosystems are able to empower human beings everywhere.  

Richard Whitt [00:08:12] So I chose that acronym in part because it sounds like hacking. You know, folks are out there hacking into networks. There are the black hat hackers and the white hat hackers. In this case, hopefully it's a white-hatted concept. And HAACS is basically the notion that so much of the focus has been on technologies and what they do to us. If you look at privacy or data protection laws, if you look at content regulation, even competition laws, it's about how do we hold the incumbents accountable and make them hurt us less. And while those are very important, necessary functions, it also feels like we're missing opportunities not just to prevent the bad from happening, but to actually use technologies to promote our best interests as human beings. So the HAA in the acronym stands for human autonomy and agency. Autonomy roughly correlates to freedom of thought, and agency to freedom of action. And I talk about it at some length, with some grounding both in philosophy and in modern human psychology: the notion that we always have limited agency and limited autonomy, but the technologies we use should not be further constricting those very important qualities of being a human being. In fact, they should help us channel them in ways that are hopefully to our betterment as people. And so with HAACS, you start with the human being first, as you say, human-centric. That's the starting point, not the technology. You build the technology around the human being. So once you have that notion of how do you create, channel, enhance, promote autonomy and agency, the CS is just basically the computational systems. So it's human autonomy and agency via computational systems. The systems become the tools, as opposed to the center point of the conversation. And so I think if you shift from the SEAMS paradigm, which is all about technology being used to control us for the power of the few over the many, you try to reverse that so that the human being starts out in the center, with the technologies then being used to promote their interests.

Ayden Férdeline [00:10:14] Something that you just mentioned, and which I really appreciated about your piece, was the pragmatism of everything. You see this as an opportunity to seize back human autonomy and agency and to make technology work for people everywhere. We don't need to throw our hands up and surrender. In your article, you wrote, and I'm going to quote from it here, "from the perspective of ordinary users, today's web is missing at least two crucial components. One is basic trust, the other is helpful support." Let's take these one at a time. On trust, I guess this is not limited to the Web. Institutions everywhere have declining levels of trust. What enables trust, in your evaluation?

Richard Whitt [00:10:57] It's a great question, and there are many experts out there, and books I read when I was researching this very point. People can have trust at a distance, so it's not impossible to take the human out of the loop, as it were, and trust an institution or an entity, something in a more far-off place. But most genuine trust is human to human. That's the way we've been wired. We are social creatures. We grew up in tribes and communities with relatively small numbers. And so we were used to daily interactions with people, and we built trust based on those interactions, based on expectations being set and either surpassed or failed. If they failed, then the distrust levels went up. If they were surpassed, then the trust levels would rise. So that's the way it comes to us at the human psychology level. And then you take that into the twenty-first century, in a world where we feel like, to some extent, this is my laptop, this is my browser, this is my online experience. But all of that is being mediated for us by somebody else. And they're doing it for their own motivations. And we oftentimes don't see those motivations, or see the ways that they're using, for example, the computer interfaces to try to control our behaviors. So I agree with you. Trust is missing across all institutions, declining in different ways and on different trajectories. But the Web has contributed to that. It's not the only reason, but the Web certainly has created this over time, and we can go into the details of why that happens from the technology standpoint. But the bottom line is the Web has turned us into passive users of its facilities, of its functionalities, and we are expected to have trust in a sense. We are expected to basically interact and give up our data in exchange for the things that, in the moment at least, we feel like we want to have. But without basic human trust, I feel like we really can't get very far. Then it is about how do we hold these folks accountable, again, for hurting us less, as opposed to how do we engender genuine trust, to the point that there's somebody I will now look to and say, I will give you sensitive information about myself, I will give confidences to you, I'll be open and transparent with you, because in the background a trust level has been created by something other than what we have in the Web today. And a large part of what I talk about with the HAACS paradigm is what that thing is. What is the missing element that people would glom onto as a real trust-building measure?

Ayden Férdeline [00:13:21] Let's take the other element of that quote. In addition to trust, you said that what users miss is helpful support. And you say in the paper, quoting again, the basic objective of support is to protect someone online: do not track me, do not hack me, and be responsible to me when something goes wrong. Actually, I said I was going to quote you. I just paraphrased you. That's not a direct quote. Anyway, you then go on to say that there needs to be some kind of third party, emerging out of the principles of fiduciary law, to ensure that these types of relationships can be built. Perhaps you could expand upon why we should turn to fiduciary law rather than property law, and what some of these third parties could potentially look like?

Richard Whitt [00:14:10] This goes back three-plus years now, when I first started the GLIAnet project, as I call it. And the idea was, where do you fill in these trust deficits, particularly in situations of power asymmetries, where somebody has much more expertise than you do, where they have maybe more control over a particular situation you find yourself in, and where they also benefit from your giving them sensitive information about yourself, confidences that you're volunteering to them because you're hoping it gives you a benefit, or that sometimes are being taken from you despite your best efforts. So you're in a situation where someone has power and you don't. And over and over again, and this is both in the common law of Europe and the U.K., but also in other cultures, which I really found fascinating, the answer was you create duties, you create obligations for these entities or these individuals that have this power. And in the common law of Europe and the U.K., it's fiduciary law. Tamar Frankel is a professor who has written books about this, and she basically says fiduciary law is the law of unequal relationships. I really like that. And so the notion is you use the duties, in this case duties of care and duties of loyalty to me as an individual, to try to minimize that imbalance or even to right that balance altogether. People would be most familiar with fiduciaries in the analog, so-called real world. Right. So doctors, physicians have fiduciary duties of loyalty to us. Attorneys have these duties. Certain kinds of financial advisers, not all of them, but some of them do. Even your local librarian. In the United States post-9/11, there were instances where the FBI was trying to get the library records of folks that they wanted to investigate. And the librarians said, no, we are essentially acting as a fiduciary on behalf of our library patrons. We don't reveal that information, because they came to us looking for knowledge, and in exchange for that, we have confidentiality, absolutely, around the records that we retain. So they act as a fiduciary. So the point is, in society we've already done this over a span of hundreds of years, in different professions and different parts of the workforce. Even parking lot attendants, when you give them your car, or the dry cleaners, when you hand over your coat, they have a bailment obligation, which is not quite fiduciary, but it's a similar thing. They watch over your goods and they make sure that they don't get harmed. We have nothing like that on the Web. We have no bailment law, no common law of any sort, and no fiduciaries in particular on the Web. And so the notion within my article, which has now become the basis of this new startup I've just begun, is to say we need digital fiduciaries. In the twenty-first century, the time is ripe. Our data is just as valuable to us, if not more so, or should be, as our health, as our financial situation; even our library records sitting down the street at the local library should be just as important, if not more so. And in fact, it encompasses all of that. So if that's the case, on the Web side we should have an entity that looks out for our interests just as much as they would in the analog world.

Ayden Férdeline [00:17:12] We can talk about your new venture later, but first, I'd like to discuss the work of the GLIA Foundation and the technologies of the GLIAnet ecosystem. How did the ideas here emerge? What led you to think that this was the best way to address some of these challenging issues that you saw going on on the Web?  

Richard Whitt [00:17:31] Yeah, so really, it was my experience the last few years at Google. I was on the public policy team. I could see first hand a lot of these issues that were cropping up for the company around privacy and data protection, cybersecurity, content moderation, concerns about competition. And I was getting increasingly concerned that it wasn't just a Google-specific thing; it was really that the industry in general, the large platform companies, the ecosystem of advertisers and marketers and data brokers, collectively, were looking at us basically as users, as I mentioned before, passive users, which meant they had zero obligation towards any of us. And it just felt like this can't be the way the Web ends up. And Sir Tim Berners-Lee, with his Solid project several years back now, himself sounded the alarm about the very thing that he created and his concerns about where it was leading. And I shared a lot of his concerns. So when I left Google, I started a fellowship at the Mozilla Foundation, and it was there that I developed these ideas around GLIAnet. Glia is the ancient Greek word for glue. And I chose it, one, because it kind of sounds snappy, but really it was because of this notion of, back to trust, right, there's this saying that trust is the social glue that binds us together as human beings, in relationships, as markets, as governments and citizens. And that glue, to me, was missing or had been eroding away over time. And so the GLIAnet project is about how do you bring that glue back, with this combination of governance, which is the fiduciary law principles for entities, and then certain kinds of edge technologies, like the Solid project with data pods that Sir Tim Berners-Lee has, but many other ones too, including some blockchain-related things, bringing the control and power back to the edge with the technologies. And these two together would be GLIAnet. So that's where that came from. And the GLIA Foundation is a 501(c)(3) here in the US, a nonprofit foundation established here, and GLIA actually stands for something a little different there. The GL stands for Global Laboratory, and then the IA stands for a number of things. The two I'll mention right now: one is Individual Agency, so we talked about that earlier, on the human side, the agency principle. And a second one is Institutional Accountability. There are some others, but those are the two main ones that I've focused on, at least in the tech sector. So the idea for the GLIA Foundation, and it's a small project right now, it doesn't have a lot of funding yet, is that hopefully over time it becomes a place to do the research and development around some of these concepts, and then from there, hopefully, to start to actually create these alternative platforms based on the HAACS paradigm and based on some of the other things I've been writing about: the need for a way to really change how we utilize the Web.

Ayden Férdeline [00:20:12] So you've been working on this idea of a GLIAnet ecosystem for three years now. You've been socializing the idea with an array of stakeholders. What have been some of the obstacles or barriers that you've found difficult to address, to make the GLIA ecosystem a reality? And if I can throw in a second question, what stops fiduciaries from emerging organically? Why isn't there already a market for services or data trusts that make these commitments to act in our best interests?

Richard Whitt [00:20:46] Yeah, that's a great question, and there are a number of reasons in my mind that we don't see them. For starters, it's a mindset shift. We're looking at the Web as it is. And for people within the current paradigm, even when they're talking about ways of empowering users, it's like, OK, let's help you monetize your data. Right. As if getting an eight-dollars-a-month check from Facebook will somehow offset all the other infirmities that are happening in the current Web ecosystem. So I think part of it is we have existed in this world for roughly two decades, as I said, and it's hard to break free of it. It's hard to look at other options or other ways of doing business, of serving people. And the fiduciary concept in particular is a big shift conceptually. We tend to think of it as reserved for those relatively select few situations, mostly in the professions, as I mentioned, around your health or your finances, or other places where you have these imbalances of power. And I think it's now dawning on people that these imbalances of power are very real on the Web. But the connection really hasn't happened yet in terms of, OK, so what's outside this accountability paradigm? As I say, holding them accountable is again necessary but insufficient. Where else do we go beyond that? So I think moving beyond accountability mode is also a challenge for people. But also, being a fiduciary means raising the bar. It's not a total profit-driven squeeze-every-last-bit-of-data-and-every-last-cent-out-of-the-user, quote unquote. To set yourself up with a duty of care, which basically means do no harm, and then a duty of loyalty, which is to not have conflicts of interest and to promote the best interests of your patron or your client or your customer, you're taking on board, in addition to the added expense of serving people in that way, a certain liability. There's a certain risk factor to the extent that you don't do it correctly. And so you're holding yourself out in a way that might attract attention and might attract pushback. People may not appreciate what you did. Maybe the services are not adequate. That also comes back to the question about support that you asked earlier. Customer support online often is terrible, because they don't invest in it, because, again, we are a user; we're not a customer or client. And so there's very minimal overhead they want to devote to that particular function. So being a fiduciary, you have the trust, which requires some investment, you've got the support level, and you are increasing your exposure. So my hypothesis through all of this is that, yes, that is the case. But if you actually get into that trust relationship with somebody, then the counter is, they're willing over time to open up more of themselves to you, much more potentially than anything you could surreptitiously take from them under the current Web paradigm, under the SEAMS paradigm. And so over the long term, being a fiduciary can be a hugely remunerative place to be. Right. You could actually make lots of money, lots of very good revenue, by serving people the right way. And we have many companies over history that have done that.
So why, again on the Web, should we not expect a higher bar, one that companies would be willing to accept, to take on that challenge, but then also take on, I think, the immense value that's waiting to be unlocked there?

Ayden Férdeline [00:24:07] Have there been any fiduciaries that you've seen emerge recently? In the crypto space, I was thinking back to a conversation I had with Mance Harmon here on POWER PLAYS; he's the chief executive of Hedera Hashgraph. And I'm not sure it's a fiduciary in the traditional sense, but it looks to me like Hedera is trying to become one. And MasterCard as well has been looking into creating, in Ireland, a data trust to safeguard transaction data. Have you noticed any changes? Is there more of a willingness in industry to explore voluntarily taking on additional responsibilities?

Richard Whitt [00:24:42] So I've seen a few blockchain-based companies going down this road of exploration. But as far as I'm aware, I'm not seeing an actual express adoption of a fiduciary governing model, for example, by any of these companies or entities. And some of them are the DAOs, right, the decentralized autonomous organizations, which I think, again, can be problematic if not governed appropriately. That can be sort of chaos, and then you have one or two strong personalities who end up taking over, as in the long political history of our species. MasterCard's in a really interesting place, because they already exist, they already deal with regulations, right. Their space is pretty heavily regulated, and they also have seen and been exposed to fiduciaries in the financial world. So there are people who actually take on that role voluntarily, right. It's not something that's been imposed on them; many of them do it voluntarily. So I would be thrilled if they actually, again, more expressly adopted a fiduciary-type model. But it's interesting, because people say, well, govern the data, and my article goes into some of this as well: data is a human concept, like a marketplace. Right. We've defined it that way. It happens to be the digitalization of our life experience, and in some ways, to call it data is selling it short, because then it seems like, oh, it's just this thing, you know, what I did online for the last month. That's my data. Well, yeah, that's part of your data. But potentially there's so much more of your life you could digitize and make available for for-profit purposes, and increasingly for nonprofit purposes. I think medical data is a great example of that. Right. If I knew I could trust somebody operating under certain fiduciary duties, I would share a lot of my medical data with them if they were using it for research that would end up finding the cure for cancer. That would be awesome. I think many people would do that. The problem is we don't see those institutions out there today. So we're at a loss. So I see entities exploring, but I haven't seen too many examples yet, which is why, when I announced at MozFest in March, I think I said that as far as I know, my startup is the world's first personal digital fiduciary. And I haven't been sued yet, so I feel pretty secure that that's in fact the case.

Ayden Férdeline [00:26:58] One more question before we talk about the new business you're launching. There's one part of the paper that I didn't fully understand, no doubt because of my own technological ignorance,

Richard Whitt [00:27:10] It's not just user error; it has to be a translation failure on my end.

Ayden Férdeline [00:27:12] And that's that GLIAnet relies on the edge-to-all principle, rather than the end-to-end principle that has supported a free and open Internet for decades now. I think Lawrence Lessig would argue that the primary characteristic of the Internet's architecture that has enabled innovation is the end-to-end principle. And I know many human rights and Internet freedom advocates who would not be very supportive of abandoning the end-to-end principle. So perhaps you can explain that part of the paper to me. How is it that edge-to-all differs from end-to-end? And again, pardon my ignorance here, are there any risks involved in this approach?

Richard Whitt [00:27:53] Yeah, so excellent question. The end-to-end principle was in fact baked into the Internet in the '70s, into the protocols, into the governance structures. That is the notion that basically the intelligence resides at the ends of the network, either end, with so-called dumb pipes in the middle, which is one of the reasons why the telcos hated it so much: it turned them into what they thought were dumb pipes that wouldn't get any value out of the bits flying over them. But if the intelligence resides at the ends of the network, that means the people on both ends are more or less in control. So what I suggested here, and I'm not a network engineer, but I think I know just enough to be quite dangerous: the edge-to-all is intended to be an overlay. So you still have IP and the way it's set up as end-to-end, the true original peer-to-peer network. Right. What happened was the Web came along in the nineties and introduced a client-server relationship, which eventually evolved into the cloud. And that changed things, right. You still had end-to-end, but now one end was ironically called the client and the other ironically called the server, when in fact those functions are more or less reversed. The cloud, which is the server side, has become the powerful point. Everything sits there: the content, the applications, the functionalities. That's all there. And we, the so-called clients, are tapping into that through what have become, even with the smartphone, relatively dumb devices compared to the power at the other end, the server side. And so what I'm trying to suggest with the edge-to-all notion is that we need an overlay that replaces or supplants, in some ways, the current Web paradigm. So in some ways, I want to go back to the Internet. I want to go back to end-to-end, except with an important correction, which is to say, rather than one end to another, the one end, which is those of us who are the end users, what often is called the edge of the network, the protocols and the functions and the technologies should, again, like the HAACS paradigm, be focused on us. We should really be in charge. That really is my browser; it's not your browser. This really is my phone, not your phone. This is my operating system. This is my personal data, part of my living room. Right. All of those are examples of what I call edge tech, which is bringing the technology to the edge of the network. And then from there, I have the power to go out to all, to everything else, and that's why it's called edge-to-all, or E to A. So to me it is an overlay, it's not replacing end-to-end, and it hopefully attempts to correct the power imbalances, again, that have happened, unfortunately, because of the Web over the last twenty-five years or so.
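To picture the shift being described, here is a hypothetical sketch in code, assuming nothing about GLIAnet's or Solid's actual APIs; every class, method, and field name below is invented. The contrast is simply where the data and the control sit: with the server in today's client-server Web, or on the person's own device in an edge-to-all overlay, with narrow, revocable grants going outward.

```python
# Hypothetical sketch (all names invented) contrasting today's
# client-server flow with an edge-to-all overlay as described above.

class CloudService:
    """Current paradigm: the cloud holds everything about us."""
    def __init__(self):
        self.user_data = {}                 # our data lives on their side

    def track(self, user_id, event):
        self.user_data.setdefault(user_id, []).append(event)

class EdgeAgent:
    """Edge-to-all: the person's own device is the point of control."""
    def __init__(self):
        self.data = {}                      # data stays local, at the edge
        self.grants = set()                 # who may see what, revocable

    def grant(self, service, field):
        self.grants.add((service, field))

    def revoke(self, service, field):
        self.grants.discard((service, field))

    def share(self, service, field):
        # Only a narrowly scoped view ever leaves the edge.
        if (service, field) in self.grants:
            return self.data.get(field)
        return None

cloud = CloudService()
cloud.track("user-123", "clicked ad")       # they keep it; we never see it

agent = EdgeAgent()
agent.data["city"] = "Berlin"
agent.grant("weather-app", "city")
print(agent.share("weather-app", "city"))   # "Berlin"
agent.revoke("weather-app", "city")
print(agent.share("weather-app", "city"))   # None: the grant is withdrawn
```

The design point the sketch tries to make is that nothing leaves the edge except what the person has expressly granted, and the grant can be withdrawn at any time.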

Ayden Férdeline [00:30:35] And you've recently launched Deeper Edge. My stream of consciousness says I have to ask if there's a connection between the name of your startup and edge-to-all.

Richard Whitt [00:30:45] No, just a sheer coincidence from a night of too much adult beverage and vibing. No, Deeper Edge is intended to say that we already live on the edge of the network, but rather than continue in this role of being the subservient user to this vastness of the cloud and the platforms in the SEAMS paradigm, we should make the edge deeper, richer, more in control. And so Deeper Edge is intended to combine, as I mentioned, the fiduciary duty, the human governance side, with the edge technologies, and then build services on top of that, so build applications and technologies as part of that, that then you provide to people. So you're trying to shift the way that they think about themselves, now as clients or customers or patrons of an entity that is serving their interests, and then you're trying to give them stuff on top of that, services that are premised, again, not on Web technology, but on this edge technology.

Ayden Férdeline [00:31:45] Got it. Thanks. So Deeper Edge. What is your plan there? Who's working with you on it? From what I understand, it's going to be a fiduciary, a data guardian, a mediator, an advocate for the best interests of the persons it is protecting. Is that a fair summary?  

Richard Whitt [00:32:01] Yeah, and the genesis of it really came about over the holidays, so late 2020, when I looked back over the last several years of work I'd done in terms of the writing, the research, the advocacy, lots of podcasts, lots of conference panels and the like. I felt very good about it. I feel like there's a strong corpus there of concepts to draw upon, and research that I think supports it. But I was frustrated that I wasn't seeing it in the world. You mentioned, where are the fiduciaries? And the answer is, there really aren't any. And so rather than continue talking about it, I thought, well, OK, I'm fortunate enough to live here in the heart of Silicon Valley. Why not just actually become one? Why not put on the entrepreneur's hat, as it were, and start a company that is premised on the very concepts that I've been talking about? So that's what Deeper Edge is intended to be. And then, I hasten to add, as my first blog post, which I put out a couple of months ago now, tried to explain: I don't currently have customers. I don't have a million people who I can now call customers. I have ideas for services, I have ideas for technologies; I don't really have those yet either. So in exchange for not having what people would think of as what a company should have, I'm calling it a proof-of-concept, or PoC, company. And part of the point is, you have to wrestle with these issues not just in writing white papers and law journal articles, but on a daily basis, like figuring out what the privacy policy is going to be for this company, right? Most people would say, just go find somebody else's privacy policy, make sure you don't take people's data, and you're done. Yeah, but isn't this an opportunity to rethink the whole idea of having a privacy policy? So one of the thoughts I've had, and I've been working with a few volunteers, friends, is: what if, for everybody who comes to the website and comes to the company, it's their privacy policy? Why don't they tell us what they want? And the technology does exist: I can be a hundred different things to a hundred different people simultaneously, based on what they convey to me. It's essentially their terms of service. And this is something Doc Searls, who's a friend and a longtime, I mean brilliant, thinker at the Berkman Center at Harvard University, has written about. Right. The intention economy, where the intentions of the people involved come into play; he's also behind Customer Commons. And I talk about this as an example of edge tech: edge pull and edge push. So rather than the cloud taking stuff from us and then pushing stuff to us, why not reverse that paradigm, so that we are pulling from the cloud what we want and we are pushing to the cloud what we want? We're basically reversing that whole paradigm. An example he talks about is this notion of terms of service. I can project my terms of service to every website I go to, and then I've got maybe a personal AI or some other bot that is basically helping me as I go to every website, indicating, you know what, this website doesn't measure up to what you want. Lots of red flags here. You don't want to go there. Or maybe you create a negotiation between you and that website, and maybe there's some sort of accommodation that could be arranged.
But the larger point is simply creating a different kind of company, where it's not about the passive user, it's about the active customer, and ways you could support them and build the trust levels. So I figured you really can't understand that or go into it in any depth without actually being it, without actually wrestling with it on a daily basis. And so that's part of the notion here: you spend the next, whatever time it'll take, three, six, nine, twelve months or more, to develop some of these policies, develop the services, develop the business model. Right. Is this ads-based or not? I mean, that's a huge question. I think the ads economy can be totally revamped in ways that make it much more user-friendly, but that's just a supposition. So if I actually have to wrestle with it, talk to advertisers, data brokers, figure out that alternative marketplace, that then conveys to me that there's empirical evidence that this can work, or maybe, more likely, not work, but then at least I've given it a go.
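As a purely illustrative sketch of that edge-push idea, and not a description of how Deeper Edge would actually implement it (the policy fields and the evaluate function below are invented), a person's projected terms could be matched against a site's declared policy, with a simple agent raising the red flags Whitt mentions:

```python
# Hypothetical sketch (all names invented) of "edge push": the person
# projects their own terms to each site, and a simple agent flags
# mismatches before they engage.

MY_TERMS = {
    "sells_data_to_brokers": False,    # I don't allow this
    "tracks_across_sites": False,      # or this
    "retention_days_max": 30,          # keep my data only briefly
}

def evaluate(site_policy):
    """Compare a site's declared policy against my projected terms;
    return a list of red flags."""
    flags = []
    if site_policy.get("sells_data_to_brokers"):
        flags.append("sells data to brokers")
    if site_policy.get("tracks_across_sites"):
        flags.append("tracks you across sites")
    if site_policy.get("retention_days", 0) > MY_TERMS["retention_days_max"]:
        flags.append("keeps data too long")
    return flags

site = {"sells_data_to_brokers": True, "retention_days": 365}
flags = evaluate(site)
if flags:
    print("Red flags here:", ", ".join(flags))   # you don't want to go there
else:
    print("This site measures up to your terms.")
```

The reversal is the point: the terms live with the person and are pushed outward to each site, rather than living with the site and being imposed inward.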

Ayden Férdeline [00:35:50] When we think about users as being customers, there is increasingly a recognition of the issues here. We see Apple, at least in ad tech, trying to help the market correct by providing alternative ways of distributing personalized advertisements that don't rely on such invasive data collection practices. But again, it's within this paradigm of people being customers, and not every person who is forced to interact with technology is a customer. I'm thinking here about low-wage workers. There's a piece in Vice today - we're recording this interview on June 21st, 2021 - that showed a new workplace surveillance tool by Live Eye Surveillance that is being deployed by some 7-Eleven franchises. It's a surveillance camera system that keeps constant watch over stores and lets a human operator in India remotely intervene when they see something deemed suspicious. This is part of a trend, of course: smart communities are being proposed, and cameras and sensors are going in everywhere. When we're customers, we can perhaps object, and GLIA or Deeper Edge can look out for our interests. But again, not everyone is a customer, and workplace surveillance is increasing in really creepy ways. So absent, perhaps, government regulation, how are we going to protect all people everywhere from disproportionately invasive surveillance technologies?

Richard Whitt [00:37:28] Yeah, so the notion of the digital fiduciary - the thing I'm trying to develop - it's fair to say, and I'm upfront about it, is a fairly Westernized, individualized concept, around one-on-one relationships between customers or clients and an entity. A company, let's say. There are other forms of fiduciary institutions being explored today; the one that I think is most popularly known is the data trust. And that's the idea of trustees and trustors and grantors. It's very similar to somebody running a trust for financial purposes, for example. And I'm aware of, I can't recall the name unfortunately, an entity that's setting up a union of Uber drivers in Europe. Part of what they want to do is fully protect themselves under GDPR. But it's also beyond the protect phase. It's, how do they promote their interests? How do they confront the surveillance that Uber wants to put into all the cars, right, to monitor the drivers? Similar to what's happening here in the States: there was some pushback when Amazon wanted to monitor their delivery vehicles, right, and the people driving them. This notion of workplace surveillance is essentially another side of surveillance capitalism, I suppose. Yeah, it may not be addressed by my approach necessarily, and maybe it's better dealt with by creating some sort of collectivized situation, where people can use the power of the collective, of the community, to push back. And these edge technologies are available. The one that I'm particularly keen to explore, and hopefully instantiate one day, is the personal AI. With these AI systems, what's really happening, using the sensors in the environment and on our phones, is that AI systems are doing a lot of the heavy lifting. They're doing a lot of the work. What if I had an AI system I owned, trained on me, that represented me, that was able to challenge, to push back, to oppose, to question the decisions coming out of these systems, even the fact of surveillance itself? Why not be able to use obfuscation technologies to fudge my face, for example, so that facial recognition cameras have a hard time glomming onto me? I mean, there are a number of ways you can use technology. Essentially, it's like the big army versus the little army. The little army has to take on different kinds of tactics. And so maybe you have to take on tactics to protect people, whether as individuals or as part of collectives, say employees of a large company in a retail store kind of thing. So anyway, my particular model may not address that, but there are others looking at these data trusts, data commons, data cooperatives, all around this notion of helping people manage their data flows, but then also challenging the authority structures that are trying to control them all the more with these technologies.

Ayden Férdeline [00:40:03] Thank you. And it would be a bit unrealistic to expect Deeper Edge to try to solve every issue in society that has been negatively impacted by technology. But your proof of concept is tackling one extremely important and prevalent use case.  

Richard Whitt [00:40:19] I'm also looking for funding, by the way, so I'm actively on the funding route. So folks who actually have some resources, please, I'm serious. This is a moment in time. This is a window, I think, to make real change. We're going to see Congress, I think, adopting comprehensive data protection legislation next year. The competition issues are on the rise. Lina Khan is going to be the chair of the Federal Trade Commission. She wrote a piece five years ago on Amazon and its challenges to the current competition framework on the Web. So change is in the air. And what I would love to see is that in addition to these accountability sort of mechanisms, we also think about mechanisms to give people more power and control over their online experience. And this is where the investor community can totally step up and not just talk about it, but put money in. It doesn't have to be into Deeper Edge; there are other small startups out there trying to do similar types of things that give people more agency and choice online.

Ayden Férdeline [00:41:11] Are you seeing VCs do this? Is this on their radar?

Richard Whitt [00:41:16] It is. That's one thing that's encouraging to me. Until recently, it just felt like unless I had a pitch deck that used the word "monetize" at least 17 times, I wasn't going to get anywhere. There's also been, interestingly, this shift. It used to be you'd build a business in Silicon Valley mostly on the premise that three or five years down the road, you're going to sell it to Google or Facebook or Amazon. Right. One, people are less sanguine about those companies and dealing with them. But you're also now seeing the regulatory overhang, where these companies are less likely to invest in potential future competitors, because that can be seen as an antitrust problem. So people are increasingly building different stuff. They're not just mimicking the current paradigm, and blockchain is part of that. There are a lot of crazy projects out there that I don't believe should be funded, and blockchain itself, I think, has yet to prove its utility beyond simply the cryptocurrency world, but I think it does have that utility, and if people are smart and build stuff on top of it, that can create all kinds of changes to the current power structures. So I think as much as there's change in the policy world, out here swirling in the Valley and elsewhere around the world, more and more folks are trying to figure out, OK, we don't like this paradigm either. How do we change it for the better?

Ayden Férdeline [00:42:30] That's really encouraging to hear. Just on the proof of concept that Deeper Edge is developing, how is that coming along and when can we experience its protection?

Richard Whitt [00:42:41] Yeah, this is my summer of code. I'm working with folks on wireframing, right, so you're basically creating all the different functions, the user interfaces, how it all fits together in the back end, and ultimately a prototype, or an MVP, a minimum viable product. When it happens, unfortunately, I can't really say, but I'm basically aiming for the fall to have something that I can demonstrate. I have the website, and I've got a pitch deck and some examples of the kinds of things I want to invest in, the types of applications and services I think will serve real needs, that people will want to use and enjoy using. And the bonus is it's coming from a fiduciary that is promoting their interests and not selling their stuff to data brokers online. So, yeah, sometime in the fall period, I'm hoping I'll have more tangible things to demonstrate.

Ayden Férdeline [00:43:32] I imagine it can't be easy. All of these impact assessments you must have to conduct in order to bake in privacy by design from the earliest stages.  

Richard Whitt [00:43:41] It's even agency by design. Privacy, again, is somebody blessing what somebody else is doing to your stuff. And I would love to again raise the bar: why not make it more about me as the individual, as the center point, building from the ground up, thinking about me always as the one that should have that control function? So agency by design is maybe the more apt phrase here.

Ayden Férdeline [00:44:06] Agency by design. Well, I look forward to seeing what Deeper Edge comes up with. Richard Whitt, thank you very much for your time today.

Richard Whitt [00:44:14] Thank you Ayden, I appreciate it. Thanks for having me on.

Ayden Férdeline [00:44:16] I'm Ayden Férdeline and this has been POWER PLAYS. Thanks for joining us today. Next time on POWER PLAYS, we're speaking with Nathan Schneider about co-operatives and how platforms can responsibly share power with their users.  

OUTRO [00:44:31] This has been POWER PLAYS. POWER PLAYS is a production of ETUNU. The guests on this program speak only for themselves and the views expressed do not necessarily align with those of ETUNU. Copyright 2021, ETUNU Corporation. All rights reserved.
