Video: The Intelligence Revolution and the New Attention Economy: An Ethical Singularity

February 19, 2020
Peter D. Hershock
Peter D. Hershock, director of the Asian Studies Development Program at the East-West Center in Honolulu, spoke at the CSWR on Feb. 13.

Considerable attention has been directed to the possibility of a technological singularity, when artificial intelligences “wake up” and start acting in their own self-interest. Long before then, however, humanity will confront an ethical singularity: a point at which the evaluation of value systems acquires infinite value.

Drawing on Buddhist resources, this talk makes the case that our prospects of realizing more humane global futures depend on changing how we are present and on developing both capacities for and commitments to compassionate ethical creativity.

Peter D. Hershock is director of the Asian Studies Development Program at the East-West Center in Honolulu. He has authored or edited more than a dozen books on Buddhism, most recently Philosophies of Place: An Intercultural Conversation (edited, 2019). His current project, initiated as a 2017-18 Fellow of the Berggruen Institute in China, is the monograph The Intelligence Revolution: The Challenges of Humane Presence in an Era of Artificial Agents and Smart Services—a reflection on the personal and societal impacts of the attention economy and artificial intelligence.


FULL TRANSCRIPT:

[MUSIC PLAYING]

Good evening, everyone. Welcome to the Center. My name is Charles Stang. I'm the director here. I know many of you, if not all of you, and it's my pleasure this evening to welcome Peter Hershock all the way from Honolulu, Hawaii to the Center. Professor Hershock is here as part of a new series we're hosting just this semester on, quote, ancient modern AI. The series is really the brainchild of Andre Yule here in front, who is a resident and research fellow here at the Center and a PhD candidate in the newly renamed Department of Art, Film, and Visual Studies here at Harvard-- formerly Visual and Environmental Studies. So Andre will introduce Professor Hershock in just a few moments and will say a bit more about the series of which this is a part.

But let me say, if you're interested in the series, please note that the other guest lecture will take place on Thursday, March 5th. Adrienne Mayor from Stanford University will be speaking on Gods and Robots: Myths, Machines, and Ancient Dreams of Technology, and both guest lecturers will be leading seminars on the day after their public lecture. So tomorrow, Professor Hershock will be leading a seminar in our conference room out front from 12:00 to 2:00. Again, Andre will say more about that. So please join me in welcoming to the podium Andre Yule, and thank you again for coming out.

[APPLAUSE]

Good evening, everyone. It's really wonderful to see so many of you here showing interest in this new topic emerging here at the Center for the Study of World Religions. As all of us probably know, artificial intelligence has become a key theme across many different disciplines. It has become a concern of public discourse, owing to recent breakthroughs in the scientific community around artificial intelligence, but it has also generated a whole range of discourses regarding the ethical implications.

So we have seen the emergence of several research initiatives, both here at Harvard and MIT and at other academic hubs around the world, but also ethics boards that cross-cut academia and industry. And these initiatives are trying to make sense of how we actually govern an emerging technology that has such far-reaching implications for how we, as a human species, are evolving.

And so these discourses are actually evolving into a much more nuanced conversation, compared to the more polemic conversations that came before-- about the singularity, about robots taking over the world, or about humans upgrading themselves into godlike beings and dominating the entire world. Instead, we have seen more concern with human rights: questions of fairness, accountability, and transparency have become keywords in these discourses, and so we have a new trend toward human-centered AI. The question, though, is what that actually means, and who gets to have a seat at the table in all these conversations.

So as we see a discernible push for a global ethics agenda in the governance of artificial intelligence, the real question-- and this is something that Peter Hershock articulates so well-- is whether we are also facing an ethical singularity well before we reach a technological singularity. That is, who actually gets to decide what kinds of ethical standards and frameworks we are considering in this discourse?

And so part of our conversation tonight, and in this working group, is really to broaden our horizon in terms of what kinds of ethical frameworks can come into play in the making of AI governance principles. Last fall, I had the pleasure of participating in a symposium that Peter Hershock organized at the East-West Center in Hawaii on humane artificial intelligence. And we had an interesting crowd of different people-- experts from the industry, but also experts in cultural studies and religious studies-- who had a very interesting discussion about what the current conversations are, and how they also limit our understandings.

We had one interesting presentation, for example, on perspectives and approaches current in East Asia that are usually not well recognized here in the United States or in Europe. In Korea, for example, people are more attentive to the social hierarchies and ranks into which artificial intelligence should be integrated; in China, there is the idea that human and artificial intelligence should form a new kind of harmony; and in Japan, there is much more of a partnership approach, where people are concerned with how artificial intelligence and robotics can be integrated into society.

So these are different nuances to the discourse that need to be considered in order to have a truly holistic and global conversation around the ethics and governance of AI. And what they also show, especially to scholars who are familiar with different cultural contexts, is that any definition of responsible innovation is always embedded in the cultural norms and values with which we make sense of the world. And so thanks to Professor Stang's leadership, we have found a place here at the Center for the Study of World Religions to explore these questions.

So we have started an interfaith working group looking at the ethics of artificial intelligence through different religious and cultural lenses-- which means asking, if you look at it from a Buddhist, Christian, or indigenous perspective, are there different ways of making sense of integrating AI into our societies and political structures? And if so, how do we reconcile the different gods of our cultures? These are very interesting questions that I'm hoping Peter Hershock's talk will also illuminate today.

So now I'm going to introduce our guest speaker for today. Peter Hershock is director of the Asian Studies Development Program at the East-West Center in Hawaii. He is the author or editor of more than a dozen books on Buddhism, including his recent publication, Philosophies of Place: An Intercultural Conversation, published last year. His current project on artificial intelligence was initiated during his fellowship at the Berggruen Institute in China. It's going to be a monograph entitled The Intelligence Revolution: The Challenges of Humane Presence in an Era of Artificial Agents and Smart Services. Tonight, Peter Hershock is going to draw on Buddhist resources to make the case that our prospects of realizing more humane global futures depend on changing how we are present and developing both capacities for and commitments to compassionate, ethical creativity. So please join me in welcoming our distinguished guest speaker-- thank you for joining us today-- Peter Hershock.

[APPLAUSE]

Thank you, Andre, for the wonderful introduction, and to Professor Stang for the opportunity to come here from Honolulu. Hopefully, the jet lag won't be too bad and I'll be able to make it through without stumbling too many times over my own tongue. That sometimes does happen with jet lag. Apropos of this idea of ancient modern AI, I want to cast our imaginations back 500 million years. 500 million years ago, over a very brief geological period-- a geological eye blink of just 10 to 20 million years-- every modern phylum of animals on the planet emerged in an evolutionary cataclysm called the Cambrian Explosion. It was really an astonishing transformation that took place. We are now going through what I think can realistically be deemed a second Cambrian Explosion, but this new Cambrian event is not an evolutionary cataclysm in the organic realm. It's taking place in the realm of the synthetic.

So rather than a purely organic, genetically based evolutionary event, what we're facing is an event in which human, carbon-based intelligences that are broad but slow, and that are intention generating, are being merged with intention implementing, silicon-based intelligences that are super fast but very narrow.

And what we've been seeing, especially over the last five to eight years, for technological reasons that we can get into, is an explosion in the evolution of these new forms of agency-- synthetic agencies. We can't really quite call them agents because they don't reside in bodies the way we typically think agents do, but they are doing the work of implementing human intentions. And in the same way that the animals that emerged 500 million years ago in the Cambrian Era transformed the biosphere genetically, these new intelligences are computationally transforming the anthrosphere. Not the atmosphere-- the anthrosphere. They're changing the way in which human presence is occurring and the dynamics of our own sociality. This is not going to be a small set of changes that we're facing. This is really quite the event.

Now, the organic evolution that took place 500 million years ago did so with animals evolving out of single-celled organisms. We went from a world that was just a fertile soup of single-celled organisms to a world of multi-celled animals with their own intelligences, operating in environments with new forms of sensing that allowed them to get into and create genuinely new worlds of sentience-- of felt engagement with the world around them. This was a monumental event, and the kind of organic connectivity that took place when single-celled organisms came together to create a single genetic system for perpetuating new life forms is the same type of thing we're going through today. But these are not, as I said, organic phenomena playing out. It's the merger of human and non-human intelligences that is taking place, and these synthetic agents, just like the animal agents that transformed the biosphere 500 million years ago, are in the process of transforming the anthrosphere today.

Now, many people who look to the far scientific horizon have raised worries about this process-- about what's called the technological singularity. The technological singularity is the proposed moment when this evolution of machine intelligences and their mergers with human intelligences will reach a point at which artificial intelligences acquire superintelligence. They go beyond what humans are capable of, and the worry with that is obvious. What if these artificial agents start acting in their own self-interest, rather than in human self-interest?

And so people like Nick Bostrom have written about this at quite considerable length and detail, trying to figure out: what would a superintelligence be like? How would such a superintelligence behave? And how important would human beings be to such an intelligence? Important questions to ask. Very prudent questions to investigate, and I think we ought to continue having that kind of discussion, but the best estimates are that's maybe 50 years out, 200 years out, 1,000 years out. Nobody knows at this point with any kind of certainty when we will develop artificial general intelligence.

Long before then, we have other things to worry about. The artificial superintelligence thing-- yes, it's prudent. We should worry about that. We should put some steps in position now because if it happens, it'll happen extraordinarily quickly, for a lot of reasons that we can talk about. It will not be a slow event. But right now, it looks like it's pretty far off in the future. Prudently, around the world, as a precautionary cornerstone, everybody is talking about what was mentioned earlier-- designing artificially intelligent systems that are human centered-- that is, aligned with human values. And I think that that's an interesting thing. It's natural that we would want to do that. But I think it's what we could call misplaced cogency. It's a really good response to the wrong concern.

We shouldn't really be concerned about the artificial superintelligence, which may or may not ever come about. What we need to be more immediately and profoundly worried about is the current transformation that's going on as human and machine intelligences are being merged into these new forms of synthetic intelligence that are aligned with current human values. Current human values. And let me illustrate by analogy why that should concern us. We know that with the fossil fuel powered technologies of the second and third Industrial Revolutions-- the machine revolution-- it took us 200 years or more to figure out that if we continued burning fossil fuels, we would eventually change the dynamics of the global atmosphere-- of the global climate-- and perhaps do so irreversibly.

Now, we've known that for 50 years or more. We've known it quite well for 50 years or more, and have we succeeded in stopping global climate change? No. Have we established really good protocols for how we go about doing that and put them into action, with globally agreed upon, really hardcore commitments at the governmental and non-governmental levels? No. It's taken 50 years, and we have failed to do that. We have not failed for lack of scientific knowledge. We have not failed for lack of technological expertise. We have failed because at the core of climate change-- and the reason why humanity seems so ill-equipped to resolve this predicament-- is a conflict of values. There are environmental values, economic values, political values, and cultural values, and it's the conflict among these values that is keeping us from getting global commitment and action on climate change.

The same thing is going to happen with the new revolution-- what I call the intelligence revolution. The intelligence revolution is also making us confront a conflict of values-- a nexus of conflicting human values. It's not about the values of the machine versus the human. These are human values that are in conflict with one another. Climate change and the intelligence revolution are alike in this: they are not technical problems. They are ethical predicaments, and unless we are able to resolve the conflicts of human values at the core of the intelligence revolution, no technological fix will be enough. No technological fix will be enough.

The thing is that this process isn't going to play out over 200 years. We don't have 200 years of allowing the attention fueled fourth Industrial Revolution, as it's being touted, to play out. This is going to take place in perhaps 10 or 20 years. We don't have a big window of opportunity. Long before there are artificial agents with superintelligence able to assert their own interests against humans, machine intelligences-- algorithmic systems-- are taking human intelligence-- human intentions-- and scaling them up at the kinds of speed at which only electronic, silicon-based intelligences can work.

These new forms of synthetic intelligence are intention implementing at this point, but they are capable of innovating. We now have a world of innovating machines, and that is something entirely new, something we have no precedent for. These are machines that we built and that are now innovating on their own, acting creatively. And innovating machines and the computational factories of this fourth Industrial Revolution, I suggest, are going to force our confrontation with an ethical singularity long before any technological singularity. In mathematical terms, a singularity is a point at which a function assumes infinite value. So when I talk about an ethical singularity, I'm talking about a point in time-- a historical juncture-- at which the evaluation of value systems assumes infinite importance.
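To make that mathematical sense of the term concrete, here is the simplest textbook example (an illustration of the standard definition; the talk itself gives only the verbal version):

$$f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty$$

The function is well behaved everywhere except at the single point $x = 0$, where it blows up to infinite value. The ethical singularity names the analogous point in time at which the value of evaluating our value systems blows up in importance.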

While ethics can be described in a lot of different ways, I think one very basic way of looking at it is as the application of collective human intelligence-- not to figuring out how better to achieve our existing aims and interests, but rather to assessing and evaluating our different aims and interests, and our means of arriving at them. In short, ethics is human course correction. It's the art of human course correction.

What I want to do is draw on some Buddhist resources to suggest that one way of looking at the new digital economy, and the technological infrastructure it's based on, is that it's serving as a karmic accelerator. It's taking our human karma and accelerating it, in the same way that the machines of the second and third Industrial Revolutions accelerated the application of our intentions to the material world and our transformations thereof. These new technologies-- these new synthetic intelligences-- are going to take our human intelligence, our values, and the intentions with which we apply our intelligence, and they're going to apply them at machine scale. They're going to do that with a rapidity that will be utterly astonishing. There can be really great things that come out of that. There can be some really frightening things that come out of that.

So what I'm going to suggest is that this new system-- this new infrastructure fed by synthetic intelligences that are nurtured on human data-- and we have to keep in mind that these new synthetic intelligences are not generating their own data. This is human data that they are taking-- attention-carried human data, that is, the traces of our own human intelligence-- and they're making use of that in order to shape human beings in return.

So what I'm referring to is a recursively amplifying, wish-fulfilling system that is designed to exploit human attention energy in order to realize unprecedented degrees of predictive certainty and behavioral control. It's a new system for taking desires and translating them into responses to us individually-- individually targeting our values, our interests, and our desires, and feeding back to us, as individual consumers and citizens, precisely what we have asked for and wanted. And that's why we should be worried.
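For those who want the mechanism spelled out, here is a minimal sketch of that recursive amplification-- a toy model, not any actual platform's code, with the topics and weights invented purely for illustration:

```python
# Toy model of a recursively amplifying recommender (illustrative only):
# whatever a user engages with is treated as confirmation of interest,
# which increases how often it is shown, which produces more engagement.
import random

random.seed(0)
interests = {"music": 1.0, "news": 1.0, "sports": 1.0}  # start out uniform

for _ in range(1000):
    # Show content in proportion to the currently inferred interest weights...
    shown = random.choices(list(interests), weights=list(interests.values()))[0]
    # ...and reinforce whichever interest just received attention.
    interests[shown] += 0.1

print(interests)
# Rich-get-richer: one topic typically ends up dominating even though all
# three started equal. The system faithfully feeds back what we "asked for."
```

The point is not the particular numbers; it is that positive feedback on expressed desire is structurally self-narrowing.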

I have a 17, 18-year-old son, and I've got a 40-year-old son, and I've seen an inter-generational shift just in the 22-year period between the two boys. And I can tell you that we are at a point of generational transition whose depth we've yet to really realize, and we don't have a big window of opportunity.

So the ethical singularity ahead, I want to suggest, is the point at which we will arrive at a kind of karmic juncture where we will have no more possibility of changing our own human karma than light has of escaping a black hole-- a cosmological singularity. We will get to the point where the machine system that we have built is so effective at feeding back to us our desires-- our values as we are now, as human beings-- that we will no longer be able to find our way out of that karmic impasse. And that's the real ethical singularity that we face.

Now, I think that history makes a lot of sense, and we need to look back a little bit to explain where we're coming from and where we're going. But so does the American phrase: if you really want to know what's going on, follow the money. And so I want to start by reflecting on the attention economy-- the monetization of attention that has been occurring over, let's say, the last 40 years or so, but that really took off only in the last 10 years or so.

So the argument could be made that the attraction and exploitation of human attention is something that you find in every human society. I don't dress like this in Hawaii. I dress like this for you guys. This is a show that, even if you come from Hawaii, you can put on a sport jacket and look professional. This is an attention mechanism, right? And we're all doing that all the time. It's the hairstyle, it's the makeup, it's the regalia that kings and queens have. You go to a religious ceremony. What do they have? They have incense, they have banners, they have musical instruments. It's attention grabbing.

Attracting human attention is something that has been going on in human societies as long as there have been human societies, but something started to happen with the print and broadcast media that developed in the 18th and 19th centuries, and that really took off with electronic media at the end of the 19th and early 20th centuries. And what we found with these new print and broadcast technologies was that it was possible to harvest human attention and convert it to revenue.

So Tim Wu has written about this in a couple of his recent books. The most recent one, I think, is called The Attention Merchants. We've got systems of advertising that capture human attention and convert it into revenue: they take this mass-attracted attention from consumers, put in front of them products and services that they will then buy, and then sell that service of attracting attention to corporations-- that's the moneymaking part of it. So the conditions were all set by the early 1970s for the emergence of a global attention economy. We had mass media, we had global communications systems, and in 1971, Herbert Simon, the political economist, gave us the logic of an attention economy. He laid it out quite clearly. He said: a wealth of information creates a poverty of attention and a need to allocate that attention among the overabundance of information sources that would consume it.

So basically, he's saying attention is a limited resource. There's only so much of it, and when you have a world in which attracting attention and monetizing it-- turning it into revenue-- becomes a primary driver of an economy, then attention scarcity becomes an issue and you get competition among informational sources for that attention. Now, Simon was writing in 1971, and at that time we're talking pre-internet-- the nascent idea of the internet had been developed, but there was no real internet and there was no smartphone. And what the smartphone and the internet did was take this phenomenon of a network society, which Manuel Castells wrote about so brilliantly in the 1990s, and hyper-charge that development. And what's distinctive is a structural difference between hierarchies-- the old model of human organization-- and networks.
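Simon's scarcity logic is easy to make concrete. A minimal sketch, assuming a fixed daily attention budget (the 16-hour figure is just an illustrative stand-in):

```python
# Attention is the fixed resource; information sources are the claims on it.
# As sources multiply, attention available per source collapses, and the
# competition for each remaining second intensifies.
ATTENTION_BUDGET_HOURS = 16  # assumed waking hours per day, held constant

for n_sources in (10, 1_000, 100_000):
    seconds_each = ATTENTION_BUDGET_HOURS * 3600 / n_sources
    print(f"{n_sources:>7} sources -> {seconds_each:9.2f} seconds of attention each per day")
# A wealth of information creates a poverty of attention: the budget never
# grows, only the number of claims on it does.
```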

In a hierarchy, how important is where you belong in the hierarchy? Harvard's a hierarchy. How close are you to the provost? You have provosts here, or presidents? Provosts. Both? OK, either one! How close you are to the top-- to the department chair, or the dean-- makes a big difference in your career. If you're way down on the totem pole-- a graduate student, somebody who's here as a visiting fellow-- you don't count for a whole lot in that hierarchy. How close you are to the top of the hierarchy is what makes the difference.

Now, interestingly, that's not true of networks. In a network, the value of membership is a function of how many nodes there are in the network-- quantity-- the quality of informational exchanges taking place through the network, and the facility with which positive feedback through the network amplifies interaction and differentiation within it. And what that means is that the value of membership in a network, unlike in a hierarchy, grows as the network gets larger. So 24/7 connectivity-- the internet, Wi-Fi, smartphones-- supercharged the feedback loops within an already existing network system, the network society that Castells talked about, and created the conditions under which a very small number of corporations-- connectivity providers, internet providers, and platform providers-- could get lock-in on consumer attention: lock-in on people's attention for their social media, for their consumption, for their shopping, et cetera.
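One common way to formalize that structural difference-- the talk gives no formula, so take this as a sketch under Metcalfe-style assumptions-- is that hierarchy value depends on your rank, while network value grows with the number of possible pairwise connections, roughly n(n-1)/2:

```python
# Stylized contrast (illustrative assumptions, not a measured model):
# in a hierarchy, membership value depends on rank, not organization size;
# in a network, it grows with the number of possible pairwise links.

def hierarchy_value(rank: int) -> float:
    """Membership value falls off with distance from the top."""
    return 1.0 / rank  # rank 1 = provost; rank 10 = visiting fellow

def network_value(n_nodes: int) -> float:
    """Metcalfe-style value: the number of possible pairwise connections."""
    return n_nodes * (n_nodes - 1) / 2

for n in (10, 100, 1_000, 10_000):
    # A rank-10 member's hierarchy value never changes as the org grows;
    # a network member's value explodes with every node added.
    print(f"size {n:>6}: hierarchy member {hierarchy_value(10):.2f}, "
          f"network member {network_value(n):>12,.0f}")
```

Doubling a network roughly quadruples its value, which is why early lock-in on attention is so decisive for platform providers.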

That's been playing out now over the last 12 or 13 years-- the iPhone was released in 2007. So we've been at it for well over a decade. It's quite a mature phenomenon now, and so is the concentration of wealth that is possible-- the narrowing of competitors within this system-- because as the network grows, you get the income to buy out competition. Every time a newcomer comes on the block, you buy out the competition. Facebook, in 2012, bought a startup-- Instagram-- for $1 billion. It takes a lot of extra revenue-- cash-- to come up with $1 billion to buy out a company that was competing with them on picture sharing-- just a photo sharing application. Instagram had 13 people working for it. $1 billion split among 13 people-- everyone involved became enormously wealthy overnight. That's quite an investment.

So what we have is a world in which Facebook and Google get 73% of all digital ad revenue in the United States. You look at total web traffic-- visits to websites on the internet. The worldwide web has well over a billion websites, but only four companies get one third of all web traffic. Four companies, all in the US: Apple, Google, Microsoft, Facebook. A small number of companies.

If you look at things globally and look at market capitalization-- a measure of investment interest-- the top seven corporations in the world by market cap are no longer mining companies. They're not petrochemical companies. They're not financial companies. They're not companies that do anything except digital connectivity and AI. We're talking Apple, Alphabet-- the parent company of Google-- Amazon, Facebook, Microsoft, and the Chinese companies Tencent and Alibaba. We are talking about a revolutionary shift of capital into these corporations.

Now, the old attention economy-- let's call that the attention economy 1.0-- operated on the basis of mass advertising of mass produced and mass delivered goods and services. The attention economy 2.0 has been enabled by the internet, the smartphone, and new developments in algorithmic intelligence-- the machine learning algorithms behind the search engines and recommendation engines that everybody's making use of, the GPS, the guidance-- everything that is part of our day to day lives now. And if you look behind that, the attention economy 2.0 is not doing mass advertising. It is doing highly targeted, individualized advertising. It is now at the point where it's advertising down to the individual person.

I will tell you a little anecdotal story. A friend of mine traveling in Europe left her wedding ring-- terrible thing to do-- in the bathroom at the hotel, and was devastated when, 100 miles away, driving to the next city, she looked for her ring and realized she didn't have it. She talked it over with her husband, my friend, in the car as they were going. They called the hotel to have them look for the ring, et cetera. They checked into the new hotel and went online to look around and see what was going on. Pop-up ads about wedding bands. Pop-up ads about wedding bands, because the GPS system is tied into and providing the data that's being garnered by listening to conversations in the car-- a rental car-- and providing that to corporations, which are then allowed to target individuals with that information. It's happening for commercial purposes. It's happening for political purposes.

So if we look at the new global attention economy, what's really different about it is that with 24-hour connectivity, contrary to what we thought about the attention market being a space of open competition, in the attention economy 2.0 there is no open competition. It is a system that, because of its network structure, is biased toward monopoly building. Is monopoly building bad? If you are a Facebook user, you're going to say: great service, I love it. If you're using your GPS to get around Cambridge because you're not familiar with the area, it's absolutely convenient. You can't deny that there are benefits from monopolistic practices in this new informational attention economy. There are huge advantages for consumers. But if we look under the hood, there are reasons for us to be worried.

And that has to do with the fact that this new infrastructure now in place-- the internet, the Wi-Fi connected infrastructure-- is not just providing us with information that we're looking for. It is extracting, through our attention-- by grabbing our eyeballs and our ears, our time at the laptop, our time on the phone. Every instant spent on that is human attention energy, and human attention is like radio waves: it carries content. Our content. Our behavior on the web is being recorded. Those are traces of our own human intelligence that are then being made use of.

So the internet today-- the Wi-Fi system today-- this new infrastructure is functioning in a dual purpose mode, and we are users doing double duty. On the one hand, we are consumers who are being individually targeted with digital and material goods and services-- recommendations responding to our digitally expressed values, interests, and desires. The systems will only get better than they are today. They are getting better every day because they are innovating systems. They are figuring out how better to get our attention, and how to get us to express more of our desires online. And if you wonder how good these systems are getting: an algorithmic system today can take a person's social media feed and determine, from a photograph of the person paired with their data, with 97% accuracy, whether they are straight or gay. Humans are only at about 82%. It's pretty astonishing-- that level of incisive understanding of human desire, and of the content of that desire.

So as consumers, that's one side of it. But we're also the producers. We are the producers of the training data that machine learning algorithms are using to get better at making recommendations for us, answering our search inquiries, putting us into contact with things that we wouldn't have dreamed of ourselves but that are aligned with our expressed values and desires. This is an entirely different system than we had before. This is a system in which we have artificial agents serving as proxies for human beings who want from us as much of our attention as possible, to turn it into real-world revenue-- real-world cash, real-world power.

And so with this system, these machine learning intelligences are transforming the world in such a way that data production is moving so quickly, we really can't fathom how fast the changes will come about. The amount of data that we're generating now-- it's a lot. I could give you a number; maybe this will make sense to you. How many of you remember DVDs? OK. DVDs for film-- you put a whole film on a disk. To put all the data that we're producing every day onto disks, you'd have to have a stack of disks that goes from the planet Earth to the moon and back again. And in five years, it's predicted that, through the internet of things and social media, humanity will be generating the equivalent of 10 hours of high definition television for every human being on the planet, every day. That's a lot of data. It's a lot of data about each and every one of us.
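A rough sanity check of the DVD image, using commonly cited ballpark figures (the 2.5-exabytes-per-day estimate circulated widely in the 2010s; treat every input here as an assumption):

```python
# Back-of-the-envelope check of the "stack of DVDs" image.
# All inputs are ballpark assumptions, not measurements.
daily_data_bytes = 2.5e18    # ~2.5 exabytes/day, a widely cited 2010s estimate
dvd_capacity_bytes = 4.7e9   # single-layer DVD
dvd_thickness_mm = 1.2
earth_moon_km = 384_400

n_dvds = daily_data_bytes / dvd_capacity_bytes
stack_km = n_dvds * dvd_thickness_mm / 1e6  # mm -> km

print(f"{n_dvds:.1e} DVDs per day -> a stack {stack_km:,.0f} km tall "
      f"({stack_km / earth_moon_km:.1f}x the Earth-moon distance)")
# ~5.3e8 DVDs -> roughly 640,000 km: to the moon and most of the way back,
# so the image is at least the right order of magnitude.
```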

What's going to happen as a result? Well, at the cutting evolutionary edge of this new Cambrian explosion, what we see are virtual agents. Those of you who use Apple phones are familiar with Siri. Those of you who use home systems might have Alexa, or Google's Home. There are a lot of these so-called personal assistants. They are now primarily search engines. But there are also "do" agents. Has anybody heard of Viv? V-I-V. The scientific team behind the discoveries that led to Siri-- which was bought out by Apple and, for a varied set of corporate reasons, dumbed down-- waited out their non-competition clause, went into business for themselves, and created Viv.

Viv is a voice activated system by means of which you can talk in the vernacular. I can say: Viv, order a dozen roses for my mother. It's her 70th birthday, and I'd like to have them sent. And make a reservation at that restaurant she likes, the one we go to all the time-- make it for 7 o'clock. And I don't want her driving, so send a car by 4:00 and make sure she's at the restaurant by 7:00. Viv will execute those instructions, contacting the florist, contacting the restaurant. How does it know which restaurant? From my emails, my past communications with my mother, her Facebook posts. It analyzes the background data and carries out my human intentions. That's the cutting edge of these evolutionary systems.
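Schematically, a "do" agent of this kind decomposes a vernacular request into intents, fills in missing details from background data, and executes. A minimal sketch-- hypothetical code, not Viv's actual architecture, with every name and helper invented for illustration:

```python
# Hypothetical "do agent" pipeline: parse -> resolve -> execute.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str
    details: dict

def resolve_restaurant(profile: dict) -> str:
    # Stand-in for mining emails, messages, and posts; here, a simple lookup.
    return profile["favorite_restaurant"]

def plan(profile: dict) -> list[Intent]:
    # A fixed plan standing in for natural-language intent parsing.
    return [
        Intent("order_flowers", {"item": "a dozen roses", "occasion": "70th birthday"}),
        Intent("book_table", {"venue": resolve_restaurant(profile), "time": "19:00"}),
        Intent("send_car", {"pickup": "16:00", "arrive_by": "19:00"}),
    ]

for intent in plan({"favorite_restaurant": "her usual restaurant"}):
    print(f"executing {intent.action}: {intent.details}")  # would call out to services
```

The hard parts, of course, are the two steps this sketch fakes: parsing open-ended speech into intents, and inferring the missing details from a person's accumulated data exhaust.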

We also have machine learning algorithms that can take scientific data-- 10-year-old data on enzyme research for treating cancer-- and be asked-- this is IBM's Watson-- to predict what new discoveries will be made using enzymes to treat cancer. Watson worked on this very, very hard for one afternoon. One afternoon. It read over 70,000 medical papers in one afternoon and predicted seven of the nine major discoveries that took place over the succeeding 10 years in enzyme research for cancer. These new synthetic intelligences are going to transform business, law, medicine, education.

They're doing it in a way that we really can't fathom at this point. But the Obama administration said: let's look forward. Let's convene a bunch of people to ask, what are the effects of artificial intelligence on the economy? And working with scholars from around the world, they came up with a number: by the year 2045, 47% of the core tasks of all job activities-- all job activities, not just factory work and machine-assembly-type tasks, but the work of doctors, lawyers, professors-- will be able to be carried out by artificial intelligence. And that's a real worry.

That should concern us, but even more concerning is what happens as we give in to making use of these smart services-- smart services for things like remembering. My young son is a real outlier in his generation. He remembers a few phone numbers-- I mean, has them in his head and can dial them. Most people do not. He can actually tell you his address and how to get there. Most people can't do that today. They'll say: I just use the GPS and find my way.

We're outsourcing not just memory. We're outsourcing research. Those of you who are teachers know what students think research is today. It means go online, find some stuff, put it in different words, and present that as research. This is not research in the way that it used to be. Activities like remembering and research-- and we can extend that to things like parenting and governing-- are intelligent human practices that take effort. It takes effort to remember. It takes effort to do research, effort to be a decent parent, effort to govern. And what we are doing is risking the de-skilling of humanity by outsourcing to artificial or synthetic intelligences what previously had been the domain of intelligent human practices. We are risking the de-skilling of humanity-- or you could think of it as the atrophy of our intelligence. In the same way that muscles atrophy if we don't use them, human intelligence can atrophy.

Now, I've been giving kind of a corporate side to this and saying, that's a worry. There are concentrations of wealth, and those should worry us. There are abuses of what Shoshana Zuboff calls surveillance capitalism. Those should concern us. But this surveillance capitalism-- or smart capitalism-- can only operate with state sanction, and so what we see is that there's actually an agreement between the attention economy-- the people promoting it and benefiting most from it-- and the states: governments around the world who are interested in the surveillance capabilities. Now, we in the US will say, we don't do surveillance on our own citizens. A white lie. All of the data gathered on American citizens can be gotten by the government any time it wants, in what are called fusion centers, where corporations store the data and allow the government to come in and access it-- which is legal because the government didn't gather the data. The government only accesses that data if it wishes to.

Other governments are more brazen about it. The Chinese government is quite brazen about it and says: information, like property, belongs to the state. All information belongs to the state. That's obvious from their perspective-- the PRC's. And in between, we have a huge range of governmental possibilities. But it's this marriage between the attention economy and the surveillance state-- and it is a marriage. You bring the two together and you get different kinds of progeny. There are different approaches to this new form of state economy-- this new arrangement between the corporate sphere and the governmental sphere-- that in the old terms we could have called fascist.

But there is an arrangement taking place, and it's being differently marketed. There's the American version, there's the EU version, there's the Chinese version, and lots of people would say that what we're really looking at is a global competition that has emerged between these major governmental players, and that we face a choice. We have to decide which national or regional model to align ourselves with. Is it the American or European-- the choice-celebrating, freedom-of-choice-celebrating internet? Or is it the Chinese, control-oriented way of approaching things? And presenting it that way is really convenient in political terms and in corporate terms-- pitching it as a global arms race. And so you hear people talking about a global arms race, and there is one.

Don't get me wrong. Militaries all around the world are investigating artificial intelligence and the possibility of autonomous weapons, and they're doing that for all the speed and power reasons that the corporations are: huge amounts of data crunched at a speed humans cannot possibly match, and response times in the milliseconds, as opposed to our sloppy, carbon-based biological systems, where it takes quite a long time for a signal to go from brain to hand. We're not very good at firing weapons. Machines can fire off 1,000 rounds in the time it takes a human being to pull one trigger. The benefits to the military are huge.

But if there's an arms race, it's really not about winning battles with weaponry. We shouldn't discount that, but that's not the real worry. The real worry is that the arms race is about winning hearts and minds. It's about the transformation of human sentience-- human patterns of felt presence and human desire-- from the inside out, not from the outside in. What we have is a new logic of domination. Call it a new great game: the old great game of the 19th century was a global competition played for dominance in access to and control over land, resources, and people. Those of you who know your history know the late 19th and early 20th centuries were a time of global imperial competition.

There is a new great game going on. It's not about control over land or resources. It does include control over people, but more specifically, it's the control of consciousness itself. It is trying to control the dynamics of human consciousness. That's what's really at stake-- changing the patterns of our own attention and awareness from the inside out. And what makes it so effective and difficult to resist is that it is not a logic of domination through coercion. It's a logic of domination through choice and craving. We simply give people what they want.

Now, why should that worry us? Well, one thing we could do is look to Buddhism for an answer to that part of the question. One of the basic Buddhist teachings is that all things arise interdependently, and strongly interpreted, what that means is that things in relationship are ontologically posterior to relational dynamics. Relationality is ontologically more basic than the things related; things related are abstractions from relational dynamics. You could think about that in a way like Luciano Floridi-- whose The Ethics of Information I can refer people to as a really interesting book-- who pulls on Aristotelian resources and says that what we're looking at is different levels of abstraction. If you're a dog in this room listening to this talk, the talk is maybe not very interesting. What's interesting is the information signals coming from the back table, and you're moving toward that table, maybe trying to lift up and see what's on it to take a munch of. It's a different environment. Humans have different environments.

So what level of abstraction are we working at? What exists, what's present-- the things in the room, the processes playing out, the relationships that we are witnessing, whether a partnership, a marriage, or a friendship-- these are abstractions from relational dynamics. One of the great insights that comes from that is realizing that our conflicts, our troubles, and our suffering-- dukkha-- are not a function of the operation of natural law. They're not a function of mere chance, and they're not a function of divine fiat.

The stuff that we're experiencing is a function of karmic causality-- this recursive, spiraling, nonlinear form of causality that Buddhists refer to as karma, which is the process by means of which values, intentions, and actions bring about patterns of outcomes and opportunities that are consonant with those values, intentions, and actions. If you want to change what you're experiencing, change your values, change your intentions, change your actions. It may not be enough to change your actions alone, as anyone in a marriage knows. You can't always change the way other people feel about you simply by doing the right thing. You have to express it in the right way. You have to have the right intention behind it. The values have to all add up, so that the change is comprehensive and you see systemic change.

So the Buddhists are saying, if we're experiencing conflict, trouble, and suffering, what we look to is conflicts among our own values and interests. And ultimately, Buddhist practice is about identifying, in our own experience, these predicaments-- these conflicts of values that are driving us in the direction of relational dynamics characterized by conflict, trouble, and suffering. So traditionally, craving was thought to be one of the primary causes of dukkha-- of conflict, trouble, and suffering.

And you might think: well, these old texts were written 2,500 years ago, and all this about craving and desire-- weren't those Buddhists just a bunch of prudes? It wasn't about prudery or social and moral conservatism. It was about really understanding this nonlinear, recursive kind of causality that's involved in karma. To get better at getting what we want sounds like a good thing. But to get better at getting what we want, we have to get better at wanting. And since wanting is a feeling of lack-- of missing something, of needing something we don't have-- getting better at wanting means that we can't ultimately want what we get. The karma of craving forms of desire is the karma of continual dissatisfaction-- of a continual feeling of want and lack.

It's not that desire is bad. The Buddha was quite clear: it's not every form of desire that is bad. There's a particular kind of desire-- tanha, clinging forms of desire-- that is problematic, because of the way that karmic loop plays out. If your desire as a teacher-- as a bodhisattva-- is that your students do better than you have done, that's a good desire, because in order to develop the kinds of students who go beyond what you have ever been as a professor or teacher, you have to get better and better at dealing with more and more kinds of students and at addressing their needs in ways that are effective for them. Having that desire puts you on an entirely different track of development.

So now, if we look at what's going on with this attention economy from a Buddhist standpoint: the central appeal of the human-machine syntheses that are now taking place-- the primary selling point, the way they're being brokered-- is, we are providing you practically frictionless freedoms of choice. Freedoms of choice about what to experience, what to listen to, what video content to have, where to travel, what to buy-- virtually frictionless freedoms of choice. What do we have to give up for that? Not much-- just our data exhaust. All we have to do is allow our data exhaust to be harvested by corporations and governments, which make use of it, feeding it through machine learning and artificial intelligence systems, in order to be able to feed our desires back to us.

Now, those of you who have children know-- or those who can reflect back long enough. I turned 65 Sunday, so I'm looking back to my teenage years and thinking: were they good years? Bad years? One thing I can say for sure: I would not have wanted the values and the sensibilities that I had as a 16-year-old still being fed back to me as a 65-year-old. What we are in danger of is getting into a karmic cul-de-sac where these machine learning systems feed back to us precisely our values, aims, and interests-- our cravings-- and give us the chance to never be disappointed.

There are now systems that let you have your body scanned so that when you buy clothes online, they can be tailored to fit you exactly. You will never have to buy clothes that don't fit. You will never have to listen to music that you don't like. You'll never have to be exposed, if you so choose, to people who disagree with you vehemently about basic issues of how to be human-- how to lead a dignified life. We can skirt all that. We can skirt, in other words, the necessity of exercising our own intelligence.

Now, we might think: Luddite raving. You know, come on, this guy is just beyond the pale. And there's a point to which there's maybe some truth to that. I've never owned a television. It's done a lot for my attention-- for how much attention I've got. I never had a radio in the house. I have to play music myself-- darn, pull the guitar out. We can make choices. We can do different kinds of things. But we're living in a world now where those of us who are university educated-- lots of us in the room have PhDs-- like to think the rest of the world is like us, and it's not. It's not. I'm not an elitist of the sort who says the rich and powerful and intelligent should rule. I'm elitist in the sense of saying: those of us who have had the fortune of that kind of upbringing, environment, and so on are not normal.

We are now at the edge of a transition from the so-called digital natives who were born into the smartphone world to a new generation that is synthetically socialized. We are living at the verge of a time when there will be young people who have grown up never knowing a world without a virtual friend-- without a virtual personal assistant. They will not have known a world in which their teacher was not an artificial intelligence-- one that made use of global databases, and of bio-social feedback from the individual student and their learning outcomes, to apply global awareness of what works in getting that child to learn what the child needs to learn. They will not have lived in a world where the environments they were a part of, at play and at work, were not densely populated by ambient intelligences-- not visible, not like people in the room, but ambient intelligences listening and watching what's going on in the environment through all the voice activated systems that are part of the internet of things.

And these systems will be used, in an absolutely devoted fashion, to ensure that children growing up in this world never have a disappointing playtime-- that they never have to face a workplace scenario where they just can't get along with people and have to deal with it in person, because everything will be systemically mediated. What we run the risk of is life in what I call digital captivation, which is a deprivation of our own attentional resources. The attention economy 2.0 depends for its vitality on more and more human attention energy being diverted into these electronic systems-- into that infrastructure-- so that the synthetic intelligences get better and better at their job of figuring out what we desire and giving it to us, in a way that is profitable for those who are in charge of the system.

We should worry about the inequality stuff. That, to me, is really important. We live in a world with so much inequality that in 2014, the 80 richest people in the world had more wealth than the poorest 3.5 billion. Then it changed: by 2018, the eight wealthiest people in the world had more wealth, combined, than the poorest 3.5 billion. That should worry us. That is a world of egregious inequality. We should be concerned about that.

But we should really also be concerned about the ethical singularity: our arrival, because of our investment in these systems, at a point where emerging synthetic intelligences-- whose humanly implanted intention is to make use of our desires, our values, and our intentions, feeding them back to us-- simply give us more of what we want. The basic condition of which, as I said earlier, is that we don't actually want what we get, and we'll need more. We will end up living in the equivalent of karmic cul-de-sacs-- electronic residences designed perfectly to make us happy, happy campers.

But what we're doing is trading off relational wilderness for that digital cabin. Captivation. And it'll seem like a good deal along the way-- that's the frightening thing about it. For most people, getting what you want is an obvious, basic value. We stake our political systems on our being liberal individuals able to act in our own intelligent self-interest. But if our self-interest is being formed, from infancy forward, through our interaction with synthetic intelligences that are programmed to do the bidding of corporations and states for their purposes, do we actually own our own attention? Are we in control of our own karma? Are we able to resist our karma in the way the Buddha suggested we resist it?

Buddhist practice is about taking responsibility for the karmic nature of causality and of the world that we're in. It's taking full responsibility for the world that we are creating for ourselves, for the experiences we're having, and doing whatever is necessary, in terms of practice, to embody the constellations of values, intentions, and actions that are consistent with developing greater generosity, moral clarity, patience, vital energy, attentive mastery, and wisdom. And it is about being able to conduct ourselves in a way that enables us to take any relational situation we find ourselves a part of and contribute to it in a manner that allows those relational dynamics to take on a more liberating character for all who are involved.

I mean, this is a model that I think we can move forward with as a baseline-- and this is not just Buddhism. Attentive mastery, patience-- these are global phenomena, global character traits. But we need to do it in a way that's consistent and that gets embedded within our technologies. It's not enough for us to do this individually. It simply isn't. We need systemic level change. We need changes in the way the human-technology-world relationship is evolving. That relationship is now evolving with the active participation of artificial agencies. We need to take account of that and start shaping those artificial agencies-- shaping them by infusing them with a different set of sensibilities and sensitivities.

So I would like to suggest that if we're going to move forward-- if we're going to avoid this ethical singularity, this point of karmic collapse where the opportunity space for changing our karma personally collapses around us because of these larger systems we're a part of-- we're going to need artificial intelligences and synthetic agencies that support, rather than supplant, intelligent human practices. That should be really basic.

If something is going to supplement intelligent human practices, fine. Supplant them-- not such a good idea. We need to foster concentration rather than distraction. It is possible to design systems that reward concentration rather than distraction-- attention is attracted and sustained either way, but with different payoffs. We need to design the human-technology-world relationship with technologies that support concentration, not distraction; that support commitment, as opposed to the mere exercise of free choice; that support our personal development of the resources to strive and make the effort required to attain relational virtuosity, not just freedom of choice, which is cheap and easy. Freedom of choice is cheap and easy, especially when most of the goods and services are virtual-- after a certain point, they cost nothing to produce. Forget diminishing returns: with digital media, reproducing something 100 million times is no different from reproducing it 100 times. It's cheap to do that.

What we need are systems that support effortful human engagement in the world, not ones that seduce us into a world of effortless freedoms of choice. What the intelligence revolution has done is give humanity the power of Midas. Midas, you will remember from your Greek mythology, was granted the capacity that anything he touched would turn to gold. That's a big responsibility. The responsibility that humanity is giving itself through this intelligence revolution-- through this new Cambrian explosion of evolutionary dynamics in the world of intelligence-- is full responsibility for our own values, our own intentions, and our own actions. We will not be overtaken by artificial intelligence, because we're going to take over ourselves long before then.

If we have to worry about where we're heading with this new intelligence revolution, we only need to look in the mirror. Look in the mirror. Have we done a good job of ending armed conflict? Have we eliminated poverty? 800 million people go to sleep hungry every night, and that's not 800 million people whose tummies are rumbling. This is 800 million people who do not get enough nutrition for their children's brains to develop sufficiently, leaving those children with lifelong learning disabilities. 800 million people like that. We haven't done it. Have we stopped domestic violence? No. Have we addressed the egregious inequalities that have now emerged in the world? No.

The synthetic intelligences that are evolving today will give us more of the world we've already created, and that should worry us. What we need to do is imagine a more humane human-technology-world relationship-- and there's no fixed definition of what it means to be humane. One of the nice things about Buddhist ethics, at least as I read it, is that it's based on the idea that what we're striving for is kaushalya opportunities and outcomes, where kaushalya sometimes gets defined as skillful.

I think a better, more robust definition would be superlative, virtuosic outcomes-- realizing more of those, and decreasing the outcomes that are without virtuosity. If we live in a world where we're committed to realizing more and more virtuosic outcomes, that means we're not content with just avoiding the bad things or the mediocre things. It means going beyond what is currently considered good. It gives us an ethics that is evolutionary in nature-- an ethics in which creativity is basic. Every ethical system has blind spots, and scaling a system up, whether in human terms or machine terms, can scale up its blind spots disastrously.

A competition among ethical systems around the world, driven by markets trying to get market share of moral attention, could be disastrous. We don't need a new species of ethics. What we need is an ethical ecosystem in which different ethical traditions relate to one another in the way required to develop a valorization of ethical diversity-- where diversity is not just the presence of differences. That's mere variety. Diversity is what happens, as a relational quality, when differences are activated as the basis for mutual contribution to sustainably shared flourishing. That's quite a high bar, but that's what you find in natural ecosystems. Species-- contrary to what you were maybe taught at a bad high school-- don't compete with each other. Animals within a species compete. Plants within a species compete. They compete for the same resources.

The differentiation of species opens up new domains-- new energy sources-- within the ecosystem as a whole. Species are collaborative. What we need is an ethical ecosystem: a system in which we take account of cultural, generational, national, and historical differences, and take them into account as the basis for discovering means of mutual contribution to our sustainably shared and, hopefully, more humane futures. That is a religious activity in the sense that what we're talking about is reconnection.

We're not talking about connectivity as the answer. We're talking about reconnecting with our own humanity, and doing that in a way in which we take seriously the commitment to realizing relational virtuosity-- within ourselves, in our communities and our families, and spreading out beyond them-- engaging one another compassionately and creatively to realize more humane and equitable outcomes than the ones we currently seem to be heading for. Thank you very much.

[APPLAUSE]