
The Future of AI


A panel discussion on the future of artificial intelligence between Dr. Andrew Ng, Dr. Sebastian Thrun and Dr. Kai-Fu Lee, moderated by John Markoff.

John Markoff (John)

Sebastian, Andrew and Kai-Fu, would you come up and join me? Very good to see you. While Kai-Fu was speaking, he reminded me of something that William Gibson, the science fiction writer, has said about the future being already here, it's just not evenly distributed. And you were talking about American caution and Chinese pragmatism, and I began to wonder whether the future might appear first someplace other than the United States. I also think-- I really admire Kai-Fu for actually putting years on your predictions. That shows a lot of courage. We're going to have to come back in five years. You have, I guess, a good track record, at least on some predictions. So, we'll come back in 2022 and see how you're doing. One of the things that I think is also quite striking here is that Paul Saffo, who's a futurist in Silicon Valley, likes to say, "In the short term, things seem to move more slowly than we think. In the long term, they move more quickly." And so, I wanted to start with a discussion of pace and how quickly we really are moving. Because in Silicon Valley, there's this, I guess, almost religious belief in exponential change. And we've certainly seen great progress in artificial intelligence technologies in the last half decade. Let me start by asking each of you: is it exponential? Was this a one-time change based on a set of technologies that began to work after they had not worked for a long time? Or are we going to see the pace of change increase? Kai-Fu, do you have a sense?


Kai-Fu Lee (KFL)

I think it will definitely increase, because of the number of fields that have enough data for existing algorithms to work. That's kind of one multiplier. And then, new algorithms will be invented, and that will be the other multiplier. You have two multipliers, so it should be rapidly growing and accelerating. For the first part, we already see it in the Internet space, in China anyway. I mean, the number of Internet apps that need AI behind them, and the number of financial apps that need it. And then we're going to see medical, education and many others. So, I definitely see rapid growth and acceleration. There's some bubble right now in terms of valuations being too high, and some of the VCs are not so aware of how to evaluate companies. But I think that will just be a temporary issue and we'll come back to normal. Being precise about dates again - because AlphaGo kicked off the bubble, and companies are funded for about 18 months. So, I think the bubble should be hitting about 6 to 12 months from now in China, and then people will see value again. Everything will be back to normal.

John

Okay. Andrew, before you start, have you said anything publicly about what you're going to do next [laughter]?

Andrew Ng (Andrew)

So, Carol, my wife, said to me a few weeks ago-- she said that I'm the hardest working unemployed person she knows [laughter]. So, I am working hard but I don't have an announcement to make yet.

John

Okay, I just wanted to check [laughter]. What about pace of change?

Andrew

John, to be honest with you, I think that exponential thinking is an easy sound bite. It gives a lot of hype, and I think that that level of analysis is a relatively simplistic one. There have been arguments that human population is increasing exponentially, and you can make the obvious extrapolations to when we'll all destroy the planet or whatever. And maybe we will destroy the planet, but I'm not convinced that a very simple exponential argument is a sophisticated enough analysis for us to really understand what's going on. I think that the pace of change-- over the last several years, we've seen the rise of deep learning. Actually, I never really got to thank Sebastian publicly for helping me start the Google Brain Project way back, and for having me talk to Larry, which got us started. We were never together on a panel to talk about it in public, but thank you, Sebastian. But I think, thanks to things like the rise of deep learning, we now see a clear road map for a certain generation of technology, supervised learning, to transform a lot of industries. And we hope that there will be other waves of technological innovation to unleash other waves. And I don't know if it's exponential. I know it's huge, but whether it's exponential or not, I don't really know.

John

And Sebastian?

Sebastian Thrun (Sebastian)

And I have to thank Andrew for starting the Google Brain Project and actually doing it as opposed to just funding it. That was before anybody else did deep learning, and it had a huge impact on Google. And it was Andrew and his vision, and then others later joined. I see exponential as such a funny word. I mean, humanity is about 40,000 years old, give or take. Almost everything was industrialized in the last 60 years, I'd say. And of those 60 years, the most interesting was the last one. So if you say it's going up, I would put it the other way. I would look at the opportunity. The opportunity's enormous. 75% of us in America work in offices. Almost all of us do extremely repetitive work. Yes, John, you too [laughter].

John

I retired from assembly.

Sebastian

Not all of our work is-- even the CEO, your work is extremely repetitive. In fact, I've given this speech before exponentially many times [laughter]. It's actually very repetitive. It's kind of stupid I'm doing this again [laughter]. When you have an AI looking over your shoulder, I believe anything repetitive in offices can be automated. And if you don't believe it, there are recent examples in law, where people looked into legal document discovery, or contract drafting, and that can surpass humans. Obviously, Go playing has now. There's work in medicine, in dermatology, where we now have AI assistants that outperform human doctors who are highly paid, like $450,000 a year, in finding skin cancer. And that's just the tip of the iceberg. So, if you look at what's going to happen next, I think AI is going to be to the office worker what, I'd say, the Agricultural Revolution and the steam engine have been to farmers.

John

Okay. So, this is going to be about consequences and futures, both utopian and dystopian. And let me start with the question of jobs at large. So, Kai-Fu has laid out a map that shows great change over a relatively compressed period of time. And I just wondered if-- well, maybe we can start with Andrew and then follow up with Sebastian. But one of the challenges for me about that view of the world is that we've had personal computer technology now in America for three decades. And unfortunately, productivity rates have gone in the wrong direction, which is sort of not what you would expect. All over the world, productivity is going down, and in the United States today, right today, there are 146 million people working. That's more people than have ever worked in American history.

John

So, there's a disconnect here somewhere that I think we have to get over. One would have expected to see the impacts already, and clearly we are seeing impacts, although sometimes we're too glib. For example, right now, job churn - the rate at which occupations are disappearing and new ones are being created in America - is at a historic low. So, I'd like you to sort of chart for me how we're going to get from A to B, B being what some computer scientists and Kai-Fu believe, that a great many jobs will go away over a very short period. Why haven't we seen it already?

Andrew

So, one story was, I've led teams that go into call centers, and sat and interviewed call center operators, so I could sit and understand their jobs. And when I look at what they do on the phones, I know that my teams can write software to automate a large fraction of those jobs away. So, I think one of the reasons that technologists look at trends differently than economists is that we often see leading indicators that show up even before they show up in employment, GDP, or other metrics. And I think that one of the fortunate things about technology is that it creates tremendous productivity. There is, frankly, one dark side of technology, which is that some companies have gotten really good at hacking the human brain and creating distractions for the human brain, which I think actually harms human productivity. I think economists are just starting [laughter] to factor that into their studies of productivity as well as anti-productivity. But looking at leading indicators, I feel that the job displacement is coming, because of the AI technology-- Sebastian did this work on diagnosing skin cancer using computer vision, and I think when these jobs are displaced, I feel quite confident that we'll need solutions like education, like Coursera and Udacity, in order to re-skill very large segments of our population.

John

Sebastian, do you have a-- before Kai-Fu, do you have a--?

Sebastian

Oh, I can't even recite all the numbers you gave me. I need AI to help me here [laughter]. I don't know, I can't wait for these stupid jobs to go away. I mean, look at accountants, right? They do, all their life, the same spreadsheet math over and over again. I think some of these jobs are very degrading. I think what we observed in the last hundreds of years is, all of us used to work in the fields, like four, five hundred years ago. Europe had 150 years of war, like 5 generations--

John

I think I'd rather be an accountant [laughter].

Sebastian

Yeah, you're right, being an accountant is absolutely fine. And none of us were able to read or write, except for a very small number of people, like monks and so on. And we would write very slowly. And then, all of a sudden, we got this free time, because now we had machines assisting us in agriculture. So, we learned to read and write. And all of us are now authors. We all write on Facebook and Twitter. And this is the beginning of human creativity. I think there's so much creativity to be unleashed. I just can't wait until we stop wasting our time doing the same April 15th tax deadline again and again, or whatever it is.

John

Yeah. So, Kai-Fu talked about this and it's frequently mentioned, this juncture at the beginning of the Industrial Revolution - and you talked about it - we had this agricultural world. Who would have been able to predict that there'd be all these jobs? Is there any possibility that job creation might actually be more dramatic and more surprising than we think? I mean, 20, maybe even 50, 100 years ago, who would have thought there would be so many massage therapists in the world? And yet, we're full of massage therapists. You picked your book title (Artificial Intelligence), Kai-Fu, with search engine optimization. I imagine that's a job category that was not around five decades ago, and yet-- how many search engine optimizers are in the world now?

KFL

Yeah, I really don't know how human productivity is measured, but I suspect it's some combination involving how many people are employed. We're probably employing more people. Imagine if you employ people and make them do really boring jobs; they're not going to be more productive. So, I think what we're trying to do is take the boring jobs away, so that people have exciting jobs that they will naturally work hard at, and they'll do jobs that they love, and they'll pour more hours into them, and they'll get more accomplished. So, I think human productivity is flat because the jobs are boring and because we haven't taken those jobs away. And I think we're, for the first time, going to completely remove most of the boring jobs. And as much as I try to create a few jobs of passion, you know, doctor's assistant, and things like that, I think most of the replaced ones - traders on Wall Street, the call center agents - are completely going to be gone. So, I think we're going to basically push the edge and drive people to either do something they love and become productive, or do something fun. The other answer is, maybe productivity is the wrong measure. How do you measure a poet's productivity? How do you measure an artist's productivity? So, I think we want people to do what they do best, whether it's to be productive, to be fun, to be provocative, or to basically share love, to do things that are positive for this world. Maybe we need a new metric. And then I think that metric would go up once we remove all the boring stuff from what people have to do.

John

You noted that a number of American companies describe themselves as AI companies, but-- a slight correction: both Microsoft and IBM have gone out of their way to say they're not AI companies. They're not AI companies, but they're IA companies - so both Ginni Rometty-- and IA is intelligence augmentation, intelligence amplification. It was this idea proposed by Doug Engelbart many years ago that, rather than replacing humans, you use the tools to extend them. And I think IBM and Microsoft have both sort of taken it on as their corporate agenda to use technology that way. Do you think that's possible?

KFL

If I were an enterprise company with shareholders to answer to, I would say I'm an IA company [laughter].

Sebastian

I would argue that the entire history of technology is IA. I mean, you don't have humanoid robots in fields plowing farms yet, but we have amazing tractors that, with the right operator, are really smart. We tend to build stuff that's complementary to people, to be honest, and not the same, right? Because we could use it to make a person - it takes about 20 minutes [laughter] - but it's really hard to do. We've turned ourselves into superhumans. This tool over here allows me to shout all the way to Australia. And it's kind of amazing. Or I can take a plane and cross the Atlantic in like 11 hours, which is very complementary to human capability.

Andrew

One of the ideas I spend a lot of time thinking about, that would be relevant to those of you building AI businesses, is: what is this concept of an AI company? And I don't think we've actually figured it out yet. So, here's what I mean. Right around the corner from us is the Stanford Shopping Center. If you take the Stanford Shopping Center and build a website - which they did - that doesn't make the shopping center an Internet company. We know that a shopping mall plus a website does not make an Internet company. And I don't know if they have a CIO, but say they say, "Look, we have a website. We're just like Amazon. Here we go." No. There are a lot of differences between a shopping mall with a website and a truly Internet e-commerce company like Amazon or Taobao. And so, the differences are: with the rise of the Internet, we learned that we could do much more A/B testing, so you can just run many more experiments. A traditional retailer can't, but an Internet website can. And you push a lot of decision-making power down to the engineers, because at an Internet company, a lot of the knowledge exists only at the engineers' level, so it's less CEO top-down.

Andrew

So, I think a real Internet company had to re-architect itself, relative to traditional companies, in order to take advantage of the new capabilities that the Internet realizes, such as A/B testing and data science analytics and so on. I think with the rise of AI capabilities, we're still frankly in the early stages of figuring out how to reorganize our companies in order to take advantage of the new AI capabilities. Some examples: I think that AI companies are much more strategic about the accumulation of data and the organization of data. AI companies are more likely to have centralized data warehouses. And in the workflows and processes I'm seeing, we have traditional job descriptions - a product manager does this, an engineer does this, a designer does this. I feel like those traditional workflows are also breaking down in the era of AI, and we are starting to write new job descriptions, like part of the job description of a product manager is to design a test set. And that's a new way of organizing our workflow. So, I think the whole ecosystem-- I think companies like Google and Baidu are forward thinkers, but I think all of us in the Internet space are still figuring out even what it means to be an AI company.

John

Yeah. In America as a society, about every two decades, we go through this period of anxiety about the impact of these technologies on our society. To be honest, I was part of creating this anxiety about half a decade ago. I wrote this piece, to Sebastian's point, about a generation of technologies that were beginning to displace lawyers. They were able to read documents more quickly than lawyers, and I really had my hair on fire about the impact of this technology and AI. And then I had a conversation with a behavioral economist by the name of Danny Kahneman. And I was sort of arguing that manufacturing technologies would come to China and that they would lead to social disruption because of the loss of jobs. And he stopped me and he said, "You don't get it." He said, "If they're lucky, the robots will come to China just in time." And I stopped and I said, "Excuse me, what do you mean?" And he walked me through the demographic changes in modern China.

John

China had this one-child policy until recently and so, in fact - my understanding, and correct me if I'm wrong - the working-age workforce in China has actually been contracting. And more importantly, to my sort of way of looking at the world, China's a rapidly aging society. So, that means that the number of people over 65, of people over 80, is going to increase dramatically. And we tend to look at society the way it is now, a snapshot, but in fact, the entire world, except for the Middle East and Africa, is aging. And have any of you-- it's changed my view of the world and these technologies so much that I no longer go around and ask people, when will the self-driving car come? I ask the roboticists I talk to, when will you be able to make a robot that can safely give a shower to an aging human? Which I think is a hard problem, too. Have any of you considered the impact of AI technologies on an aging society?

Andrew

I think it is actually one of the categories that entrepreneurs invest less in because, frankly, the average age of entrepreneurs is relatively young. And so, a lot of entrepreneurs start out building solutions for themselves. And unless you are hitting some of the issues that come up as you age, I think there are relatively few entrepreneurs thinking in that space. But I think this is certainly-- I feel like, I don't know, I haven't spent much time thinking about it, but I see some companies entering that space. It'll be interesting to see what happens.

KFL

Yeah, I'm going to take a different view on this. I have many aging people in my family. I believe what they want is the human touch. I don't think they want a robot to give them a shower. I think they want someone to talk to. Once, I had a friend who had a company that built a little cool device for the elderly, and it's got all these functions, push a button to call a doctor, push a button to order food, push a button to know the weather, push a button to watch a movie. And then, the only button they push is the customer service rep [laughter]. And then they talk to them about their kids and their life.

Sebastian

You need an AI there [laughter].

KFL

Well, you either need an AI or a person.

KFL

I would hope 80% of elderly care would actually be done by people. Because I think we have excess labor, because I believe the elderly want that human touch, and maybe 20% by robotics. But Sebastian seems to be more optimistic about a robot helper [laughter].

Sebastian

God. No good answer [laughter].

John

So, this is a very interesting day. Kai-Fu talked about the perhaps different rates of progress in China and the United States, and the pragmatism in China and caution in the United States. Today, the US federal budget was-- the complete US federal budget was published. And as it stands right now, the US is looking at dramatic cutbacks in all kinds of basic science research, dramatic cutbacks ultimately in computing and AI research, the kinds of things that created personal computing and the Internet.

Sebastian

Maybe you elected the wrong guy.

John

Well, without getting into that [laughter], because it's possible that we did. China, at the same time, has begun spending heavily on basic AI research at the government level, not to mention the entrepreneurial stuff that's going on. And I was wondering if I could ask both Kai-Fu and Andrew first about the competitive impact of these basic investments going forward in both countries?

KFL

Sure. China has come out with what I think has been a very effective program, not just on AI, called the Thousand Person Project. And basically, it's an effort to bring back overseas Chinese and really take care of their compensation, their family, their kids' education, as well as their research funding. See, when I talked to some of the top people who went back, as well as many of the people who are still in US universities, the top three complaints are: one, they have to write all these government grants for tiny amounts of money. Two, their pay is too low relative to the people who went off to large companies, like us [laughter], or started their own companies, like us [laughter]. The third problem is they don't have data. So, in a top university doing AI, they have seven orders of magnitude - I'm making this up - seven orders of magnitude less data than Google. So, how can they do real mainstream research, right? How can you beat any of the giants in face recognition, or maybe do anger detection based on blinks or something like that, or stress detection based on voice pitch change or something? You can't do mainstream work anymore because you don't have data, so I'm not sure what the answer is. I think the brain drain from top American universities is a significant issue. If you just look at the US itself, some top universities, I won't name which ones [laughter], have recently had a large exodus of people. And my alma mater - and I guess you were both CMU-affiliated, too - also lost its whole autonomous driving team to Uber. So, I would think that US academia is undergoing a terrible period, where top people who should be staying and doing research are being poached by other opportunities, by startups, by large companies, and it took so much money and resources to train these people to become great researchers. Looking at China now, I think Chinese academia currently has very few super-experts, so it has a different kind of problem. But the Thousand Person Project is bringing some back, injecting some new blood. The other effective program is called the Young Thousand Person Project, for people under 35 who are at sort of the assistant professor level. So, I do think the Chinese programs are going to have some effect. We've seen early versions of the Chinese government's AI 2.0 programs, and my guess is that these programs will be effective. But regardless of China, I think the brain drain from top American academia is a considerable problem, and if what you say is right about the funding cuts, that exacerbates the problem.

John

Andrew, you're one of the few people who's had a deep inside look at the state of the art in both countries. Who's ahead and what's the rate of change?

Andrew

I think the US and China have different strengths. Both countries have amazing AI teams. I will say that the US has tremendous strength in basic research, a lot of the algorithmic innovations - just the new inventions of neural architectures, the clever algorithms. A lot of that activity is happening in the US, and the US is financing that. I feel like teams in China move very quickly, and the ability to take things to market is incredible in China. And I think these are partially cultural differences, partially demographic differences. In China, the country feels more homogeneous. I feel like if you ran a market segmentation analysis in the US, you might get like 100 market segments, but if you did it in China, you might get 10 market segments or something. I'm making these numbers up, this is not a scientific claim. But I think this is why, in China, it's possible for products to go from 0 to 100 much faster than in the US, because you don't have to fight market segment by market segment to gain market share, so fortunes are made as well as lost much faster in China than in the US. And I think this has driven a dynamic in China where, if a company does something in the morning, well, you'd better react that afternoon or else they're going to kill you. So, the pace is incredible in China. While I was leading teams in China, I'd just call a meeting on a Saturday or Sunday, or whenever I felt like it, and everyone showed up and there'd be no complaining [laughter]. If I sent a text message at 7:00 PM over dinner and they hadn't responded by 8:00 PM, I would wonder what's going on. So, it's just a constant pace of decision-making. The market does something, so you'd better react. That, I think, has made the China ecosystem incredible at figuring out innovations and how to take things to market. I'll tell you a story. I was in the US working with a vendor. I won't use any names, but a vendor I was working with actually called me up one day and said, "Andrew, we are in Silicon Valley, you've got to stop treating us like you're in China, because we just can't deliver things at the pace you expect." So, this is a true conversation, and so, I think that pace and close proximity to customers drives further innovation in China, although I think the basic research capacity in the US is also phenomenal, so I actually love both ecosystems.

John

Do you have a thought, Sebastian-?

Sebastian

I've been to China five times, some blue skies, some not so blue [laughter]. I want to comment on the brain drain, because I want to take a position that's actually very positive about brain drain. There are fields that have no brain drain, for example, art history. No one could steal art history professors. And I would think the brain drain is somewhat correlated with the impact of a field. And there are many, many academic fields with very little societal impact, and we are at a lucky moment, where we actually have the situation that we do something in academia that actually has an impact on real life. Like Andrew, who started Google Brain and made Google Search better, and image recognition, and video, and all these wonderful things. I'd rather have him do this than write ten more papers, to be quite frank. There's a lot of paper writing in academia that happens... So, I think what we should do is rethink the relationship between academia and industry. That's a much more important thing, because people of unbelievable intellect and experience and stature, like the two gentlemen next to me, are working now in industry, and there ought to be a better way to bring them back to academia than having them be adjunct professors without many rights, and so on [laughter].

John

Will anything be lost in that, Sebastian? What about basic science?

Sebastian

Look, basic science is kind of this funny thing. I think AI is at escape velocity. And at this point, if I were to put one of my own dollars in, whether it be tax money or my personal investment money, I'd rather put it into a corporation than a university at this point. Because I think, even in deep learning, I would argue the intellectually interesting thing that fuels the field today is about 20 to 30 years old. Yann LeCun did his work in 1998, I guess? 1988, I guess. Much older - 30 years ago, on convolutional networks. And what really made the difference is that a company, Google, was able to string together enough computers and get enough data to make this thing actually work. I mean, I wrote a thesis on the same topic, and my thesis was laughable because I had a network this big and a computer this small, and now we have a computer this big, and all of a sudden, you get this level of performance. And that's possible because there's a corporation behind it that actually had a business model. So, I think when a field reaches escape velocity and has a big impact, it's great to let the businesses flourish and let them try things out, because at the end of the day, it's all about impact. Basic research has an impact to a certain point. But the real impact is materialized when we materially affect people's lives.

KFL

I have a question for Sebastian, then. We need to advance the state of the art - with new transfer learning, the next generation, the next deep learning or whatever. Would you say it's industry that's going to innovate there, or-- how would universities keep the people and gather the data so that they can do the work, or do you think they don't have to?

Sebastian

I think, if you ask me, universities are way too incremental. So, I think deep learning is going to take care of itself, because you can fund a whole bucket of companies, and all of these have desperate people who work all night, and the good ones will do interesting things. Some of them will. I think what's really missing is real basic research, like, for example, why don't we do research on doubling the human life span? Why don't we massively fund research on fusion to solve the energy problem? Why don't we do massive research on curing all cancers, which I think is actually pretty much possible today? Why don't we invent flying cars at universities [laughter], just to provide an example here? I used to use this example, and now of course I'm publicly involved. But why don't we really think ahead? I would say, looking back at the invention of the Internet, when the first message was sent, the L and the O went through, and it crashed at the G. That was something that was just so far out. It was amazing, right? So, what is that research today that is so far out? Why don't we build brain-computer interfaces and implants for our brains to give us complete, immediate transparency to the computer network? Which I think we should be doing, and almost nobody does. When we talk basic research, we go to the super-incremental, little, "oh, let's make speech recognition a little bit better." I mean, you, Kai-Fu, did basic research in speech recognition when no one else did it, and it was really important. Now that there's even a business around it, I wouldn't want to fund a university for this.

John

So, I realized I neglected to mention, in case you don't know, that Sebastian is now the CEO of a flying car company - Kitty Hawk. So, just for any of you who haven't--

Sebastian

Anti-gravity company.

John

An anti-gravity company?

Sebastian

Yeah [laughter].

John

I knew it had more divisions than you had announced [laughter].

Sebastian

Anti-gravity shields.

John

Gravity shields [laughter].

John

Andrew, do you want to add--?


Andrew

Yeah, just my comment. Certainly, deep learning is really taking off, so it's definitely worth massive investment. I mean, a couple of comments. I think one of the things about the United States is, we've always been a little bit schizophrenic about to what extent the higher ed system - investing in research as well as education - is a private good versus a public good. To the extent we think education or research is a private good, then maybe you should pay for it yourself, but to the extent that it's a public good, which I think it is - both education, having an educated population, as well as advancing research - I think it's actually worthy of significant government or public expenditure. And then, in terms of allocation of resources in research, I think as a society we're actually stronger if we have a very diversified approach. So, even though I think AI is a tremendous opportunity, I don't think everyone in Silicon Valley should all work on AI, because I think some of us should work on curing cancer, or someone should work on blockchain, or somebody should work on-- all of these different things. Although I think that - maybe this is what Sebastian is getting at - AI has proven a value that's certainly deserving of massive investment right now. But I actually support the government funding an art history professor, because I think what they're doing is great, too. Although I would put more dollars into AI than into art history, I would say art history also deserves funding.

John

Once again, reflecting on Kai-Fu's point about Chinese pragmatism and US caution, there are both regulatory and technological challenges on the way to self-driving cars. Kai-Fu and Andrew first, do you have a perspective on whether true commercialized self-driving will come first to the United States or China?

Andrew

I think that autonomous driving will take a public-private partnership, because there are-- if you're driving on the road and a construction worker does that (gestures stop), you need to stop; the construction worker does that (gestures go), you need to go. I don't think any of us in AI see a straight shot to building computer vision at the level of reliability needed to distinguish this hand gesture versus that hand gesture versus all of these other things that happen on the road. So, one approach is to say, "We're going to try even harder and then maybe we'll get there," but I don't think that's the right approach. I think the way to make self-driving cars a reality is to make the modest regulation changes needed. So, just say policemen and construction workers communicate with a car via an app, or via a wireless beacon or something, rather than these hand gestures. So, I think the countries that are able to make these regulation changes faster, in collaboration with autonomous driving operators, will get there much sooner. Whether that's the US or China or some other country, I think that's what we are going to see.

John

Do you have a bet one way or the other [laughter]?

Sebastian

I think we could have them basically in a year, so very fast.

John

We will have them? You're betting a year.

Sebastian

Yes.

John

Where? When?

Sebastian

Like Andrew, I worked for a search engine company - a competing search engine company - and I ran a self-driving car team for a few years. And when I departed, the car was a better driver than human drivers are. It was the safer driver. And you could measure this in many, many different ways. The car would be able to drive about 300,000 miles before any human safety driver had to take over. So, I would challenge anyone in the room to drive 300,000 miles accident-free. You could measure the ability to stay in lane, the ability to reduce braking distance and acceleration as an anticipatory measure. Whatever you measured, the car would drive better than people. And it didn't require any regulation change. It drives me nuts that we don't have them yet. I think the technology is basically there.

John

Kai-Fu?

KFL

Okay. I think, to the extent you measure technology advancement and who's ahead, if that's what drives the first market, the US is clearly ahead - especially Google, especially after hearing Sebastian [laughter]. But if you measure by which society and government is more likely to take a utilitarian view and make, let's say, the necessary road augmentations to make autonomous vehicles even safer, or implement liability solutions that make companies less afraid of the gigantic lawsuits that come about, I would bet on China. So I think the answer is 50-50, because we don't really know which one is the showstopper at this point.

John

Okay. So, we've talked a little bit about jobs. So, I want to talk about another socio-economic issue - there's a robust debate about whether it is tied to new technologies - and that is inequality. There is an argument that the development of these new AI technologies is leading to greater inequality in society. And I guess, let's look at it from the other side: is there a way out of that process? What can we do, as technologists or people in corporations, to make meaningful differences, if these technological changes are leading to increasing inequality?

KFL

Well, there are some optimistic trends, too. Like for education - these are our Coursera and Udacity founders, right? (Gesturing to Andrew and Sebastian.) I think we'll have smart AI algorithms teaching more people, and that will be more scalable to less developed countries. So, I think that's kind of a positive, equalizing trend. But I'm also with you. The negative factors are substantial, in the sense that the giant AI companies will make so much money, and the people being displaced will be so helpless. So how do we bridge that gap? I really don't have any answer other than the taxation point - taxation and a minimum stipend. It's not a beautiful answer. I hope we as mankind can come up with a better one, but maybe that's a starting point.

Sebastian

I think it's a great answer, and a negative income tax or a minimum income is actually a great answer. I think this is mostly a distribution problem, right? So, if we create AI, we create more wealth in the world. And the question is how do we distribute it, right? And there's always been inequity. Before democracies, we would charge up the hill and kill the king, and then have a new king, and then that king would get very rich and then we'd kill him, and now we have democracy [laughter]. If we had elected the guy we should have elected - I don't want to take a political stand here, I didn't vote for him - Bernie Sanders, as opposed to the guy we accidentally elected, Donald Trump - he had on his banner a better distribution. And I think if we don't solve it in this election round, then maybe the next one. But eventually, I think enough people will say, "Okay, we are getting too little of the pie," and democracy will just take care of it.

KFL

What about poorer countries? US and China can probably redistribute the corporate income to the poor, to the displaced people. What about the poorer countries?

Sebastian

Well, I'm not a political scientist, but poverty has been on the decline worldwide and the bottom level has been raised worldwide. And if you look at today versus, say, two or three hundred years ago, we now have almost uniform access to the same information, we have almost uniform access to the ability to read and write, and often primary school even in poor countries and so on. I'm not trying to belittle poorer countries, but I think we have been able, as a world society, to raise even the bottom. And I think technology makes things more affordable. Technology makes education more affordable, food more affordable, transportation more affordable, housing, shelter, and maybe health more affordable. And these things, I think, are not dispersed equally, but they are dispersed worldwide at this point. And you see even China - I mean, China used to be a very poor country, and now it's leveled up to a very rich point, thanks to inventions that are partially US-American and Soviet technology. And many, many countries we talk to-- we've actually just started a big set of offices in Saudi Arabia, which is not a poor country, but it's an oil-based country where people don't really work, they just harvest and pump - but to educate women in Saudi Arabia, who otherwise can't get a good education because they can't drive to university. So, I think there's a lot of leveling going on in the world that I see as very positive.

Andrew

The world's been pretty good at giving rewards to people with skills. And thanks to the rise of digital content, digital education, I think we're better than ever at giving those opportunities to people around the world, including developing economies. It's different at Coursera: a lot of Coursera's users are in developed economies, but the representation from developing economies is over-indexed, so on average, we're actually moving the world toward equality a little bit, maybe. And I think with the rise of AI, what we might need to do as a society is actually have a new deal. We've heard of the New Deal in the United States. But I think, in addition to broader solutions like negative income tax and universal basic income, I personally would actually favor a version of a conditional basic income, where we pay people but where the payment to, say, unemployed individuals is tied to their studying. Because, by studying, you increase your chance of re-entering the workforce and contributing back to the taxpayers that are paying for all this. I think one of the biggest problems in education is motivation. It's fantastic that almost anyone can go online and work hard and study, but having a government conditional basic income creates a structure of incentives to help, say, unemployed individuals study. I think it actually creates better incentives. This is a new engine for the whole economy.

John

And Kai-Fu?

KFL

So, one comment on each of your comments. I like the conditional part. I think we can augment it one more way, for people who are willing to volunteer and help other people, such as the elderly, because for some people, re-education may be too tough. And while I agree with Sebastian's viewpoints about all those good things technology does for the poorer countries, I would add two areas of concern. One is, for poorer countries, the size of the population and the rate of growth is becoming a liability, given the difficulty of deploying people in GDP-positive jobs in the post-AI world. And secondly, I think poorer countries won't have the rich companies to tax to redistribute the income to get the re-education going. So, some kind of redistribution between countries may be needed. To put it very directly, they have to basically get the US or China to subsidize them, or maybe there are other ways, through the United Nations or something.

John

Let me ask another general technological question about the pace of change. This is a late question. Kai-Fu showed this great progress in AI technologies, which were basically about pattern recognition - we've made great strides there. In contrast, I wanted to ask, as a reporter watching the DARPA Robotics Challenge, it was very striking: three of the teams actually solved the problem, which was eight tasks, in 50 or 60 minutes. But almost all-- not almost, but all of the robots were teleoperated. So, I wanted to ask you whether progress is being made on autonomy, on algorithms or technologies to move around autonomously in unstructured environments.

Sebastian

We made progress in self-driving cars [laughter]. I think that was just a mistake on DARPA's side to make it teleoperated, and it was just a--

John

You mean to allow it to be teleoperated?

Sebastian

Yeah. I think they chickened out [laughter], which was--

John

But could anybody have solved the problem without teleoperation?

Sebastian

I am pretty sure, yes. So, my experience in life, by the way - especially at Stanford, where you get these young, fresh graduate students who have never seen how hard things really are - is you tell them to do this, and they just do it. And when they graduate and get older, they realize it's a really hard thing, and then they come back and say, "It's too hard for me." But when they're really young, they don't know [laughter]. Look, if you don't aim high, you don't shoot high. Period. Here's an embarrassing story of my life. In 2007, I was probably the world's best-known person for self-driving cars. And I happened to have won this thing called the DARPA Grand Challenge. The four other teams were equally good, but ours was a tiny tad faster. So, I became very well-known. And then Larry Page comes to me and says, "Hey." And this is-- the DARPA Grand Challenge is like driving a car in a desert. So, the best you can hope for is that it doesn't collide with a cactus. But a cactus doesn't move very fast [laughter], so it's basically--

John

If you remember, we did collide with a cactus. Remember [laughter]?

Sebastian

Yes, that's true. We had [crosstalk]--

John

Self-driving car accident.

Sebastian

That is true. He was in the car. That's how we met. And then you helped us push the car out of the mud. I remember this. Let's not talk about that [laughter]. So, Larry comes to me and says, "Hey, Sebastian, why don't we build a car that can drive itself on all the streets of California?" And I said, "That's ridiculous. That can't be done. I am the world expert, I'm professor, Dr. Sebastian Thrun, Stanford University. I have won the DARPA Grand Challenge. There's no person better informed than me. It can't be done." And I'm paraphrasing a little bit, but Larry, that summer, came back and said, "Look, I understand it can't be done, but so I can tell Sergey and Eric, can we have the technical reasons why it can't be done?" And I was thinking, going home, I was like, "Shit [laughter]." And I scratched my head - that's how I lost my hair [laughter]. And I couldn't find the technical reason. And then Larry said maybe you should give it a try. And then he and Sergey scoped out about a thousand miles of extremely hard-to-drive territory. They said, "Okay, if you drive all these streets 100% autonomously, hands-off, a thousand miles" - like all the bay bridges and Lombard Street and from here to Los Angeles on Highway 1, and around Lake Tahoe, that crazy shit - "then you make an extra salary" [laughter]. So here we are, kind of secretly building up this team with a few engineers, until some nasty journalist at the New York Times found out about us and wrote an article and threatened us… [laughter]. Well, longer story.

John

Longer story

Sebastian

Friends for a long time. But lo and behold, it took about 15 months to do it. And I could have had 15 months to invent a new, I don't know, wiring for a battery or something small, or 15 months to build a toy that drives a car in your living room, and it would have been equally hard. But we tried the impossible - I mean, we did it. Just reach for the stars, just do it. We aim for moonshots, because, I mean, if you waste your time, which we're all doing, then waste it on something really great. So, if you get there, it's really amazing. And ever since, I mean, I apply the same principle, most recently in finding elementary [inaudible] skin cancer. We had a Nature paper this spring where we used AI to find skin cancer. If you look at the database, we outperformed 25 Stanford board-certified dermatologists with a little AI box - those guys make over $400,000 a year. Now the iPhone is actually better. You have a little app on there, and it actually does a better job than my doctor. And again, it's the funniest thing. You can just do it, and it's doable. I think we have only invented like 1% of the interesting technologies here. The other 99% have not been invented. So, my perspective is, you should just go invent these things and be optimistic.

Andrew

So, those things are-- that was very American of you [laughter].

Sebastian

Says the German.

John

Says the German, yes.

Andrew

Yeah, so I have a slightly unpopular point of view, which is that we do have all of these great examples of when someone aimed for the stars and got phenomenal results. I have an unpopular-in-Silicon-Valley point of view that sometimes there is a cost to aiming too high, frankly. For example, I really admire the work that Bill Gates did on curing-- on moving massively toward curing malaria. I'm really glad he was working on curing malaria rather than, say, curing all human diseases at the same time. And I think that there's a difference between working to colonize Mars - there's a debate whether that's doable or not doable - versus let's try to exceed the speed of light. I would not work on something trying to exceed the speed of light, for example. So, I think there is a cost to aiming too high, which is that, if you're out trying to cure all human diseases at the same time, you might miss out on the opportunity to cure malaria, which is a fantastic thing to do. So--

Sebastian

I completely disagree [laughter]. Think of the impact if you have a 10% chance of curing all diseases - the mortality rate of malaria is tiny compared to, for example, diarrhea or the common flu. I think we should aim higher.

Andrew

Yeah, and the debate is whether it's a 10% chance or whether it's a 10 to the negative 10 chance, I think that's the question. So, I find, as I get older, increasingly, I find a lot of opportunities where we can see line-of-sight impact to changing millions or tens of millions or hundreds of millions of people's lives, and that excites me, when AI capabilities [laughter] give us a clear shot at doing that.

Sebastian

So, with that logic, I think we would have faster horses now, but no cars [laughter].

John

Kai-Fu [laughter]?

Andrew

Actually, I don't agree. I think we do have cars. I think with that logic, we would not have built rocket ships that exceed the speed of light, but we would still have cars.

John

Kai-Fu, would you like to jump in and--?

KFL

Sure. I would love to jump in. I would say that you two didn't have the honor of getting the hardest problem from Larry and Sergey. Sebastian got asked to make a car drive itself in California, Andrew got asked to build Google Brain. I got asked to win search in China [laughter].


John

So [laughter], we're coming near the end, and I want to ask one more question about the impact of this technology on society. And you kind of brought it up when you talked about brain interfaces. And Elon Musk, of course, has brought it to the fore. And it's a profoundly deep technological challenge, and maybe the way that Elon's talking about it is just the wrong way. But I wanted to ask about it from a slightly different perspective, and that is-- I mean, we're all familiar with the Star Trek species, I guess, the Borg. Resistance is futile, you will be assimilated. What I want to ask you is this: one of the things that principally determines if I'm human is my independence of thought. This is the last private space. And if you read the-- a year ago, no, three years ago, we were in the realm of science fiction. But the goal of the Obama BRAIN Initiative was not only to be able to read from a million neurons simultaneously, but to write to a million neurons simultaneously. And when I look at the field of robotics, one of the most exciting aspects is cloud robotics. When one robot learns something now, all of them know it. That's not the way humans learn. So, there's this interesting space. Let me ask the three of you: if we could go there, if we could connect ourselves in a direct way, but the cost was losing independence, losing our humanity perhaps, should we go there [laughter]?

Sebastian

I think we should connect ourselves and not lose independence.

John

Okay. [crosstalk]--

Sebastian

We're not going to give up something that's good for us. I mean, we build technology when it's better. Think about the following - the example is from self-driving cars. It goes as follows: if you as a human driver make a mistake, then you learn from it, but nobody else will. If a Google self-driving car makes a mistake, then all the other cars have learned from it, including all the unborn cars. So, it's the equivalent of having a child that's born with a PhD and can speak all the languages there are, right? And we don't have this. So, we're learning-deficient compared to our machines. So, eventually, machines will become smarter and smarter. We can't, unless we can fix our I/O problem. My I/O is slow. Speaking is very slow, listening is slow. Our brain's very lossy, it turns out. It's not very digital, it's analog. Books are digital. So, I think we have to fix this bridge, and the only way to go, in the end, is to have a direct brain-computer interface. But what's going to be cool about it is, you're going to know everything. You're going to have an IQ of 10,000. You can communicate as fast as you want. I mean, there's going to be amazing sharing possible once we have a full brain-computer interface. We should just do it [laughter].

Andrew

So, without disagreeing with what Sebastian said-- I think, actually, one challenge with brain-computer interfaces is that, honestly, I'm not convinced our bottleneck is the speed of I/O. Because I can flash text at you far faster than you can possibly read, so your eyes can certainly take in information way faster than your brain can process it. Whereas a computer with a video camera and OCR can actually read text much faster than you can. So, I'm actually not convinced that the bottleneck to better man-machine, human-machine collaboration is the I/O bottleneck. I think it's just that the human brain is actually still quite slow at processing the data that you can shove at it. But I think that's why John's question is a very interesting one. Maybe, to process faster, we let the machine do the thinking and just write the answer into your brain. I don't know [laughter]. Rather than rely on the slow-- so, interesting question.

John

Kai-Fu?

KFL

I'll take a different angle again. I believe that we have to protect our humanity - that is, our faith, our love, our soul, and our spirit - and believe that it's something that is not replicable. That doesn't mean you can't connect your brain to an electronic interface. But if you start believing that everything about us that's human can be completely replicated by a machine, so much so as to allow a machine to dominate you, then I think we have to draw the next conclusion, that there is no purpose or meaning in life. That is, we're all in this giant video game, as I think Elon Musk has said [laughter]. If we're willing to draw that conclusion, then there's no meaning in life, so why are we here? I think the alternative is: do the scientific exploration, do the engineering efforts, but never lose faith that there is something different about our humanity, believe that strongly, and try to do everything we can to differentiate ourselves from machines. The things I talked about: sending out more love, connecting with more people, having faith, and believing we have a soul and a purpose in life. Now, you could say, hey, either is a plausible outcome in the end - are we in the middle of a big game, or do we really have our humanity? I think it's imperative that we believe the latter, even if either could be true. Because if the former is true and we believe in the latter, we're going to have more fun and more love and extend our lives with a perhaps false belief in humanity for another 100 or 200 years. I think that's worth it. Alternatively, if we do have our humanity, but we all say we're going to be cyborgs, then we will never be able to evolve our humanity and find the meaning of life, and we will have terminated our being as human beings, and our lives will become meaningless. So, we cannot possibly choose the life of cyborgs. But about connecting to machines, I think we have to remain cautious.

John

Do we have time for a question or two from the audience? Who would like to-- over there. Just, I think, speak up if you can.

Audience

So, all of you have PhDs. Some of you argue against governmental funding of research, so I wonder if you could go back and do it again, would you still get your PhD and spend time on research [laughter]? It's like five years, right? [inaudible].

John

Interesting. Good question [laughter].

Andrew

I think that, if some of you are deciding whether or not-- this is actually how I choose what to work on, which is, I tend to pick projects where, one, I'll learn a lot, and two, if successful, they could have a huge impact, right? And so I feel like the PhD decision-- [laughter]. What was that, Sebastian? Such sarcasm [laughter]. So, I feel like a lot of the PhD decision is just, you'll be considering a PhD versus this job offer versus that job offer. I would evaluate all of these in the same way: where can you go, via a PhD program, or a corporation, or something else? Where would you learn the most, and where, if you are successful, can you have a huge impact?

Sebastian

I concur with Andrew. I get a lot of questions like this from prospective PhD students who come to Stanford, and they say, "Look, you've been a professor at Stanford, you've been an exec at Google, and you've been the CEO of a start-up, what should I do?" And I think all of them have unique advantages and disadvantages. I see the PhD much more as an education experience than a research experience. I see the university much more as an education institution than a research institution. I think research is on the side. Education is the core product of a university, including Stanford. And what the PhD affords you to do is not just to answer a given question, but to also define the question. Half of a PhD program is finding the right question to ask. And I think that skill of asking the right question is a skill that I'd like to see much, much more of in society. So many of us run after existing questions and try to grab them and do them, and so few of us even ask the question, "Is this even the right question to ask?" If people are willing to brush away their assumptions and ask what is the right way to go forward, what's the right solution for society, then I think we can arrive at fundamentally different solutions, and that's what makes us so special in Silicon Valley and around the world.

KFL

I think I got a lot out of my PhD, and I'm thankful I went for it. But I also think it did some permanent damage to my brain [laughter] that was very difficult to undo. The positive part is, when I first went to CMU, our department chair, Nico Habermann, challenged us, and he said, "By being here, you're committing yourself to becoming the world's foremost expert in some narrow domain that you choose. Otherwise, you're not deserving of the PhD." Believing that and working so hard on it, I think, helped me solve problems independently and also helped me strive for excellence. And I'm glad I didn't let him down. But I think the damage it did is that all academia cares about is doing something novel - doing something no one's done before, basically, with little regard for usefulness. That's how you get promotions, that's how you get papers. It forced that thinking on me. There is nothing wrong with this system, because this is how we get innovations. But when I changed tracks and started to build products, I was not able to shed the pursuit of "this is new, this is cool," and to accept that I must build what users want, and that, if possible, companies should take no technology risks. It took me more than 10 years to learn that, and without the PhD, I would have learned it faster [laughter].

John

So, we're out of time. I realize that I've been following these three-- my favorite three AI scientists for the last 30 years, and reporting on their adventures. So, thank you very much for doing so many neat things, and please thank them as well. [applause]