Rishi Sunak & Elon Musk: Talk AI, Tech & the Future


Prime Minister Rishi Sunak talks to X, Tesla, and SpaceX CEO Elon Musk for a broad-ranging discussion covering AI, technology, and the future of human civilisation.


0:02
okay all right well good evening everybody welcome Elon thanks for being here thank you for having me we feel
0:08
very privileged we're excited to have you right so I'm going to start with some questions and then we're going to open it up let me get straight into
0:13
it so Bill Gates said there is no one in our time who has done more to push the
0:19
bounds of science and innovation than you well it's kind of you to say well that's a nice thing to have anyone say
0:24
about you nice coming from Bill Gates but oddly enough when it comes to AI actually for around a decade you've
0:31
almost been doing the opposite and saying hang on we need to think about what we're doing and what we're pushing
0:36
here and what do we do to make this safe and actually maybe we shouldn't be pushing as fast or as hard as we are
0:42
like I mean you've been doing it for a decade like what was it that caused you to think about it that way and you know
0:47
why do we need to be worried yeah I've been somewhat of a Cassandra for quite a while um where when I'd tell
0:54
people we should really be concerned about AI they'd be like what are you talking about they've never really had any experience with AI but
1:00
since I was immersed in um technology I have been immersed in technology for a long time I could see it coming um
1:08
so uh but I think this year there've been a number of breakthroughs I mean
1:14
you know the point in which someone can see a dynamically created video of themselves um you know like somebody
1:20
could make a video of you saying anything in real time um or me um and uh
1:26
so the sort of the deepfake videos which are really incredibly good in fact sometimes more convincing than real ones
1:31
um and deep real um and then obviously
1:40
things like ChatGPT were quite remarkable now I saw GPT-1, GPT-2, GPT-3, GPT-4
1:47
that you know the whole sort of lead up to that so it was easy for me to um kind of see where it's going if you just sort
1:53
of extrapolate the points on a curve and assume that Trend will continue then we will have um profound artificial
1:59
intelligence and obviously at a level that far exceeds human intelligence um
2:05
so um but I'm I'm glad to see at this point that uh people are taking uh
2:11
safety seriously and I'd like to say thank you for holding this AI safety conference I think actually it will
2:16
be regarded in history as being very important I think it's really quite profound um and
2:23
um and I do think overall that the potential is there for artificial intelligence to
2:30
um have most likely a positive effect um and to create a future of abundance
2:38
where there is no scarcity of goods and services um but it is somewhat of the magic genie problem where if
2:46
you have a magic genie that can grant all the wishes um usually those stories um don't end
2:53
well be careful what you wish for including wishes yeah yeah so
2:58
you talked a little bit about the summit and thank you for being engaged in it which has been great and
3:04
people enjoyed having you there participating in this dialogue now one of the things that we achieved today in
3:10
the meetings between the companies and the leaders was an agreement that
3:16
externally ideally governments should be doing safety testing of models before
3:21
they're released I think this is something that you've spoken about a little bit it was something we worked really hard on because you know my job
3:27
in government is to say hang on there is a potential risk here not a not a definite risk but a potential risk of
3:34
something that could be bad you know my job is to protect the country yes that's right and we can only do that if we develop
3:40
the capability we need in our safety Institute and then go in and make sure we can test the models before they are
3:45
released I'm delighted that happened today but you know what's your view on what we should be doing right you've
3:51
talked about the potential risk right again we don't know but you know what are the types of things governments like ours should be doing to manage and
3:57
mitigate against those risks well I generally think that that it is good for
4:02
government to play a role when the public safety is is at risk so um you
4:07
know really for the vast majority of software um the public safety is not at risk I mean if if the if the uh app
4:13
crashes on your phone or your laptop it's not a a massive catastrophe um but when you're talking about digital
4:20
superintelligence which I think does pose a risk to the public then there is
4:25
a role for government to play to safeguard the interest of the public and and this is of course true in many
4:31
fields um you know aviation cars and I deal with regulators
4:37
throughout the world uh because of um Starlink being communications rockets being aerospace and cars you know being
4:44
road vehicle transport so I'm very familiar with dealing with regulators um and I actually agree with the vast majority of regulations there's
4:51
a few that I disagree with from time to time but probably 0.1% or less of regulations I disagree with
4:57
so um and there is some concern from uh people in Silicon Valley who have
5:03
never dealt with regulators before and they think that this is going to just crush innovation and slow them down
5:09
and be annoying and uh it will be annoying it's
5:15
true um they're not wrong about that um but I think we've learned over the years that uh having a
5:23
referee is a good thing and if you look at any sports game there's always a referee and nobody's suggesting I
5:30
think to have a sports game without one um and I think that's the right way to think about this for um
5:30
government to be a referee to make sure there's sportsmanlike conduct
5:43
and that the public safety is um you know is addressed that we care about the
5:49
public safety because I think there might be at times too much optimism about technology and I speak I say that
5:55
as a technologist I mean so I ought to know um and uh like I said
6:02
on balance I think that AI will be a force for good most likely but the
6:08
probability of it going bad is not 0% yeah so we just need to mitigate the downside potential and then you
6:15
talked about a referee and that's what we're trying to be right there yeah well there we go I mean you know we talked about this and Demis and I discussed
6:21
this a long time ago and actually you know Demis to his
6:28
credit and the credit of people in industry did say that to us you know Demis was saying it's not right that Demis and
6:33
his colleagues are marking their own homework right there needs to be someone independent and that's why we've developed the Safety Institute here I
6:40
mean do you think governments can develop the expertise one of the things we need to do is say hang on you know
6:45
Demis and Sam and all the others have got a lot of very smart people doing this governments need to quickly tool up
6:45
capability-wise personnel-wise which is what we're doing I mean do you think it is possible for governments to do that
6:56
fast enough given how quickly the technology is developing or what do we need to do to make sure we do it quick
7:02
enough no I think it's a great point you're making um the pace of AI is faster than any
7:10
technology I've seen in history by far um and it seems to be growing in capability by at least fivefold
7:18
perhaps tenfold per year it'll certainly grow by an order of magnitude next year yeah
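The fivefold-to-tenfold figure is Musk's rough estimate rather than a measured benchmark, but the compounding it implies is easy to make concrete; a minimal sketch (the function name and numbers are illustrative only):

```python
def capability_multiple(growth_per_year: float, years: float) -> float:
    """Relative capability after `years` of constant multiplicative growth."""
    return growth_per_year ** years

# At 5x per year, two years compound to a 25x gap; at 10x per year a
# single year is already the "order of magnitude" mentioned above.
print(capability_multiple(5, 2))   # 25
print(capability_multiple(10, 1))  # 10

# The same arithmetic underlies the open-source point made later in the
# conversation: lagging the frontier by 6 to 12 months at 5x yearly
# growth leaves you roughly 2.2x to 5x behind.
print(round(capability_multiple(5, 0.5), 1))  # 2.2
```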
7:26
government isn't used to moving at that speed um but I think even if there are not um firm regulations um even if
7:35
there isn't an enforcement capability simply having insight and being able to highlight concerns to the public will be very
7:42
powerful um so even if that's all that's accomplished I think that will be very
7:48
very good okay yeah well hopefully we can do better than that hopefully yeah no but that's helpful actually we
7:53
were talking before it was striking you someone who's spent their life in technology living Moore's law and what
8:00
was interesting over the last couple of days talking to everyone who's doing the development of this and I think you'd
8:06
concur with this is just the pace of advancement here is unlike anything all
8:11
of you have seen in your careers in technology is that fair because you've got these kind of compounding effects
8:17
from the hardware and the data and the personnel yeah um I mean
8:26
currently the two leading centers for AI development are the San Francisco Bay Area and the sort of London area um and
8:32
there are many other places where it's being done but those are the two leading areas so I think if um you know if
8:39
the United States and the UK um and China are um sort of aligned on
8:45
safety that's all going to be a good thing because that's really where the leadership is generally I mean you actually
8:51
mentioned China so I took a decision to invite China to the summit over the last couple of days and it was not
8:58
an easy decision a lot of people criticized me for it you know my view is if you're going to try and have a serious conversation you need them there but what
9:04
would your thoughts be you do business all around the world you just talked about it there yeah you know should we be engaging with them can we trust them is
9:11
that the right thing to have done if we don't if China is not on board with uh AI safety it's somewhat of a moot
9:18
situation um the single biggest objection that I get to any kind of AI regulation
9:24
or sort of safety controls um is well China's not going to do it and therefore they will just jump into the
9:30
lead and exceed us all um but actually China is willing to participate
9:35
in AI safety um and thank you for inviting them and you know
9:41
I think we should thank China for attending um when I was in
9:48
China earlier this year my main subject of discussion with the leadership in China was AI safety and
9:54
saying that this is really something that they should care about and um they took it seriously and
10:00
um and you are too which is great um and having them here I think was essential really if they're
10:07
not participants it's pointless yeah and I think we were pleased they were
10:12
engaged yesterday in the discussions and actually ended up signing the same communiqué that everyone else did that's
10:18
great which is a good start right and as I said we need everyone to approach this in a similar way if we're going to have
10:23
I think a realistic chance of resolving it I was going to say you talked about innovation earlier and regulation
10:29
being annoying there was a good debate today we had about open source and I think you've kind of been a
10:34
proponent of algorithmic transparency and making some of the X algorithms public and actually we were talking
10:41
about Geoffrey Hinton on the way in yeah you know he's particularly been very concerned about open-source models
10:48
being used by bad actors you've got a group of people who say they are critical to innovation happening in that
10:54
distributed way look there's probably no perfect answer and there's a tricky balance
10:59
what are your thoughts on how we should approach this open-source question or you know where should we be targeting
11:05
whatever regulation or monitoring that we're going to do well the open
11:11
source um algorithms and data tend to lag the closed source by 6 to 12 months
11:18
um but given the rate of improvement there's actually therefore quite a big difference between
11:24
the closed source and the open um if things are improving by a factor of let's say five or more um then
11:32
being a year behind means you're five times worse so it's a pretty big difference and that might actually be an okay
11:38
situation um but it certainly will get to the point where you've got open
11:44
source um AI that will start to approach human-level
11:49
intelligence and perhaps exceed it um I don't know quite what to do about it I think it's somewhat inevitable
11:55
there will be some amount of open source and I guess I would have a slight bias towards open source uh because at
12:01
least you can see what's going on whereas with closed source you don't know what's going on now it should be said
12:06
with AI that even if it's open source do you actually know what's going on because if you've got a gigantic data
12:12
file and um you know sort of billions of
12:18
data points weights and parameters uh you can't just read it and see what
12:23
it's going to do it's a gigantic file of inscrutable numbers um you can test it
12:29
when you run it you can run a bunch of tests to see what it's going to do
12:34
but it's probabilistic as opposed to um deterministic it's not like
12:40
traditional programming where you've got very discrete logic and
12:45
the outcome is very predictable and you can read each line and see what each line's going to do um uh a neural net
12:52
is just a whole bunch of probabilities um I mean it sort of ends up being a giant comma-separated-value
12:59
file it's like our digital god is a CSV file
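The point that a model is a gigantic file of inscrutable numbers you can only probe by running it can be illustrated with a toy network; the layer names, shapes, and random weights below are purely illustrative, not any real model's format:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A "model" here is nothing but arrays of floats -- on disk it could just
# as well be dumped as one giant CSV of weights, as described above.
weights = {
    "layer1": rng.standard_normal((4, 8)),
    "layer2": rng.standard_normal((8, 2)),
}

def forward(x: np.ndarray) -> np.ndarray:
    """Run the network -- the only way to find out what the numbers do."""
    hidden = np.tanh(x @ weights["layer1"])
    return hidden @ weights["layer2"]

# Reading the weights tells you almost nothing about behaviour; you can
# only characterise the model statistically, by testing it on many inputs.
probes = rng.standard_normal((1000, 4))
outputs = forward(probes)
print(outputs.shape)  # (1000, 2)
```

Scaled up to billions of parameters, this is why behavioural testing of the kind discussed for the Safety Institute, rather than reading the model like source code, is the relevant tool.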
13:04
really okay um but that that is kind of what it is yeah now that that point
13:11
you've just made is one that we have been talking about a lot because again conversations with the people who are
13:16
developing the technology make the point that you've just made it is not like normal software where there's
13:21
predictability about inputs improving leading to this particular output improving and as the models iterate and
13:28
improve we don't quite know what's going to come out the other end I think Demis would agree with that which is why I think there is this uh bias for look we
13:36
need to get in there while the training runs are being done before the models are released to understand what this new iteration has brought about in terms of
13:43
capability which it sounds like you would agree with I was going to shift gears a little bit you know
13:49
you've talked a lot about human consciousness human agency which actually might strike people as strange given that you are known for
13:56
being such a brilliant innovator and technologist but it's quite heartfelt when I hear you talk about it
14:02
and the importance of maintaining that agency in technology and preserving human consciousness now it kind of links
14:08
to the thing I was going to ask is when I do interviews or talk to people out and about in this job about AI the thing that comes up most actually is
14:15
probably not so much of the stuff we've been talking about but jobs it's what does AI mean for my job is it going to
14:22
mean that I don't have a job or my kids are not going to have a job now you know my answer as a policymaker
14:29
as a leader is you know actually AI is already creating jobs and you can see that in the companies that are starting
14:35
also the way it's being used is a little bit more as a co-pilot versus necessarily replacing the person there's
14:42
still human agency but it's helping you do your job better which is a good thing and and as we've seen with technological
14:49
revolutions in the past clearly there's change in the labor market and the number of jobs I was quoting an MIT study today
14:56
that they did a couple of years ago something like 60% of the jobs at that moment didn't exist 40 years ago so hard
15:01
to predict and my job is to create an incredible education system whether it's at school whether it's retraining people
15:07
at any point in their career because ultimately if we've got a skilled population they'll be able to keep up with the the pace of change and have a
15:13
good life but you know that it's still a concern and you know you what would your
15:18
kind of observation be on on AI and the impact on labor markets and people's jobs and how they should feel about that
15:24
as they think about this well I think we are seeing the most disruptive force in history here
15:33
um you know for the first time we will have something that is smarter than the
15:38
smartest human um and that I mean it's hard to say exactly
15:44
what that moment is but but there will come a point where no job is needed you
15:50
can have a job if you want to have a job for sort of personal satisfaction but
15:55
the AI will be able to do everything so I don't know if that makes people
16:01
comfortable or uncomfortable [Laughter] you know that's why I
16:06
say if you wish for a magic genie that gives you any wishes you want
16:11
and there's no limit none of that three-wish-limit nonsense you just have as many wishes as you want um so uh it's both
16:21
good and bad um one of the challenges in the future will be how do we find meaning in life if you have a genie
16:28
that can do everything you want I do think it's hard you know this new technology
16:34
tends to usually follow an S-curve in this case we're going to be on the exponential portion of the S-curve for a
16:41
long time um and you'll be able to ask for anything we won't
16:48
have universal basic income we'll have universal high income so in some sense it'll be somewhat of a leveler um
16:55
or an equalizer you know because really I think everyone will have access to this magic genie um and you'll be able to ask
17:02
any question it'll certainly be good for education it'll be the best tutor you could have the most patient
17:08
tutor uh it'll sit there all day
17:13
um and uh there will be no shortage of goods and services it will be an age of abundance um I'd recommend
17:21
people read uh Iain Banks the Banks Culture books are
17:27
probably the best envisioning in fact not probably they're definitely by far the best envisioning of an AI future um
17:32
there's nothing even close so I'd really recommend Banks I'm a very big fan um all his books are good
17:40
um it doesn't matter which one all of them um so that'll give you a
17:47
sense of what is I guess a fairly utopian or
17:54
protopian um future with AI yeah um which is good as you said it's a
18:01
universal high income which is a nice phrase and it's good from a kind of materialistic sense of abundance
18:06
actually that it kind of then leads to the question that you pose right I'm someone who believes you know work gives
18:11
you meaning right I think a lot about that as you know I think work is a good thing it gives people purpose in their lives and if you then
18:19
remove a large chunk of that you know what does that mean and where do you get that you know where do you get that
18:24
drive that motivation that purpose I mean you were talking about it you work a lot of hours I do no as I was
18:30
mentioning when we were talking earlier I have to somewhat engage in deliberate suspension of disbelief um because I'm putting so much blood
18:36
sweat and tears into a work project and burning the you know 3:00 a.m. oil um
18:42
then um I'm like wait why am I doing this I can just wait for the AI to do it I'm just lashing myself for no reason
18:50
yeah um must be a glutton for punishment or something um so we'll call Demis and
18:57
tell him to hurry up and then you can have a holiday right that's the plan yeah no it's a tricky it's a
19:03
tricky thing because I think you know part of our job is to make sure that we can navigate to that very I think
19:10
largely positive place that you're describing and help people through it between now and then because these
19:15
things bring about a lot of change in the labor market as we've seen yeah um I think it probably is generally a
19:22
good thing because you know there are a lot of jobs that are uncomfortable or dangerous or sort of tedious um and the
19:29
computer will have no problem doing that it'll be happy to do that all day long so um you know it's fun to cook food but it's
19:35
not that fun to wash dishes but the computer's perfectly happy to wash dishes um I guess there is um you know
19:43
we still have uh sports where humans compete like the Olympics and obviously um a machine can
19:52
go faster than any human but we still have humans race against each other um and uh have
19:59
all these sports competitions against each other where even though the machines are better they're still I guess competing to see
20:05
who can be the best human at something yeah um and people do find fulfillment in that so I guess that's
20:10
perhaps a good example of how even when machines are faster than us or stronger than us we still find a way we
20:15
still enjoy competing against other humans to see at least who's the best human yeah that's a
20:21
good analogy and we've been talking a lot about managing the risks just before we move on and finish on AI let's
20:27
just talk a little bit about the opportunities you know you're engaging lots of different companies
20:32
Neuralink being an obvious one which is doing some exciting stuff you touched on the
20:39
thing that I'm probably most excited about which is in education yeah and I think many people will have seen Sal
20:45
Khan's TED talk from earlier this year about as you talked about it's like a personal tutor yeah a personal tutor
20:51
an amazing personal tutor an amazing personal tutor and we know the difference in learning having that personalized tutor is incredible
20:57
compared to classroom learning if you can have every child have a personal tutor specifically for them that then
21:02
just evolves with them over time that could be extraordinary so that you know for me I look at that I think gosh that
21:08
is within reach at this point and and that's one of the benefits I'm most excited about like when you look at the
21:13
landscape of things that you see as possible what is it that you are particularly excited about I think
21:20
certainly AI tutors are going to be amazing um perhaps already are uh I
21:26
think there's also perhaps companionship which may seem odd because how can the computer really be your friend but if
21:32
you have an AI that has memory you know and remembers all of your interactions and you can
21:39
say give it permission to read everything you've ever done so it really will know you better than anyone perhaps even yourself um and where you
21:47
can talk to it every day and those conversations build upon each other you will actually have a great friend um as
21:54
long as that friend can stay your friend and not get turned off or something don't turn off my
22:02
friends um but I think that will actually be a real thing um and I
22:07
have one of my sons who sort of has some learning disabilities and has trouble
22:13
making friends actually and I was like well you know an AI friend would actually be great for him oh okay you
22:19
know that was a surprising answer that's actually worth reflecting on it's really interesting
22:25
and we're already seeing it actually as we deliver you know psychotherapy now doing far more digitally and by
22:33
telephone to people and it's making a huge difference and you can see a world in which actually you know AI can provide that social benefit to people um
22:41
just a quick question on X and then we should open it up to everybody you made
22:46
a change well you made many changes quite a few one of the changes you love
22:52
that letter yeah I've got a real thing about it you really do you really do one of the
22:58
changes which you know kind of goes into the space that we have to operate in and this balance
23:04
between free speech and moderation that you know we grapple with as politicians you were grappling with your own version
23:10
of that and you moved away from a kind of manual human yeah uh way of
23:16
doing the moderation to the Community Notes and I think it was an interesting change right
23:22
it's not what everyone else has done it would be good to know what was the reasoning behind that and why you
23:28
think that is a better way to do that um yeah part of the problem is if you
23:33
empower people as censors then there's going to be some amount of bias they have um and then whoever
23:40
appoints the censors is effectively in control of information so then the idea behind Community Notes is well how
23:47
do we have a consensus-driven uh I mean so it's not really censoring
23:53
it but a consensus-driven approach to truth how do we make things um the least amount untrue
24:00
like one can't perhaps get to pure truth but you can aspire to be more truthful um so the
24:09
thing about Community Notes is it doesn't actually delete anything it simply adds context now that context
24:14
could be this thing is untrue for the following reasons um but
24:20
importantly with Community Notes um everything is open source actually so you can see exactly the software um every
24:27
line of the software you can see all of the data that went into a community note and you can independently recreate
24:33
that community note so if you see manipulation of the data you can actually highlight that and say well
24:38
there appears to be some gaming of the system um and you can suggest improvements um so
24:46
it's maximum transparency which I think combined with the kind of wisdom of the crowds gets you to
24:52
a better consensus and really one of the key elements of Community Notes is that in order for a note to be shown people who have
24:59
historically disagreed must agree um and there is a bit of AI usage here so we populate a
25:07
parameter space um around each contributor to Community Notes so
25:13
everyone's got basically these vectors associated with them so it's not as
25:20
simple as right or left it's more it's several hundred vectors because things are
25:26
more complicated than right or left and um then we'll do sort of an inverse
25:33
correlation say like okay these people generally disagree but they agree about this note so
25:40
then that gives the note credibility okay um yeah that's the core of it and it's working
25:45
quite well yeah um I've yet to see a note actually be present for more than a
25:51
few hours uh that is incorrect so the batting average is extremely good and when people say oh they're
25:57
worried about Community Notes sort of being disinformation I say send me one and then they
26:03
can't so I think it's quite good I mean the general aspiration with the X platform is to inform and
26:10
entertain the public um and to be as accurate as possible and as truthful as possible um even if someone doesn't like
26:17
the truth you know people don't always like the truth um no not always um but that's
26:25
the aspiration and I think if we stay true to the truth then I think we'll find that people uh use
26:32
the system to learn what is going on I think actually truth
26:38
pays um so what I mean is assuming you don't want to engage
26:44
in self-delusion then um I think it's the smart move
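The bridging mechanism Musk describes, surfacing a note only when contributors who have historically disagreed both rate it helpful, can be sketched in miniature. The real Community Notes algorithm is open source and learns its contributor vectors by matrix factorization over many dimensions; the two-dimensional vectors, names, and threshold below are purely illustrative:

```python
import math

# Illustrative viewpoint vectors per contributor (the real system uses
# hundreds of dimensions; two are used here only to keep the toy small).
raters = {
    "a": (1.0, 0.2),
    "b": (-1.0, -0.1),
    "c": (0.9, -0.3),
}

def cosine(u, v):
    """Cosine similarity between two viewpoint vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def usually_disagree(m, n, threshold=-0.5):
    """Contributors whose viewpoint vectors point in opposite directions."""
    return cosine(raters[m], raters[n]) < threshold

def note_is_shown(helpful_raters):
    """Show a note only if some pair of historically opposed contributors
    both rated it helpful -- the bridging criterion described above."""
    return any(
        usually_disagree(m, n)
        for i, m in enumerate(helpful_raters)
        for n in helpful_raters[i + 1:]
    )

print(note_is_shown(["a", "b"]))  # True: opposed raters agree on the note
print(note_is_shown(["a", "c"]))  # False: only like-minded raters liked it
```

Because both the ranking code and the rating data are published, anyone can recompute a note's status and check for gaming of the system, which is the transparency point made above.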
26:50
you know so excellent very helpful right let's uh open it up to all our guests here we've got some microphones they'll
26:56
come put your hands up they'll come and find you yes go for it thank
27:02
you good evening uh Alice Bentinck from Entrepreneur First uh thank you for a fascinating conversation I suppose a
27:08
question for each of you um prime minister the UK has some of the best universities in the world we have the
27:14
talent what will it take for the UK to be a real breeding ground for
27:19
unicorn companies um and Elon uh being a founder in the UK is still a non-obvious
27:25
career choice for the most exceptional technical Talent what are the cultural elements that we need to put into place
27:31
to change this thank you both Elon do you want to go first go for it um sure well
27:36
you're right that there are um cultural elements where you know the
27:42
culture should celebrate creating new companies um
27:47
and there should be a bias towards supporting um small companies
27:53
because they're the ones that need nurturing the larger companies really don't need nurturing um so you know
28:00
just you can think of it sort of like a garden if it's a little sprout it needs nurturing if it's a mighty oak it doesn't need quite as much um so
28:07
I think uh that is a mindset change that is important um but I should mention that um London is uh
28:17
you know London and San Francisco or the Bay Area are really the two centers for AI so London is actually
28:24
doing very well on that front they're the two I'd say the two leading locations on Earth and you know San Francisco's
28:32
probably ahead of London but London's really very strong or the London area um greater London home counties I
28:39
guess keep going keep going um so I'm just saying objectively
28:45
this is the case um and but you do need the
28:45
infrastructure you need um landlords who are willing to uh rent to new companies you need uh law firms
28:51
and accountants that are willing to support new companies and it's generally a mindset change um and I
29:03
think some of that is happening but I think really it's just culturally people need to decide this is a good
29:08
thing yeah yeah no actually well thanks for what you said about the UK it's something that we work hard on lots of
29:14
people in the room are part of what makes this a fabulous place for innovative companies including uh Alice so Alice
29:21
what I'd say is you know my job is to get all the nuts and bolts right make sure that all of you who are
29:27
starting companies can raise the capital that you need everything from you know your seed funding with our incredible
29:33
you know EIS tax reliefs all the way through to you know your late-stage rounds and we need reform of our pension
29:39
funds and the Chancellor's got a bunch of incredible reforms to unlock capital from all the people who have it and
29:45
deploy it into growth equity right that is a work in progress we're not there yet but I think we're making good progress we need talent we
29:52
need people too, right. So that means an education system that prioritises the things that matter, and you've seen my
29:58
reforms, I go on about more maths, more maths, more maths, and I think that is important. But also attracting the best
30:04
and the brightest here: if you look at our fastest-growing companies in this country, and I think it's probably the
30:09
same in the US, over half of them have a non-British founder, right. And so that
30:15
tells you we've got to be a place that is open to the world's best and brightest entrepreneurial talent. So the
30:21
visa regime that we've put in place, I think, does that; it makes it easy for those people to come here. And then actually
30:26
it's the thing that we spent the beginning of the session talking about: the regulation, right. Making sure that we've got a regulatory system that's pro-
30:33
innovation; of course we always need guardrails on the things that worry us, but we've got to
30:38
create a space for people to innovate and do different things. Those are all my jobs. The thing that is
30:44
tougher is the thing that Elon talked about, which is culture, right. It's how do you transpose that culture from places
30:50
like Silicon Valley across the world, where people are unafraid to give up the
30:55
security of a regular paycheck to go and start something, and be comfortable with failure. You talked about that a lot;
31:02
I think you talked about it more when you were playing games, right. But you've got to be comfortable failing and
31:07
knowing that that's just part of the process, and that is a tricky cultural thing to do overnight, but it's
31:12
an important part, I think, of creating that kind of environment. Yeah, if you don't succeed with your first startup, it
31:18
shouldn't be a catastrophic, career-ending, exactly, thing. It should be, you know,
31:24
like, well, you gave it a good shot, and now try again. Exactly. And so one thing I
31:32
was going to mention is, obviously, creating a company is sort of a high-risk, high-reward situation. And I
31:38
don't know quite how it works in the UK; I think it's probably better than Continental Europe,
31:45
but stock options are very difficult in most parts of Europe. I'm
31:50
not sure how it is in the UK, but if somebody's basically going to risk their life savings, and
31:57
the vast majority of startups fail, I mean, you hear about the startups that succeed, but most
32:04
startups consist of a massive amount of work, followed by
32:10
failure, that's actually most companies. So it's high-risk, high-reward, and so the high-reward
32:16
part does need to be there for it to make sense. Yeah, I think that was a very soft pitch on tax policy, but
32:24
I can tell you, so look, I agree, and we have, I think relative to certainly European countries,
32:31
but certainly the US, definitely California, a much lower rate of capital gains tax, okay, right. So for those people
32:37
who are risking and growing something, we think the reward should be there at the end, so a 20% capital gains tax rate.
32:43
And on stock options, I don't know if we've got anyone from Index Ventures in the room, so, you know, Index, one of our
32:50
leading VC funds here, they do a regular report looking at most countries'
32:56
tax treatment of stock options. And, you know, when I was Chancellor, you
33:02
know, treasury secretary equivalent, I think we were pretty good, but we were fourth or fifth,
33:07
and I said we need to, for exactly the reason that you mentioned, this has got to be the best place for innovators, we need to move that up. And I
33:14
think in the last iteration of that report, because of the changes that Jeremy and I had made, we have
33:19
moved up to, I think, second, from memory. So hopefully that should give you and everyone else some comfort that we
33:25
recognise that's important, because when people work hard and risk things, they should be able to enjoy the rewards of that. High risk, high reward, yeah. And I
33:31
think we very much have a tax system that supports that, and those are the values that I believe in,
33:37
and I think most of us in this room probably do as well. Right, next question. I've got Seb in front of me, and
33:42
then I'll come over here. Go on, Seb. Thanks very much. We've talked about
33:50
some really big ideas, globally changing ideas. I'm really interested, particularly in the context of the creation of science
33:57
and technology superhubs and so on: how does that map onto the everyday lives of
34:02
people living in, say, Austin, Texas, to choose randomly, or, in my case, Nottingham, East Midlands? How do you
34:10
see that evolving for people, you know, every day, the sort of everyday effects of AI?
34:16
Yeah, for context, Elon, Seb runs our equivalent of CVS, right, or
34:23
Walgreens, you know, which I've visited, right. So he's got lots of people coming into his shops every day, and it's making sure
34:29
how do we make this relevant. So is your question, how is this relevant to that person? You know, maybe let me go first on that,
34:36
because I think it's a fair point. I was just going over with the team a couple of things that we're doing,
34:43
because I was thinking, how are we using AI right now so that it's making a difference to people's lives? And we have
34:48
this thing called gov.uk, which actually, when it happened several years ago, was a pioneering thing:
34:55
all the government information brought together on one website, gov.uk. So if you need to get a driving licence, a
35:01
passport, any interaction with government, it was centralised in a relatively easy-to-use way, better
35:09
than most. Better than most, yeah. So we are about to deploy AI
35:14
across that platform. That is something that, I think, several million people a day use, right, so a
35:20
large chunk of the population is interacting with gov.uk every single day to do all these day-to-day tasks, right;
35:26
every one of your customers is doing all those things. And so we're about to deploy AI into that to make that whole
35:33
process so much easier, because some people will be like, look, I'm currently here, and I've lost my passport,
35:39
and my flight's in five hours. At the moment that would require how many steps to figure out what
35:44
you do; when we deploy the AI, it should be that you could just literally say that, and boom,
35:50
boom, boom, this is what we're going to do, and walk you through it. And that's going to benefit millions of people every single day,
35:56
right, because that's a very practical way, in my seat, that I can start using this technology to help people in their
36:02
day-to-day lives, not just healthcare discoveries and everything else that we're also doing. But I thought that's
36:07
quite a powerful demonstration of literally your day-to-day customer seeing their day-to-day
36:13
life get a little bit easier because of something that, you know, Elon, Demis and others in this room have helped create. Yeah, exactly. The most
36:20
immediate thing is just being able to ask, like having a very smart friend that you can ask
36:25
anything: ask how to make something, how to solve any problem, and
36:30
it'll tell you. And obviously companies are going
36:36
to adopt this, so I think you'll have much better customer service; that'll probably be the
36:41
first thing you notice. And then we talked about education,
36:50
so, having a tutor: if you're trying to understand a subject, having a
36:57
phenomenal tutor on any subject, that's really pretty much there already,
37:02
almost. I mean, obviously AI needs to stop hallucinating first; we
37:09
still have a little bit of the problem where it can give you an answer that's confidently wrong, with great grammar
37:15
and bullet points and everything, and citations that were not
37:21
real. So we need to make sure it's not giving you confidently wrong
37:26
answers, but that's going to happen pretty quickly, where it is actually correct. So, yeah. I was going
37:34
to say, for any parent who was homeschooling during COVID and realising what their kids needed help with,
37:41
that will come as an enormous relief, I think. Very, very good. Right, let's go to questions over here. Who have
37:47
we got? Any microphones? Brent, are you there? Perfect. Hi, Bren Hobman. So, you know,
37:53
you've spoken eloquently about abundance and the age of abundance, so it feels, obviously with AI, it's
37:59
everything, everywhere, all at once. But with robots, to
38:05
get to the age of abundance we'll need a lot of robots; I know you're working hard on robots as well. Are there constraints that we should think of, and
38:12
that our politicians should be thinking of, that one country might get heavily behind in robots that
38:18
can do all these things and enter the age of abundance, and therefore be at a strategic
38:23
disadvantage? Well, really anything that can be actuated by a computer is effectively a
38:29
robot, so you can think of, frankly, Tesla cars as robots on wheels;
38:37
anything that's connected to the internet is effectively an endpoint actuator for artificial intelligence.
38:46
You've got Boston Dynamics, obviously; they've been making impressive robots for a while. I think they're at
38:53
this point mostly owned by Hyundai, so I guess Hyundai's probably going to make humanoid robots, and
39:01
some rather interesting shapes, like the one that has wheels and looks sort of like
39:07
a kangaroo on wheels; I'm not sure what that is, but it looks a little dented, frankly. But
39:16
there's going to be all sorts of robots. You've got the company Dyson in the UK, which I think
39:23
does some pretty impressive things. I think the UK will not be behind,
39:30
actually, on that front. The UK also has Arm, which is really one
39:38
of the best, perhaps the best, in chip design in the world. Tesla
39:45
uses a lot of Arm technology; almost everyone does, actually. So I think
39:50
the UK is in a strong position. Germany obviously makes a lot of robots, industrial robots. I think
39:58
generally countries that make robots of any kind, even if they seem somewhat
40:03
conventional, will be fine. I do think there is a
40:10
safety concern, especially with humanoid robots, because at least a car can't
40:16
chase you into this building, not very easily, or chase you up a tree; you can sort of run up a
40:23
flight of stairs and get away from a Tesla. I think there's a Stephen King movie
40:29
about that, if your car gets possessed. But if you have a humanoid robot,
40:35
it can basically chase you anywhere. So I think we should have some kind of hardwired local cut-off
40:45
that you can't update from the internet, because anything that can be software-updated from the internet
40:50
obviously can be overridden. But if you have a local off switch,
40:55
where you say a keyword or something, and that puts the robot into a safe state, some kind of localised safe-state
41:03
ability, an off switch where you don't have to get too
41:10
close to the robot. I don't know, so if you've got millions of these things going all over the place, you're not
41:15
selling it, just, you know... No, I know, I'm saying it is something we
41:20
should be quite concerned about, because if a robot can follow you anywhere, then, you know,
41:27
what if they just one day get a software update and they're not so friendly anymore? Then we've got a James
41:33
Cameron movie on our hands. It's funny you're saying that, because in
41:39
our session that we had today, they made exactly
41:44
the same point, right. So they're talking about movies, actually without mentioning James Cameron, they're talking about James
41:50
Cameron movies, and they're saying, if you think about it, it's not just those movies but any of these movies: trains, subways,
41:59
metros, cars, buses; they said all these movies with the same plot fundamentally all end with the person turning it off,
42:06
right, or finding a way to shut the thing down. And they were making the same point that you were about the importance
42:12
of actual physical off switches, yeah. And so all the technology is great, but fundamentally this same movie has played
42:17
out 50 times, we've all watched it, and you know the point I'm referring to, right: it all ends
42:23
in pretty much the same way, with someone finding a way to just turn the thing off. Which is kind of interesting, that you
42:29
made a similar point; it's probably not the obvious place you'd go to, but maybe that could be one of the
42:34
tests for the AI: we just say, 'blank is your favourite James Cameron movie', fill in the
42:44
blank. Excellent. Right, yes, we've got one over there, yeah.
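The "localized safe state" Musk describes above, a halt mechanism that sits below anything an internet software update can touch, can be sketched as a small control-loop pattern. This is a toy illustration only, not any real robot's firmware: every class and method name here is invented for the example.

```python
# Toy sketch of a "localized safe state": a latched stop that only a
# local, physical channel can set or clear, and that remote software
# updates have no code path to override. All names are hypothetical.

class SafeStateLatch:
    """Once tripped, stays tripped until a local, physical reset."""
    def __init__(self):
        self._tripped = False

    def trip(self):            # wired to a local button / keyword detector
        self._tripped = True

    def reset_locally(self):   # requires physical presence, by assumption
        self._tripped = False

    @property
    def tripped(self):
        return self._tripped


class RobotController:
    def __init__(self, latch: SafeStateLatch):
        self._latch = latch
        self._behavior = "friendly-v1"

    def apply_remote_update(self, new_behavior: str):
        # A remote update can swap out the behavior layer, but there is
        # deliberately no reference to the latch anywhere in this path.
        self._behavior = new_behavior

    def step(self) -> str:
        # Every actuation cycle checks the latch *before* acting.
        if self._latch.tripped:
            return "HALTED"
        return f"acting ({self._behavior})"


latch = SafeStateLatch()
robot = RobotController(latch)
print(robot.step())                          # running normally
robot.apply_remote_update("not-so-friendly-v2")
latch.trip()                                 # local keyword / button
print(robot.step())                          # halted despite the update
```

The design point, matching the conversation: the halt check runs on every actuation cycle and lives outside the updatable behaviour layer, so no update, friendly or otherwise, can disable the stop.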
42:50
Perfect. Hi, a question for you both. So I'm a founder of an AI and ML scale-up in
42:56
the third centre for AI, which is Leeds, in the north of England, I'm a bit biased. Since the launch of ChatGPT, in the three
43:03
months after that we saw a real increase in phishing attacks using much more sophisticated language patterns. What
43:10
do we do to protect businesses and consumers so they trust this technology better, and
43:15
how do we bring them along that journey with us? Well, I think we shouldn't trust it
43:21
that much, actually. It is actually quite a
43:27
significant challenge, because we're getting to the point where even open-source AI can pass human CAPTCHA
43:32
tests. So, you know, this "are you a human, identify all the traffic lights in this picture", you're like, okay, yeah, AI is going
43:39
to have no problem doing that; in fact, it'll do it better than a human, and faster than a human. So we're like, how do
43:45
you, at the point at which AI is better at passing human tests than humans, well, what
43:52
tests actually make sense? That is a real problem; I don't actually have a good solution to it. That's one of the things
43:58
we're trying to figure out on the X platform, how to deal with that, because we really are at the
44:04
point where, even with open-source, readily available AI, you
44:10
don't need to be leading the field, you can actually be better than humans at passing these tests. And
44:16
that's sort of why we're thinking, well, perhaps we should charge a dollar or a pound a year; it's a very tiny amount of money, but it
44:23
still makes it prohibitively expensive to make a million bots. And
44:29
especially if you need a million payment methods, then you run out of stolen credit cards pretty
44:35
quickly. So that's sort of
44:40
where we're thinking we might have to just charge some very tiny amount of money, a third of a cent a day
44:45
effectively, to deal with the onslaught of AI-powered bots. And
44:53
that is still a growing problem, but it will be, I think, perhaps an insurmountable problem next
44:59
year. And then you have to worry about manipulation of
45:05
information, making something seem very popular when in fact it is not, because it's getting boosted by all
45:10
these likes and reposts from AI-powered bots. So that's why I
45:16
think somewhat inevitably it leads to some small payment in order to
45:22
dramatically increase the cost of a bot. So I think, frankly, I
45:28
think probably any social media system that doesn't do that will simply be overrun by
45:33
bots. You know, I think my general answer would be that we need to show
45:39
that we are on top of mitigating the risks, right, so people can trust the technology. That's what actually the last
45:45
couple of days has been about, with the Safety Summit: just showing, you know, we're investing in the Safety Institute,
45:51
having the people who can do the research on these things to figure out how we mitigate against them. And we have to do it fast, and we have to
45:58
keep iterating it, because I think all of us in this room probably believe that the technology can be incredibly powerful, but we've got to make sure we
46:04
bring people along that journey with us, that we're handling the risks that are there. And I said there's a job to do, and
46:11
over the last couple of days I think we made good progress on it, because we want to focus on the positives and manage these
46:16
things, but that requires action, and that's what the last couple of days has been about. And your story, your
46:23
analogy there, was part of the research that the team working on the task force here published and
46:31
presented yesterday, I don't know if you saw it, which was essentially using AI to create a
46:39
ton of fake profiles on social media and then infiltrate particular groups with
46:45
particular information. And actually, at the moment, to your point, it's like cost-free;
46:51
it's getting to the point where it's like, really, you're going to have a hundred for a penny, sort of thing, ridiculous. And if you think about some of these
46:57
social networks at quite a neighbourhood or town level, it's not that many fake profiles that you need to quickly create;
47:02
suddenly they're everywhere, and there's some local issue that might be of importance, and, you know, the team have
47:07
run versions of what that would look like, and suddenly they're interacting with everybody and then spreading misinformation around, exactly your point.
47:15
That was literally part of the research that we published on misinformation yesterday; it's a
47:20
real challenge. Yeah, exactly, to your point, I mean, with the images, you don't even need to steal somebody's
47:26
picture, because that's traceable; you can actually just say, create a new image of a person, realistic-looking but
47:32
doesn't exist, and then create a biography, realistic but doesn't exist,
47:38
and do that en masse, and practically the only way we'd be able to tell is that the grammar is too good, that's the giveaway. Yeah, no typos. Come
47:48
on now, I'm getting waved at because I think we are out of time. Why don't we take one very brief last question, and let's
47:54
make it a good one. Yes, sir, right in front of me, go. Thank you for the opportunity, Elon.
48:00
A question for you, related to the X platform: are there simple things we can do, especially when it comes to visual
48:06
media? You alluded to the fact that it's fairly straightforward and effectively free to make people like yourself say
48:13
and do things that you never said or did, yeah. Can we do something like cryptographically signed media? I'm from Adobe, we're working on this project, yeah,
48:20
Twitter was a member, would love to see X come back, okay: digitally signed media to indicate not only what was created by
48:26
AI but what came from a camera, what was real, yeah, to imbue a sense of trust in media that can go viral. That sounds like
48:33
a good idea, actually; some way of authenticating would be
48:38
good. So, yeah, that sounds like a good idea; we should probably do
48:45
it. There you go. Actually, on that point, this is
48:52
particularly pertinent for people in my job, right, and I've already had a situation happen to me with a doctored
48:57
image that goes everywhere, negative; by the time everyone realises, well, that's fake and we should stop sending it, the
49:04
damage is done. And actually, we were again reflecting today: if you think, next year you've got elections in,
49:12
you know, I think the US, India, I think Indonesia, probably here, there
49:17
you go, massive news. And actually you've
49:23
got just an enormous chunk of the world's population voting next year, right, and you've got EU elections as well, and
49:30
actually just these issues are right in front of us. Next year, with big elections across the globe, is
49:36
probably the first set of elections where this has been a real issue, yeah. So figuring out how we manage that is, I
49:43
think, kind of mission-critical for people who want, you know, the integrity of our democracy. Yeah, I mean, some of it
49:48
is quite interesting, like the Pope in the puffer jacket, have you seen that one? I haven't. That's amazing. But I mean,
49:54
I still run into people who think that's real. I'm like, well, what are the odds he's wearing a puffer jacket in July in Rome? He'd be sweating,
50:03
but it actually looked quite dashing, I'd say. In fact, I think AI fashion is
50:08
going to be a real thing. So, despite the doom and gloom, we live in the most interesting times, and I think it is, you know, like 80% likely to be
50:17
good and 20% bad, and I think if we're cognisant and careful about the bad part, on balance actually it will
50:23
be the future that we want, or the future that is preferable. And it
50:29
actually will be somewhat of a leveler, an equalizer, in the sense that
50:35
I think everyone will have access to goods and services and education, and so I think probably it leads to
50:40
more human happiness. So I guess I'd probably leave on an optimistic note. Perfect, yeah, that's a,
50:48
that is a great note to end on. I think that we all want that better future; we think the
50:53
promise of it is certainly there; lots of people in this room, including yourselves, are working hard to make it happen; our job in government is to make sure it
50:59
happens safely. But on the basis of this conversation and the last couple of days, I'm certainly leaving more confident
51:05
that we can make that happen. So it's been a huge privilege and pleasure to have you here. Thank you very much for having [Applause]
51:15
me
