Regulating the Internet of Things

Speaker 1: So today, we are going … the
speaker is going to be Bruce Schneier. He is the CTO of IBM Resilient, and special advisor to IBM Security. Bruce Schneier: Okay, so that was
a lot of announcements. We will have them straight. I’m pretty sure the IBM home is not actually
a home. Sorry, the Google Home is not an actual home
provided by Google, but a device you put in the home you already own, so just to be clear
on that. So, thanks for coming. It’s always nice to see everybody at RSA yet
again. In 2011, Marc Andreessen said that
software is eating the world. It’s an interesting quote, and I think what
he’s saying is that software is permeating every aspect of our lives. And the way I think of it is that everything
is becoming a computer. So your microwave oven is a computer that
makes things hot. Your refrigerator is a computer that keeps
things cold. Your smart phone is a small, portable
computer that makes phone calls. An ATM machine is a computer with money inside. Your car is a … not a computer, it’s probably
a hundred-plus-computer distributed system with four wheels and an engine. And this is happening in all aspects of our
lives. Right? I mean, a nuclear power plant is a computer
that produces energy. And as everything becomes a computer, computer
security becomes everything security, and this has two important ramifications for
us. One, the knowledge that we all have about
computer security will soon be more broadly applicable. It’ll be applicable to everything, and two,
the restrictions and regulations that are in the physical world are coming into our
computer world. And the beachhead of all of this is the internet
of things. So when I think of the internet of things, I think of it in three pieces. There are the sensors that collect data about
us and our environment. Right, so smart phone location data, or smart
thermostats and light bulbs knowing who’s in the room, or internet-enabled street and
highway sensors. There’s the smarts that figure out what all
this data means and what to do about it. So it's processing, it's memory, and a lot of it's in the cloud. And the third part is the actuators
that affect our environment, so the whole point of that smart thermostat is to regulate
the temperature in the room. The whole point of sensors in our cars is
eventually going to be to drive autonomously. So, when you think about it, we’re creating
an internet that senses, thinks, and acts. And this is the classic definition
of a robot. So I argue that we are, together, creating
a world-sized robot, and we don’t even realize it. Now, this isn’t a robot in the classical sense. We tend to think of robots like we see them
on television and in the movies, so discrete, autonomous entities in a metal shell, with
the smarts inside, and the sensors and actuators on the surface, like Data from
Star Trek. But that’s not what we’re building. What we’re building is distributed, it doesn’t
have a central brain, different parts are controlled by different people, it doesn’t
have a singular goal or focus, and most importantly, it’s not something deliberately designed. This is an emergent property of the computers
and networks that we’ve built. But for our purposes, it is smart things that
act on the world in a direct and physical manner. And of course, smart’s relative. It’s actually pretty dumb, but it’s getting
smarter, and it’s getting more powerful through all the interconnections we’re building. And this is what’s eating the world. This is why internet security becomes everything security, which means all of the lessons we know become broadly applicable. Lessons of security and complexity,
of vulnerabilities and patching, incident response, attackers, their tactics. Right, everything we’ve done for decades is
going to be everywhere, with two real important differences, the effects … Sorry, one important
difference, the effects are greater. So decision making algorithms are going to
have lasting and serious effects. You think of predictive policing, algorithms
that process loans, or college applications, or government services, algorithms
that determine who gets released from jail, or what kind of treatment you get at the hospital. We’re seeing these systems vertically integrated
in a way that threatens the openness and accessibility of the internet. More centralization, more monopolies. The proliferation of sensors erodes privacy and allows ubiquitous surveillance on a global scale. Remember that Google Home you might win? All of these have the potential to deepen social inequities and reinforce social divides. And specific to our field,
the immediate security threats are greater. Cyber-physical systems have real world effects,
and the integrity and availability threats are much worse than the confidentiality threats. So we’re worried about information manipulation as an increasing threat, and both former DNI James Clapper and current NSA director Mike Rogers have testified about this. Denial of service is increasingly a threat,
as these systems become more essential. It’s one thing for Reddit to be DDoS’d. It’s another for your home thermostat to be
DDoS’d in the winter. Hacking is increasingly a threat
in SCADA systems, and then as these things affect our life and property, there’s a threat
there. And of course, confidentiality is still a
threat, especially as these systems become more independent and autonomous. So we spend a lot of time ensuring that our
communications, our encryption, can’t be broken. Who here wants intelligent, independent,
autonomous robots to communicate securely in a way that we can’t listen in? I’m not convinced that’s a good idea. We’re in a world where our smart phone is
emerging as the centralized control device, which leads to single points of failure. We know about class breaks; they’re even more serious now. The whole full disclosure debate takes a very
different tone, when we’re talking about a vulnerability in aircraft avionics,
so same computers, different outcomes. Or put another way, there’s a fundamental
difference between crashing your computer and you lose your data, and crashing your
pacemaker and you lose your life. It might be the same operating system and
the same vulnerability, but the effects are night and day. So there are five truisms from internet security
that we need to take to the broader world. One, most software is poorly written and insecure. We know this. We, in the computer world, don’t want to pay
for quality software. Good, fast, cheap, pick two. We picked fast and cheap, and we do it again
and again, and we had good reasons, but we might want to rethink that, because we know
that poor software is full of bugs, and we know that some bugs are security
vulnerabilities, and some of those are exploitable. Truism two, the extensibility of computerized
systems means that everything can be used against us. Extensibility is fundamental in computers,
and doesn’t exist anywhere else, because computers can be programmed to do anything. The computer in your toaster can get additional
features, can be reprogrammed, can get malware, in a way that manual systems
can’t. So these continuously evolving systems are
hard to secure, because we can’t anticipate every use or every condition, and these systems
can be upgraded with additional features, both ones you like and ones you don’t know
about. Real different. This doesn’t happen to cars, pre-computer. It can’t. Truism three, the complexities of computerized systems result in new insecurities. Now, we know this deep in our core, that complexity
is the worst enemy of security. We know it for all sorts of reasons. You know, we talk about attack surfaces,
and attackers having first mover advantage. We talk about agility of attackers, and the
ponderousness of defenders, but we talk about all this stuff, and it basically
means two things, that attack is easier than defense. It was kind of neat when President Obama,
former President Obama, said that a few months ago. I felt like he was listening to my talks. And two, that security testing is hard in
a way it wasn’t hard before computers. Too many options, too many configurations,
too many interactions. You can’t just do an Underwriters Laboratories
test for computer security, like you can do for light bulb safety. It just doesn’t work. Truism four, there are new vulnerabilities
in the interconnections. The more we connect things to each other,
the more that vulnerabilities in one thing affect other things. So the Dyn attack is a great example of that,
vulnerabilities in DVRs and CCTV cameras allowed a hacker to knock over a DNS provider, which allowed them to drop a couple dozen popular websites. But we see it again and again. There’s a great story by Mat Honan, of how
his identity was stolen, and what happened is … I think I’m going to get this right. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account, which allowed them to take over his Twitter account, and it was a cascade of failures. It’s a really good article to read. Or Target Corporation, where a vulnerability
in their HVAC supplier gave hackers an avenue into their corporate network, and
this is really hard to fix, because no one system might actually be at fault. Security is not composable. You could have two secure systems, put them
together and you get residual insecurity. Not true in the real world, in the same way. And my fifth truism, last, is that computers
and networks are vulnerable in different ways. This is important. The failure modes are different
between computer systems and the mechanical systems they replace, for a whole bunch of
reasons. A lot of it is that the internet is naturally
empowering. It allows things to scale, including attacks. So the notion of a class break: you could have everything secure, and then you wake up one morning and every single copy of, I don’t know, PDF is insecure. It doesn’t happen in the real world. So we know that driverless cars will be much
more secure and safe than regular cars, until they’re not, right? And that will not surprise us, because we
know how class breaks work. That will surprise the rest of the world. The whole software monoculture makes this
work, because we all are subject to the same vulnerabilities, and fewer attackers can do
more damage because of their ability to scale attacks. And this becomes more dangerous as systems
get more critical. Remember, we’re building a robot that affects
the real world, so we are worried about crashing all the cars, shutting down all the power
plants, and so on. It’s science fiction, still, but not stupid
science fiction. And we also know we’re not concerned about
the security against the average attacker. We’re concerned about security
against the 5 sigma guy who can ruin it for everyone. Right, one person writes the Mirai botnet,
then publishes his code, and within a week it’s in dozens of botnets. That’s our world. And soon, that’ll be everyone’s world. So this is a real hard technical problem,
and there are a lot of people working on it. There are a lot of companies on the show floor,
and a lot of people still working in stealth. You know, there are different ways to secure the IoT, or to secure poorly secured systems, whether it’s secure IoT building blocks, security systems that assume a malicious environment, or ways to limit catastrophic effects. There’s a lot of good stuff being researched. I don’t think we’re going to solve this anytime
soon. We’re more likely to muddle through with various
technologies, basically as we’ve done for the past couple of decades. I mean, in the near-term, there are a lot
of people trying to come up with a list of things IoT vendors should be doing. I’ve been collecting those lists. I posted it on my blog last week. I think I had 19 different IoT security guideline
documents. They all basically say the same stuff. Right? Good security practices, good testing, patching,
avoiding known vulnerabilities, secure defaults. Some of them talk about data minimization,
data protection, data accessibility, supporting responsible research, fail-safe
functionality. Some of them talk about a Faraday mode, which should enable the device to function even without the internet. Interoperability, data portability, I mean
we all could write these documents, and they’re all good lists. The question is how to get them adopted. How do you get the company that’s making the
internet-enabled toy, or toaster, or toothbrush? There actually is an internet-enabled toothbrush. Right, how do we get them to adopt
these? I mean, until now, we’ve largely left computer
security to the market. Right, and this conference is a testament
to that market, but if you’re a vendor here, you know that the incentives only work okay. There are lots of externalities to worry about. The interdependencies are really great, and there are collective action problems that markets just can’t solve. But we have been okay with these
imperfect solutions, because the effects of the failures just weren’t that great, and
that’s what’s changing. Additionally, the economics of the internet
of things is different. So our computers and phones are as secure
as they are for two basic reasons. One, there are teams of engineers, at companies
like Microsoft, and Apple, and Google, that are doing their best to design these things to be secure in the first place, and two, those same teams of engineers are able
to quickly and effectively deliver security patches to all end user devices, when vulnerabilities
are found, and patching has gotten much better in the past couple of decades. It’s not great, but it’s real good. But that whole ecosystem doesn’t exist for
low-cost embedded systems, like DVRs or home routers. They’re built at a much lower profit
margin. They’re often built offshore by third parties,
and there just aren’t security teams associated with those devices. Even worse, a lot of them have no way to patch. I mean, the way you update your DVR right
now is you throw it away and buy a new one, and that’s actually not a good mechanism. Also, we get our security from
the fact that our devices keep churning. You replace your phone every couple of years,
your computer maybe every three years, and that’s not true for these cheap, embedded
systems. I mean, I replace my DVR, what every five
to 10 years? My refrigerator every 25 years. I mean, I expect to replace my thermostat
approximately never, and that’s not going to work, because our field doesn’t work that
way. And the market’s not going to fix
this, because neither the buyer nor the seller cares. I mean, think of that DVR that was used in
the Mirai botnet. The buyer of it has no idea it’s part of the
botnet. It’s working perfectly. It was cheap. What’s the problem? The seller doesn’t care. It’s working perfectly. It’s cheap. What’s the problem? This is all an externality. And really, even sort of more broadly, the
market tends not to fix safety or security problems without government intervention. I mean, think of food safety and security,
think of automobile safety, airplane safety and security, product safety, what we’re going
through right now with the safety of financial products, without government intervention,
you don’t get the levels of security you need. And this is getting big fast. I saw a Gartner number, we can argue with
it, but they have us adding 5.5 million devices to the internet every day. That’s about 2 billion per year,
and most of it’s low-hanging fruit for attacks. It’s entry points into larger systems, it gives us larger and more powerful botnets, and some of it’s controlling surprisingly critical
systems. So in general, we have two paradigms of security. There’s paradigm A, that comes from the world
of dangerous things, and this is the paradigm of getting it right the first time. So think of planes, automobiles,
medical devices, buildings. This is the world of regulations, of codes,
of standards, certifications, testing, licensing. Then there’s paradigm B, from our heretofore
benign world of software, and this is the paradigm of make sure your security is agile. This comes from, I guess, rapid update and rapid prototyping, and survivability, recoverability, mitigation, adaptability. We can’t get it right the first time. We can fix it fast. In a sense, we’re trying to balance the cost of failure and the cost to fix. In paradigm A, the cost of failure is very high, and the cost to fix is high. In paradigm B, the cost of failure is low, and the cost to fix is low. A product recall of an automobile, expensive. Rebuilding a building after it collapses on all of us, very expensive. That doesn’t happen in the software world. These two worlds are colliding, in our cars,
I guess literally, our medical devices, building control systems, traffic control systems,
voting machines, and we need to somehow make these paradigms collide, and we’re not doing
great. So we live in a world where Windows XP, which
is what, 14 years old, is still running 95% of our ATM machines. There are medical systems that cannot download
security patches, because doing so invalidates the testing required by the medical certification systems for them to be usable medical devices. Or a nice comparison from last year … Actually,
it was 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. So they actually had a product recall for
a software update. September of last year, Tesla had
a vulnerability in their Model S cars, and downloaded a patch to their users overnight. I mean, it’s sort of interesting to watch
the two different worlds. Primarily, this is a policy problem. This is a problem of law, economics, psychology,
sociology, and getting the policy right’s critical, getting the economics and psychology
correct is critical. Think of email security. Think of spam/anti-spam. Policy, when you get policy wrong, you have
serious problems. Apple versus FBI, a real good example of that,
or the whole debate about the vulnerability equities process. These are very technical policy debates we’re
having in our industry. And law and technology have to work together,
so I think this is the most important lesson from Edward Snowden. We always knew that technology
could subvert law. What Snowden showed us is that law can subvert
technology, and that both have to work together. The practical problem I have, when I think about government involvement, is that there isn’t a regulatory structure to tackle this
at a systemic level, so there’s a fundamental mismatch between the way government works
and the way technology works. Government operates in silos. The FAA regulates aircraft, the FDA regulates medical devices, and the FTC regulates privacy and unfair and deceptive trade practices,
in certain contexts. I can go on, but each agency has different
approaches and different rules, and few have expertise in these issues. The internet is this free-wheeling system
of integrated objects and networks. It grows horizontally, it destroys barriers,
it allows systems that never communicated to communicate. Already, there are apps on my phone that can
log health information, control my energy use, and communicate with my car. I think I’ve just crossed four government
regulatory agencies, and it’s still morning. So any solutions we come up with have to be
holistic, have to approach computers as computers, whether they’re cars, drones, or phones. It’s just different peripherals
on the same computer. So governments have a limited tool box they
use when they look at industries. They can do things ex ante, kind of before
the fact. And that’s like regulations on products or
product categories, licensing of individuals or products, testing requirements. There are things they can do ex post, after
the fact, and that’s like fines for insecurity, or liabilities when things
go wrong, torts. There are things they can do sort of in the middle, so think of product labeling and other transparency measures. Think of a Consumer Reports-like ratings agency, or an NTSB-like forensics agency. Then there’s stuff they can do kind of on
the side, and that might include funding for education and research, or using
its own procurement power to drive requirements. That’s basically what governments can do. And we’re seeing a bunch of movement, I think
primarily in Europe. There’s a new general data protection regulation,
the GDPR, which has strong requirements for privacy, and even stronger penalties. Goods manufactured and sold in Europe have
to have a mark, and you’d see it, it says “CE,” which basically means “Complies
with all applicable standards,” and there is, already, an applicable standard for vulnerability
disclosure, and they’re working on one for secure defaults and for patch management. This kind of stuff gets incorporated in trade
agreements like GATT, and then suddenly, you see it in more places in the world. The international considerations are interesting,
because software is write once and sell everywhere, so for automobiles, you’ll see
car manufacturers make different cars for different environmental regulations. So they’re not going to sell the same car
in California that they sell in Mexico, because the environmental regulations are different,
but for software, it’s easier to sell one thing. If you have to make it more secure, because
the EU demands it, you might as well sell it that way everywhere, because you don’t
lose anything. So my proposal, in the US, is that we need a new regulatory agency. Now, there’s a lot of precedent for this. In the past century, many technologies have
led to the formation of new government agencies. Trains did, cars did, airplanes did, radio
did, the Federal Radio Commission became the FCC. Nuclear power led to the formation of the
Department of Energy. I mean, for a couple of reasons. New technologies need new expertise, and new
technologies need new controls. And this is something markets can’t solve. Markets are, by definition, short-term and
profit motivated. That’s what they’re supposed to do. They don’t solve collective action problems,
and we need some counterbalancing force to corporate power, and government is the entity we use to solve problems like this. So of course, there are lots of problems here. I don’t think we really have the expertise
and willingness to do the work. Regulatory capture is always a problem. We have, here in the United States, a general
unwillingness of Congress to do anything proactive, and there’s a real problem of security versus
safety. Right, the difference between a static safety
environment and an intelligent, adaptive security environment, and how that
changes things, and also how to regulate security in a fast moving technological environment. Not at all clear. Right, so the devil’s in the details here,
and I don’t have them, but I submit that this is the worst possible idea, except for all
the others, and I’m not sure the alternative is viable any longer, because usually when
we’re asked about regulation, we answer, “We want none of the above,” and I
don’t think that’s going to fly anymore, because I think governments are going to get involved
regardless. The risks are too great and the stakes are
too high. Government is already involved in physical
systems, and the physicality of the internet of things will spur them to action. If not that, then it’ll be the actual robots. My guess is the courts are the first branch
of government that will set precedent here, that there will be torts that will
be recognized. I think the existing regulations come in second,
and I think Congress and laws play catch-up, but Congress will follow. I mean, nothing motivates the US government
like fear. I mean, all the strong bias we have towards
leaving the market alone tends to disappear when people start dying. When there’s a disaster, people demand that
government do something. Think of 9/11, and the formation of the Department of Homeland Security, a massive government bureaucracy. And if we don’t watch out, what we’ll get will be something like the Department of Homeland Security, something ill-conceived, and ham-handed, and that doesn’t work very well. So our choice here is not government involvement
or no government involvement. Our choice is smarter government involvement
or stupider government involvement, and we have to start thinking about this now, otherwise this will be imposed on us. We need to make sure that the regulations
that are coming don’t stifle innovation. Now, we always hear that as a threat when
everyone talks about regulation, and it’s unclear whether it’s true. We heard it with, I don’t know, restaurant
sanitation codes, automobile safety regulations. Not a lot of evidence that it does, and my
feeling is if we do this right, it will spur innovation, especially in our
industry. We also, I think, need to start thinking about
disconnecting systems. I mean, if we cannot secure complex systems,
then we must not build a world where everything is connected and everything is computerized. There are other models we can use, local collection,
limits, systems that don’t interact. And we need to start thinking about
more distributed systems, more self-empowerment, and I don’t think these large centralized
systems are inevitable. I mean, there are technical elites pushing
us in that direction, but the arguments aren’t very good, and I believe that we will soon
reach the high-water mark of computerization and connectivity, and that afterwards, we’re
going to make conscious decisions about how and when to connect. And there might be a good analogy with nuclear
power here. The ’70s was the high-water mark in
the use of nuclear power. That’s when we were still talking about nuclear power everywhere. We had a disaster at Three Mile Island, and
we didn’t get rid of nuclear power. We just made more conscious decisions about
when it was a good idea, when it was too hard and too dangerous. So I think that’s coming. Not today. I think we’re still in the honeymoon phase
of connectivity. I think governments and corporations are so
punch drunk on data. You remember the NSA slogan, “Collect it all”? We’re in the middle of “connect it all,” but I think that’s going to change. And morally, I think we need to change the
fabric of the internet, so that evil governments just don’t magically have the tools to create
a horrific totalitarian state. It feels like a bad idea. More generally, we need to start talking about
our future. We rarely, if ever, have conversations about
our technological future and what we’d like to have. Instead of designing our future, we let it
come as it comes, without forethought, or architecting, or planning. When we try to design, we get surprised by
emergent properties. I think this also has to change. I think we should start making moral, and
ethical, and political decisions about how technology should work. Until now, we have largely given programmers
a special right to design, to code the world as they saw fit, and giving them
that right was fine, as long as it didn’t matter. I mean, fundamentally it doesn’t matter what
Facebook’s design is, but when it comes to things, it does matter, so that special right
probably has to end. And also, for us right now, for
all of us, we technologists need to get involved in policy. As internet security becomes everything security,
internet security technology becomes more important to overall security policy, and
we’re never going to get the policy right if the policy makers continue to get the technology
wrong. Think of the going dark debate. Think of the equities debate. Think about the voting machine
debate. Think about driverless car debate. These are all important policy debates happening
right now, that desperately need technologists involved, and if you watched Apple versus
FBI, what you saw were technologists and policy makers talking past each other. Right, the DMCA debate has that same problem. You watch the 702 debate later this year,
you’ll see the same thing. We need to fix this. We need to fix this. Technologists need to get involved in policy
discussions. We need to be on Congressional staffs, in
federal agencies, at NGOs, part of the press. Because getting it right means having our
expertise. And this is a lot bigger than security. I think we need to build a viable career path
for public interest technologists, just like there is right now for public interest
attorneys. If we don’t do that, bad policy happens to
us. All right, so quickly the main points. The computerization of everything will change
our profession, even as it changes the world, and computers that affect the world in a direct and physical manner are a fundamentally different animal in the eyes of the government. And like it or not, government
involvement is coming. When computers start killing people, there
are going to be consequences, and security is an exception to our bias for small government. I think this is coming faster than most people
think. I’ve seen estimates in the tens of billions
of IoT devices by 2020. We need to get ahead of this. We need to start thinking about this, the
pros and cons. We can no longer answer “None of
the above” to government regulation, and the worst outcome is that non-technological policy
makers impose regulations on us. And lastly, we need to bring together policy
makers and technologists, and that’s hard to do, but we need to get involved in the
debate. Thank you. Speaker 1: … microphone there, and a microphone there. Please just come up to the mic and ask your question. Bruce Schneier: While he’s coming up, I’ll tell you, I am
doing … We’re doing a book signing and book giveaway at the IBM booth, at 2:45 today,
so if you all show up, it’ll scare them, which would be awesome for my career, so please
do that. Then at 4:00, there’s actually going to be
alcohol on the show floor. This is a custom brew [inaudible] cocktail that’ll be handed out free, and we’re not going to ask for ID, which is
awesome. Don’t tell them that. Yes. Speaker 3: Thank you for that. Do you have any thoughts about certification,
in terms of something has to be certified before it can get on X.
Bruce Schneier: Yeah, don’t know. I mean-
Speaker 3: I’m thinking issues with rollout, de-certification-
Bruce Schneier: I can tell you why it’s not going to work. So, we have two types
of certification. There is certification of individuals. Right, you had to be a licensed architect
to design this building. You couldn’t be just anybody, right? So we could have that sort of certification,
licensed software engineer, it’s certainly possible, or we could have certification of
our objects. Think of a medical device, has to be … or
a drug, before it can be used. Both are possible. I think both have a role. Both are going to upend our industry dramatically,
but that’s in the government tool kit, and we’re seeing it for medical device
software, so my guess is you’re not going to get a one-size-fits-all regulation, but
you’ll have different pieces, just like for food safety, you don’t have one regulation. They tend to be diced around, but those are
possibilities. Speaker 3: All right, because I’m thinking
the buyer isn’t hurt by this thing, the seller isn’t hurt by [crosstalk]
Bruce Schneier: Oh yeah. Speaker 3: But the community is. Bruce Schneier: Right, and that’s the externality,
and that’s why there’ll be something, but certification is certainly something you might
see. You could probably easily see it in driverless cars. Right, before driverless car software is on
the road, it has to go through this testing certification environment, right? I mean, that’s plausible. [inaudible] there. Speaker 4: Thank you. So, in the context of trying to find the middle
ground between our fear of regulation and our love of small government, what are your
thoughts around the influence that insurance can have on the information asymmetry of
uneducated buyers? How does insurance drive one to become
a better educated buyer of security solutions? Bruce Schneier: So insurance doesn’t need
better educated buyers. Insurance just gives you sort of a buyer that
can do the math. I mean, insurance plays an important role
in a lot of our security systems, because insurers are educated in place of the consumer. Right, in order to get insurance, you have
to do these things. In order for your network to be covered, you
have to buy equipment from this list. Right, you can imagine all sorts of things that insurance
companies could do, that raise the security of the ecosystem, without the buyers
really knowing, so I think insurance has a very powerful role to play. Again, there are a lot of reasons why you can’t lift
the existing models onto computers and networks, but I have been very bullish on insurance
as being a mechanism that the market uses itself, to raise security, but it’s still
… Insurance works because there’s a threat of liability, the threat of torts
at the backend, so I need government to sort of force you to pay attention to insurance. That way insurance can work. Yes. Speaker 5: Bruce, I was wondering if you would
be willing to serve as an example for us in your recommendation that technologists get
involved in policy, and consider running for president in 2020. Bruce Schneier: You know, so I don’t tweet,
so that’s kind of a disqualification right now. I’m not convinced that we are best
served in front of the legislative podium. I think we’re better served behind. I mean, I do get involved in politics a lot,
but it is not as a candidate, and not as an elected official. I mean, if you run for office, I think that’s
an awesome thing to do, and I’m not going to discourage anyone from doing that, but
I would much rather advise elected officials and government agencies, and I do a lot of that,
and I think that’s something we can do in our existing jobs. You don’t have to quit to be the person your
Congresscritter has on speed dial when something happens. Or, you know, you can take a spell at a government
agency for a year or two, and maybe get a sabbatical from work. That’s happening more. So I think advisory role is just as valuable
as being the person whose name is in the voting booth. Yes, please. Speaker 6: I spent some time on that speed
dial list, and I’m starting to work more on policy as well, and one thing I’ve experienced
is some technologist pushback, as if by becoming involved in policy, I’m less of a technologist. I spend less time in front of the computer,
and I’d like to know how we as technologists can reward instead of penalize those of us
who spend more time in DC and less on GitHub. Bruce Schneier: I’m trying to help by making
people recognize it’s important, and I think we need to look at public interest
law as an example. You go back to the 1970s, there was no career
path in public interest law. Now, the ACLU has a job opening, they get
like 200 applications, for a job paying a third of what you’d make in a corporate job, and it took a decade
or more to build that whole ecosystem, where there are courses at universities. There are internships. There are paid jobs. It’s a
whole ecosystem, and we do not have that. There are some people who do this, but they’re
largely exceptions. We just need to make this the norm. I mean, I need someone at the Southern Poverty
Law Center who understands algorithmic discrimination at a deep, fundamental level, because that’s
how discrimination works in the 21st century. I mean, we need those people in these organizations. At Amnesty International, I mean [inaudible] is going to have to know … Because when you start seeing the kind of human rights
violations in this century, they’re going to be data-based. They’re going to be algorithm-based. They’re going to be surveillance-based. And I need people in those organizations who
understand those, just like I need them on Congressional staffs and inside government,
and in the press reporting on this. I mean, we all know one or two people who
do this. This is not nothing. MIT offers a degree in this. It’s technology policy, I think it’s called,
but they’re still exceptions. This needs to be something that many of us do. I mean, right now, 10% of the Harvard
Law School graduating class goes into public interest law. A negligible percentage of computer science
graduates go into public interest technology. That’s what I want to change. Please. Speaker 7: I see that you focus on the role
the government should have in regulations, but I would like to hear your thoughts about
what our role should be. I mean, as the security community and also
the industry on that question. Bruce Schneier: Right, so industry, I don’t
expect them to be anything but profit-motivated. I mean, to the extent that the
industry is with us, it’s largely because it gives them good PR. I mean, I think this is even true of a company
like Apple. I mean, they’re doing what they’re doing because
we’re rewarding them, so continue doing that. Right, that’s good, but for us as individuals,
I think we have to support the right policies. I mean, I really think we need incentives
being changed, and this is where we need to have our voice expressed, not as consumers
but as citizens, so I think that’s what’s missing. There are too many consumers, not enough citizens. All right, you’re my last question. Speaker 8: I read your blog. It’s awesome to see you live. Bruce Schneier: Thank you. Speaker 8: Thank you for being here, and-
Bruce Schneier: I’ll get you drunk later. Speaker 8: Yeah, sure. Bruce Schneier: Unless it’s like awkward for
you, in which case we won’t. Speaker 8: No, I’m on. Bruce Schneier: Okay. Speaker 8: Yeah, I have this question. Like, you were talking about how government
will follow through on this regulation of the internet of things to [inaudible] Even if
they follow up, and we have a regulatory authority in the US, by the US government, and things
get secure, you know that the billions of things you were talking about,
they’re all over the world, right? Bruce Schneier: Yeah. Speaker 8: So instead of talking about … Because
even if things in the US get secure, but all over the world they are insecure, then these
things are still insecure, right? Bruce Schneier: All right, let me stop you,
because we’ve got to end quickly. So he’s right, it’s a real important consideration. This is an international problem. A domestic-only regulatory agency … I mean,
I kind of palmed a huge card there, but I mean, this is something we’re used to. It’s true in nuclear proliferation, small
arms trafficking, money laundering, human trafficking, where we have domestic solutions
for international problems, so we kind of know as a community, how to slowly make things
better, to marginalize states that don’t go along. We do have the benefit that it’s software,
it’s write once, used everywhere. This’ll just be part of a solution, so yes,
you’re right, the international considerations are important. I don’t think they make this unsolvable. All right, I have to get off stage. Thank you very much. I’ll be out there, happy to answer questions. Come by the booth and I’ll say hi again. There are flyers down there. The flyer gets you, I think absolutely nothing
except the booth number. Yes, it does. All right, thank you.

