Monday 27 May 2013

The Law of the Internet

I've spoken to a number of journalists over the last few weeks who have all posited some variation of the simple question: "Don't national governments have the right to govern the Internet?"  While it is a simple question, it doesn't have a simple answer, and perhaps not for the reason that many people think.

There are many technology activists who see cyberspace as the first truly transnational place: where the existing rules don't apply, and a new set of rules - better rules - can take their place; where we can leave meatspace behind and become a meritocracy of the mind.  I strongly recommend reading "Snow Crash" by Neal Stephenson to get a sense of it.

I believe that there are three fundamental problems with this approach: (1) the Internet isn't a place; (2) we don't yet live in a futuristic dystopian civilisation; and (3) the Internet was not designed.  It's the last problem that causes all the angst.

If the Internet had been designed by governments, rather than organically grown by technologists, it wouldn't look anything like what we see today.  There would be the controls that national governments want, but there almost certainly wouldn't have been the flourishing of our society that connectedness has brought us.  If you want a real example of this, look at the Internet in the DPRK, or the Great Firewall of China.  Then imagine each of these interconnected with deep-packet-inspection borders to mimic the physical borders we have today.  There would likely even be Internet passports to allow transit.

But it wasn't designed, and the law-makers were slow coming to the party.  By the time they noticed, it was largely too late.  As John Gilmore famously said in 1993: "The Net interprets censorship as damage and routes around it."  It was too late in 1993, and it's way too late in 2013.

Does this mean that the netizens have won, and society as we know it will necessarily fall?

Of course not.  The problems that the Internet brings are the same problems that all societies have faced throughout human history.  The links are just faster, and the people vastly more connected.  In the past, if someone wanted a business to pay protection money, they had to send a hard man to bully the owners and do a little damage.  There was always the possibility of injury, arrest and incarceration.  Today they DDoS the business's website, with no possibility of injury and almost no chance of getting caught.

But the crime is the same.  And the laws to deal with the physical crime are already on the books.

Governments can and should govern the Internet, but they can't control it.  Governing requires the consent of the governed.  Control requires technology.

The Internet is a people problem.  Until national governments realise this, they have no chance.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday 20 May 2013

Attribution is Easy

Imagine two neighbours - let's call them Alice and Chuck - who aren't friends, but who regularly do business with each other.  Alice is an architect who designs wonderful houses, and Chuck is a carpenter who builds them.

One day Chuck tells Alice that he can't pay as much for her designs as he used to, as he's found another architect who can do them cheaper.  She's a bit dubious as it is a small town, but she lowers her rates a bit, and they keep working together.

Over the next few weeks, Alice is sure that there is someone looking over her back fence during the evenings, but she can't tell exactly who it is.  She doesn't do anything about it or tell anyone.

A few months go by, and Chuck says that he's been studying at night to become an architect, and that he doesn't need Alice any more.  She's shocked: it took her years to get her degree, and she knows that Chuck spends his nights at the pub.  Is it even possible?  How could this be?  Perhaps it's just his way of driving the price even lower.

Her worst fears are realised.  Not only is he building wonderful houses, but they look exactly like her designs.

She confronts Chuck and tells him that she knows he is stealing her designs, and that he's been looking over her back fence to copy them using a camera with a telephoto lens.  Of course he denies it!  But if she lowers her rates just a little more, then they can start doing business again.

This is an economic market working perfectly - if it is cheaper to steal a design than to license it, economic theory drives theft, and it will keep driving theft until the cost of theft exceeds the cost of licensing.
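As a back-of-the-envelope sketch (my notation, nothing more formal than that), a rational Chuck keeps stealing while

    \[ p \cdot F + C_{copy} < C_{licence} \]

where p is his chance of being caught, F the penalty if he is, C_copy his cost of copying a design, and C_licence Alice's fee.  Everything Alice does in the next paragraph works by pushing up p or C_copy until the inequality flips.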

Alice can get as upset at Chuck as she likes, but she needs to realise that she holds the control.  If she increases the protection of her designs by installing night-vision cameras, building a higher fence, and situating her office somewhere with no windows, then she will be able to increase her prices again to cover the investment, and the relationship will continue as before.

Attribution is easy.  Doing something to protect your business is hard.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Tuesday 14 May 2013

Writing Secure Software is Hard

All code is crap.  I know, because I've written a lot of it.  A long time ago, in a galaxy far far away, I wrote a large PHP-based enterprise application that is still in production use today.  It's also being used as an example of how not to write code that will survive penetration testing.

Development of this application started in 1998, so in Internet years it is ancient.  A recent review of the application found that XSS is possible in nearly every field, and that while SQLi is harder to pull off than you'd expect, thanks to a protection framework I wrote to save me from writing bad SQL, it is still possible.
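To make those two bug classes concrete, here is roughly what the difference looks like in PHP - a sketch with invented table and field names, not code from the actual application:

    <?php
    // The 1998 pattern, using the long-deprecated mysql_* API: user input
    // goes straight into the SQL string (SQL injection) and straight back
    // into the page (XSS).
    mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('app');
    $id  = $_GET['id'];
    $row = mysql_fetch_assoc(mysql_query("SELECT name FROM users WHERE id = $id"));
    echo "<p>Hello, " . $row['name'] . "</p>";

    // The same page done properly: a prepared statement keeps the input
    // out of the SQL parser entirely...
    $pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $stmt = $pdo->prepare('SELECT name FROM users WHERE id = ?');
    $stmt->execute([$_GET['id']]);
    $row  = $stmt->fetch(PDO::FETCH_ASSOC);

    // ...and output encoding stops the result being interpreted as HTML.
    echo '<p>Hello, ' . htmlspecialchars($row['name'], ENT_QUOTES, 'UTF-8') . '</p>';

Presumably this is also why the SQLi was harder to find than the XSS: the framework enforced the first habit, and nothing enforced the second.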

How can I have gone so far wrong?  The simple answer is I didn't know any better.

There are two foundational reasons why software developers don't write secure code.  The first is that they aren't told they have to, and the second is that they aren't shown how to do it.  Both of these reasons applied to me - there was no specification for the application that included non-functional requirements such as security, and none of the university-level courses I'd passed on software development even mentioned secure coding practices.  In my defence I'm going to throw in a third reason: these attack types were not well known at the time.

Unfortunately the foundations are just as weak today.  Most development projects don't include security in their requirements documents, and most developers are not taught how to write secure code.  My third defence doesn't save us any more, as the attack types are now very well known.  Rapid prototyping methodologies only make things harder.

It's time to do it better.  All development projects need to have security as part of their project governance, and have developers trained by penetration testers on how their code will be abused.

Microsoft started down this path in 2002 with its Trustworthy Computing initiative, and most other large development companies have since done the same.  But there are still a very large number of web-facing applications being developed without the safety net of a secure software development lifecycle, and they will continue to fall over, much to the surprise of their developers.

The 21st century has been around for a while now, but we're still writing code like it's 1999.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday 6 May 2013

Why People Break the Rules

You have a security policy.  Some of your staff even know where it is.  You have a security awareness campaign.  Every year the staff are required to click through the computer-based training module.  Your IT department has deployed security controls to every workstation to limit what people can do.  Staff have the flexibility of bring-your-own-device.

And yet your staff still break the rules.  Every.  Single.  Day.

At first glance there can only be two reasons for this flagrant violation of the security policy.  Either the staff are ignorant and don't understand the rules, or they are troublemakers, dismissive of the need to follow them.  The obvious answers are more computer-based training and stronger rules.  Perhaps a new screen-saver and some posters will help.

Perhaps not.

I've undertaken social engineering assignments that involved getting into the IT department of a large bank.  Past reception, through the first swipe card door, past the guard station, through the second swipe card door.  Not just once, but on multiple consecutive days to prove that it wasn't a fluke.  Everything was in plain sight of bank employees and contrary to their policies.

So are the bank employees ignorant or troublemakers?  Maybe some are one or the other, but the vast majority are neither.  They are sensible.  Just to be clear, I am saying that breaking the security rules is sensible for many employees of many businesses.

Think about it in economic terms.  Employees are paid for getting the job done.  They get fired for not getting the job done.  They almost never get fired for breaking the rules.  In fact, there is rarely any consequence for breaking the rules at all.

So given the choice of getting the job done by breaking the rules, or not getting the job done by following the rules, which is the rational choice?  The one they all make.

Which brings us to the crux of the matter.  Security rules that get in the way of getting the job done will be ignored.  A culture that puts getting the job done first, and compliance with the rules a distant second, fosters rule-breaking.

When recast this way, there really are two obvious ways forward.  Either promote a culture where we look at security in the same way manufacturing businesses look at safety, and report every incident and near miss.  Or mercilessly fire people for non-compliance.

Love or fear is a choice.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com