
Monday, 22 April 2013

Why no-one gets SCADA security right

SCADA is an acronym for Supervisory Control and Data Acquisition.  That's a bit of a mouthful, and unless you've studied engineering it's not clear what it means, so here's a simple definition: SCADA is computer control of physical processes.  The common examples given are power stations and water treatment plants, but it's much more than that.  Building management systems that control the temperature, lights and door locks: that's SCADA.  The production line at a large bakery that makes your bread: that's SCADA.  The baggage system at the airport that loses your bags: that's SCADA.  The traffic lights that annoy you on your drive to work: that's SCADA.

It's everywhere.  It's all around us.  And it's all implemented badly.  Maybe that's too strong - it's all implemented inappropriately for the threat model we have in 2013.

We have to set the wayback machine to the 1980s to understand how we got into the mess we're in today.

Traditionally SCADA systems were designed around reliability and safety.  Security was not a consideration.  This means that the way engineers think about security is different.  In IT security we consider Confidentiality first, then Integrity and finally Availability.  This matches our real-world experience of security.  But in SCADA systems it's the other way around - Availability first, then Integrity, and finally Confidentiality a very distant third.
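To put the two mindsets side by side, here's a trivial Python sketch of the argument.  The labels and the patch-decision rule are mine, for illustration only, not any formal standard.

```python
# Illustrative only: the priority orderings argued in this post.
IT_SECURITY = ("confidentiality", "integrity", "availability")  # classic CIA
SCADA       = ("availability", "integrity", "confidentiality")  # inverted

def patch_now(priorities):
    """A patch window means downtime: acceptable only if
    availability is not the top priority."""
    return priorities[0] != "availability"

print(patch_now(IT_SECURITY))  # True  - take the outage, fix the hole
print(patch_now(SCADA))        # False - keep the plant running
```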

There are two very good reasons for this approach.

Firstly: Keeping SCADA systems running is like balancing a broomstick on your finger - you can do it, but it takes a lot of control, and if you stop thinking about it, the broomstick falls.  This is the fundamental reason that the dramatic scenes where the bad guy blows up a power station, as shown in movies, just can't happen.  If you mess up the control, the power station stops generating power; it doesn't explode.

Secondly: Every business that controls real-world processes has a culture of safety: they have signboards showing how many days it has been since the last lost-time injury, and are proud that the number keeps going up.  Anything that gets in the way of human safety is removed.  That's why control workstations don't have logins or passwords.  If something needs to be done for a safety reason, it can't be delayed by a forgotten password.

All of this made perfect sense in the 1980s when SCADA systems were hard wired analog computers, connected to nothing, staffed by a large number of well-trained engineers, and located in secure facilities at the plant.

That isn't true now.  Today SCADA systems are off-the-shelf IT equipment, connected to corporate networks over third-party WAN solutions and sometimes the Internet, staffed by very few over-stressed engineers who are sometimes not even in the same country.

So what happened in between?  Nothing.  Really.  SCADA systems have an expected life of about 30 years.  The analog computers were replaced by the first general purpose computers in the late 1980s, and they are only now being replaced again with today's technology.  They will be expected to run as deployed all the way to 2040.

I hope you've stocked up on candles.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 25 March 2013

Personal Information is the Currency of the Internet

When we talk about privacy of personal information on the Internet, what do we mean?  Many people assume it is the punch-line to a joke, as the accepted wisdom is that there is no privacy on the Internet.  But the wisdom of crowds is not something I'd bet on.

Legally, personal information is anything that can identify an individual.  But this is an overly broad definition that includes everything on your business card.  Morally, personal information is whatever sits within the sphere of your domestic life.  But the work/life boundary is increasingly blurred, so that doesn't really work either.  A practical definition of personal information that needs privacy protection is anything that can be used against you.

In the past this has been easy to understand and easy to protect.  We used well understood physical security controls.  If you want to stop someone looking into your bedroom window then close the curtains.  But today it's much harder to understand, as the controls are now all logical, changeable, and set by publicly listed corporations.  If you think you understand the Facebook privacy controls today, wait until they change them tomorrow.

These same public corporations are not privacy advocates.  Facebook and Sun have publicly said that the age of privacy is over.  Google, Microsoft and Apple have all gone to court to fight against having to keep your personal information secure.  But this is entirely rational behaviour on their part - if you don't pay for the service you are not the customer, you are the product.

But do we protest too much?  Do we really care about our privacy?

Turn on a TV anywhere in the Western world and you will be bombarded with reality TV shows.  Go to any news-stand and look at the array of gossip magazines.  These forms of entertainment are very popular, and very, very profitable.  And they are all based on voyeurism and abusing the privacy of others.  There is even a mainstream movie coming out this year called Identity Thief, which lets us laugh along at the hapless victim.

I think there is a single explanation that covers our use of Facebook, explains reality TV, and shows why privacy on the Internet really does make sense.

Personal information is the currency of the Internet.  It's what we use to pay for services.  It should be protected in the same way we protect our wallet, and we should make sensible choices about where to spend it.

For the value we get from Facebook, for most of us the spend is reasonable.  For the winners of reality TV shows, the spend is trivial compared to the real world cash they convert their privacy into, even if the same can't be said for the losers.

But if we don't protect our privacy, we will have nothing left to spend.  And no-one likes being poor.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Wednesday, 20 March 2013

19th century PKI

Over the last few years more and more reports have been published claiming that PKI is fundamentally flawed.  The failure of the Dutch CA DigiNotar is widely claimed to be the final proof.  But I disagree.  The problems with PKI fall into two categories: "you're doing it wrong"; and "you're using it wrong".  Neither of these has anything to do with the fundamental underpinning cryptography.

The problem that PKI is intended to address is trust.  I can trust what you say if someone I trust authorises what you say.  It really is that simple to say, and at the same time fiendishly complicated to implement correctly.

It may surprise you to know that we've been doing PKI since the end of the 19th century, in the role of Justice of the Peace.  This is a person who will witness a signature on an official document.  The receiver of the document trusts that the document is genuine as they trust the JP, and the JP saw you sign it.

However, the 19th-century version had exactly the problems PKI has today.  When I had a legal document witnessed at the local public library, the JP had no way of validating that the form I was signing was genuine.  He also made no effort to validate that what I signed was really my signature, nor that I was the person referenced on the form - which makes sense, as there is no way he could have done that anyway.

What he asserted is that a real person made a real mark on a real piece of paper.  Everything else is covered by laws against fraud.  And this has worked for more than 100 years, and continues to work today.

If we used current PKI to do only this - assert that a real computer made a real communication at a definite time - everything would be fine.  But we don't.  We want to know which computer, and so we ask questions about identity, and then act surprised when the implementations fail us.
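To make the distinction concrete, here's a minimal sketch using the pyca/cryptography library.  The verification step proves only that a real key made a real mark on a real message - exactly the JP's assertion.  Nothing in it says whose key it is; that identity binding is the part that keeps failing.

```python
# Minimal sketch: a signature asserts "a real key signed this message",
# and nothing more. Uses the pyca/cryptography library.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()
document = b"witnessed at the local public library"
signature = signer.sign(document)  # the "real mark on a real piece of paper"

try:
    signer.public_key().verify(signature, document)
    print("Genuine: this key really signed this document.")
except InvalidSignature:
    print("Forged or altered.")

# Note what was NOT verified: who holds the key. Binding a key to an
# identity is the job of certificates and CAs - the part that fails.
```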

PKI is the answer.  It's the question that's wrong.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 12 March 2013

Printing to the Internet

You've deployed a brand new networked printer, and after getting it all set up and working, what's the next step?  How about connecting it to the public Internet, so that anyone, anywhere, at any time can print anything they want and waste all your paper and toner?

Madness, you say!  Not, it would seem, in universities in Taiwan, Korea and Japan.

A little Google hacking and we have 31 Internet-connected Fuji Xerox printers.  Some of them have public IP addresses, but many of them have been actively published through a NAT firewall.  So this was a conscious choice!
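You don't need Google hacking to check whether one of your own printers is exposed - a direct probe of the common printing ports is enough.  Here's a minimal sketch; the host name is a placeholder, and you should only probe addresses you own.

```python
# Minimal sketch: is a printer reachable from here on its printing ports?
# Only probe hosts you own. The host name below is a placeholder.
import socket

HOST = "printer.example.com"   # substitute your printer's public address
PORTS = {9100: "raw/JetDirect", 515: "LPD", 631: "IPP"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} ({name}) is OPEN - anyone can print")
    except OSError:
        print(f"{HOST}:{port} ({name}) is closed or filtered")
```

Run it from a network outside your perimeter; if any port answers, so can the rest of the Internet.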

Perhaps it's just a clever way for attackers to exfiltrate data, but I've learned not to attribute to malice that which is better explained by incompetence.

Here's my advice: If you want to print to a work printer from home, this is not the way to do it.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 March 2013

The Perils of Cloud Analogies

Moving your operations to the cloud is like... a dream for those who love analogies.  All sorts of things have been claimed, but there is only one reality.  It's like outsourcing, because that's exactly what it is.

The biggest business risk with outsourcing is that you replace technical controls with contracts, and while a move from tactical operation to strategic management looks excellent in a business plan, it can fail badly when interacting with the real world.  The claim that "insert-vendor-here" should be better at running the infrastructure because they developed it, is much more an article of faith than a well-reasoned position.

Consider the failure of the Windows Azure platform over the last weekend.  I noticed it when I couldn't play Halo 4.  As a gamer it didn't occur to me that anything deeper was wrong than the Halo servers not working, but it turns out they were hosted on a cloud infrastructure.  And the cloud had failed.  Completely.  The reason: "Storage is currently experiencing a worldwide outage impacting HTTPS operations due to an expired certificate."  In 2013.
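What makes an expired certificate so galling is that expiry is the single most predictable failure in security - the date is written in the certificate itself.  Here's a minimal monitoring sketch using only the Python standard library; the host is a placeholder.

```python
# Minimal sketch: warn when a server's TLS certificate nears expiry.
# Standard library only; the host below is a placeholder.
import socket
import ssl
import time

HOST, PORT = "www.example.com", 443
WARN_DAYS = 30

ctx = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

days_left = (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400
status = "RENEW NOW" if days_left < WARN_DAYS else "OK"
print(f"{status}: {HOST} certificate expires in {days_left:.0f} days")
```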

Information security is a people business, and the people failed.

As Sony previously discovered, the total failure of their game platform is a pain, but it isn't going to threaten the company.  To Microsoft's credit they had it all restored in about 8 hours.

But Windows Azure doesn't just host games - it hosts businesses.  And the same failure happening in the middle of the week would mean that businesses that had fully moved to the Microsoft cloud could do nothing.  No backup.  No failover.  No disaster recovery.  Because all the availability controls were outsourced.  And it is very unlikely that the clients using the service are big enough to make any contractual claim for loss.

This isn't just a Microsoft problem, Amazon had the same sort of outage last year.  Every cloud hosting provider will have these problems.

So here's my cloud analogy: it's like putting all your eggs in one basket - a basket you've never seen and can't locate - along with everyone else's eggs, and having faith that this will be managed well by the fox.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 February 2013

Myth #9: We trust our staff

We are secure because we trust our staff.  We have a solid awareness programme, and after all, security is only a problem on the Internet.  If only it were true.

We might imagine that the most common internal attackers are the IT staff as they have full and unrestricted access to all of our systems.  As Microsoft wisely said in their 10 Immutable Laws of Security, a computer is only as secure as the administrator is trustworthy.

But system administrators aren’t the only insiders with means, motive and opportunity.

The Verizon 2012 Data Breach Investigations Report looked at the type of role held by internal attackers.  The results are eye-opening.  While 6% of breaches were due to system administrators, 12% were by regular employees, 15% by managers and even 3% by executive management!

The truth is that trust must be earned, never expected.  All insiders have means and opportunity; all they need is motive.

To lower the risk, wise businesses perform background checks on new employees moving into sensitive positions, apply appropriate segregation of duties to lower the potential for attack, and then implement good detective controls to catch an attack if and when it happens.
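As an illustration of the detective side, here's a hypothetical sketch that flags privileged actions performed outside business hours.  The log format and records are invented for the example; a real control would read from your SIEM or syslog.

```python
# Hypothetical sketch of a simple detective control: flag privileged
# actions performed outside business hours. The records are invented.
from datetime import datetime

audit_log = [
    ("2013-02-04 14:02:11", "jsmith", "password_reset"),
    ("2013-02-04 02:47:30", "admin1", "database_export"),
]

BUSINESS_HOURS = range(8, 18)  # 08:00-17:59

for ts, user, action in audit_log:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").hour
    if hour not in BUSINESS_HOURS:
        print(f"REVIEW: {user} performed '{action}' at {ts}")
```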

If you trust but verify, then this myth is plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 14 January 2013

Myth #6: We have good physical security

We have good physical security implemented by guards, guns and gates. All systems are in secure server rooms, located at secure sites, and since the bad guys can’t get to them, they can’t attack them.

This myth presupposes good firewalls, so let’s assume that attack from outside is too difficult. Do organisations really have as good physical security as they believe, and does this keep them safe?

Physical security is implemented by combining three different techniques:

(1) deterrence – making the risks too high to attack in the first place (guns);

(2) prevention – making it too hard or too expensive to attack (gates);

(3) response – having a capability to detect or capture the attacker even if successful (guards).

It does seem plausible that if an organisation gets all of these right, physical security will protect it. The problem is that organisations never get them right, and physical access is almost always the easiest way to attack.

If a bad guy really wants to attack an organisation, none of the deterrence mechanisms matter, they’ve already decided to attack. Strike one.

The only prevention mechanism that has any chance of success is complete exclusion of all non-employees from a site. If visitors are let in, prevention has been bypassed. If there are any contracts with any third-party services at all, the only thing that has been done is to require an attacker to buy a second-hand contractor logo shirt from a charity shop. Network level security inside an organisation is usually very poor, and the attacker has just bypassed the firewall. Strike two.

A competent attacker who is determined to physically attack is going to rely on both looking like they should be there, and the normal human nature not to question strangers. The attacker won’t be stopped even in organisations with a name badge requirement and posters everywhere saying challenge strangers. And a simple disguise will make CCTV useless. Strike three.

Put bluntly: deterrence doesn’t work; prevention doesn’t work; and response doesn’t notice. It’s even worse than that, because the belief that organisations have good physical security when they really don’t, makes them blind to physical attack. This is especially true in branch offices.

Physical security underpins everything else, but it isn’t enough by itself, and that is why this myth is busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 17 December 2012

Myth #4: We comply with PCI DSS

There are a lot of organisations that think they are compliant with the controls in the PCI DSS, but really aren’t.  There are even more that were compliant at a point in time in the past, but aren’t now.  But let’s for the moment assume that an organisation really is compliant with the 6 objectives, 12 requirements and 225 controls in the PCI DSS.  Does this mean that it is more secure?

The Verizon 2012 Data Breach Investigations Report provides statistics on organisations that suffered a data breach but should have been compliant with the PCI DSS.  If they were compliant, they were 24× less likely to suffer a loss.  This is a really clear statistic: companies really are far more secure if they are compliant with the PCI DSS.

Of course this shouldn’t be a surprise, since the standard is just good security practice, and if organisations take this good practice and apply it to everything, it naturally follows that they will be more secure.

But there were still breaches from PCI DSS compliant organisations.  This doesn’t imply that the standard isn’t good enough – there is no such thing as perfect security – but more perhaps reflects that the only part of an organisation covered by the standard is the cardholder data environment.  It’s possible to have a compliant cardholder data environment, but neglect security in other areas, and still get compromised.

Compliance drives security, but does not equal security.

If PCI DSS is used as a basis for the entire security culture, then this myth is confirmed.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 3 December 2012

Myth #2: We’ve outsourced our security

We don’t need to worry about security because we’ve outsourced it.  I’ve increasingly heard this from clients, so clearly many large businesses believe it to be true.  As this myth is quite pervasive, it needs more analysis: what do our clients mean by “security”, what do they mean by “outsourced”, and why have they taken this path?

Let’s start with outsourcing.  It’s one of the 10 year cycles in the IT industry: outsource non-core functions, then discover that they actually are core and bring them back in.  Wash, rinse and repeat.  For security this can make more business sense than for IT in general, as most businesses are not set up to support security 24×7, can’t retain the specialists they would need to do so anyway, and aren’t in the security business.  So outsourcing isn’t inherently a problem.

But maybe they aren’t talking about staff.  Maybe it’s just infrastructure that’s been outsourced.  The Cloud Security Alliance has an entire body of knowledge on how to do this well.  So having infrastructure managed by a third-party isn’t inherently a problem either.

So does having your security outsourced make you inherently more secure?  According to the Verizon 2012 Data Breach Investigations Report, the answer is no.  An organisation is just as likely to have had a data breach if the assets are managed internally as externally.  This is a disappointing result, but hardly surprising as managing IT is not the same as managing security.

What many businesses really think they are outsourcing is accountability for security, and that isn’t possible.  Businesses need to define their own security policy, select an outsourcer based on its capability to meet that policy, and then keep the outsourcer honest.  Otherwise they end up with the outsourcer’s risk appetite, which might be quite different from their own.

In the end, you really do get only what you pay for.  If your outsourcer is certified to a recognised international standard, such as ISO 27001, then you will pay more, but you will get a secure result.  If you go down the cheap and cheerful route with security outsourcing, unfortunately you probably won’t end up either cheap or cheerful.

This myth is plausible, as it is possible to successfully outsource security, but it isn’t easy.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Wednesday, 21 November 2012

Myth #1: No-one will attack us

Guess what the following organisations have in common: a game host in the USA; a pizza chain in India; a news aggregator in Mexico; and a festival organiser in Ireland.  Answer: they were all the victims of a data breach during the first three weeks of September 2012.  According to the OSF Data Loss DB, they were just four of the 27 organisations that publicly disclosed that they’d been breached in those three weeks.  The number of undisclosed breaches is probably orders of magnitude greater.

Many organisations feel that they are safe because they don’t believe that anyone is interested in their data.  Even more feel safe because they believe that they’ve never been attacked.

Unfortunately the truth is somewhat more uncomfortable.

Every organisation’s data is interesting to someone: hackers, competitors, hacktivists, even nation states.  And if you are connected to the Internet you have been attacked, and unless you are very lucky or very careful, you’ve been compromised.

But who sets out to steal the corporate secrets of a pizza chain?  This is the wrong question.  The question implies that the target was selected first, and the attack happened second.  In reality, on today’s Internet it’s much more likely that the opposite happened: the entire Internet was attacked, and the vulnerable were selected as targets.  Including the pizza chain.

But is this plausible?  The Internet is big!  You might think that it’s a long way to the corner shop, but that’s nothing compared to the Internet.  The IPv4 Internet has just under 4.3 billion possible addresses (2^32), and as of July 2012 ISC reported that about 900 million hosts were connected.  That is still a lot of address space to attack!  Today automation, fast links, and cloud computing have turned an impossible task into something that can be done for a few dollars in a few days.
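The arithmetic bears this out.  A back-of-the-envelope sketch - the probe rate is an assumption, but a modest one for a small fleet of cloud hosts:

```python
# Back-of-the-envelope sketch: one probe per possible address.
# The probe rate is an assumption for illustration.
IPV4_ADDRESSES = 2**32      # just under 4.3 billion
IPV6_ADDRESSES = 2**128
PROBES_PER_SEC = 100_000    # modest for a small fleet of cloud hosts

ipv4_days = IPV4_ADDRESSES / PROBES_PER_SEC / 86400
ipv6_years = IPV6_ADDRESSES / PROBES_PER_SEC / 86400 / 365
print(f"IPv4 sweep: about {ipv4_days:.1f} days")    # about half a day
print(f"IPv6 sweep: about {ipv6_years:.0e} years")  # effectively never
```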

So every service published on the Internet will be found.  And if it is vulnerable, it will be attacked.  This week.

If you still think that you have weeks to patch your Internet-facing hosts, you are amongst the good company of those who have been compromised but just don’t know it yet.

If you needed an excuse to get your IPv6 migration started, I can’t think of a better one, as it moves scanning the entire Internet back into the impossible category.

Then there are targeted attacks…

This myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com