Wednesday, 30 January 2013

Myth #8: Security is too expensive

Let’s not kid ourselves – security isn’t cheap.  We have to buy hardware and software, hire staff, pay for training and engage auditors, and in each case somebody is putting a hand in our pocket and taking their cut.  But that’s not what this myth is about.  The myth is that security is too expensive, that it doesn’t add value and that it’s only a “nice to have”.

Instead of thinking about security, for a moment imagine taking your family to a public swimming pool for a fun day out…

Public pools have fences.  They have lifeguards.  The water in the pool is the right depth, the right temperature and properly treated so that it is safe.  They have non-slip surfaces and signs that say “no running”.  They have lots of controls, all designed to keep everyone safe, and most of them go unnoticed.

But the fences aren’t 10m high.  There aren’t hundreds of lifeguards.  The water still splashes out of the pool.  There aren’t patrols with assault rifles enforcing the “no running” rule.  These controls would be silly, and a waste of money.

Security can be too expensive if it is spent in the wrong place, whether in a business or a public pool.  Businesses that overspend on hardware and underspend on testing are wasting money, just like putting armed guards at a public pool.  They probably believe security is too expensive, but their real problem is where they spend, not how much.

For some businesses security is not considered a cost at all; it is a core strategy.  Qantas is rightly proud of its safety record.  They don’t believe that safety is too expensive.

Information security is really just data safety.  Know what information is important to your business and protect it well, but not too well.

Security is a measure of the health of your company, and that makes this myth plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Monday, 21 January 2013

Myth #7: A security review is just an audit

Here’s the thought process behind this myth: security is just risk management; risk management underpins compliance; compliance is driven by audit; audit is well understood.  There are many well-defined and accepted audit methodologies for areas such as finance (SOX, SAS 70), process (COBIT) and security (ISO 27001).  Therefore any competent auditor with an appropriate checklist should be able to perform a security review.

All risk management methodologies, whether qualitative or quantitative, assume that risk is the product of impact (what the loss will be if the event occurs) and likelihood (how likely the event is).  Under this methodology, events which are catastrophic but rare and events which are insignificant but almost certain may both be labelled as medium risk.  And the beauty of medium risk events is that they are almost always accepted by the business.
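
As a minimal sketch of that scoring, assuming hypothetical 1–5 scales and rating thresholds (no particular standard is implied):

def risk_rating(impact: int, likelihood: int) -> str:
    """Rate risk as the product of impact and likelihood."""
    score = impact * likelihood
    if score >= 15:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# A catastrophic but rare event and an insignificant but almost
# certain event land in the same bucket.
print(risk_rating(impact=5, likelihood=1))  # medium
print(risk_rating(impact=1, likelihood=5))  # medium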

The problem is that this analysis is fundamentally flawed when considering security.

The risk management methodology is designed for random and accidental events.  It is well understood how often buildings burn down.  It is well understood how long the average power failure will be.  This is true because actuaries have been recording tables of unlikely events for more than 100 years.  But IT security isn’t old enough as a discipline to have actuarial tables, which is exactly why you can’t buy anti-hacking insurance.

The insurers know something that businesses haven’t worked out yet: attackers completely control the likelihood.  If they have decided to attack, the likelihood is almost certain, no matter how it’s been assessed in a risk methodology.  Being hacked isn’t accidental and it isn’t random.  To improve security, remediate every vulnerability with high impact, not just those assessed as high risk.
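
To make the flaw concrete, here is a hedged sketch, reusing the hypothetical scales above, of what happens to two invented findings once an attacker, not chance, sets the likelihood:

# Two hypothetical findings scored on the same 1-5 scales.
findings = [
    ("Remote code execution in billing app", 5, 1),  # catastrophic but "rare"
    ("Weak passwords on staff portal", 2, 5),        # minor but almost certain
]

for name, impact, likelihood in findings:
    assessed = impact * likelihood
    # A determined attacker makes the likelihood "almost certain" (5),
    # so the residual risk is driven by impact alone.
    under_attack = impact * 5
    print(f"{name}: assessed={assessed}, under attack={under_attack}")

The finding that was waved through as medium risk becomes the clear priority once the likelihood is taken out of the defender’s hands.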

But if you ask an experienced security specialist to undertake a security review with a current and appropriate checklist, and then you act on all of the high-impact findings, it’s plausible that you will be more secure.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Monday, 14 January 2013

Myth #6: We have good physical security

We have good physical security implemented by guards, guns and gates. All systems are in secure server rooms, located at secure sites, and since the bad guys can’t get to them, they can’t attack them.

This myth presupposes good firewalls, so let’s assume that attack from the outside is too difficult. Do organisations really have physical security as good as they believe, and does it keep them safe?

Physical security is implemented by combining three different techniques:

(1) deterrence – making the risks too high to attack in the first place (guns);

(2) prevention – making it too hard or too expensive to attack (gates);

(3) response – having the capability to detect or capture the attacker even if the attack succeeds (guards).

It does seem plausible that if an organisation gets all of these right, physical security will protect it. The problem is that organisations never get them all right, and physical access is almost always the easiest way to attack.

If a bad guy really wants to attack an organisation, none of the deterrence mechanisms matter; they’ve already decided to attack. Strike one.

The only prevention mechanism that has any chance of success is the complete exclusion of all non-employees from a site. If visitors are let in, prevention has been bypassed. If there are any contracts with third-party services at all, all that has been achieved is to require an attacker to buy a second-hand contractor logo shirt from a charity shop. Network-level security inside an organisation is usually very poor, and the attacker has just bypassed the firewall. Strike two.

A competent attacker who is determined to physically attack is going to rely on looking like they should be there and on the normal human reluctance to question strangers. The attacker won’t be stopped even in organisations with a name badge requirement and posters everywhere saying “challenge strangers”. And a simple disguise will make CCTV useless. Strike three.

Put bluntly: deterrence doesn’t work; prevention doesn’t work; and response doesn’t notice. It’s even worse than that, because believing they have good physical security when they really don’t makes organisations blind to physical attack. This is especially true in branch offices.

Physical security underpins everything else, but it isn’t enough by itself, and that is why this myth is busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 7 January 2013

Myth #5: It’s too risky to patch

I can’t count the number of times I’ve been told by a client that it’s too risky to patch. The justifications are varied, but they usually fall into one of these general categories: (a) we can’t afford any downtime; (b) it’s a legacy system; (c) patches have been known to cause problems; or (d) our systems aren’t certified if we patch.

Let’s look at each of them in more detail.

“We can’t afford any downtime” is code for an implementation that has no redundancy or resilience, combined with a lack of understanding of the business process it is supporting. There is no such thing as 100% uptime, as some of the natural disasters of the last year have proved. And if there is a real business requirement for something close to 100% availability, then building a system without appropriate redundancy is an epic fail. This has nothing to do with patching.

“It’s a legacy system” is an excuse used by businesses that have poor strategic planning. A system which no longer has patches available also no longer has support available. If a business is running a critical system with unsupported components, I hope the IT manager is wearing their peril-sensitive sunglasses! That way, when the critical failure comes, it will arrive without all the fear and worry that normally precedes one. This also has nothing to do with patching.

“Patches have been known to cause problems” is an example of the logical fallacy of the excluded middle: it presents only two options, patch and break things or never patch, and excludes the middle ground of patching with proper testing and change control. By the same logic we should never get in a car, because car crashes have caused deaths. It is true that patches sometimes do cause problems, but that isn’t a reason not to patch. While this excuse is at least related to patching, it really points to a poor testing process, insufficient change management and a lack of understanding of risk management.

“Our systems aren’t certified if we patch” is code for letting the vendor set the security posture rather than the business. I mentioned this before in Myth #2 as a problem with outsourcing security, and it’s equally true here. This really doesn’t have anything to do with patching either.

In reality, the certain loss from not patching is far higher than the theoretical loss from patching. In the Defence Signals Directorate’s Top 35 strategies to mitigate targeted cyber intrusions, patching applications is #2 and patching operating systems is #3. I really think that DSD has a much better understanding of the risk than most IT managers.
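
As a back-of-envelope illustration of that comparison, here is a minimal sketch in which every probability and cost is hypothetical; only the shape of the result matters:

# Hypothetical annual probabilities and costs -- not real data.
p_patch_breaks, cost_patch_breaks = 0.05, 20_000   # a patch causes an outage
p_compromise, cost_compromise = 0.50, 500_000      # an unpatched system is hacked

expected_loss_patching = p_patch_breaks * cost_patch_breaks
expected_loss_not_patching = p_compromise * cost_compromise

print(f"Expected annual loss if patching:     ${expected_loss_patching:,.0f}")
print(f"Expected annual loss if not patching: ${expected_loss_not_patching:,.0f}")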

Patching is a foundation for good security as it eliminates the root cause of most compromises. Better patch management leads to lower accepted risk, and this is something that all executives want.

Any system too risky to patch is too risky to run, and that is why this myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com