
Monday, 25 March 2013

Personal Information is the Currency of the Internet

When we talk about privacy of personal information on the Internet what do we mean?  Many people assume it is the punch-line to a joke, as it is the accepted wisdom that there is no privacy on the Internet.  But the wisdom of the crowds is not something I'd bet on.

Legally, personal information is anything that can identify an individual.  But this is an overly broad definition, and includes everything you have on your business card.  Morally, personal information is that which is in the sphere of your domestic life.  But the work/life balance is increasingly blurred, so that doesn't really work either.  A practical definition of personal information that needs privacy protection is anything that can be used against you.

In the past this has been easy to understand and easy to protect.  We used well understood physical security controls.  If you want to stop someone looking into your bedroom window then close the curtains.  But today it's much harder to understand, as the controls are now all logical, changeable, and set by publicly listed corporations.  If you think you understand the Facebook privacy controls today, wait until they change them tomorrow.

These same public corporations are not privacy advocates.  Facebook and Sun have publicly said that the age of privacy is over.  Google, Microsoft and Apple have all gone to court to fight against having to keep your personal information secure.  But this is entirely rational behaviour on their part - if you don't pay for the service you are not the customer, you are the product.

But do we protest too much?  Do we really care about our privacy?

Turn on a TV anywhere in the Western world and you will be bombarded with reality TV shows.  Go to any news-stand and look at the array of gossip magazines.  These forms of entertainment are very popular, and very, very profitable.  And they are all based on voyeurism and abusing the privacy of others.  There is even a mainstream movie coming out this year called Identity Thief that will let us laugh along at the hapless victim.

I think there is an explanation that covers our use of Facebook, explains reality TV, and shows why privacy on the Internet really does make sense.

Personal information is the currency of the Internet.  It's what we use to pay for services.  It should be protected in the same way we protect our wallet, and we should make sensible choices about where to spend it.

For the value we get from Facebook, for most of us the spend is reasonable.  For the winners of reality TV shows, the spend is trivial compared to the real world cash they convert their privacy into, even if the same can't be said for the losers.

But if we don't protect our privacy, we will have nothing left to spend.  And no-one likes being poor.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 March 2013

The Perils of Cloud Analogies

Moving your operations to the cloud is like... a dream for those who love analogies.  All sorts of things have been claimed, but there is only one reality.  It's like outsourcing, because that's exactly what it is.

The biggest business risk with outsourcing is that you replace technical controls with contracts, and while a move from tactical operation to strategic management looks excellent in a business plan, it can fail badly when interacting with the real world.  The claim that "insert-vendor-here" should be better at running the infrastructure because they developed it is much more an article of faith than a well-reasoned position.

Consider the failure of the Windows Azure platform over the last weekend.  I noticed it when I couldn't play Halo 4.  As a gamer it didn't occur to me that anything deeper was going on than the Halo servers being down, but it turns out they were hosted on a cloud infrastructure.  And the cloud had failed.  Completely.  The reason: "Storage is currently experiencing a worldwide outage impacting HTTPS operations due to an expired certificate."  In 2013.
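An expired certificate is exactly the kind of failure that simple automation can catch long before it takes anything down.  As a rough sketch (my own illustration, not Microsoft's tooling, and the hostname is just a placeholder), a few lines of Python can report how many days remain on a server's TLS certificate so that renewal gets flagged weeks in advance:

    import socket
    import ssl
    from datetime import datetime, timezone

    def days_until_expiry(hostname: str, port: int = 443) -> int:
        # Connect to a TLS endpoint and return the days until its certificate expires.
        context = ssl.create_default_context()
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
        not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        not_after = not_after.replace(tzinfo=timezone.utc)
        return (not_after - datetime.now(timezone.utc)).days

    if __name__ == "__main__":
        remaining = days_until_expiry("example.com")  # placeholder host
        if remaining < 30:
            print(f"WARNING: certificate expires in {remaining} days")
        else:
            print(f"OK: {remaining} days of validity remaining")

Run daily against every public endpoint, a check like this turns a worldwide outage into a routine renewal ticket.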

Information security is a people business, and the people failed.

As Sony previously discovered, the total failure of their game platform is a pain, but it isn't going to threaten the company.  To Microsoft's credit they had it all restored in about 8 hours.

But Windows Azure doesn't just host games - it hosts businesses.  And the same failure happening in the middle of the week would mean that businesses that had fully moved to the Microsoft cloud could do nothing.  No backup.  No failover.  No disaster recovery.  Because all the availability controls were outsourced.  And it is very unlikely that the clients using the service are big enough to make any contractual claim for loss.

This isn't just a Microsoft problem; Amazon had the same sort of outage last year.  Every cloud hosting provider will have these problems.

So here's my cloud analogy: it's like putting all your eggs in one basket - a basket you've never seen and can't locate - along with everyone else's eggs, and having faith that this will be managed well by the fox.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 18 February 2013

Information Security Themes for 2013

Everyone else is making predictions as to what will be the important information security trends in 2013.  I think they are all wrong.  Not because the writers are uninformed, just because they are unimaginative.  It’s easy to look to the past, draw a line through the dots, scale it up and to the right, and predict the future.  Except these sorts of predictions are safe and boring, and they never allow for disruptive events.

Here are a few of the safe predictions that others have made:

·  mobile malware will increase
·  state-sponsored attacks will increase
·  malware will get smarter
·  hacktivists will get smarter
·  IPv6 security will matter

I agree with all of them, but then who wouldn’t?  Up and to the right.  And nearly everyone making these predictions sells something to mitigate them.

So what do I think the themes for 2013 will be?  I have only one information security theme that I think really matters.  Only one theme that will confound the industry, and add to the number of grey hairs sported by CIOs.  Only one theme we cannot avoid, even though we are really trying to do so.

Authentication.

Everything else pales in comparison.  It really is back to basics.  2012 was the year that we saw more password dumps than ever before.  It was the year that hash-smashing as a service became mainstream, and not just performed by spooky government agencies.  It was the year that we saw a mobile version of the Zeus crime-ware toolkit built to attack SMS two-factor authentication.  It was the year logging into sites via Facebook became the norm, and not the exception.
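Those password dumps turn into working credentials so quickly because most of them were stored with fast, unsalted hashes.  Here is a rough sketch (my own illustration, timed on whatever machine runs it) of the gap between a fast hash and a deliberately slow, salted key derivation function:

    import hashlib
    import os
    import time

    password = b"correct horse battery staple"

    # Fast, unsalted hash: what many of the breached sites were storing.
    start = time.perf_counter()
    for _ in range(1_000_000):
        hashlib.sha1(password).hexdigest()
    fast = time.perf_counter() - start
    print(f"1,000,000 SHA-1 guesses took {fast:.2f}s on one CPU core")

    # Slow, salted key derivation: each guess is deliberately expensive.
    salt = os.urandom(16)
    start = time.perf_counter()
    for _ in range(10):
        hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    slow = time.perf_counter() - start
    per_guess_ratio = (slow / 10) / (fast / 1_000_000)
    print(f"10 PBKDF2 guesses took {slow:.2f}s, roughly {per_guess_ratio:,.0f}x slower per guess")

A GPU cracking rig only widens the gap for the fast hash, which is why hash-smashing as a service is profitable at all.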

And these are all symptoms of an underlying problem.  Passwords suck.  Passphrases are just long passwords, and they also suck.  Every two-factor scheme out there really sucks – mostly because I have so many different tokens that I have to carry around depending on what I want access to.

The problem is that we are tied into the past: something you know, something you have, something you are.  We spend more and more time trying to prove these to so many disparate systems that the utility of the systems asymptotes to zero.

So instead of looking back we need to look forward: somewhere I am, something I do, something I use.

Instead of trying to authenticate the user, we need to authenticate the transaction.  And that is a hard problem that our backward-looking way of thinking makes even more difficult to address.  Happy 2013.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Wednesday, 30 January 2013

Myth #8: Security is too expensive

Let’s not kid ourselves – security isn’t cheap.  We have to buy hardware, and software, and staff, and training and auditors, and in each case somebody is putting their hand in your pocket and taking their cut.  But that’s not what this is about.  The myth is that it’s too expensive, that it doesn’t add value and that it’s only a “nice to have”.

Instead of thinking about security, for a moment imagine taking your family to a public swimming pool for a fun day out…

Public pools have fences.  They have lifeguards.  They have water in the pool that is the right depth, the right temperature, and has the right treatment to ensure that it is safe.  They have non-slip surfaces and signs that say “no running”.  They have lots of controls all designed to keep everyone safe, and most of them are not noticed by anyone.

But the fences aren’t 10m high.  There are not hundreds of lifeguards.  The water still splashes out of the pool.  There aren’t patrols with assault rifles enforcing the “no running” rule.  These would be silly.  These would be a waste of money.

Security can be too expensive if spent in the wrong place, whether in a business or a public pool.  Businesses that overspend on hardware and underspend on testing are wasting money just like putting armed guards at a public pool.  They probably believe security is too expensive, but that isn’t really their problem.

For some businesses security is not considered a cost at all; it is a core strategy.  Qantas is rightly proud of their safety record.  They don’t believe that safety is too expensive.

Information security is really just data safety.  Know what information is important to your business and protect it well, but not too well.

Security is a measure of the health of your company, and that makes this myth plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Monday, 14 January 2013

Myth #6: We have good physical security

We have good physical security implemented by guards, guns and gates. All systems are in secure server rooms, located at secure sites, and since the bad guys can’t get to them, they can’t attack them.

This myth presupposes good firewalls, so let’s assume that attack from outside is too difficult. Do organisations really have as good physical security as they believe, and does this keep them safe?

Physical security is implemented by combining three different techniques:

(1) deterrence – making the risks too high to attack in the first place (guns);

(2) prevention – making it too hard or too expensive to attack (gates);

(3) response – having a capability to detect or capture the attacker even if successful (guards).

It does seem plausible that if an organisation gets all of these right, physical security will protect them. The problem is that they never get them right, and physical access is almost always the easiest way to attack.

If a bad guy really wants to attack an organisation, none of the deterrence mechanisms matter; they’ve already decided to attack. Strike one.

The only prevention mechanism that has any chance of success is complete exclusion of all non-employees from a site. If visitors are let in, prevention has been bypassed. If there are any contracts with any third-party services at all, the only thing that has been done is to require an attacker to buy a second-hand contractor logo shirt from a charity shop. Network level security inside an organisation is usually very poor, and the attacker has just bypassed the firewall. Strike two.

A competent attacker who is determined to physically attack is going to rely both on looking like they should be there, and on the normal human reluctance to question strangers. The attacker won’t be stopped even in organisations with a name badge requirement and posters everywhere saying “challenge strangers”. And a simple disguise will make CCTV useless. Strike three.

Put bluntly: deterrence doesn’t work; prevention doesn’t work; and response doesn’t notice. It’s even worse than that, because the belief that organisations have good physical security when they really don’t makes them blind to physical attack. This is especially true in branch offices.

Physical security underpins everything else, but it isn’t enough by itself, and that is why this myth is busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 7 January 2013

Myth #5: It’s too risky to patch

I can’t count the number of times I’ve been told by a client that it’s too risky to patch. The justifications are varied, but they usually fall into one of these general categories: (a) we can’t afford any downtime; (b) it’s a legacy system; (c) patches have been known to cause problems; or (d) our systems aren’t certified if we patch.

Let’s look at each of them in more detail.

“We can’t afford any downtime” is code for an implementation that has no redundancy or resilience, combined with a lack of understanding of whatever business process it is supporting. There is no such thing as 100% uptime, as some of the natural disasters in the last year have proved. And if there is a real business requirement for something close to 100% availability, then building a system without appropriate redundancy is an epic fail. This has nothing to do with patching.

“It’s a legacy system” is an excuse used by businesses that have poor strategic planning. A system which no longer has patches available also no longer has support available. If a business is running a critical system with unsupported components I hope the IT manager is wearing their peril-sensitive sunglasses! That way when they have a critical failure it will be without all the fear and worry that normally precedes one. This also has nothing to do with patching.

“Patches have been known to cause problems” is an example of the logical fallacy called the excluded middle. Just because a bad outcome has happened once doesn’t mean it will happen every time. By the same logic, we should never get in a car because car crashes have caused deaths. It is true that patches sometimes do cause problems, but this isn’t a reason not to patch. While this is at least related to patching, it’s actually more about having a poor testing process, insufficient change management, and a lack of understanding of risk management.

“Our systems aren’t certified if we patch” is code for letting the vendor set the security posture rather than the business. I mentioned this before in Myth #2 as a problem with outsourcing security, and it’s equally true here. This really doesn’t have anything to do with patching either.

In reality the certain loss from not patching is far higher than the theoretical loss from patching. In the Defence Signals Directorate top 35 mitigations against targeted cyber-attacks, patching applications is #2 and patching operating systems is #3. I really think that DSD has a much better understanding of the risk than most IT managers.

Patching is a foundation for good security as it eliminates the root cause of most compromises. Better patch management leads to lower accepted risk, and this is something that all executives want.

Any system too risky to patch is too risky to run, and that is why this myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 17 December 2012

Myth #4: We comply with PCI DSS

There are a lot of organisations that think they are compliant with the controls in the PCI DSS, but really aren’t.  There are even more that were compliant at a point in time in the past, but aren’t now.  But let’s for the moment assume that an organisation really is compliant with the 6 objectives, 12 requirements and 225 controls in the PCI DSS.  Does this mean that they are more secure?

The Verizon 2012 Data Breach Investigations Report provides statistics on organisations that suffered a data breach but should have been compliant with the PCI DSS.  Organisations that were compliant were 24× less likely to suffer a loss.  This is a really clear statistic: companies really are far more secure if they are compliant with the PCI DSS.

Of course this shouldn’t be a surprise, since the standard is just good security practice, and if organisations take this good practice and apply it to everything, it naturally follows that they will be more secure.

But there were still breaches from PCI DSS compliant organisations.  This doesn’t imply that the standard isn’t good enough – there is no such thing as perfect security – but more perhaps reflects that the only part of an organisation covered by the standard is the cardholder data environment.  It’s possible to have a compliant cardholder data environment, but neglect security in other areas, and still get compromised.

Compliance drives security, but does not equal security.

If PCI DSS is used as a basis for the entire security culture, then this myth is confirmed.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Tuesday, 11 December 2012

Myth #3: We have the best hardware

We have the best hardware.  We have firewalls from more than one vendor.  We have anti-virus appliances at the gateway.  We have excellent logging capabilities.  We’ve just implemented a data loss prevention solution.  And we’ve had the smartest engineers hook it all up.  Of course we are secure, our vendors told us so!

If you go back to Myth #1, most of the businesses that suffered a data breach had the best hardware.  It didn’t stop the bad guys.

The Verizon 2012 Data Breach Investigations Report has some really enlightening statistics about the timing of data breaches.  Most compromises happened within minutes of initial attack, and data exfiltration happened within minutes of compromise.  But detection of the compromise didn’t happen for months, and containment took weeks after that.  And many of these breaches happened to companies with all the best hardware.

The thinking underpinning this myth is that as technology created the problem, it can also solve it.  Most of these technical systems are scoped, implemented and managed by capable technologists, who are unfortunately blind to the truth.  Information Security is a People Business.  It’s not about the technology.  It’s never been about the technology.

People are the easiest system to attack, and people can subvert any security control.  And much to the annoyance of the technologists, they can’t be patched, and they can’t be upgraded!

Hardware provides a solid platform, and without it security isn’t possible.  But policy, configuration and management trump functionality every time.  Many businesses focus too much on capex, and so overspend on the hardware and underspend on the policy, configuration and management that make it work.

That makes this myth busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 3 December 2012

Myth #2: We’ve outsourced our security

We don’t need to worry about security because we’ve outsourced it.  I’ve increasingly heard this from clients, so clearly many large businesses believe it to be true.  As this myth is quite pervasive, it needs more analysis: what do our clients mean by “security”, what do they mean by “outsourced”, and why have they taken this path?

Let’s start with outsourcing.  It’s one of the 10 year cycles in the IT industry: outsource non-core functions, then discover that they actually are core and bring them back in.  Wash, rinse and repeat.  For security this can make more business sense than for IT in general, as most businesses are not set up to support security 24×7, can’t retain the specialists they would need to do so anyway, and aren’t in the security business.  So outsourcing isn’t inherently a problem.

But maybe they aren’t talking about staff.  Maybe it’s just infrastructure that’s been outsourced.  The Cloud Security Alliance has an entire body of knowledge on how to do this well.  So having infrastructure managed by a third-party isn’t inherently a problem either.

So does having your security outsourced make you inherently more secure?  According to the Verizon 2012 Data Breach Investigations Report, the answer is no.  An organisation is just as likely to have had a data breach whether its assets are managed internally or externally.  This is a disappointing result, but hardly surprising, as managing IT is not the same as managing security.

What many businesses really think they are outsourcing is accountability for security, and that isn’t possible.  Businesses need to define their own security policy, select an outsourcer based on their capability to meet it, and then keep them honest.  Otherwise they end up with the outsourcer’s risk appetite, which might be quite different from their own.

In the end, you really do get only what you pay for.  If your outsourcer is certified to a recognised international standard such as ISO 27001, then you will pay more, but you will get a secure result.  If you go down the cheap and cheerful route with security outsourcing, unfortunately you probably won’t end up either cheap or cheerful.

This myth is plausible, as it is possible to successfully outsource security, but it isn’t easy.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Wednesday, 21 November 2012

Myth #1: No-one will attack us

Guess what the following organisations have in common: a game host in the USA; a pizza chain in India; a news aggregator in Mexico; and a festival organiser in Ireland.  Answer: they were all the victims of a data breach during the first three weeks of September 2012.  According to the OSF Data Loss DB, they were just four of the 27 organisations that publicly disclosed that they’d been breached in those three weeks.  The number of undisclosed breaches is probably orders of magnitude greater.

Many organisations feel that they are safe because they don’t believe that anyone is interested in their data.  Even more feel safe because they believe that they’ve never been attacked.

Unfortunately the truth is somewhat more uncomfortable.

Every organisation’s data is interesting to someone: hackers, competitors, hacktivists, even nation states; and if you are connected to the Internet you have been attacked, and unless you are very lucky or very careful, you’ve been compromised.

But who sets out to steal the corporate secrets of a pizza chain?  This is the wrong question.  The question implies that the target was selected first and the attack happened second.  In reality, on today’s Internet it’s much more likely that the opposite happened: the entire Internet was attacked, and the targets that turned out to be vulnerable were selected afterwards.  Including the pizza chain.

But is this plausible?  The Internet is big!  You might think that it’s a long way to the corner shop, but that’s nothing compared to the Internet.  The IPv4 Internet has a maximum of just over 4 billion directly addressable hosts, and as of July 2012 ISC reported that about 900 million were connected.  That is still a lot of address space to attack!  Today automation, fast links and cloud computing have turned an impossible task into something that can be done for a few dollars in a few days.
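The arithmetic bears this out.  Here is a back-of-envelope sketch (my own numbers; the probe rate is an assumption, not a measurement) of how long a single scanner sustaining one million probes per second would take:

    # Rough scan-time arithmetic for one port across the whole address space.
    PROBES_PER_SECOND = 1_000_000       # assumed rate for one well-connected scanner

    ipv4_addresses = 2 ** 32            # about 4.3 billion
    ipv6_addresses = 2 ** 128           # the full IPv6 space

    ipv4_hours = ipv4_addresses / PROBES_PER_SECOND / 3600
    ipv6_years = ipv6_addresses / PROBES_PER_SECOND / (3600 * 24 * 365)

    print(f"IPv4: about {ipv4_hours:.1f} hours to probe every address on one port")
    print(f"IPv6: about {ipv6_years:.1e} years at the same rate")

That works out to roughly an hour per port for IPv4, so a few days comfortably covers every port an attacker cares about.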

So every service published on the Internet will be found.  And if it is vulnerable it will be attacked.  This week.

If you still think that you have weeks to patch your Internet-facing hosts, you are in the good company of those who have been compromised but just don’t know it yet.

If you needed an excuse to get your IPv6 migration started, I can’t think of a better one, as it moves scanning the entire Internet back into the impossible category.

Then there are targeted attacks…

This myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 20 November 2012

Top 10 Information Security Myths


Any sufficiently complex field has a collection of myths associated with it.  They appear to be a normal part of the expansion of the knowledge base, where a premise is put forward, evaluated, and either accepted or discarded.  Myths can be thought of as the fuel for the scientific method.

However some myths seem to be cherished even when provably false.  This is true in the individual fields of information technology, psychology and law, and when these are put together into the field of information security, the myths become more pervasive and harder to dispel.

In this series we’ve distilled the feedback we’ve had from 10 years of client conversations, and come up with the top 10 myths in information security.

Like all myths, some will be busted, some will prove plausible, and a few will even be confirmed.

Top 10 Information Security Myths
Myth #1: No-one will attack us
Myth #2: We've outsourced our security
Myth #3: We have the best hardware
Myth #4: We comply with PCI DSS
Myth #5: It’s too risky to patch
Myth #6: We have good physical security
Myth #7: A security review is just an audit
Myth #8: Security is too expensive
Myth #9: We trust our staff
Myth #10: We have a security plan

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Wednesday, 31 October 2012

CQR CONSULTING MERGES WITH TECHNOSYS


CQR Consulting and TechnoSys Consulting Services have announced a merger to create one of Australia’s most experienced and diverse information security providers. CQR Consulting Managing Director David Simpson said the move was the result of the strong compatibility between the two companies. “Both consultancies are fiercely independent of information security product vendors and share a passion for working cooperatively with clients to achieve the best outcomes,” Mr Simpson said.

CQR Consulting primarily works in the private sector while TechnoSys is South Australia’s largest provider of information security services to the government sector. “Our combined knowledge and skills position us to provide even greater value to our customers,” Mr Simpson said. The company will continue to trade as CQR Consulting and will be the state’s largest information security service provider, with a workforce of more than 30 security specialists. The merged company will continue to be based at the CQR Consulting offices in Dulwich, central Adelaide, with offices in Sydney, Melbourne and Oxford, United Kingdom.

TechnoSys managing director, Jeff Gwatking, said the merger was an exciting opportunity for the company. “The merger will enable us to offer an expanded set of services and skills to our existing clients, while providing our staff the opportunity of a broader range of clients and projects”, Mr Gwatking said.  “It really is a case of everybody wins.” Mr Gwatking will assume an executive management position in CQR Consulting.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com