
Friday, 30 August 2013

Self Signed Security

For many years we have been evangelising the strength of the hierarchical trust model of PKI and putting up large warning signs whenever we see a self-signed certificate.  I think we got it completely backwards, and have been putting our trust in the wrong place.

The entire PKI architecture was designed to solve the man-in-the-middle problem: how do I know you are who you say you are, and not someone else pretending to be you?  To do this we created certificates, which are signed public keys.  The theory is that we trust the certificate authority that signed the key, and believe that the registration authority has validated the identity of the subscriber who asked for the key to be signed.

But nearly everything about the theory is provably wrong.

We know that certificate authorities attain that status by paying a fee to browser manufacturers.  They are trusted because of a commercial agreement that is an externality to the users of the system.

We know for sure that we can't trust the certificate authority.  The breaches of Comodo and DigiNotar allowed false certificates to be minted by trusted CAs.  We know that intelligence agencies around the world can buy wildcard root certificates from CAs that allow national governments to intercept all traffic.

We know that registration authorities do as little as possible to validate the subscribers, usually requiring no more than an e-mail from the domain in question, or merely trusting WHOIS records.

So what are the alternatives?

Definitely not certificate pinning.  This is not scalable, and doesn't address the underlying problems with the architecture.  It's a band-aid on a gaping wound.

Convergence looks interesting, but I suspect that if implemented as suggested it would suffer from all the same problems.  We would now have to trust notaries rather than certificate authorities, and it ends up looking like the web of trust model from PGP, which failed dismally.

I propose that the answer is self-signed certificates.  I know that I trust me.  I control everything about the issuing and revocation of my certificates.  And so does every subscriber.  While it is possible for anyone to mint a certificate that looks like mine, they would have to mint certificates for everyone to undertake the current man-in-the-middle attack strategy.  We don't make it less secure for the defenders, but we make it exponentially more difficult and costly for the attackers, and that makes us all more secure.
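In practice, trusting a self-signed certificate works the way SSH host keys do: record a fingerprint of the certificate out of band the first time you see it, then compare on every later connection.  A minimal sketch in Python (the helper names and the stand-in PEM body are illustrative, not a real certificate):

```python
import base64
import hashlib

def pem_fingerprint(pem: str) -> str:
    """SHA-256 fingerprint of the DER bytes inside a PEM certificate."""
    # Keep only the base64 body between the -----BEGIN/END----- markers.
    body = "".join(
        line for line in pem.strip().splitlines() if "-----" not in line
    )
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()

def verify_pinned(pem: str, pinned: str) -> bool:
    """Trust-on-first-use check: does the presented cert match the pin?"""
    return pem_fingerprint(pem) == pinned

# Illustrative stand-in for a certificate we issued ourselves.
cert = ("-----BEGIN CERTIFICATE-----\n"
        + base64.b64encode(b"my self-signed cert").decode()
        + "\n-----END CERTIFICATE-----")
pin = pem_fingerprint(cert)       # recorded at first contact
assert verify_pinned(cert, pin)   # later connections compare against the pin
```

That comparison is all a relying party needs to accept my certificate, because I, not a third-party CA, am the root of trust for it.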

Sometimes the old ways really are the best.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Wednesday, 20 March 2013

19th century PKI

Over the last few years more and more reports have been published claiming that PKI is fundamentally flawed.  The failure of the Dutch CA DigiNotar is widely claimed to be the final proof.  But I disagree.  The problems with PKI fall into two categories: "you're doing it wrong" and "you're using it wrong".  Neither of these has anything to do with the fundamental underpinning cryptography.

The problem that PKI is intended to address is trust.  I can trust what you say if someone I trust authorises what you say.  It really is that simple to say, and at the same time fiendishly complicated to implement correctly.

It may surprise you to know that we've been doing PKI since the end of the 19th century, in the role of Justice of the Peace.  This is a person who will witness a signature on an official document.  The receiver of the document trusts that the document is genuine as they trust the JP, and the JP saw you sign it.

However, the 19th-century version had exactly the same problems as current PKI.  When I had a legal document witnessed at the local public library, the JP had no way of validating that the form I was signing was genuine.  He also made no effort to validate that what I signed was really my signature, nor that I was the person referenced on the form, which makes sense as there is no way he could have done that anyway.

What he asserted is that a real person made a real mark on a real piece of paper.  Everything else is covered by laws against fraud.  And this has worked for more than 100 years, and continues to work today.

If we used current PKI to do only this - assert that a real computer made a real communication at a definite time, everything would be fine.  But we don't.  We want to know which computer, and so ask questions about identity, and then act surprised when the implementations fail us.

PKI is the answer.  It's the question that's wrong.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 12 March 2013

Printing to the Internet

You've deployed a brand new networked printer, and after getting it all set up and working, what's the next step?  How about connecting it to the public Internet, so that anyone, anywhere, at any time can print anything they want and waste all your paper and toner?

Madness, you say!  Not, it would seem, in universities in Taiwan, Korea and Japan.

A little Google hacking and we have 31 internet-connected Fuji Xerox printers.  Some of them have public IP addresses, but many of them have been actively published through a NAT firewall.  So this was a conscious choice!

Perhaps it's just a clever way for attackers to exfiltrate data, but I've learned not to attribute to malice that which is better explained by incompetence.

Here's my advice: If you want to print to a work printer from home, this is not the way to do it.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 25 February 2013

The Sky Is Falling, NOT!


FUD: Fear, Uncertainty and Doubt.  It seems to drive the product segment of the security market, and it really annoys me.  The sky is falling.  Cybercrime is rampant.  And on, and on, and on...

Let's dial the emotion down, and look at the underlying premise.  How safe online are we really?

As I look out my window, the sky is not falling; it is a beautiful blue.  However, there are a few clouds and it may rain tomorrow.  If the doomsayers were in the weather industry instead, they would be telling us all to carry umbrellas at all times, wear raincoats just in case, and take out lightning protection insurance.  I don't see anyone on the street taking these sorts of precautions, because they are all able to make a sensible assessment of the likelihood of rain.  Unfortunately they are not able to make a similarly sensible assessment of the likelihood of a security compromise, so they worry.  And worry is the marketing tool of choice.

Cybercrime is certainly a problem, but the main problem is the "cyber" prefix.  Cybercrime is just crime.  We don't talk about transport-crime when a thief uses a car as a getaway vehicle.  We don't call it powertool-crime when a safe is cracked.  So why make such a big deal about the enabling technology?  Everything is online now, so everything is "cyber", so let's stop using the word.  People have been stealing from each other since they first decided to pile rocks up in a cave, and it is not much different today.  The majority of crime is theft and fraud, and this is a very rare event in everyday life.  It does happen.  It will continue to happen.  It may be a large absolute value, as much as hundreds of millions of dollars, but the world economy is in the hundreds of trillions, and if we've got crime down to below 0.0001% then we should be pleased about it, not worried by it.
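That back-of-the-envelope ratio is easy to check (the round figures below are this post's own orders of magnitude, not measured data):

```python
crime_dollars = 3e8    # "hundreds of millions" of dollars lost to crime
world_economy = 3e14   # "hundreds of trillions" of dollars in the world economy

fraction = crime_dollars / world_economy
print(f"{fraction:.4%}")  # prints 0.0001%
```

One part in a million of economic activity: a real loss, but hardly a falling sky.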

I grew up in a small country town, where everyone knew everyone, and people didn't lock their doors.  Today the same town is much larger, unknown people are the majority, and everyone locks their doors.  In the online world, we are now in the large town, but still acting like we are in the small one.  We need to take sensible precautions against the bad guys, but not spend all our days worrying about them.  And at least know where your umbrella is!

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 18 February 2013

Information Security Themes for 2013

Everyone else is making predictions as to what will be the important information security trends in 2013.  I think they are all wrong.  Not because the writers are uninformed, just because they are unimaginative.  It’s easy to look to the past, draw a line through the dots, scale it up and to the right, and predict the future.  Except these sorts of predictions are safe and boring, and they never allow for disruptive events.

Here are a few of the safe predictions that others have made:

- mobile malware will increase
- state-sponsored attacks will increase
- malware will get smarter
- hacktivists will get smarter
- IPv6 security will matter

I agree with all of them, but then who wouldn’t?  Up and to the right.  And nearly everyone making these predictions sells something to mitigate them.

So what do I think the themes for 2013 will be?  I have only one information security theme that I think really matters.  Only one theme that will confound the industry, and add to the number of grey hairs sported by CIOs.  Only one theme we cannot avoid, even though we are really trying to do so.

Authentication.

Everything else pales in comparison.  It really is back to basics.  2012 was the year that we saw more password dumps than ever before.  It was the year that hash-smashing as a service became mainstream, and not just something performed by spooky government agencies.  It was the year that we saw a mobile version of the Zeus crimeware toolkit attack SMS two-factor authentication.  It was the year that logging into sites via Facebook became the norm, and not the exception.

And these are all symptoms of an underlying problem.  Passwords suck.  Passphrases are just long passwords, and they also suck.  Every two-factor scheme out there really sucks – mostly because I have so many different tokens that I have to carry around depending on what I want access to.
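A quick calculation shows just how badly passwords suck once the hashes leak.  Assume an attacker can test around ten billion guesses per second on commodity GPU hardware (an assumed rate in the right ballpark for fast unsalted hashes, not a benchmark):

```python
def seconds_to_exhaust(alphabet_size: int, length: int,
                       guesses_per_second: float) -> float:
    """Worst-case time to brute-force every password of a given length."""
    return alphabet_size ** length / guesses_per_second

RATE = 1e10  # assumed: ~10 billion hash guesses per second

# 8 lowercase letters: the whole keyspace falls in well under a minute.
print(seconds_to_exhaust(26, 8, RATE))          # ≈ 21 seconds
# 8 mixed-case letters and digits: hours, not years.
print(seconds_to_exhaust(62, 8, RATE) / 3600)   # ≈ 6 hours
```

Even the richer 62-character alphabet only buys hours, which is why length and slow, salted hashing matter far more than complexity rules.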

The problem is that we are tied into the past: something you know, something you have, something you are.  We spend more and more time trying to prove these to so many disparate systems that the utility of the systems asymptotes to zero.

So instead of looking back we need to look forward: somewhere I am, something I do, something I use.

Instead of trying to authenticate the user, we need to instead authenticate the transaction.  And that is a hard problem that our backward looking way of thinking makes even more difficult to address.  Happy 2013.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 February 2013

Myth #9: We trust our staff

We are secure because we trust our staff.  We have a solid awareness programme, and after all, security is only a problem on the Internet.  If only it were true.

We might imagine that the most common internal attackers are the IT staff as they have full and unrestricted access to all of our systems.  As Microsoft wisely said in their 10 Immutable Laws of Security, a computer is only as secure as the administrator is trustworthy.

But system administrators aren’t the only insiders with means, motive and opportunity.

The Verizon 2012 Data Breach Investigations Report looked at the type of role held by internal attackers.  The results are eye opening.  While 6% of breaches were due to system administrators, 12% were by regular employees, 15% by managers and even 3% by executive management!

The truth is that trust must be earned, never expected.  All insiders have means and opportunity, all they need is motive.

To lower the risk, wise businesses perform background checks for new employees moving into sensitive positions, apply appropriate segregation of duties to lower the potential for attack, and then implement good detective controls to catch it if and when it happens.

If you trust but verify, then this myth is plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 7 January 2013

Myth #5: It’s too risky to patch

I can’t count the number of times I’ve been told by a client that it’s too risky to patch. The justifications are varied, but they usually fall into one of these general categories: (a) we can’t afford any downtime; (b) it’s a legacy system; (c) patches have been known to cause problems; or (d) our systems aren’t certified if we patch.

Let’s look at each of them in more detail.

“We can’t afford any downtime” is code for an implementation without any redundancy or resilience, combined with a lack of understanding of whatever business process it is supporting. There is no such thing as 100% uptime, as some of the natural disasters of the last year have proved. And if there is a real business requirement for something close to 100% availability, then building a system without appropriate redundancy is an epic fail. This has nothing to do with patching.

“It’s a legacy system” is an excuse used by businesses that have poor strategic planning. A system that no longer has patches available also no longer has support available. If a business is running a critical system with unsupported components, I hope the IT manager is wearing their peril-sensitive sunglasses! That way, when they have a critical failure, it will be without all the fear and worry that normally precedes one. This also has nothing to do with patching.

“Patches have been known to cause problems” is an example of the logical fallacy of hasty generalisation. Just because a bad event has happened once doesn’t mean it will always happen. By the same logic, we should never get in a car, as car crashes have caused deaths. It is true that patches sometimes do cause problems, but this isn’t a reason not to patch. While this is at least related to patching, it’s actually more about having a poor testing process, insufficient change management, and a lack of understanding of risk management.

“Our systems aren’t certified if we patch” is code for letting the vendor set the security posture rather than the business. I mentioned this before in Myth #2 as a problem with outsourcing security, and it’s equally true here. This really doesn’t have anything to do with patching either.

In reality the certain loss from not patching is far higher than the theoretical loss from patching. In the Defence Signals Directorate top 35 mitigations against targeted cyber-attacks, patching applications is #2 and patching operating systems is #3. I really think that DSD has a much better understanding of the risk than most IT managers.

Patching is a foundation for good security as it eliminates the root cause of most compromises. Better patch management leads to lower accepted risk, and this is something that all executives want.

Any system too risky to patch is too risky to run, and that is why this myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 17 December 2012

Myth #4: We comply with PCI DSS

There are a lot of organisations that think they are compliant with the controls in the PCI DSS, but really aren’t.  There are even more that were compliant at a point in time in the past, but aren’t now.  But let’s for the moment assume that an organisation really is compliant with the 6 objectives, 12 requirements and 225 controls in the PCI DSS.  Does this mean that they are more secure?

The Verizon 2012 Data Breach Investigations Report provides statistics on organisations that suffered a data breach but should have been compliant with the PCI DSS.  Organisations that were compliant were 24× less likely to suffer a loss.  This is a really clear statistic: companies really are far more secure if they are compliant with the PCI DSS.

Of course this shouldn’t be a surprise, since the standard is just good security practice, and if organisations take this good practice and apply it to everything, it naturally follows that they will be more secure.

But there were still breaches from PCI DSS compliant organisations.  This doesn’t imply that the standard isn’t good enough – there is no such thing as perfect security – but perhaps reflects that the only part of an organisation covered by the standard is the cardholder data environment.  It’s possible to have a compliant cardholder data environment, but neglect security in other areas, and still get compromised.

Compliance drives security, but does not equal security.

If PCI DSS is used as a basis for the entire security culture, then this myth is confirmed.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Tuesday, 11 December 2012

Myth #3: We have the best hardware

We have the best hardware.  We have firewalls from more than one vendor.  We have anti-virus appliances at the gateway.  We have excellent logging capabilities.  We’ve just implemented a data loss prevention solution.  And we’ve had the smartest engineers hook it all up.  Of course we are secure, our vendors told us so!

If you go back to Myth #1, most of the businesses that suffered a data breach had the best hardware.  It didn’t stop the bad guys.

The Verizon 2012 Data Breach Investigations Report has some really enlightening statistics about the timing of data breaches.  Most compromises happened within minutes of initial attack, and data exfiltration happened within minutes of compromise.  But detection of the compromise didn’t happen for months, and containment took weeks after that.  And many of these breaches happened to companies with all the best hardware.

The thinking underpinning this myth is that as technology created the problem, it can also solve it.  As most of these technical systems are scoped, implemented and managed by capable technologists, they are unfortunately blind to the truth.  Information Security is a People Business.  It’s not about the technology.  It’s never been about the technology.

People are the easiest system to attack, and people can subvert any security control.  And much to the annoyance of the technologists, they can’t be patched, and they can’t be upgraded!

Hardware provides a solid platform, and without it security isn’t possible.  But policy, configuration and management trump functionality every time.  Many businesses focus too much on capex and so will overspend on the former, and underspend on the latter.

That makes this myth busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 3 December 2012

Myth #2: We’ve outsourced our security

We don’t need to worry about security because we’ve outsourced it.  I’ve increasingly heard this from clients, so clearly many large businesses believe it to be true.  As this myth is quite pervasive, it needs more analysis: what do our clients mean by “security”, what do they mean by “outsourced”, and why have they taken this path?

Let’s start with outsourcing.  It’s one of the 10 year cycles in the IT industry: outsource non-core functions, then discover that they actually are core and bring them back in.  Wash, rinse and repeat.  For security this can make more business sense than for IT in general, as most businesses are not set up to support security 24×7, can’t retain the specialists they would need to do so anyway, and aren’t in the security business.  So outsourcing isn’t inherently a problem.

But maybe they aren’t talking about staff.  Maybe it’s just infrastructure that’s been outsourced.  The Cloud Security Alliance has an entire body of knowledge on how to do this well.  So having infrastructure managed by a third-party isn’t inherently a problem either.

So does having your security outsourced make you inherently more secure?  According to the Verizon 2012 Data Breach Investigations Report, the answer is no.  An organisation is just as likely to have had a data breach if the assets are managed internally as externally.  This is a disappointing result, but hardly surprising as managing IT is not the same as managing security.

What many businesses really think they are outsourcing is accountability for security, and that isn’t possible.  Businesses need to define their own security policy, then select an outsourcer based on its capability to meet that policy, and then keep the outsourcer honest.  Otherwise they end up with the outsourcer’s risk appetite, which might be quite different from their own.

In the end, you really do get only what you pay for.  If your outsourcer is certified to a recognised international standard, such as ISO 27001, then you will pay more, but you will get a secure result.  If you go down the cheap and cheerful route with security outsourcing, unfortunately you probably won’t end up with either cheap or cheerful.

This myth is plausible, as it is possible to successfully outsource security, but it isn’t easy.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Wednesday, 21 November 2012

Myth #1: No-one will attack us

Guess what the following organisations have in common: a game host in the USA; a pizza chain in India; a news aggregator in Mexico; and a festival organiser in Ireland.  Answer: they were all the victims of a data breach during the first three weeks of September 2012.  According to the OSF Data Loss DB, they were just four of the 27 organisations that publicly disclosed that they’d been breached in those three weeks.  The number of undisclosed breaches is probably orders of magnitude greater.

Many organisations feel that they are safe because they don’t believe that anyone is interested in their data.  Even more feel safe because they believe that they’ve never been attacked.

Unfortunately the truth is somewhat more uncomfortable.

Every organisation’s data is interesting to someone: hackers, competitors, hacktivists, even nation states.  And if you are connected to the Internet you have been attacked, and unless you are very lucky or very careful, you’ve been compromised.

But who sets out to steal the corporate secrets of a pizza chain?  This is the wrong question.  The question implies that the target was selected first, and the attack happened second.  On today’s Internet it’s much more likely that the opposite happened: the entire internet was attacked, and the targets selected were those that were vulnerable.  Including the pizza chain.

But is this plausible?  The Internet is big!  You might think that it’s a long way to the corner shop, but that’s nothing compared to the Internet.  The IPv4 Internet has a maximum of about 4 billion directly addressable hosts, and as of July 2012 ISC reported that about 900 million were connected.  That is still a lot of address space to attack!  Today automation, fast links, and cloud computing have turned an impossible task into something that can be done for a few dollars in a few days.
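The arithmetic behind "a few dollars in a few days" is straightforward (the probe rate is an assumption for a single cheap cloud host; distributed scanning only shrinks the numbers):

```python
IPV4_ADDRESSES = 2 ** 32    # the full IPv4 address space, ~4.3 billion
IPV6_ADDRESSES = 2 ** 128   # the full IPv6 address space
RATE = 10_000               # assumed probes per second from one host

ipv4_days = IPV4_ADDRESSES / RATE / 86400
print(f"IPv4 sweep of one port: ~{ipv4_days:.1f} days")    # ≈ 5 days

ipv6_years = IPV6_ADDRESSES / RATE / (86400 * 365)
print(f"IPv6 sweep of one port: ~{ipv6_years:.1e} years")  # ≈ 1e27 years
```

Sweeping the IPv4 space for one port is a working week for one machine; sweeping IPv6 the same way is not a job at all.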

So every service published on the Internet will be found.  And if it is vulnerable it will be attacked.  This week.

If you still think that you have weeks to patch your Internet-facing hosts, you are in the good company of those who have been compromised but just don’t know it yet.

If you needed an excuse to get your IPv6 migration started, I can’t think of a better one, as it moves scanning the entire Internet back into the impossible category.

Then there are targeted attacks…

This myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 20 November 2012

Top 10 Information Security Myths


Any sufficiently complex field has a collection of myths associated with it.  They appear to be a normal part of the expansion of the knowledge base, where a premise is put forward, evaluated, and either accepted or discarded.  Myths can be thought of as the fuel for the scientific method.

However some myths seem to be cherished even when provably false.  This is true in the individual fields of information technology, psychology and law, and when put together into the field of information security, they can be more pervasive and harder to dispel.

In this series we’ve distilled the feedback we’ve had from 10 years of client conversations, and come up with the top 10 myths in information security.

Like all myths, some will be busted, some are plausible and a few even confirmed.

Top 10 Information Security Myths
Myth #1: No-one will attack us
Myth #2: We've outsourced our security
Myth #3: We have the best hardware
Myth #4: We comply with PCI DSS
Myth #5: It’s too risky to patch
Myth #6: We have good physical security
Myth #7: A security review is just an audit
Myth #8: Security is too expensive
Myth #9: We trust our staff
Myth #10: We have a security plan

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Wednesday, 31 October 2012

CQR CONSULTING MERGES WITH TECHNOSYS


CQR Consulting and TechnoSys Consulting Services have announced a merger to create one of Australia’s most experienced and diverse information security providers. CQR Consulting Managing Director David Simpson said the move was the result of the strong compatibility between the two companies. “Both consultancies are fiercely independent of information security product vendors and share a passion for working cooperatively with clients to achieve the best outcomes,” Mr Simpson said.

CQR Consulting primarily works in the private sector while TechnoSys is South Australia’s largest provider of information security services to the government sector. “Our combined knowledge and skills position us to provide even greater value to our customers,” Mr Simpson said. The company will continue to trade as CQR Consulting and will be the state’s largest information security service provider, with a workforce of more than 30 security specialists. The merged company will continue to be based at the CQR Consulting offices in Dulwich, central Adelaide, with offices in Sydney, Melbourne and Oxford, United Kingdom.

TechnoSys managing director, Jeff Gwatking, said the merger was an exciting opportunity for the company. “The merger will enable us to offer an expanded set of services and skills to our existing clients, while providing our staff the opportunity of a broader range of clients and projects”, Mr Gwatking said.  “It really is a case of everybody wins.” Mr Gwatking will assume an executive management position in CQR Consulting.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com