Showing posts with label IT Security. Show all posts

Friday, 13 September 2013

Staking a Claim in Social Media

This week I had a call from a lawyer who said that social media accounts in the name of one of their clients had been created and were being used for malicious purposes.  They wanted to know what they could do about it.

When deploying security controls we need to consider prevention, detection and response, and this case is no different.

Prevention.

There are a significant number of people - many of them in very senior roles - who wear as a badge of honour that they don't have any social media accounts.  Saying "I don't understand this new-fangled social media" may sound reasonable today, but 100 years ago the same people would have been saying "I don't understand this new-fangled electricity", and then gone on to sink their fortunes into steam power.

I'm not suggesting that everyone become a Facebook addict.  However I am definitely recommending that all companies and anyone with a senior role go out and register accounts on all of the major social media sites, as a prevention against anyone else doing it in their name.  There is no validation of who registers an account, and due to an interesting bootstrapping problem it really is impossible for the social media providers to confirm identities.  Twitter's blue tick isn't the answer.

We did this with domain names a decade ago, and we have to do it all over again with social media now.

Detection.

Search for yourself on the search engine of your choice.  While it might be vanity, it also will allow you to determine if anyone else is pretending to be you.  Most of the major search engines allow you to set up alerts on new pages that they find with a given term, and you can use this as a detection mechanism against imposters.

This may be practical if you have a distinctive name, but is going to be quite difficult for the John Smiths of the world.  Even my name isn't unique in my own city, so getting in first and registering early becomes very important.

Response.

If and when someone does register a social media account in your name, there are a limited number of things that can be done about it.  It is always possible that they really do have the same name as you, and you got in late, in which case, unless they are committing fraud by pretending to be you specifically, you have no comeback beyond whatever the defamation laws in your jurisdiction allow - consult your lawyer.

Just like the domain squatters of the last decade, we now have social media squatters.  They can be dealt with in similar ways: (a) pay them what they ask to get the identity back; (b) raise a complaint with the social media provider; or (c) call the lawyers.  The difference here is that the social media providers are for profit companies, rather than not for profit organisations, and they don't have the same social responsibilities.

Ironic, isn't it?

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Friday, 23 August 2013

Game of Phones

Apple iOS 7 will be released next month, and it's time for us to once again declare our allegiance to our feudal technology overlords.  Overlords that are starting to feel remarkably like those from the popular HBO TV show.

Blackberry is House Stark.  Solid, loyal, dependable, secure and dead.  Their only hope is their bastard offspring.

Android is House Lannister.  They have spent many years behind the scenes manipulating the empire, and have only recently seen the opportunity to show their power openly.  Technologically dominant, and masters of strategy, they are sure they can think their way to the top, and then hold it.  Security comes through threat, rapid change and the culling of the weak.

iPhone is House Targaryen.  They have come from out of the wilderness and they have dragons, an ancient mystery that has not been harnessed by anyone in living memory.  They also have a sexy leader, and everyone emotionally wants to get behind them.  But after the initial revelations at the end of the first season, there really hasn't been anything new for the last two years.  Security comes through central control and fervent loyalty of the followers.

Windows is House Baratheon.  They were once dominant, but time has passed them by, and internal bickering stops them from really being a force any more.  Because they have ruled in the past, they believe they have the absolute right to rule in the future.  Security isn't a concern, only survival, but only the most loyal followers expect anything other than a spectacular crash.

The thing to remember is that the only winners in the Game of Phones are the main houses.  If you are a peasant or vassal - and in reality we all are - then the best we can do is raise our flag in support of one of the main houses, and hope they don't sacrifice us for their greater good.

Until then, enjoy the ride.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Friday, 9 August 2013

War on War on Cybercrime

Be afraid.  Be very afraid.  The UK has just announced that it is losing the war on cybercrime, and needs to consolidate cybercrime policing into a new unified structure as part of a shakeup of the country's policing.

Any time a government declares a “war on something”, it costs money, achieves nothing, and distracts from the real issues.  For recent examples consider the total failure of the “war on drugs”, “war on terror”, and “war on poverty”, just to name a few.  War on cybercrime will be no different.

All of these wars are justified by the belief that the government needs to be seen to be doing something, combined with the unshakeable assertion that they are better at protecting us than we are at protecting ourselves.  I disagree with both sides of the argument.

We live in a world with a higher standard of living, with more freedoms and less crime than ever before.  There is no public outcry to protect us.  The only outcry is that generated by the media and the government themselves.

What we need is better education and the tools to protect ourselves.  We are all being attacked all the time, and we can protect ourselves without relying on an ineffective government oversight body that in the end does nothing but serve platitudes.

We need to declare a war on the war on cybercrime.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Friday, 2 August 2013

IPv6 Insecurity

Vint Cerf - one of the founders of the Internet - quipped last year that the current IPv4 Internet is the experimental version, and that IPv6 is the production version.  If this is true, then approximately 100% of businesses are still on the beta release and have no plans to move to production.  How can this be, in a world of 36-month IT replacement cycles, when IPv6 has been deployment ready since 1999?

There are a number of reasons, some technical, some psychological, but all to do with security.

Reason #1: Making unnecessary changes breaks things.  There is no compelling reason even today to move to IPv6.  The total number of IPv6 *only* services is approximately none, so not migrating does not limit anything.  Sure we will eventually run out of IPv4 address space, but I predict we will make do at least until 2020.

Reason #2: Complexity reduces security.  Not everything supports IPv6, so deployment requires a dual-stack approach, which significantly increases complexity, and therefore decreases security.  While this is true today, given a 36-month IT replacement cycle, everything will eventually support it by 2016.

Reason #3: We don't understand it.  This is the real reason for the lack of adoption.  IPv6 is not just IPv4 with longer addresses.  It does some things very differently than IPv4, and breaks the well-understood IPv4 security model.  There is no NAT.  There is no ARP.  Multicast matters.  ICMP matters.  We could fix this today, but it will take a generational change of CIOs to really embrace it.  Maybe it won't be scary by the Unix timestamp rollover in 2038.
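The "not just IPv4 with longer addresses" point is easy to demonstrate with Python's standard `ipaddress` module.  This is a small sketch using the documentation-reserved example prefixes, showing a few of the properties that break IPv4 intuition:

```python
import ipaddress

# Every IPv6 interface auto-configures a link-local address; there is
# no ARP -- neighbour discovery runs over ICMPv6 multicast instead,
# which is why blanket ICMP filtering breaks IPv6 networks.
link_local = ipaddress.ip_address("fe80::1")
print(link_local.is_link_local)   # True

# The all-nodes multicast group replaces IPv4 broadcast entirely.
all_nodes = ipaddress.ip_address("ff02::1")
print(all_nodes.is_multicast)     # True

# A single standard /64 subnet holds 2**64 addresses -- sweeping it
# the way attackers scan an IPv4 /24 is simply not practical.
subnet = ipaddress.ip_network("2001:db8::/64")
print(subnet.num_addresses == 2**64)  # True
```

None of this is exotic, but each point invalidates a control (ARP inspection, broadcast filtering, network scanning for asset discovery) that IPv4 security practice leans on.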

Interestingly for those of us with a few grey hairs, we've been here before.  We made this same transition from IPX to IP in our Novell networks 20 years ago, but with one very significant difference.  We didn't dual-stack.  On a flag day we just changed all the configurations and got on with it.  But we can't do that this time, because now everything is interconnected, and the risk of cutting ourselves off today is much higher than the risk of running out of addresses at some point in the future.

IPv6 is definitely the future.  While the future is already here, and not very evenly distributed, for most of us the time is just not right.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Friday, 28 June 2013

The Ostrich Approach to Security Management


If you find that your security has been compromised, the normal approach that most businesses take to addressing it goes something like this...

Step 1: Admit you have a problem.

Step 2: Blame someone else.

Step 3: Hire a lawyer.

I'm going to spend some time on step 2, as I think that this is where the process really goes off the rails.  Before we can blame someone else, we need to decide who to blame.  All too often instead of blaming the attacker, we blame our IT department for not managing our systems appropriately.  How could they possibly have let this happen?
 
The answer is depressingly simple: senior management are taking the ostrich approach to security management.  If I can't see it, it can't hurt me.  If I stick my head in the sand, I can't see it.  I know how to stick my head in the sand.  Problem solved!

The outcome of this approach is that the perennially blamed IT department are not given guidance on what they should be protecting, how they should be protecting it, nor the training to protect it in the first place.  Most IT departments simply are not competent to answer the question: "Are we secure?".  The only honest answers they could give are "I don't know" or "As best I know how", but this isn't what management want to hear, so this isn't what the IT department says.

To quote Spaf's first principle of security administration: "If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong."

Sand is cheap.  Real security is a lot more valuable.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Tuesday, 25 June 2013

To Protect and Serve Coffee

So much is currently in the news about government surveillance, I'd like to look at a different intersection of law enforcement and data retention - how the police can help you when you are the victim of a cyber-attack.

Unfortunately the decision to involve the police is not trivial, and really depends on what outcome you are hoping for.  If you just want the problem to go away, involving law enforcement can get in the way of your recovery, as they will want to collect forensically sound evidence, and the process of going to court can and does take years.  Even if you go down this path, the likelihood of restitution is very low and it will cost a fortune.  So most businesses don't bother.

If it were a physical crime, we would automatically report it, as this is a necessary precondition to claiming on our insurance.  There is also no stigma about being broken into physically.  But things are different in the cyber world - there is no cyber-insurance to claim on, and there definitely is a stigma about being hacked.  This is even more reason for businesses to fix it and move on without police involvement.

But if we look at this in a slightly different way, the view changes.  Instead of looking to law enforcement to locate and prosecute the offenders, we can ask for their assistance in collecting and storing any evidence we might need in the future, and provide them with anonymised information that helps to build a profile of the cyber-crime landscape.

Less protect and serve, and more coffee and collaboration.

Unless you are the bad guys, the police are not your adversary, and they really can be good friends.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 4 June 2013

Mandatory Data Breach Notification

The Australian Government has just announced that mandatory data breach notification laws will commence in March 2014.  This is an excellent start, and the Government is to be congratulated on the initiative.  I'm not normally one to promote more "cyber" legislation to cover new implementations of old crimes, but this really is a new type of crime for which no existing legislation adequately applies.

We've had identity theft for as long as we've had scammers, but in the pre-Internet world this was done one at a time, and required local knowledge and a lot of effort.  But now it can be done wholesale, from anywhere, to anyone, for nearly no cost.  And this is happening every day.

So how will mandatory data breach notification help?  It won't make us any more successful in prosecuting the attackers, so it won't reduce the number of attacks.  It has nothing to do with helping the people who are the subject of identity theft, so it won't reduce the impact of the crime.
 
Today the only sensible approach for any company that has a data spill is to cover it up.  There is no possible positive outcome from telling anyone, and a significant likely negative outcome in terms of reputation damage, share price reduction and loss of market confidence.  Forcing disclosure can therefore look like just more victim blaming.

The real point is to make businesses care about security.  If they know that they will be named and shamed, they are more likely to take the necessary steps to not be breached, and therefore reduce the number of actual breaches, and so reduce the impact on the Australian people.  Raising the cost to the attackers is a win for everyone.

Better security is an investment in the future, not a cost to be minimised.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 27 May 2013

The Law of the Internet

I've spoken to a number of journalists over the last few weeks who have all posited some variation of the simple question: "Don't national governments have the right to govern the Internet?"  While it is a simple question, it doesn't have a simple answer, and perhaps not for the reason that many people think.

There are many technology activists who see cyberspace as the first truly transnational place: where the existing rules don't apply, and a new set of rules - better rules - can take their place; where we can leave meatspace behind and become a meritocracy of the mind.  I strongly recommend reading "Snow Crash" by Neal Stephenson to get a sense of it.

I believe that there are three fundamental problems with this approach: (1) the Internet isn't a place; (2) we don't yet live in a futuristic dystopian civilisation; and (3) the Internet was not designed.  It's the last problem that causes all the angst.

If the Internet had been designed by governments, rather than organically grown by technologists, it wouldn't look anything like what we see today.  There would be the controls that national governments want, but there almost certainly wouldn't have been the flourishing of our society that connectedness has brought us.  If you want a real example of this, look at the Internet of DPRK, or the Great Firewall of China.  Then imagine each of these interconnected with deep packet inspection borders to mimic the physical borders we have today.  There would even likely be Internet passports to allow transit.

But it wasn't designed, and the law-makers were slow coming to the party.  By the time they noticed it was largely too late.  As John Gilmore famously said in 1993: "The Net interprets censorship as damage and routes around it."  It was too late in 1993, and it's way too late in 2013.

Does this mean that the netizens have won, and society as we know it will necessarily fall?

Of course not.  The problems that the Internet brings are the same problems that all societies have had over all of human history.  The links are just faster, and the people vastly more connected.  In the past if someone wanted a business to pay protection money they had to send a hard-man to bully the owners and do a little damage.  There was always the possibility of injury, arrest and incarceration.  Today they DDoS the business's website, with no possibility of injury, and almost no chance of getting caught.

But the crime is the same.  And the laws to deal with the physical crime are already on the books.

Governments can and should govern the Internet, but they can't control it.  Governing requires the consent of the governed.  Control requires technology.

The Internet is a people problem.  Until national governments realise this, they have no chance.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 20 May 2013

Attribution is Easy

Imagine two neighbours - let's call them Alice and Chuck - who aren't friends, but who regularly do business with each other.  Alice is an architect who designs wonderful houses, and Chuck is a carpenter who builds them.

One day Chuck tells Alice that he can't pay as much for her designs as he used to, as he's found another architect who can do them cheaper.  She's a bit dubious as it is a small town, but she lowers her rates a bit, and they keep working together.

Over the next few weeks, Alice is sure that there is someone looking over her back fence during the evenings, but she can't tell exactly who it is.  She doesn't do anything about it or tell anyone.

A few months go by, and Chuck says that he's been studying at night to become an architect, and that he doesn't need Alice any more.  She's shocked, it took her years to get her degree, and she knows that Chuck spends his nights at the pub.  Is it even possible?  How could this be?  Perhaps it's just his way of driving an even lower price.

Her worst fears are realised.  Not only is he building wonderful houses, they look exactly like her designs.

She confronts Chuck and tells him that she knows that he is stealing her designs, and that he's been looking over her back fence to copy them using a camera with a telephoto lens.  Of course he denies it!  But if she lowers her rates just a little more, then they can start doing business again.

This is an economic market working perfectly - if it is cheaper to steal the design than license it, economic theory drives theft, until the cost of theft is greater than the cost of licensing.
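The inequality is worth making concrete.  A toy model with purely invented numbers - nothing here comes from the story except the shape of the argument:

```python
# Hypothetical figures, for illustration only.
licence_fee = 10_000   # cost to license the design legitimately
theft_cost = 2_000     # telephoto lens, time, and a low risk of being caught

# A rational, amoral actor steals for as long as theft is cheaper.
print(theft_cost < licence_fee)   # True -> economic theory drives theft

# Alice invests in protection: higher fence, cameras, windowless office.
# That investment raises the attacker's cost, not hers alone.
theft_cost += 15_000
print(theft_cost < licence_fee)   # False -> licensing wins again
```

The model is crude, but it captures the post's point: the defender controls one side of the inequality, and security spending is how you move it.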

Alice can get all upset at Chuck, but she needs to realise that she has the control.  If she increases the protection of her designs by installing night vision cameras, building a higher fence, and situating her office somewhere with no windows, then she will be able to increase her prices again to cover the investment, and the relationship will continue as before.

Attribution is easy.  Doing something to protect your business is hard.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 14 May 2013

Writing Secure Software is Hard

All code is crap.  I know, because I've written a lot of it.  A long time ago, in a galaxy far far away, I wrote a large PHP-based enterprise application that is still in production use today.  It's also being used as an example of how not to write code that will survive penetration testing.

Development of this application started in 1998, so in Internet years it is ancient.  A recent review of the application has found that XSS is possible in nearly every field, and while SQLI is harder to do than you'd expect - thanks to a protection framework I wrote to save me from writing bad SQL - it is still possible.

How can I have gone so far wrong?  The simple answer is I didn't know any better.

There are two foundation reasons why software developers don't write secure code.  The first is that they aren't told they have to, and the second is that they aren't shown how to do it.  Both of these reasons applied to me - there was no specification for the application that included non-functional requirements such as security, and none of the university level courses I'd passed on software development even mentioned secure coding practices.  In my defence I'm going to throw in a third reason: these attack types were not well known at the time.

Unfortunately the foundations are just as weak today.  Most development projects don't include security in their requirements document, and most developers are not taught how to write secure code.  My third defence doesn't save us any more as the attack types are very well known.  Rapid prototyping methodologies just make this harder.
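Both attack classes have mechanical, well-documented fixes, which is what makes the teaching gap so frustrating.  A minimal sketch using Python's standard library (standing in for the original PHP - the principle is identical in any language):

```python
import sqlite3
from html import escape

# A throwaway in-memory database with one user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

attacker_input = "' OR '1'='1"

# Vulnerable: concatenation lets the input rewrite the query itself.
#   conn.execute("SELECT * FROM users WHERE name = '" + attacker_input + "'")

# Safe: a parameterised query treats the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the injection string matches no user

# XSS has the analogous cure on output: encode before rendering.
print(escape("<script>alert(1)</script>"))
# &lt;script&gt;alert(1)&lt;/script&gt;
```

Parameterisation on the way in, contextual encoding on the way out - two habits that, taught early, would have saved that 1998 codebase.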

It's time to do it better.  All development projects need to have security as part of their project governance, and have developers trained by penetration testers on how their code will be abused.

Microsoft started in 2002 with their secure development initiative.  Most other large development companies have done the same thing.  But there are still a very large number of web facing applications being developed without the safety net of a secure software development lifecycle, and they will continue to fall over much to the surprise of their developers.

The 21st century has been around for a while now, but we're still writing code like it's 1999.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 6 May 2013

Why People Break the Rules

You have a security policy.  Some of your staff even know where it is.  You have a security awareness campaign.  Every year the staff are required to click through the computer based training module.  Your IT department has deployed security controls to every workstation to limit what people can do.  Staff have the flexibility of bring-your-own-device.

And yet your staff still break the rules.  Every.  Single.  Day.

At first glance there can only be two reasons for this flagrant violation of the security policy.  Either the staff are ignorant and don't understand the rules, or they are troublemakers and dismissive of the need to follow them.  The obvious answers are more computer based training and stronger rules.  Perhaps a new screen-saver and some posters will help.

Perhaps not.

I've undertaken social engineering assignments that involved getting into the IT department of a large bank.  Past reception, through the first swipe card door, past the guard station, through the second swipe card door.  Not just once, but on multiple consecutive days to prove that it wasn't a fluke.  Everything was in plain sight of bank employees and contrary to their policies.

So are the bank employees ignorant or troublemakers?  Maybe some are one or the other, but the vast majority are neither.  They are sensible.  Just to be clear, I am saying that breaking the security rules is sensible for many employees of many businesses.

Think about it in economic terms.  Employees are paid for getting the job done.  They get fired for not getting the job done.  They almost never get fired for breaking the rules.  In fact there is rarely any consequence of breaking the rules at all.

So given the choice of getting the job done by breaking the rules, or not getting the job done by following the rules, which is the rational choice?  The one they all make.

Which brings us to the crux of the matter.  Security rules that get in the way of getting the job done will be ignored.  A culture that puts getting the job done first, with compliance a distant second, fosters breaking the rules.

When recast this way, there really are two obvious ways forward.  Either promote a culture where we look at security in the same way manufacturing businesses look at safety, and report every incident and near miss.  Or mercilessly fire people for non-compliance.

Love or fear is a choice.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 30 April 2013

Worrying about Supply Chain Security

How often do you look at the "made in" label on the equipment you buy?  A glance across my desk says that my Apple iPhone was made in China, my Casio calculator was made in China, and even the Rexel stapler was made in China!  In fact the only thing I can find on my desk that wasn't made in China is a tube of toothpaste, and that was made in Mexico.

Didn't we get over the "red menace" and "yellow peril" in the 1960s?  Apparently not.  But before we plumb the depths of paranoia and xenophobia which are bubbling beneath the surface of supply chain security, perhaps it's worth thinking in a little more detail about what we really mean.

Security is the preservation of confidentiality, integrity and availability.

Traditional supply chain security concerns have been availability issues - can we get the parts that we need when we need them.  We dealt with this by using multiple suppliers, understanding lead time, and holding stock.  Nothing has changed here.

Once availability was addressed, we moved to integrity issues - is the quality of the stock we receive good enough for our purposes.  We dealt with this by over-ordering and batch testing.  Nothing has changed here.

Finally once everything else was working smoothly, we moved to confidentiality issues - are our suppliers stealing our intellectual property.  We dealt with this by contracts and wishing really hard.  Nothing has changed here either.

There is a reason that everything is made in China, and that is money.  It's much cheaper to produce goods there than here - for any definition of "here" that involves the first world.  That reduction in price came at a cost - the cost of control.

But this hasn't addressed the underlying question: should we be worried about supply chain security?  Of course we should, we always have been, and we've always found mitigating controls to manage the risk.  That isn't any different today.

If you don't like that your equipment is manufactured overseas, then create a local manufacturing industry.

If you worry that your suppliers are stealing your intellectual property, apply rigorous audits, and take the work elsewhere if they break the rules.  Remember they also have a profit motive.

This really is a first world problem.  We created it by outsourcing.  Now we get to live with the consequences of our choices.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 22 April 2013

Why no-one gets SCADA security right

SCADA is an acronym for Supervisory Control and Data Acquisition.  That's a bit of a mouthful and unless you've studied Engineering it's not clear what it means, so here's a simple definition: SCADA is computer controlled physical processes.  The common examples given are power stations and water treatment plants, but it's much more than that.  Building management systems that control the temperature, lights and door locks: that's SCADA.  The production line at a large bakery that makes your bread: that's SCADA.  The baggage system at the airport that loses your bags: that's SCADA.  The traffic lights that annoy you on your drive to work: that's SCADA.

It's everywhere.  It's all around us.  And it's all implemented badly.  Maybe that's too strong - it's all implemented inappropriately for the threat model we have in 2013.

We have to set the way back machine to the 1980s to understand why we are in the mess we are today.

Traditionally SCADA systems were designed around reliability and safety.  Security was not a consideration.  This means that the way the engineers think of security is different.  In IT security we consider Confidentiality first, then Integrity and finally Availability.  This matches with our real world experience of security.  But in SCADA systems it's the other way around - Availability first, then Integrity, and finally Confidentiality a very distant third.

There are two very good reasons for this approach.

Firstly: Keeping SCADA systems running is like balancing a broomstick on your finger - you can do it, but it takes a lot of control, and if you stop thinking about it, the broomstick falls.  This is the fundamental reason that the dramatic movie scenes where the bad guy blows up a power station just can't happen.  If you mess up the control, the power station stops generating power; it doesn't explode.

Secondly: Every business that controls real world processes has a culture of safety: they have sign boards showing how many days since the last lost-time injury, and are proud that the number keeps going up.  Anything that gets in the way of human safety is removed.  That's why control workstations don't have logins or passwords.  If something needs to be done for a safety reason, it can't be delayed by a forgotten password.

All of this made perfect sense in the 1980s when SCADA systems were hard wired analog computers, connected to nothing, staffed by a large number of well-trained engineers, and located in secure facilities at the plant.

That isn't true now.  Today SCADA systems are off-the-shelf IT equipment, connected to corporate networks over third party WAN solutions and sometimes the Internet, staffed by very few over-stressed engineers, sometimes not even located in the same country.

So what happened in between?  Nothing.  Really.  SCADA systems have an expected life of about 30 years.  The analog computers were replaced by the first general purpose computers in the late 1980s, and they are only now being replaced again with today's technology.  They will be expected to run as deployed all the way to 2040.

I hope you've stocked up on candles.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 16 April 2013

Decline of the PCI empire

The Payment Card Industry Data Security Standard - PCI DSS - is a standard with 255 controls that you must comply with if you store, process or transmit credit card information.  Complying with the standard is the cost of doing e-commerce today.  The cost is high, and going to get higher, and as with all monopoly empires this increase will eventually lead to its downfall.

Disclaimer: CQR is a QSA company and I am a QSA.  I have no special knowledge about what the PCI council is going to do, so this is a fairly bold statement.  I base my assessment on simple economics.

PCI DSS v3.0 will be released in October 2013.  The only certainty is that it will have more controls, and they will be harder to comply with, and it will be more expensive both to implement and have audited.  Today most level 3 and 4 merchants are struggling with PCI.  Next year will break some of them - some will just fail to comply, and others will consider no longer taking credit cards.  Three years later, in October 2016, PCI DSS v4.0 will be released, and this will break the rest.

Don't get me wrong - PCI DSS is a good standard, it serves the purpose it was designed for, and if all merchants complied with it there would be far fewer credit card breaches.  But we need to go back to economic basics: if the cost of the control exceeds the value of the service, then it makes no economic sense to offer the service.  Somewhere around the release of PCI DSS v4.0 this will cross-over.

Here's my prediction for the inevitable decline: more and more merchants will stop taking credit cards directly.  PCI DSS only applies if you store, process or transmit credit card data.  So if merchants stop doing this directly, and instead use a third party service provider to process card data, they will no longer have a compliance burden.  Merchants will still have a cost to bear, as the service provider will need to be compliant, but that cost can be amortised over many more merchants, leading to the cost of the control dropping back below the value of the service, and economic theory prevailing.

We are going to keep taking credit cards because they are just too convenient.  But the market for PCI services is going to shrink radically, and in the end this is going to make all of us safer.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 8 April 2013

Running with Scissors

There are things that we just shouldn't do - like running with scissors.  We can be told not to do them.  We can know intellectually not to do them.  But until we've stabbed ourselves or someone else it just doesn't sink in.

I've been seeing a lot of discussion recently on attack as pro-active defence - especially related to botnets.  The proponents make a good case that they are making everyone safer.  The opponents say that any unauthorised access - even to disable malware - is wrong and must not happen.  In both cases there is the implicit assumption that the people whose computers have been turned into bots are also victims.  I think it's time we addressed the elephant in the room: we should stop thinking of them as victims and start thinking of them as part of the problem.

The only reason they have been turned into bots in the first place is that they haven't enabled even the most basic protections on their computer.  They are running with scissors.  They are stabbing people with the scissors.

We can no longer accept this.  Basic protections won't stop a determined attacker, but turning on automatic patching and running a free antivirus solution will stop most of them being owned most of the time.

It's time the software and operating system vendors made it impossible to turn off this sort of basic protection.  And it's time that society, as the real victim of cybercrime, demanded it.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


 

Wednesday, 20 March 2013

19th century PKI

Over the last few years more and more reports have been published claiming that PKI is fundamentally flawed.  The failure of the Dutch CA DigiNotar is widely claimed to be the final proof.  But I disagree.  The problems with PKI fall into two categories: "you're doing it wrong" and "you're using it wrong".  Neither of these has anything to do with the fundamental underpinning cryptography.

The problem that PKI is intended to address is trust.  I can trust what you say if someone I trust authorises what you say.  It really is that simple to say, and at the same time fiendishly complicated to implement correctly.

It may surprise you to know that we've been doing PKI since the end of the 19th century, in the role of Justice of the Peace.  This is a person who will witness a signature on an official document.  The receiver of the document trusts that the document is genuine as they trust the JP, and the JP saw you sign it.

However, the 19th century version has exactly the same problems as current PKI.  When I had a legal document witnessed at the local public library, the JP had no way of validating that the form I was signing was genuine.  He also made no effort to validate that what I signed was really my signature, nor that I was the person referenced on the form - which makes sense, as there is no way he could have done that anyway.

What he asserted is that a real person made a real mark on a real piece of paper.  Everything else is covered by laws against fraud.  And this has worked for more than 100 years, and continues to work today.
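That narrow assertion can be sketched in a few lines of code.  This is an illustrative sketch only, using an HMAC as a stand-in for a real asymmetric signature (a production PKI would use X.509 certificates and public keys; the witness key and field names here are invented):

```python
# The JP's assertion: "a real document bore a real mark at a real time".
# Note what is deliberately NOT asserted: whose mark it is, or that the
# form itself is genuine.
import hashlib
import hmac
import json

WITNESS_KEY = b"jp-secret"  # hypothetical witness signing key

def witness(document: bytes, timestamp: float) -> dict:
    """Seal an assertion that *some* document existed and was marked at a time."""
    digest = hashlib.sha256(document).hexdigest()
    payload = json.dumps({"sha256": digest, "time": timestamp}, sort_keys=True)
    seal = hmac.new(WITNESS_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "seal": seal}

def verify(attestation: dict) -> bool:
    """Check the seal; any change to the payload breaks it."""
    expected = hmac.new(WITNESS_KEY, attestation["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["seal"])
```

Everything beyond that minimal assertion - identity, authority, genuineness of the form - is left to the laws against fraud, exactly as the next paragraph argues.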

If we used current PKI to do only this - assert that a real computer made a real communication at a definite time - everything would be fine.  But we don't.  We want to know which computer, and so we ask questions about identity, and then act surprised when the implementations fail us.

PKI is the answer.  It's the question that's wrong.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 March 2013

The Perils of Cloud Analogies

Moving your operations to the cloud is like... a dream for those who love analogies.  All sorts of things have been claimed, but there is only one reality.  It's like outsourcing, because that's exactly what it is.

The biggest business risk with outsourcing is that you replace technical controls with contracts, and while a move from tactical operation to strategic management looks excellent in a business plan, it can fail badly when it meets the real world.  The claim that "insert-vendor-here" must be better at running the infrastructure because they developed it is much more an article of faith than a well-reasoned position.

Consider the failure of the Windows Azure platform over the last weekend.  I noticed it when I couldn't play Halo 4.  As a gamer it didn't occur to me that anything deeper was going on than the Halo servers not working, but it turns out they were hosted on cloud infrastructure.  And the cloud had failed.  Completely.  The reason: "Storage is currently experiencing a worldwide outage impacting HTTPS operations due to an expired certificate."  In 2013.
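The galling part is how cheap the missing control is.  A scheduled check of the kind below would have flagged the certificate weeks before it lapsed - this is a sketch, with an illustrative hostname and warning threshold; the date format matches what Python's ssl module returns in a peer certificate's notAfter field:

```python
# Minimal certificate-expiry monitor (sketch).
import socket
import ssl
from datetime import datetime

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a notAfter date such as 'Feb 22 12:00:00 2013 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - now).days

def expiring_soon(host: str, warn_days: int = 30) -> bool:
    """Fetch the host's certificate over TLS and warn if it expires within warn_days."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"], datetime.utcnow()) <= warn_days
```

People still have to act on the alert, of course - which is rather the point of the next sentence.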

Information security is a people business, and the people failed.

As Sony previously discovered, the total failure of their game platform is a pain, but it isn't going to threaten the company.  To Microsoft's credit they had it all restored in about 8 hours.

But Windows Azure doesn't just host games - it hosts businesses.  And the same failure happening in the middle of the week would mean that businesses that had fully moved to the Microsoft cloud could do nothing.  No backup.  No failover.  No disaster recovery.  Because all the availability controls were outsourced.  And it is very unlikely that the clients using the service are big enough to make any contractual claim for loss.

This isn't just a Microsoft problem - Amazon had the same sort of outage last year.  Every cloud hosting provider will have these problems.

So here's my cloud analogy: it's like putting all your eggs in one basket - a basket you've never seen and can't locate - along with everyone else's eggs, and having faith that this will be managed well by the fox.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 18 February 2013

Information Security Themes for 2013

Everyone else is making predictions as to what will be the important information security trends in 2013.  I think they are all wrong.  Not because the writers are uninformed, just because they are unimaginative.  It’s easy to look to the past, draw a line through the dots, scale it up and to the right, and predict the future.  Except these sorts of predictions are safe and boring, and they never allow for disruptive events.

Here are a few of the safe predictions that others have made:

- mobile malware will increase
- state sponsored attacks will increase
- malware will get smarter
- hacktivists will get smarter
- IPv6 security will matter

I agree with all of them, but then who wouldn’t?  Up and to the right.  And nearly everyone making these predictions sells something to mitigate them.

So what do I think the themes for 2013 will be?  I have only one information security theme that I think really matters.  Only one theme that will confound the industry, and add to the number of grey hairs sported by CIOs.  Only one theme we cannot avoid, even though we are really trying to do so.

Authentication.

Everything else pales in comparison.  It really is back to basics.  2012 was the year that we saw more password dumps than ever before.  It was the year that hash-smashing as a service became mainstream, no longer performed only by spooky government agencies.  It was the year that we saw a mobile version of the Zeus crimeware toolkit attacking SMS two-factor authentication.  It was the year that logging into sites via Facebook became the norm, not the exception.

And these are all symptoms of an underlying problem.  Passwords suck.  Passphrases are just long passwords, and they also suck.  Every two-factor scheme out there really sucks - mostly because I have to carry around so many different tokens depending on what I want access to.

The problem is that we are tied into the past: something you know, something you have, something you are.  We spend more and more time trying to prove these to so many disparate systems that the utility of the systems asymptotes to zero.

So instead of looking back we need to look forward: somewhere I am, something I do, something I use.

Instead of trying to authenticate the user, we need to instead authenticate the transaction.  And that is a hard problem that our backward looking way of thinking makes even more difficult to address.  Happy 2013.
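To make the idea concrete, here is one way transaction-level authentication could be sketched: score each transaction against the forward-looking factors above, and only demand extra proof when the score warrants it.  The signals, weights and thresholds below are entirely invented for illustration - real risk engines use far richer models:

```python
# Risk-scoring a transaction on forward-looking signals (illustrative sketch).

def transaction_risk(known_location: bool,
                     typical_behaviour: bool,
                     known_device: bool) -> float:
    """Return a risk score in [0, 1]; higher means more suspicious."""
    score = 0.0
    if not known_location:     # somewhere I am
        score += 0.4
    if not typical_behaviour:  # something I do
        score += 0.4
    if not known_device:       # something I use
        score += 0.2
    return score

def authorise(risk: float, threshold: float = 0.5) -> str:
    """Decide per transaction: allow silently, step up, or deny."""
    if risk < threshold:
        return "allow"
    if risk < 0.8:
        return "step-up"  # ask for an extra factor only when it's needed
    return "deny"
```

The point of the sketch is that the user never "logs in" at all - a familiar transaction from a familiar place on a familiar device just happens, and friction appears only when the signals disagree.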

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 11 February 2013

Myth #10: We have a security plan

We have a security plan, and I can point you to the binder that contains it.  It’s got all the sections that the consultants told us we needed: policy, risk management, personnel security, information classification, incident management and BCP.  So we must be secure!

No doubt the magic binder is in the bottom of a locked filing cabinet, stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard'.

Plans that exist only for compliance purposes aren’t functional, and aren’t worth the paper they are written on.  No-one knows about them, no-one follows them, no-one keeps them up to date.  The only thing they are really useful for is waving at clueless auditors.

That said, we have a security plan at CQR.  Actually we have a security management system certified to ISO 27001.  But you’d expect that of a security company.  This is because we practice what we preach.

So here’s the preaching: security plans only work if they are part of day-to-day operations.  If they are just what you do, not something you drag out to appease the auditors, then practical and pragmatic plans really do add value.  I know it’s a cliché, but security really is a journey, not a destination - and the security plan is the map.

With a good plan, security is easy and this myth is confirmed.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com