Thursday, 8 May 2014

Privacy Awareness Week Day 4: Business Obligations: What should I be doing to protect personal information?

Before we can talk about protecting personal information, the first question you must ask is “What personal information do we process throughout the organisation?”
Do you understand:
a) How and when do you collect personal information?
b) Why do you collect it?
c) What sort of information do you collect?
d) Who handles it?
e) Where does it go?
Once you have an understanding of the basics, you can begin to define how to control and manage it securely.

The ‘WHAT’ question is an important one: from this you can determine whether your existing security practices are appropriate.  For example, an application processing only names and addresses would need far less security than an application that records credit card or medical data.

Steps to securing personal data:
1 – Identify the information processed

2 – Classify the information (e.g. is it public, confidential or medical)

3 – Value the information in terms of the impact of its loss.  What impact would it have on an individual or on the organisation if:
a) it was subject to unauthorised access?
b) you could not rely on the information processed?
c) the information was no longer available?
(A short illustrative sketch covering steps 1 to 3 follows the list.)

4 – Conduct a risk assessment considering:
a) How you collect the information;
b) How it is processed;
c) The involvement of third party entities;
d) How the information is shared.

5 – Determine the required security controls to help protect personal information.  This will include controls such as:
a) Training and awareness of staff – so they understand what is expected when handling personal information;
b) Documented policies and procedures;
c) Access controls – ensure that technical controls are applied so that only authorised personnel can access the information;
d) Data sharing agreements and contracts with third parties;
e) Data Backup arrangements and recovery plans;
f) Incident management – how will you respond to a breach of personal information?

6 – Conduct a gap analysis.  Identify what security controls you already have in place.
a) Do they help manage the identified risks? 
b) What are the gaps?
c) What can be improved?

7 – Implement change.  Improve the security controls you already have in place and implement the new controls.
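
As a simple illustration of steps 1 to 3, here is a minimal sketch of how an information inventory might record classifications and impact ratings and derive an indicative level of protection.  The categories, ratings and handling rules below are assumptions for illustration only, not a prescribed standard.

    # Illustrative only: the classifications, ratings and handling rules are assumptions.
    # Step 1 - identify the information processed; step 2 - classify it;
    # step 3 - rate the impact of loss of confidentiality, integrity and availability (1 low .. 3 high).
    inventory = [
        {"name": "customer names and addresses", "class": "confidential",
         "confidentiality": 1, "integrity": 1, "availability": 1},
        {"name": "patient medical records", "class": "medical",
         "confidentiality": 3, "integrity": 3, "availability": 2},
    ]

    def required_protection(asset):
        # Derive an indicative protection level from the worst-case impact rating.
        worst = max(asset["confidentiality"], asset["integrity"], asset["availability"])
        return {1: "baseline controls",
                2: "restricted access",
                3: "strong access controls and encryption"}[worst]

    for asset in inventory:
        print("%s (%s): %s" % (asset["name"], asset["class"], required_protection(asset)))

In practice this would live in a register or risk tool rather than code, but the structure is the same: identify, classify, value, then decide the controls.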

Other posts from Privacy Awareness Week
Privacy Awareness Week, Day 1: What is privacy and changes to the Act
Privacy Awareness Week Day 2: Protect your privacy online
Privacy Awareness Week Day 3: What you can do to protect your privacy when using mobile phones

Yvonne Sears
Senior Security Specialist
@yvonnesearsCQR
www.cqr.com

Monday, 14 April 2014

The Heartbleed Bug, gone in a heartbeat.

There is a hole in the heart of Internet security which has the potential to expose countless encrypted transactions.  It’s been named the Heartbleed Bug.  The bug was accidentally incorporated into OpenSSL in late 2011.  OpenSSL is an open source library that many software developers use to implement SSL/TLS encryption to provide security and privacy for communications over the Internet.

So how does it work?
When you connect to a secure Internet site to access your email, social media account or Internet banking, the server you connect to sends back what is called a ‘heartbeat’.  Just like your own heartbeat, it is how your computer and the server stay connected whilst you are logged in.  The heartbeat lets the server know that you are still there and still wish to be connected to your online account.  Once you log out, the heartbeat stops, so the server knows there should no longer be a connection and your online account is no longer accessible.

The heartbeat is a very small message, but by exploiting the bug an attacker may be able to read far more of the web server’s memory than they should, and that memory may contain sensitive information.  This might include usernames and passwords, session keys or even the web server’s private key.
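
To make the over-read concrete, here is a minimal sketch of the missing bounds check, written in Python for readability.  It illustrates the idea only; the real bug was in OpenSSL’s C implementation of the TLS heartbeat extension.

    # Illustrative sketch of the Heartbleed idea, not the real OpenSSL code.
    # A heartbeat request carries a payload and a claimed payload length; the
    # flawed server trusts the claimed length instead of the actual payload size.
    server_memory = b"...session keys, passwords and other secrets held in memory..."

    def flawed_heartbeat(payload, claimed_length):
        # BUG: no check that claimed_length <= len(payload), so the reply can
        # include whatever happens to sit next to the payload in memory.
        buffer = payload + server_memory
        return buffer[:claimed_length]

    def fixed_heartbeat(payload, claimed_length):
        # FIX: silently drop requests whose claimed length exceeds the real payload.
        if claimed_length > len(payload):
            return b""
        return payload[:claimed_length]

    print(flawed_heartbeat(b"hello", 40))  # echoes "hello" plus 35 bytes it should not
    print(fixed_heartbeat(b"hello", 40))   # returns nothing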

So am I affected and what should I do?
This is a hard question to answer.  If the web sites you use run an older version of OpenSSL, from before the bug was introduced, then they are not affected.  Even if they do use a vulnerable version, an attacker would need to be exploiting the bug at exactly the time you were using the site in order to grab your credentials.  The best we can say is that it is possible you have been directly or indirectly affected.  Unfortunately the Heartbleed bug leaves no trace of exploitation, so you cannot tell whether it has been used against you.

The best thing for us all to do is change our passwords if our provider tells us that they were affected.  It is also a good idea to change any old passwords that you have been using for years, just in case.  Here are some tips for creating a secure password:
· Make it a minimum of 8 characters long
· Use upper and lower case letters
· Substitute numbers or symbols for some letters
· Do not use simple personal information (e.g. birthdays, kids’ names, pet names)
· If you keep a written copy of your passwords, use an encrypted method of storing them, not a note in your wallet
· An easy thing to remember is a phrase: try abbreviating it and using the first letter of each word as your password.  Including numbers makes it harder to guess (a short sketch of this approach follows).
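
As a simple illustration of the last tip, here is a short sketch.  The phrase and the resulting password are examples only; choose your own.

    # Illustrative only: pick your own memorable phrase.
    phrase = "My first car was a red Mazda bought in 1999 for 800 dollars"

    # Take the first character of each word, keeping the original case and any digits.
    password = "".join(word[0] for word in phrase.split())
    print(password)  # -> "MfcwarMbi1f8d": 13 characters, mixed case, includes numbers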

The OpenSSL team have created a fix and this is being rolled out across the Internet to correct the bug.

How can I find out if my website is affected?
A useful tool to check the configuration of a web site is the Qualys SSL Server Test: https://www.ssllabs.com/ssltest/
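
If you run a site yourself, another quick check is the version of OpenSSL your software is linked against: versions 1.0.1 through 1.0.1f are affected, and 1.0.1g contains the fix.  As a minimal sketch, Python’s standard ssl module will report the OpenSSL version it was built against (which may differ from the version your web server itself uses):

    # Minimal sketch: report the OpenSSL version this Python interpreter is linked against.
    # OpenSSL 1.0.1 through 1.0.1f are affected by Heartbleed; 1.0.1g and later are fixed.
    import ssl

    print(ssl.OPENSSL_VERSION)       # e.g. "OpenSSL 1.0.1e 11 Feb 2013"
    print(ssl.OPENSSL_VERSION_INFO)  # numeric version tuple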

I would like more information on Heartbleed and its effects.
Here are some places to start:

Detailed information on Heartbleed and a detailed Q&A
The Heartbleed Hit List: The Passwords You Need to Change Right Now
How Heartbleed Works: The Code Behind the Internet's Security Nightmare

Sarah Taylor
www.cqr.com

Monday, 6 May 2013

Why People Break the Rules

You have a security policy.  Some of your staff even know where it is.  You have a security awareness campaign.  Every year the staff are required to click through the computer-based training module.  Your IT department has deployed security controls to every workstation to limit what people can do.  Staff have the flexibility of bring-your-own-device.

And yet your staff still break the rules.  Every.  Single.  Day.

At first glance there can only be two reasons for this flagrant violation of the security policy.  Either the staff are ignorant and don't understand the rules, or they are troublemakers and dismissive of the need to follow them.  The obvious answers are more computer-based training and stronger rules.  Perhaps a new screen-saver and some posters will help.

Perhaps not.

I've undertaken social engineering assignments that involved getting into the IT department of a large bank.  Past reception, through the first swipe card door, past the guard station, through the second swipe card door.  Not just once, but on multiple consecutive days to prove that it wasn't a fluke.  Everything was in plain sight of bank employees and contrary to their policies.

So are the bank employees ignorant or troublemakers?  Maybe some are one or the other, but the vast majority are neither.  They are sensible.  Just to be clear, I am saying that breaking the security rules is sensible for many employees of many businesses.

Think about it in economic terms.  Employees are paid for getting the job done.  They get fired for not getting the job done.  They almost never get fired for breaking the rules.  In fact there is rarely any consequence of breaking the rules at all.

So given the choice of getting the job done by breaking the rules, or not getting the job done by following the rules, which is the rational choice?  The one they all make.

Which brings us to the crux of the matter.  Security rules that get in the way of getting the job done will be ignored.  A culture that puts getting the job done first, and compliance with the rules a distant second, fosters rule-breaking.

When recast this way, there really are two obvious ways forward.  Either promote a culture where we look at security in the same way manufacturing businesses look at safety, and report every incident and near miss.  Or mercilessly fire people for non-compliance.

Love or fear is a choice.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 March 2013

The Perils of Cloud Analogies

Moving your operations to the cloud is like... a dream for those who love analogies.  All sorts of things have been claimed, but there is only one reality.  It's like outsourcing, because that's exactly what it is.

The biggest business risk with outsourcing is that you replace technical controls with contracts, and while a move from tactical operation to strategic management looks excellent in a business plan, it can fail badly when interacting with the real world.  The claim that "insert-vendor-here" should be better at running the infrastructure because they developed it is much more an article of faith than a well-reasoned position.

Consider the failure of the Windows Azure platform over the last weekend.  I noticed it when I couldn't play Halo 4.  As a gamer it didn't occur to me that anything deeper was going on than the Halo servers not working, but it turns out they were hosted on a cloud infrastructure.  And the cloud had failed.  Completely.  The reason: "Storage is currently experiencing a worldwide outage impacting HTTPS operations due to an expired certificate."  In 2013.

Information security is a people business, and the people failed.

As Sony previously discovered, the total failure of their game platform is a pain, but it isn't going to threaten the company.  To Microsoft's credit they had it all restored in about 8 hours.

But Windows Azure doesn't just host games - it hosts businesses.  And the same failure happening in the middle of the week would mean that businesses that had fully moved to the Microsoft cloud could do nothing.  No backup.  No failover.  No disaster recovery.  Because all the availability controls were outsourced.  And it is very unlikely that the clients using the service are big enough to make any contractual claim for loss.

This isn't just a Microsoft problem; Amazon had the same sort of outage last year.  Every cloud hosting provider will have these problems.

So here's my cloud analogy: it's like putting all your eggs in one basket - a basket you've never seen and can't locate - along with everyone else's eggs, and having faith that this will be managed well by the fox.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 25 February 2013

The Sky is Falling, NOT!


FUD: Fear, Uncertainty and Doubt.  It seems to drive the product segment of the security market, and it really annoys me.  The sky is falling.  Cybercrime is rampant.  And on, and on, and on...

Let's dial the emotion down, and look at the underlying premise.  How safe online are we really?

As I look out my window, the sky is not falling; it is a beautiful blue.  However there are a few clouds and it may rain tomorrow.  If the doomsayers were in the weather industry instead, they would be telling us all to carry umbrellas at all times, wear raincoats just in case, and take out lightning protection insurance.  I don't see anyone on the street taking that sort of precaution, because they are all able to make a sensible assessment of the likelihood of rain.  Unfortunately they are not able to make a similarly sensible assessment of the likelihood of a security compromise, so they worry.  And worry is the marketing tool of choice.

Cybercrime is certainly a problem, but the main problem is the "cyber" prefix.  Cybercrime is just crime.  We don't talk about transport-crime when a thief uses a car as a getaway vehicle.  We don't call it powertool-crime when a safe is cracked.  So why make such a big deal about the enabling technology?  Everything is online now, so everything is "cyber", so let's stop using the word.  People have been stealing from each other since they first decided to pile rocks up in a cave, and it is not much different today.  The majority of crime is theft and fraud, and this is a very rare event in everyday life.  It does happen.  It will continue to happen.  It may be a large absolute value, as much as hundreds of millions of dollars, but the world economy is in the hundreds of trillions, and if we've got crime down to below 0.0001% then we should be pleased about it, not worried by it.
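
To put that fraction in perspective with illustrative round numbers (mine, not precise statistics):

    # Illustrative round numbers only, matching the orders of magnitude above.
    crime_losses  = 300e6   # a few hundred million dollars
    world_economy = 300e12  # a few hundred trillion dollars

    print("%.4f%%" % (crime_losses / world_economy * 100))  # -> 0.0001%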

I grew up in a small country town, where everyone knew everyone, and people didn't lock their doors.  Today the same town is much larger, unknown people are the majority, and everyone locks their doors.  In the online world, we are now in the large town, but still acting like we are in the small one.  We need to take sensible precautions against the bad guys, but not spend all our days worrying about them.  And at least know where your umbrella is!

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 February 2013

Myth #9: We trust our staff

We are secure because we trust our staff.  We have a solid awareness programme, and after all, security is only a problem on the Internet.  If only it were true.

We might imagine that the most common internal attackers are the IT staff as they have full and unrestricted access to all of our systems.  As Microsoft wisely said in their 10 Immutable Laws of Security, a computer is only as secure as the administrator is trustworthy.

But system administrators aren’t the only insiders with means, motive and opportunity.

The Verizon 2012 Data Breach Investigations Report looked at the type of role held by internal attackers.  The results are eye-opening.  While 6% of breaches were due to system administrators, 12% were by regular employees, 15% by managers and even 3% by executive management!

The truth is that trust must be earned, never expected.  All insiders have means and opportunity, all they need is motive.

To lower the risk, wise businesses perform background checks on new employees moving into sensitive positions, apply appropriate segregation of duties to reduce the potential for attack, and then implement good detective controls to catch it if and when it happens.

If you trust but verify, then this myth is plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Wednesday, 30 January 2013

Myth #8: Security is too expensive

Let’s not kid ourselves – security isn’t cheap.  We have to buy hardware, and software, and staff, and training and auditors, and in each case somebody is putting their hand in your pocket and taking their cut.  But that’s not what this is about.  The myth is that it’s too expensive, that it doesn’t add value and that it’s only a “nice to have”.

Instead of thinking about security, for a moment imagine taking your family to a public swimming pool for a fun day out…

Public pools have fences.  They have lifeguards.  They have water in the pool that is the right depth and the right temperature, and that has the right treatment to ensure it is safe.  They have non-slip surfaces and signs that say “no running”.  They have lots of controls, all designed to keep everyone safe, and most of them are not noticed by anyone.

But the fences aren’t 10m high.  There are not hundreds of lifeguards.  The water still splashes out of the pool.  There aren’t patrols with assault rifles enforcing the “no running” rule.  These would be silly.  These would be a waste of money.

Security can be too expensive if spent in the wrong place, whether in a business or a public pool.  Businesses that overspend on hardware and underspend on testing are wasting money just like putting armed guards at a public pool.  They probably believe security is too expensive, but that isn’t really their problem.

For some businesses security is not considered a cost at all; it is a core strategy.  Qantas is rightly proud of their safety record.  They don’t believe that safety is too expensive.

Information security is really just data safety.  Know what information is important to your business and protect it well, but not too well.

Security is a measure of the health of your company, and that makes this myth plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Monday, 21 January 2013

Myth #7: A security review is just an audit

Here’s the thought process behind this myth: security is just risk management; risk management underpins compliance; compliance is driven by audit; audit is well understood.  There are many well-defined and accepted audit methodologies for areas such as finance (SOX, SAS 70), process (COBIT) and security (ISO 27001).  Therefore any competent auditor with an appropriate checklist should be able to perform a security review.

All risk management methodologies, whether qualitative or quantitative, assume that risk is the product of impact (what will the loss be if the event occurs) and likelihood (how likely is the event).  Using this methodology, events which are catastrophic but rare and events which are insignificant but almost certain may both be labelled as medium risk.  And the beauty of medium risk events is that they are almost always accepted by the business.
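
To make that concrete, here is a minimal sketch of a qualitative risk matrix.  The scales and thresholds are illustrative assumptions, not any particular methodology.

    # Illustrative qualitative risk matrix; the scales and thresholds are assumptions.
    IMPACT     = {"insignificant": 1, "moderate": 3, "catastrophic": 5}
    LIKELIHOOD = {"rare": 1, "possible": 3, "almost certain": 5}

    def risk_rating(impact, likelihood):
        score = IMPACT[impact] * LIKELIHOOD[likelihood]
        if score >= 15:
            return "high"
        if score >= 5:
            return "medium"
        return "low"

    # Two very different events come out the same - and medium risks are the
    # ones a business most readily accepts.
    print(risk_rating("catastrophic", "rare"))             # 5 x 1 = 5 -> medium
    print(risk_rating("insignificant", "almost certain"))  # 1 x 5 = 5 -> medium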

The problem is that this analysis is fundamentally flawed when considering security.

The risk management methodology is designed for random and accidental events.  It is well understood how often buildings burn down.  It is well understood how long the average power failure will be.  This is true because actuaries have been recording tables of unlikely events for more than 100 years.  But IT security isn’t old enough as a discipline to have actuarial tables, which is exactly why you can’t buy anti-hacking insurance.

The insurers know something that businesses haven’t worked out yet.  Attackers completely control the likelihood.  If they have decided to attack, the likelihood is almost certain, no matter how it’s been assessed in a risk methodology.  Being hacked isn’t accidental and it isn’t random.  To improve security, you need to remediate all vulnerabilities with high impact, not just those assessed as high risk.

But if you ask an experienced security specialist to undertake a security review with a current and appropriate checklist, and then you act on all the high impact findings, it’s plausible that you will be more secure.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com


Monday, 14 January 2013

Myth #6: We have good physical security

We have good physical security implemented by guards, guns and gates. All systems are in secure server rooms, located at secure sites, and since the bad guys can’t get to them, they can’t attack them.

This myth presupposes good firewalls, so let’s assume that attack from outside is too difficult. Do organisations really have physical security as good as they believe, and does it keep them safe?

Physical security is implemented by combining three different techniques:

(1) deterrence – making the risks too high to attack in the first place (guns);

(2) prevention – making it too hard or too expensive to attack (gates);

(3) response – having a capability to detect or capture the attacker even if the attack succeeds (guards).

It does seem plausible that if an organisation gets all of these right, physical security will protect them. The problem is that they never get them right, and physical access is almost always the easiest way to attack.

If a bad guy really wants to attack an organisation, none of the deterrence mechanisms matter; they have already decided to attack. Strike one.

The only prevention mechanism that has any chance of success is complete exclusion of all non-employees from a site. If visitors are let in, prevention has been bypassed. If there are any contracts with any third-party services at all, the only thing that has been done is to require an attacker to buy a second-hand contractor logo shirt from a charity shop. Network level security inside an organisation is usually very poor, and the attacker has just bypassed the firewall. Strike two.

A competent attacker who is determined to physically attack is going to rely on both looking like they should be there, and the normal human reluctance to question strangers. The attacker won’t be stopped even in organisations with a name badge requirement and posters everywhere saying “challenge strangers”. And a simple disguise will make CCTV useless. Strike three.

Put bluntly: deterrence doesn’t work; prevention doesn’t work; and response doesn’t notice. It’s even worse than that, because the belief that organisations have good physical security when they really don’t, makes them blind to physical attack. This is especially true in branch offices.

Physical security underpins everything else, but it isn’t enough by itself, and that is why this myth is busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 7 January 2013

Myth #5: It’s too risky to patch

I can’t count the number of times I’ve been told by a client that it’s too risky to patch. The justifications are varied, but they usually fall into one of these general categories: (a) we can’t afford any downtime; (b) it’s a legacy system; (c) patches have been known to cause problems; or (d) our systems aren’t certified if we patch.

Let’s look at each of them in more detail.

“We can’t afford any downtime” is code for an implementation that doesn’t have any redundancy or resilience, combined with a lack of understanding of whatever business process it is supporting. There is no such thing as 100% uptime, as some of the natural disasters in the last year have proved. And if there is a real business requirement for something close to 100% availability, then building a system without appropriate redundancy is an epic fail. This has nothing to do with patching.

“It’s a legacy system” is an excuse used by businesses that have poor strategic planning. A system which no longer has patches available also no longer has support available. If a business is running a critical system with unsupported components, I hope the IT manager is wearing their peril-sensitive sunglasses! That way when they have a critical failure it will be without all the fear and worry that normally precedes one. This also has nothing to do with patching.

“Patches have been known to cause problems” is an example of a logical fallacy. Just because a bad event has happened at some point doesn’t mean it will happen every time. By the same logic, we should never get in a car because car crashes have caused deaths. It is true that patches sometimes do cause problems, but this isn’t a reason not to patch. While this is at least related to patching, it’s actually more about having a poor testing process, insufficient change management, and a lack of understanding of risk management.

“Our systems aren’t certified if we patch” is code for letting the vendor set the security posture rather than the business. I mentioned this before in Myth #2 as a problem with outsourcing security, and it’s equally true here. This really doesn’t have anything to do with patching either.

In reality the certain loss from not patching is far higher than the theoretical loss from patching. In the Defence Signals Directorate top 35 mitigations against targeted cyber-attacks, patching applications is #2 and patching operating systems is #3. I really think that DSD has a much better understanding of the risk than most IT managers.

Patching is a foundation for good security as it eliminates the root cause of most compromises. Better patch management leads to lower accepted risk, and this is something that all executives want.

Any system too risky to patch is too risky to run, and that is why this myth is completely busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Monday, 17 December 2012

Myth #4: We comply with PCI DSS

There are a lot of organisations that think they are compliant with the controls in the PCI DSS, but really aren’t.  There are even more that were compliant at a point in time in the past, but aren’t now.  But let’s for the moment assume that an organisation really is compliant with the 6 objectives, 12 requirements and 225 controls in the PCI DSS.  Does this mean that they are more secure?

The Verizon 2012 Data Breach Investigations Report provides statistics on organisations that suffered a data breach but should have been compliant with the PCI DSS.  If they were compliant, they were 24× less likely to suffer a loss.  This is a really clear statistic: companies really are far more secure if they are compliant with the PCI DSS.

Of course this shouldn’t be a surprise, since the standard is just good security practice, and if organisations take this good practice and apply it to everything, it naturally follows that they will be more secure.

But there were still breaches from PCI DSS compliant organisations.  This doesn’t imply that the standard isn’t good enough – there is no such thing as perfect security – but more perhaps reflects that the only part of an organisation covered by the standard is the cardholder data environment.  It’s possible to have a compliant cardholder data environment, but neglect security in other areas, and still get compromised.

Compliance drives security, but does not equal security.

If PCI DSS is used as a basis for the entire security culture, then this myth is confirmed.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com

Tuesday, 11 December 2012

Myth #3: We have the best hardware

We have the best hardware.  We have firewalls from more than one vendor.  We have anti-virus appliances at the gateway.  We have excellent logging capabilities.  We’ve just implemented a data loss prevention solution.  And we’ve had the smartest engineers hook it all up.  Of course we are secure, our vendors told us so!

If you go back to Myth #1, most of the businesses that suffered a data breach had the best hardware.  It didn’t stop the bad guys.

The Verizon 2012 Data Breach Investigations Report has some really enlightening statistics about the timing of data breaches.  Most compromises happened within minutes of initial attack, and data exfiltration happened within minutes of compromise.  But detection of the compromise didn’t happen for months, and containment took weeks after that.  And many of these breaches happened to companies with all the best hardware.

The thinking underpinning this myth is that as technology created the problem, it can also solve it.  But because most of these technical systems are scoped, implemented and managed by capable technologists, those technologists are unfortunately blind to the truth.  Information Security is a People Business.  It’s not about the technology.  It’s never been about the technology.

People are the easiest system to attack, and people can subvert any security control.  And much to the annoyance of the technologists, they can’t be patched, and they can’t be upgraded!

Hardware provides a solid platform, and without it security isn’t possible.  But policy, configuration and management trump functionality every time.  Many businesses focus too much on capex, and so overspend on the hardware and underspend on the policy, configuration and management that make it effective.

That makes this myth busted.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com