
Monday, 5 May 2014

Privacy Awareness Week, Day 1: What is privacy and changes to the Act


This week (5th-10th May) is Privacy Awareness Week (PAW), and CQR has partnered with the Office of the Australian Information Commissioner (OAIC) to help promote privacy awareness in the community.

So what is Privacy?
Privacy is about the protection of an individual's personal information.  We are all responsible for protecting our own identity and that of others.  Think about it: we expose our own personal information on a daily basis.  When we use social media, contact our utility companies and shop online, we hand over a large amount of our own personal data.  You may assume that the person or website you are sharing your information with will take care of it, keep it secure and not share it with anyone else.
To a degree this is true, and most companies will have a privacy policy in place to demonstrate a level of commitment to protecting your personal information, but this isn't a foolproof solution.

At the end of the day, the person who is actually responsible for your personal data is you!  What is the best way to safeguard yourself and look after your own identity?  Have you ever taken the time to think about it?

To help, Australia has an independent government agency responsible for the privacy functions conferred by the Privacy Act 1988 (Privacy Act): the Office of the Australian Information Commissioner (OAIC).  The OAIC provides advice and guidance to the public, businesses and government agencies on how they are to handle personal information.

The changes to the Privacy Act on 12 March 2014 brought a heightened awareness of the message that we should be protecting our own privacy, and for PAW, CQR has put together a program of blog posts covering:
- How you can protect your privacy online;
- What you can do to protect your privacy when using mobile apps;
- Business obligations to privacy; and
- How to manage breaches of personal information.

We hope that you will keep a keen eye out for the blogs, get engaged in the conversation and, most importantly, retweet the messages to colleagues, family and friends to share the importance of privacy.


You can find more information on your privacy rights and Privacy Awareness Week on the OAIC website.

Yvonne Sears
Senior Security Specialist
@yvonnesearsCQR
www.cqr.com

Monday, 22 April 2013

Why no-one gets SCADA security right

SCADA is an acronym for Supervisory Control and Data Acquisition.  That's a bit of a mouthful, and unless you've studied engineering it's not clear what it means, so here's a simple definition: SCADA is computer-controlled physical processes.  The common examples given are power stations and water treatment plants, but it's much more than that.  Building management systems that control the temperature, lights and door locks: that's SCADA.  The production line at a large bakery that makes your bread: that's SCADA.  The baggage system at the airport that loses your bags: that's SCADA.  The traffic lights that annoy you on your drive to work: that's SCADA.

It's everywhere.  It's all around us.  And it's all implemented badly.  Maybe that's too strong - it's all implemented inappropriately for the threat model we have in 2013.

We have to set the wayback machine to the 1980s to understand why we are in the mess we are in today.

Traditionally, SCADA systems were designed around reliability and safety.  Security was not a consideration.  This means that the way the engineers think about security is different.  In IT security we consider Confidentiality first, then Integrity and finally Availability.  This matches our real-world experience of security.  But in SCADA systems it's the other way around: Availability first, then Integrity, and finally Confidentiality a very distant third.

There are two very good reasons for this approach.

Firstly: keeping SCADA systems running is like balancing a broomstick on your finger - you can do it, but it takes a lot of control, and if you stop thinking about it, the broomstick falls.  This is the fundamental reason that the dramatic movie scenes where the bad guy blows up a power station just can't happen.  If you mess up the control, the power station stops generating power; it doesn't explode.

Secondly: every business that controls real-world processes has a culture of safety: they have signboards showing how many days since the last lost-time injury, and are proud that the number keeps going up.  Anything that gets in the way of human safety is removed.  That's why control workstations don't have logins or passwords.  If something needs to be done for a safety reason, it can't be delayed by a forgotten password.

All of this made perfect sense in the 1980s when SCADA systems were hard wired analog computers, connected to nothing, staffed by a large number of well-trained engineers, and located in secure facilities at the plant.

That isn't true now.  Today SCADA systems are off-the-shelf IT equipment, connected to corporate networks over third-party WAN solutions and sometimes the Internet, staffed by very few over-stressed engineers, sometimes not even located in the same country.

So what happened in between?  Nothing.  Really.  SCADA systems have an expected life of about 30 years.  The analog computers were replaced by the first general purpose computers in the late 1980s, and they are only now being replaced again with today's technology.  They will be expected to run as deployed all the way to 2040.

I hope you've stocked up on candles.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 25 March 2013

Personal Information is the Currency of the Internet

When we talk about privacy of personal information on the Internet what do we mean?  Many people assume it is the punch-line to a joke, as it is the accepted wisdom that there is no privacy on the Internet.  But the wisdom of the crowds is not something I'd bet on.

Legally personal information is anything that can identify an individual.  But this is an overly broad definition, and includes everything you have on your business card.  Morally personal information is that which is in the sphere of your domestic life.  But the work/life balance is increasingly blurred so that doesn't really work either.  A practical definition of personal information that needs privacy protection is anything that can be used against you.

In the past this has been easy to understand and easy to protect.  We used well understood physical security controls.  If you want to stop someone looking into your bedroom window then close the curtains.  But today it's much harder to understand, as the controls are now all logical, changeable, and set by publicly listed corporations.  If you think you understand the Facebook privacy controls today, wait until they change them tomorrow.

These same public corporations are not privacy advocates.  Facebook and Sun have publicly said that the age of privacy is over.  Google, Microsoft and Apple have all gone to court to fight against having to keep your personal information secure.  But this is entirely rational behaviour on their part - if you don't pay for the service you are not the customer, you are the product.

But do we protest too much?  Do we really care about our privacy?

Turn on a TV anywhere in the Western world and you will be bombarded with reality TV shows.  Go to any news-stand and look at the array of gossip magazines.  These forms of entertainment are very popular, and very, very profitable.  And they are all based on voyeurism and abusing the privacy of others.  There is even a mainstream movie coming out this year called Identity Thief, that will let us laugh along at the hapless victim.

I think that there is an explanation that covers our use of Facebook, explains reality TV, and explains why privacy on the Internet really does make sense.

Personal information is the currency of the Internet.  It's what we use to pay for services.  It should be protected in the same way we protect our wallet, and we should make sensible choices about where to spend it.

For the value we get from Facebook, for most of us the spend is reasonable.  For the winners of reality TV shows, the spend is trivial compared to the real world cash they convert their privacy into, even if the same can't be said for the losers.

But if we don't protect our privacy, we will have nothing left to spend.  And no-one likes being poor.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Wednesday, 20 March 2013

19th century PKI

Over the last few years, more and more reports have been published claiming that PKI is fundamentally flawed.  The failure of the Dutch CA DigiNotar is widely claimed to be the final proof.  But I disagree.  The problems with PKI fall into two categories: "you're doing it wrong" and "you're using it wrong".  Neither of these has anything to do with the fundamental underpinning cryptography.

The problem that PKI is intended to address is trust.  I can trust what you say if someone I trust authorises what you say.  It really is that simple to say, and at the same time fiendishly complicated to implement correctly.

It may surprise you to know that we've been doing PKI since the end of the 19th century, in the role of Justice of the Peace.  This is a person who will witness a signature on an official document.  The receiver of the document trusts that the document is genuine as they trust the JP, and the JP saw you sign it.
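That chain of trust - I trust the document because I trust the JP, and the JP witnessed the signing - is easy to sketch in code.  Here is a minimal illustration in Python; the party names are hypothetical, and real PKI adds signatures, expiry and revocation on top of this bare skeleton:

```python
# Minimal sketch of chained trust: a claim is trusted if its maker
# chains back, voucher by voucher, to a root we already trust.
# "vouched_by" maps each party to whoever authorised (witnessed) them.

TRUSTED_ROOTS = {"root_ca"}

vouched_by = {
    "intermediate": "root_ca",    # the root CA vouches for the intermediate
    "server_cert": "intermediate",
    "orphan_cert": "unknown_ca",  # no chain back to a trusted root
}

def is_trusted(party, max_depth=10):
    """Walk the chain of vouchers upward, looking for a trusted root."""
    for _ in range(max_depth):  # bound the walk to avoid cycles
        if party in TRUSTED_ROOTS:
            return True
        if party not in vouched_by:
            return False  # chain ends without reaching a root
        party = vouched_by[party]
    return False
```

Everything else - the maths, the certificate formats, the revocation lists - is machinery for making that one walk tamper-proof.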

However just like current PKI problems, there are identical problems in the 19th century version.  When I had a legal document witnessed at the local public library, the JP had no way of validating that the form I was signing was genuine.  He also made no effort to validate that what I signed was really my signature, nor that I was the person referenced on the form - which makes sense as there is no way he could have done that anyway.

What he asserted is that a real person made a real mark on a real piece of paper.  Everything else is covered by laws against fraud.  And this has worked for more than 100 years, and continues to work today.

If we used current PKI to do only this - assert that a real computer made a real communication at a definite time - everything would be fine.  But we don't.  We want to know which computer, and so ask questions about identity, and then act surprised when the implementations fail us.

PKI is the answer.  It's the question that's wrong.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Tuesday, 12 March 2013

Printing to the Internet

You've deployed a brand new networked printer, and after getting it all set up and working, what's the next step?  How about connecting it to the public Internet, so that anyone, anywhere, at any time can print anything they want and waste all your paper and toner?

Madness, you say!  Not, it would seem, in universities in Taiwan, Korea and Japan.

A little Google hacking and we find 31 Internet-connected Fuji Xerox printers.  Some of them have public IP addresses, but many of them have been actively published through a NAT firewall.  So this was a conscious choice!
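Finding these devices takes no skill at all.  Raw print jobs typically travel over TCP port 9100 (JetDirect), with LPD on 515 and IPP on 631, so a probe is a few lines of Python - a sketch for checking your own kit, not an invitation to scan other people's:

```python
import socket

# Common network-printing ports: 9100 (raw/JetDirect), 515 (LPD), 631 (IPP).
PRINTER_PORTS = (9100, 515, 631)

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_printer_ports(host):
    """List which common printer ports answer on a host."""
    return [p for p in PRINTER_PORTS if is_port_open(host, p)]
```

If any of these ports answer from the Internet side of your firewall, that printer is published to the world.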

Perhaps it's just a clever way for attackers to exfiltrate data, but I've learned not to attribute to malice that which is better explained by incompetence.

Here's my advice: If you want to print to a work printer from home, this is not the way to do it.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 March 2013

The Perils of Cloud Analogies

Moving your operations to the cloud is like... a dream for those who love analogies.  All sorts of things have been claimed, but there is only one reality.  It's like outsourcing, because that's exactly what it is.

The biggest business risk with outsourcing is that you replace technical controls with contracts, and while a move from tactical operation to strategic management looks excellent in a business plan, it can fail badly when interacting with the real world.  The claim that "insert-vendor-here" should be better at running the infrastructure because they developed it, is much more an article of faith than a well-reasoned position.

Consider the failure of the Windows Azure platform over the last weekend.  I noticed it when I couldn't play Halo 4.  As a gamer it didn't occur to me that there was anything deeper than the Halo servers weren't working, but it turns out they were hosted on a cloud infrastructure.  And the cloud had failed.  Completely.  The reason: "Storage is currently experiencing a worldwide outage impacting HTTPS operations due to an expired certificate."  In 2013.
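An expired certificate is one of the easiest failures in security to catch ahead of time, because the expiry date is public and fixed on the day the certificate is issued.  Here is a monitoring sketch in Python, using the date format that the standard library's ssl.getpeercert() returns; the dates in the comments are made up for illustration:

```python
import ssl
import time

def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter date; negative if already expired.

    not_after uses the format returned by ssl.SSLSocket.getpeercert(),
    e.g. "Feb 22 19:35:30 2013 GMT".
    """
    now = time.time() if now is None else now
    return (ssl.cert_time_to_seconds(not_after) - now) / 86400.0

def needs_renewal(not_after, warn_days=30, now=None):
    """Flag a certificate that expires within warn_days."""
    return days_until_expiry(not_after, now) < warn_days
```

Run something like this daily against every certificate you own, and a worldwide outage becomes a calendar reminder.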

Information security is a people business, and the people failed.

As Sony previously discovered, the total failure of their game platform is a pain, but it isn't going to threaten the company.  To Microsoft's credit they had it all restored in about 8 hours.

But Windows Azure doesn't just host games - it hosts businesses.  And the same failure happening in the middle of the week would mean that businesses that had fully moved to the Microsoft cloud could do nothing.  No backup.  No failover.  No disaster recovery.  Because all the availability controls were outsourced.  And it is very unlikely that the clients using the service are big enough to make any contractual claim for loss.

This isn't just a Microsoft problem, Amazon had the same sort of outage last year.  Every cloud hosting provider will have these problems.

So here's my cloud analogy: it's like putting all your eggs in one basket - a basket you've never seen and can't locate - along with everyone else's eggs, and having faith that this will be managed well by the fox.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 18 February 2013

Information Security Themes for 2013

Everyone else is making predictions as to what will be the important information security trends in 2013.  I think they are all wrong.  Not because the writers are uninformed, just because they are unimaginative.  It’s easy to look to the past, draw a line through the dots, scale it up and to the right, and predict the future.  Except these sorts of predictions are safe and boring, and they never allow for disruptive events.

Here are a few of the safe predictions that others have made:

- mobile malware will increase
- state sponsored attacks will increase
- malware will get smarter
- hacktivists will get smarter
- IPv6 security will matter

I agree with all of them, but then who wouldn’t?  Up and to the right.  And nearly everyone making these predictions sells something to mitigate them.

So what do I think the themes for 2013 will be?  I have only one information security theme that I think really matters.  Only one theme that will confound the industry, and add to the number of grey hairs sported by CIOs.  Only one theme we cannot avoid, even though we are really trying to do so.

Authentication.

Everything else pales in comparison.  It really is back to basics.  2012 was the year that we saw more password dumps than ever before.  It was the year that hash-smashing as a service became mainstream, and not just performed by spooky government agencies.  It was the year that we saw a mobile version of the Zeus crime-ware toolkit that attacks SMS two-factor authentication.  It was the year logging into sites via Facebook became the norm, and not the exception.

And these are all symptoms of an underlying problem.  Passwords suck.  Passphrases are just long passwords, and they also suck.  Every two-factor scheme out there really sucks - mostly because I have so many different tokens that I have to carry around depending on what I want access to.
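Bad as passwords are, storing them badly is what makes the hash-smashing trivial.  Until something better arrives, the baseline is a per-user salt and a deliberately slow derivation function, so that a dumped database can't be cracked at billions of guesses per second.  Here is a sketch using only the Python standard library; the iteration count is an illustrative figure, to be tuned to your hardware:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted PBKDF2-HMAC-SHA256 digest.

    Store all three returned values; the salt and iteration count are
    not secret, they just make each hash unique and slow to brute-force.
    """
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password, salt, iterations, expected):
    """Re-derive the digest and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)
```

The same few lines would have defanged most of 2012's password dumps.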

The problem is that we are tied into the past: something you know, something you have, something you are.  We spend more and more time trying to prove these to so many disparate systems that the utility of the systems asymptotes to zero.

So instead of looking back we need to look forward: somewhere I am, something I do, something I use.

Instead of trying to authenticate the user, we need to instead authenticate the transaction.  And that is a hard problem that our backward looking way of thinking makes even more difficult to address.  Happy 2013.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com
 

Monday, 4 February 2013

Myth #9: We trust our staff

We are secure because we trust our staff.  We have a solid awareness programme, and after all, security is only a problem on the Internet.  If only it were true.

We might imagine that the most common internal attackers are the IT staff as they have full and unrestricted access to all of our systems.  As Microsoft wisely said in their 10 Immutable Laws of Security, a computer is only as secure as the administrator is trustworthy.

But system administrators aren’t the only insiders with means, motive and opportunity.

The Verizon 2012 Data Breach Investigations Report looked at the type of role held by internal attackers.  The results are eye opening.  While 6% of breaches were due to system administrators, 12% were by regular employees, 15% by managers and even 3% by executive management!

The truth is that trust must be earned, never expected.  All insiders have means and opportunity, all they need is motive.

To lower the risk, wise businesses perform background checks for new employees moving into sensitive positions, apply appropriate segregation of duties to lower the potential for attack, and then implement good detective controls to catch it if and when it happens.

If you trust but verify, then this myth is plausible.

Phil Kernick Chief Technology Officer
@philkernick www.cqr.com