(Forcepoint, Oct 2018)  Let’s be clear:  organisations today face real challenges protecting critical data.

And it isn’t just obvious targets, such as healthcare providers and financial services, that are at risk.  Individuals happily share personal information via social networks, and IoT devices track a remarkable wealth of data about our daily lives.  If you have a credit score (and who doesn’t?), there is already a great deal of information out there about you.

Sadly, when I say out there, that’s not just in the hands of the credit agencies.  The 2017 breach of credit reporting agency Equifax exposed the personal data of 143 million American consumers.  (And don’t be fooled into thinking this doesn’t apply in Australia…)  Similarly, in September of this year, the largest breach in Facebook’s history exposed the personal information of nearly 50 million users.  In today’s society, even if you’re off the grid, you’re never really “off the grid”.

With few good alternatives available, end users find themselves in the unenviable position of having to trust the companies their data passes through.  Similarly, though, these same companies are also in the position of having to trust – or distrust – the people, devices, systems, and infrastructure that make up the overall IT environment.  That’s a lot of trust, which is part of the challenge.

The Government Angle

In the commercial world, businesses could start by taking a page from the government approach.  Government agencies have a long history with trust – one only has to look at the clearance process that validates the trustworthiness of those who access classified data, and at how classified data is isolated from anyone without the requisite clearances.  Governments also understand the need to balance the dual concepts of trustworthiness and risk acceptance as part of using multilevel secure systems.  Mostly, this works well, even against some fairly determined external attackers as well as malicious insiders and spies.  It’s easy to focus on the times this approach has broken down, but we also need to be aware of the many times it has performed exactly as designed.

But even this framework has weaknesses.  Trust is interwoven throughout, but only in parts of the government, and even then it is sometimes doled out in a rather “all or nothing” way.  The problem is the way we typically remove context from the trust process – something that stems from the way we approach trust differently in the online world.

When you meet and get to know someone socially, most of us take cues from initial appearances and conversations to approximate how trustworthy they are.  However, that trust is situational.  You might trust your new friend to provide a ride to dinner, but not trust them (yet) with the keys to your car.  It’s not just about determining whether a person is “good” or “bad”, but whether they are likely to perform a particular action reliably.  For example, you might know someone who is entirely well-intentioned but very clumsy.  You might not trust that clumsy person with your most treasured crystal glass, even though they are honest, reliable, and kind.  It’s not that they’re bad… they’re just bad with fragile things.  It’s a subtle but important difference: it’s not just whether you trust someone, but whether you trust them with respect to a particular action.

Now let’s compare this to how we view trust with respect to computing.  Here, defenders often apply a high degree of “inside/outside” thinking along the lines of “what’s inside is good and what’s outside is bad”.  Once I’m logged in to a machine or part of an organisation, I’m pretty much given free rein within my granted rights.  There are some checks and balances: insider threat approaches such as DLP (data loss prevention) and PAM (privileged access management) attempt to identify those insiders who pose a danger to the organisation.

Contextual Trust

However, the overarching paradigm is one of trust or distrust.  Black or white, without the grey.  It’s the same with machines: once a machine is placed on the network, we generally trust it completely.  This type of “trust” isn’t trust at all – because it’s not situational – it’s more of a “permit or deny” privilege-based system… and it’s easy for an attacker to exploit.  Essentially, by removing context and reducing trust to a binary decision, we allow anyone who gets into that trusted domain to do whatever they like.  Which makes the main challenge for an attacker simply getting in the door.

The way forward is fairly clear: wide adoption of fine-grained, context-sensitive trust.  And this contextual model of trust needs to be applied not just to people on specific programs, but to every entity that interacts with a system, and to the system itself.
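
To make the idea concrete, here is a minimal sketch of what a context-sensitive trust decision could look like.  It is illustrative only: the signals, weights, and thresholds below are assumptions for the sake of the example, not a description of any particular product.

    from dataclasses import dataclass, field

    @dataclass
    class AccessRequest:
        entity: str                 # the user, device, or service making the request
        action: str                 # e.g. "read", "export", "delete"
        resource: str               # the data or system being acted on
        context: dict = field(default_factory=dict)  # situational signals

    def trust_score(request: AccessRequest) -> float:
        """Combine situational signals into a graded trust score between 0.0 and 1.0.

        Hypothetical scoring: each signal nudges the score up or down instead of
        flipping a single allow/deny switch.
        """
        score = 0.5  # start neutral rather than fully trusted or fully distrusted
        if request.context.get("managed_device"):
            score += 0.2
        if request.context.get("unusual_location"):
            score -= 0.2
        if request.context.get("off_hours") and request.action == "export":
            # The same entity may be trusted to read data during the day,
            # but not to bulk-export it at 2am.
            score -= 0.2
        return max(0.0, min(1.0, score))

The point of the sketch is that the same entity can score differently for different actions in different contexts, much like the crystal-glass example above.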

The only long-term solution to protecting data is to embrace this contextual-trust-based architecture in a consistent and broad manner.  Moving away from the “inside/outside, good/bad” mindset has already started (behavioural analytics are a much-needed first step and can be very effective in detecting abuses of granted trust) – and it should be encouraged.  What’s needed next is to recognise that trust comes in degrees and to use this contextual trust concept to deliver risk-adaptive cybersecurity policies.  The world really is about shades of grey; treating it that way will enable us to protect the data that is most important to us.
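
Building on the sketch above, a risk-adaptive policy could then map that graded score to graduated responses rather than a flat allow/deny.  Again, the thresholds and responses are hypothetical and purely for illustration.

    def enforce(request: AccessRequest) -> str:
        """Map a graded trust score to a graduated response (illustrative thresholds)."""
        score = trust_score(request)
        if score >= 0.7:
            return "allow"
        if score >= 0.4:
            return "allow with step-up authentication and extra logging"
        return "deny and alert"

    # The same user, the same action, different contexts, different outcomes.
    print(enforce(AccessRequest("alice", "export", "customer_db",
                                {"managed_device": True})))
    # -> allow
    print(enforce(AccessRequest("alice", "export", "customer_db",
                                {"unusual_location": True, "off_hours": True})))
    # -> deny and alert

Here the policy adapts to the computed risk instead of applying a single allow/deny gate, which is exactly the shades-of-grey behaviour argued for above.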
