
teissTalk: Malicious or Non-Malicious? Tackling the Remote Insider Threat

20 April 2021

On 13 April, teissTalk host Jenny Radcliffe was joined by a panel of four cybersecurity professionals to discuss insider threats, especially the threat posed by remote workers. 

Working from home opens up organisations to many new risks. What are they? 

It starts with people having to use their own equipment, so there is a sudden and massive expansion of BYOD (bring your own device). There are other factors too. At home there are distractions that an office environment can contain – pets, children, deliveries. Another issue is confidentiality: many people share their homes, so sensitive information on screens and in calls may also be exposed to the people they live with. 

People are tired and distracted, and so there is a perfect storm of technical issues, such as the expanding perimeter and remote working, and human issues which may well be new to the organisation. 

Some people have no real separation between work and personal life anymore. And so you can get people working at all hours, overworking, or working in suboptimal conditions. Organisations are having to address this in different ways. 

One solution is monitoring: knowing what people are doing and how productive they are being. When people are at home it is far harder to force them to work in a particular way. Things like security can get ignored. And as we move back to office work there may well be problems caused by the fact that people are used to working in new, flexible ways. The old rigidities of office life may no longer be acceptable. 

Has security training changed now that people are used to working from home? 

Broadly it’s going to be the same. The work hasn’t changed even if the tools to do it have. But this is a great opportunity to talk about security. In the past we have had a top-down approach, saying “This is how you must do things.” But now we are having to say “This is what we want to accomplish; you are the person doing it so how will you do it effectively and securely?” 

In other words, we need to shift the conversation away from “Do this” to “How do you want to be supported?” This means that awareness training still needs to be part of the equation, but it needs to be delivered as a conversation rather than a lecture. Organisations will have to give up some control and collaborate with employees to achieve their goals. 

How do we manage the issue of “over-privileged” users?  

Over-privileged users are people who are able to do more online than their role requires. The challenge is understanding what counts as too much privilege, given that innovation and flexibility are so important. The issue is not so much stopping over-privilege as understanding whether there are risks involved in people having too many privileges and, if so, how to manage those risks. 

When you run a small business, you need to give people as many privileges as possible – otherwise the managers end up doing everything. 

For larger organisations, the key may well be anomaly detection – identifying when someone is doing something unexpected online that they haven’t done before. People need different access at different times and the requirement is to measure the risk associated with that access and put in mitigations to reduce risk. Focussing on the user’s credentials is not enough. You need to look at the wider context of the activity. 
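The anomaly-detection idea described above can be illustrated with a minimal sketch: build a baseline of what each user normally accesses and when, then flag events that fall outside it. The class, event names, and six-hour time buckets here are illustrative assumptions, not a description of any product mentioned in the discussion.

```python
from collections import defaultdict

class AccessBaseline:
    """Hypothetical sketch: flag access events that fall outside a user's
    historical baseline of (resource, time-of-day) activity."""

    def __init__(self):
        # user -> set of (resource, 6-hour time bucket) pairs seen before
        self.seen = defaultdict(set)

    def observe(self, user, resource, hour):
        """Record a normal access event during a training period."""
        self.seen[user].add((resource, hour // 6))

    def is_anomalous(self, user, resource, hour):
        """An event is anomalous if this user has never accessed this
        resource in this time-of-day bucket before."""
        return (resource, hour // 6) not in self.seen[user]

baseline = AccessBaseline()
baseline.observe("alice", "payroll-db", 10)  # daytime payroll access is normal

print(baseline.is_anomalous("alice", "payroll-db", 11))   # False: same bucket
print(baseline.is_anomalous("alice", "payroll-db", 2))    # True: unusual hour
print(baseline.is_anomalous("alice", "source-repo", 10))  # True: new resource
```

In a real deployment the baseline would come from historical logs and feed a risk score rather than a hard block, in line with the panel's point that context, not just credentials, determines risk.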

How can the Human Factors Analysis and Classification System (HFACS) be adapted for cyber? 

HFACS was created for the US Navy to investigate the causes of air crashes. It’s easy to investigate whether a metal bolt on an aircraft has failed. But how do you investigate whether a human failed? Looking at people and how they work within an organisation is an important part of any security process.  

One thing that HFACS does is eliminate blame, even self-blame. It allows people to step away from the desire for retribution, understand why something happened, and look for ways of stopping the same thing happening again. It looks for patterns and the points of leverage where small changes might have a major impact on safety in the future. 

The Navy learned an important lesson when it started to use technology to try to engineer the human out of the process of flying planes. It’s the pilot who flies the plane, and the technology needs to support the pilot rather than replace them. People now understand that they need to make systems more human-centric, and that is where HFACS can help. 

There is a tendency to assume that everything is malicious. But is it? 

Curious users may do some unusual things. We have seen users “practising” their hacking skills on corporate networks as a way of understanding how hacking works – not a great idea, but not malicious. Rather than disciplining people or stopping them from doing things, it is better to advise them, asking “Have you thought about whether this might be dangerous?” or “If someone saw you doing this, could you justify it to them?” 

Helping people ask themselves the right questions is more effective than stopping them from doing things. In fact, if you look at the statistics, very few incidents are malicious; many more involve inappropriate but non-malicious behaviour. So organisations need to pay attention to their workers, and understand what is driving them and how they are being pressured. Find the root cause of a problem, rather than taking an incident at face value. 


teissTalk host Jenny Radcliffe was talking to Jordan M. Schroeder, Managing CISO at HEFESTIS, Robin Lennon Bylenga, Human Factors analyst, Jean Carlos, Group Head of Information Security at Nomad Foods, and Richard Cassidy, Senior Director of Security Strategy at Exabeam. You can access the recording of this teissTalk here.


All rights reserved Teiss Recruitment Ltd.