Could AI improve identity management and security?

Lee Painter

Lee Painter, CEO at Hypersocket Software, considers how Artificial Intelligence (AI) could enhance Identity and Access Management (IAM) and help protect IT networks from future security breaches.

It feels like barely a week goes by without a new data breach dominating the headlines. According to the Identity Theft Resource Center, data breaches in the USA rose to an all-time high in 2016, with a 40% increase being reported compared to 2015. It’s tempting to believe that these are a result of hackers operating from halfway around the world. However, the majority of cases are carried out much closer to home and are often due to a mix of poor password security, software vulnerabilities, human error, malicious insiders and the abuse of access and privileges.

Identity and Access Management (IAM) has a major role to play in countering this growing threat. For many organisations, IAM is already a crucial weapon in their cyber security arsenal; it's a way to mitigate data breaches as well as manage the additional risks that come with remote working and Bring Your Own Device (BYOD). The adoption of IAM solutions is set to gain even more momentum; a recent MarketsandMarkets research report predicts that the global market for these solutions will grow to $12.8bn by 2020, up from $7.2bn in 2015.

So how does it work? Identity and Access Management in action

IAM solutions enable a network or system to authenticate the identity of a user against a set of pre-prescribed credentials. Depending on the system being accessed, these can range from a simple username and password to digital certificates and physical tokens. In the last few years, biometric ID and passwords have gained traction. Biometric ID can range from fingerprints, iris scans and facial recognition, to keystrokes or even authentication based on heartbeats.
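To make the traditional model concrete, here is a minimal sketch of checking a user against pre-defined credentials: a password verified against a salted PBKDF2 hash. The usernames, passwords, store layout and iteration count are illustrative assumptions, not drawn from any particular product:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Derive a key from the password with PBKDF2-SHA256;
    # the iteration count here is illustrative
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Hypothetical credential store: username -> (salt, password hash)
_salt = os.urandom(16)
CREDENTIALS = {"alice": (_salt, hash_password("s3cret", _salt))}

def authenticate(username: str, password: str) -> bool:
    record = CREDENTIALS.get(username)
    if record is None:
        return False
    salt, stored = record
    # Constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(stored, hash_password(password, salt))
```

In a real deployment this password check would be only one factor, combined with tokens, certificates or biometric ID as the article goes on to describe.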

Traditionally, the strength of authentication required has depended on the sensitivity of the material being accessed, as well as the impact should these resources fall into unauthorised hands. Public information is likely to require little or no authentication, while proprietary or classified data and accounts with administrative privileges will require stronger authentication – preferably using multiple authentication factors.

While the above still holds true, recent thinking around best practice in IAM has moved on. The focus has shifted from authenticating identity and authorisation to controlling access. Working on the principle of least privilege means that every user – whether an individual, device, programme or process – is given access only to the resources needed to fulfil its role.

Least privilege is an approach that acknowledges how serious the insider threat is to businesses. Just because someone has established their identity as an employee with the right credentials, this should not mean unfettered access to company systems.

While this is sound in theory, in practice deciding who should have access to what, and when, can be difficult for organisations to implement, leaving their systems vulnerable. One issue with applying least privilege in IAM is that users are usually given access privileges based on their position in an organisation, but employees rarely fit neatly into single roles. They may need special one-time access, or each person fulfilling the same role might need slightly different types of access. Another challenge is that some organisations fail to extend the concept of least privilege across the whole organisation, and fail to monitor those classified as privileged users (e.g. systems administrators).
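As a rough illustration of least privilege with per-user exceptions, the sketch below denies everything not explicitly granted by a role or by a one-off grant. The role names and permission strings are hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical role -> permission mapping; names are illustrative
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "admin": {"read:reports", "write:reports", "manage:users"},
}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)
    # One-off exceptions for the "special one-time access" case
    extra_grants: set = field(default_factory=set)

def allowed(user: User, permission: str) -> bool:
    # Least privilege: deny by default, permit only what a role
    # or an explicit grant confers
    granted = set().union(
        *(ROLE_PERMISSIONS.get(r, set()) for r in user.roles),
        user.extra_grants,
    )
    return permission in granted
```

The deny-by-default shape is the point: an unknown role or permission simply yields no access, rather than falling through to an allow.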

The way forward with Artificial Intelligence

How might Artificial Intelligence (AI) help? So often with data breaches it's not the management of the identity that is the cause, but the transfer of credentials to an unknown party. While least privilege access control does afford some protection, there are clearly shortfalls. Identity management and access control have always been two sides of the same coin, but in the future AI will be the glue that binds them together to much greater effect.

Moving on from biometric passwords, it's not difficult to conceive that AI could identify a user with extra security by using sight and sound. Rather than checking against pre-defined credentials, a machine would be able to understand and confirm whether a person was who they claimed to be, using visual and aural clues. It could also learn when to grant access, and act accordingly. Permitting access on the basis of machine learning is the logical next step from biometric ID.

Though it's a relatively new technology, there are still risks associated with biometrics, as highlighted most recently by research from Japan's National Institute of Informatics. It suggests that most people's smartphone cameras now produce such detailed images that people posting peace-sign selfies on social media could unwittingly be giving their fingerprints to potential hackers.

AI also offers the potential for intelligent, immediate security to implement fine-grained access control. Just because a user proved who they were at log-on two minutes ago, using traditional passwords or even biometric ID, should the system continue to believe they are who they say they are? Taking clues from visual images and voices could help eliminate that doubt. AI systems could constantly monitor users as they move around the network, and behavioural factors and real-time risk analysis could also come into play.

Working within a user's access permissions, AI systems could monitor in real time any unusual, irrational or erratic behaviour. They could detect whether a user is trying to access a part of the system they wouldn't normally, or downloading more documents than they generally would. The rhythm of a user's keyboard and mouse movements could also be observed to identify irregular or uncommon patterns.
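A very simple version of this kind of behavioural check can be sketched as a statistical anomaly test on, say, a user's daily download counts; the threshold and sample data are illustrative assumptions, and a production system would use far richer signals:

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    # Flag today's count if it deviates from the user's own
    # baseline by more than `threshold` standard deviations
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold
```

The key idea is that the baseline is per-user: 80 downloads might be normal for one role and wildly out of character for another.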

Taking this a step further, it’s not inconceivable that insights from an individual’s online identity and activity – their social profile, groups they are part of, people they follow and websites they visit – could be used to determine a risk score. Drawing this data together, actions taken by the AI system could range from an alert being triggered, to specific areas of a corporate system being switched off for a user, to access being instantly revoked.
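The risk-score idea above could be sketched as a weighted sum of signals mapped to tiered responses. The signal names, weights and thresholds below are purely illustrative assumptions, not a real scoring model:

```python
# Hypothetical weighted risk signals; weights are illustrative
WEIGHTS = {
    "unusual_location": 0.3,
    "off_hours": 0.2,
    "bulk_download": 0.4,
    "new_device": 0.1,
}

def risk_score(signals: dict) -> float:
    # signals maps a signal name to a strength between 0.0 and 1.0
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in signals.items())

def respond(score: float) -> str:
    # Tiered responses matching the article: alert, restrict, revoke
    if score >= 0.7:
        return "revoke_access"
    if score >= 0.4:
        return "restrict_sensitive_areas"
    if score >= 0.2:
        return "alert_security_team"
    return "allow"
```

The graduated response matters: a single weak signal merely raises an alert, while several strong signals together cut access immediately.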

Of course, with such a level of monitoring, privacy concerns abound, and that is a whole new area for discussion. What is clear is that in future, the truly intelligent system will be able to know, understand, monitor and act, drawing on whatever clues it requires about a user. Identity and credentials will not be separate elements; an individual's identity will become their credentials. That should be the ultimate goal of any AI system.

Lee Painter is CEO of Hypersocket Software. You can follow him on Twitter @LeeDavidPainter and us @DigiCatapult. Learn more about Digital Catapult’s intelligent activities here
