Zero Trust is not a shortcut to security
According to a recent Microsoft report, Zero Trust is now ‘the top security priority’ for 96% of the security decision makers interviewed, and 76% said they were currently in the process of implementing it. The same report notes that the biggest challenges to adoption are the perceived investment and effort required to put the principle into practice. This may sound familiar, but the reality of Zero Trust is that, for the most part, it’s nothing new.
Organisations don’t need a specialist suite of tools or managed services to implement the foundations of Zero Trust. The core principles are inseparable from the best practices that security firms have preached for almost a decade. As ever, the biggest hurdle is convincing and mobilising the business to make changes, which will have an impact on processes and users and require consensus and commitment at all levels.
By applying industry-accepted best practices across identity and access management, network segmentation, and asset inventory management, organisations can achieve defence-in-depth security. Coverage across these areas is vital to reduce an organisation’s overall susceptibility to threats and the likelihood of a breach, and, where prevention is not possible, to contain the ‘blast radius’ of a compromise.
Asset inventory management
Before implementing controls to protect the network, it is important to understand what it consists of, how it is laid out, and how an attacker might target it. Central to this is how an organisation inventories and manages its digital assets. Effective asset management helps defenders understand their attack surface relative to their threat landscape, allowing organisations to quickly identify and react to emerging threats, such as zero-day vulnerability exploits.
An organisation with effective inventory management will have the visibility to pinpoint exactly where instances of a vulnerable technology version are running across the estate, informing security monitoring, mitigation and patching activities and increasing agility and decisiveness in a crisis. But the criticality of an asset to the organisation is not the same as its criticality to an attacker. Attackers often leverage assets and services of low importance to the business, but which provide the functionality required for them to traverse the network and achieve their goal.
Therefore, prioritising only the assets that are important to the organisation leaves gaps in network visibility and security coverage. And without the necessary understanding of offensive security concepts, it is impossible to build and maintain an asset register in a way that is contextualised, and therefore useful, to security operations activities.
Then there is the problem of keeping track of assets. Legacy systems often rely on documents or spreadsheets to record and update inventory when assets are introduced, decommissioned, or changed across vast estates of physical and virtual servers. This can result in gaps in data and fewer components being tracked. While newer cloud-based services make it easier to track assets in real time, automated asset management can create too much data, which can be worse than too little if it cannot be satisfactorily ordered and searched.
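As a simple illustration of the kind of query a well-ordered register should support, the sketch below assumes a register exported as a JSON list of records with hypothetical fields (hostname, owner, technology, version) and filters it for instances of a vulnerable version. The field names and version numbers are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: querying an asset register for instances of a vulnerable
# technology version. Field names (hostname, technology, version, owner) are
# illustrative assumptions, not a standard schema.
import json

def find_vulnerable_assets(register_path, technology, vulnerable_versions):
    """Return register entries running a known-vulnerable version."""
    with open(register_path) as f:
        assets = json.load(f)  # expects a JSON list of asset records
    return [
        asset for asset in assets
        if asset.get("technology") == technology
        and asset.get("version") in vulnerable_versions
    ]

# Example: locate every asset still running an affected library release
# (technology name and versions here are purely illustrative).
affected = find_vulnerable_assets(
    "asset_register.json", "log4j", {"2.14.1", "2.15.0"}
)
for asset in affected:
    print(asset["hostname"], asset.get("owner", "unowned"))
```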
Identity and access management
Arguably, effective identity and access management is the most important of these factors, particularly in modern IT estates leveraging cloud services, where assets sit at or beyond the traditional perimeter and historical methods of segmentation are less effective.
Central to effective identity and access management is the principle of least privilege - ensuring that all entities can access only the information and resources necessary for their legitimate purposes, both inside and outside a network. In practice, this may look like a VPN protected by passwords and multi-factor authentication. By requiring an entity to demonstrate it has the appropriate access, even within a ‘trusted’ network segment, organisations can prevent external adversaries from reaching sensitive information while also preventing data from being shared with the wrong internal users. This requirement to verify rather than assume trust is likely the origin of the term Zero Trust.
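To make the principle concrete, the sketch below (with made-up role, resource and action names) shows a deny-by-default access check: an entity is granted an action on a resource only if that combination is explicitly allowed for its role, and everything else is refused, even inside a ‘trusted’ segment.

```python
# Minimal sketch of a deny-by-default, least-privilege access check.
# Roles, resources and actions are hypothetical examples, not a real policy.
ROLE_PERMISSIONS = {
    "finance-analyst": {("invoices-db", "read")},
    "hr-admin": {("hr-records", "read"), ("hr-records", "write")},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Grant access only when the (resource, action) pair is explicitly
    permitted for the role; anything not listed is denied by default."""
    return (resource, action) in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("finance-analyst", "invoices-db", "read")
assert not is_allowed("finance-analyst", "hr-records", "read")  # not granted
assert not is_allowed("contractor", "invoices-db", "read")      # unknown role
```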
A significant number of attacks leverage misconfigurations and the innate functionality of user directory services, such as Microsoft Active Directory, often in combination with cloud provider-specific directory services, to escalate privileges locally and on the network, traverse the environment and reach a point from which real harm can be done.
Given its importance to an attacker, effective identity and access management should be at the top of any security team’s priorities for building defence-in-depth. So why have organisations struggled to implement it? For many, usability rather than security has been the priority for their user directory management, and their current model has grown organically over time. Organisations that have yet to implement more secure identity management infrastructure, processes and tooling now find themselves unable to take the necessary steps without incurring significant operational disruption and expense. This can lead to insecure practices such as the use of a single set of administrator accounts for all users and services, the allocation of excessive privileges to standard users beyond those required to perform their role, the re-use of service account credentials and permissions, and default passwords that can be easily guessed.
Best practice guidance has sought to combat these issues, for example with Microsoft’s Enhanced Security Administrative Environment (ESAE), also known as the ‘Red Forest’ model and now superseded by the Privileged Access Strategy and Rapid Modernisation Plan (RAMP). These strategies broadly align with the concept of a tiered enterprise architecture, governed by multiple separate sets of administrator accounts, each used to perform specific functions and administer individual services and systems.
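As a rough illustration of the tiering idea (not Microsoft’s reference implementation), the sketch below flags sign-ins where an account is used outside the tier it belongs to, for example a Tier 0 domain admin credential appearing on a Tier 2 workstation. The account names, tier assignments and event format are assumptions for the example.

```python
# Illustrative sketch of checking tiered-administration hygiene: an account
# should only authenticate to systems in the tier it belongs to.
# Accounts, hosts and tier assignments are made-up examples.
ACCOUNT_TIER = {"da-jsmith": 0, "srvadm-jsmith": 1, "jsmith": 2}
HOST_TIER = {"dc01": 0, "app-srv-07": 1, "laptop-4411": 2}

def out_of_tier_signins(signin_events):
    """Yield events where the account's tier does not match the host's tier."""
    for account, host in signin_events:
        if ACCOUNT_TIER.get(account) != HOST_TIER.get(host):
            yield account, host

events = [("da-jsmith", "dc01"), ("da-jsmith", "laptop-4411"), ("jsmith", "laptop-4411")]
print(list(out_of_tier_signins(events)))  # [('da-jsmith', 'laptop-4411')]
```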
Network segmentation
Alongside identity and access management, effective network segmentation helps reduce the ‘blast radius’ of a breach, ensuring that an attacker landing at one point on your network does not result in a full-scale compromise. On a flat network with no separation between critical systems and services, an attacker can pivot and manoeuvre unopposed. Network segmentation seeks to hinder the adversary, slowing their ability to move laterally and affording defenders more time to react, contain the intrusion and prevent real harm.
A segmented network also means that the impact of a compromise is reduced: if a subnet can no longer be trusted, it can be isolated until purged. In some incidents, it is necessary to disconnect from the internet, deny staff access to resources, and collect and re-image every single computer. In a segmented network, the extent and impact of an attack can also be ascertained with greater accuracy during both triage and post-compromise recovery.
In practical terms, network segmentation can be achieved through separate domains, specialist firewall rules, VLANs with dedicated switches and routers, and Network Access Control (NAC) solutions. However, organisations often fail to implement effective network segmentation for reasons of cost, complexity, and integration.
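The sketch below illustrates the intent behind such controls rather than any specific firewall product: traffic between named network zones is denied unless an explicit rule allows it. Zone names, ports and the rule set are illustrative assumptions only.

```python
# Minimal sketch of a default-deny segmentation policy between network zones.
# Zone names, ports and the allowed flows are illustrative assumptions.
ALLOWED_FLOWS = {
    ("user-vlan", "web-tier", 443),   # browsers to internal web apps
    ("web-tier", "app-tier", 8443),   # web front end to application servers
    ("app-tier", "db-tier", 5432),    # application servers to the database
}

def flow_permitted(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Deny by default; permit only flows explicitly listed in the policy."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

print(flow_permitted("user-vlan", "web-tier", 443))  # True
print(flow_permitted("user-vlan", "db-tier", 5432))  # False - users cannot reach the database directly
```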
Resource costs aside, network segmentation can mean onerous increases in hardware costs for physical segmentation, while virtualised alternatives carry significant software expense and require expertise to implement correctly. Network segmentation is often a casualty of usability, as it introduces administration and orchestration challenges for the IT team, with complex underlying systems and workflows to implement. It also requires a clear understanding of business workflows, systems and services to avoid unproductive delays and inefficiencies.
Practical recommendations for a strong security foundation
Identity and access management, asset inventory management, and network segmentation are highly co-dependent and must be synthesised to yield maximum value. They also encounter many of the same challenges as elements of their implementation sit outside the traditional remit of the IT team and require buy-in and consensus across the organisation.
There are practical steps you can take to prepare the groundwork for implementing these best practices. Set up an interdepartmental working group with executive buy-in, so that the need for implementation is understood and the potential for short- to medium-term disruption or efficiency loss has been considered and accepted. Assemble a comprehensive asset and technology register in a manageable format that can be accessed and interrogated by the security team and refreshed regularly with as little manual input as possible.
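Where workloads run in the cloud, part of that refresh can usually be automated. The sketch below assumes AWS with the boto3 SDK and configured credentials, and uses an illustrative output record format; it pulls running EC2 instances into the register without manual data entry.

```python
# Sketch: refreshing part of an asset register automatically from a cloud API.
# Assumes AWS credentials are configured and the boto3 SDK is installed;
# the output record format is an illustrative choice, not a standard.
import boto3

def collect_ec2_assets(region="eu-west-2"):
    ec2 = boto3.client("ec2", region_name=region)
    assets = []
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            assets.append({
                "asset_id": instance["InstanceId"],
                "type": instance["InstanceType"],
                "state": instance["State"]["Name"],
                "owner": tags.get("Owner", "unassigned"),
                "ip": instance.get("PrivateIpAddress"),
            })
    return assets
```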
You should also map your business workflows and user journeys for different roles and departments and understand typical user behaviour patterns to identify atypical activity and out-of-scope systems. This should include both high-level mapping to identify broad interdependencies for functions and user roles, but also at a technical server-to-server or resource-to-resource level so that IT teams can understand the impact of resource segmentation and segregation.
Every permission held by an entity within the network should have a clear definition of why it is required, not just ‘we think we need it’. There are multiple ways that segregation can be achieved in practice, for example with completely separate accounts, ‘break glass’ accounts, and just-in-time access. An administrator with one user account should not have administrative privileges enabled all the time - instead, they should be activated when needed and for a set period of time.
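A minimal sketch of the just-in-time idea is shown below, with hypothetical account names and an arbitrary elevation window: administrative rights are granted for a fixed period and checked against an expiry time rather than being permanently enabled.

```python
# Minimal sketch of just-in-time administrative access: privileges are
# activated for a fixed window and expire automatically. Account names and
# the elevation windows are illustrative assumptions.
from datetime import datetime, timedelta, timezone

active_grants = {}  # account -> expiry time

def activate_admin(account: str, duration: timedelta = timedelta(hours=1)):
    """Record a time-limited elevation for the account."""
    active_grants[account] = datetime.now(timezone.utc) + duration

def has_admin(account: str) -> bool:
    """Admin rights are valid only until the recorded expiry time."""
    expiry = active_grants.get(account)
    return expiry is not None and datetime.now(timezone.utc) < expiry

activate_admin("adm-jsmith", timedelta(minutes=30))
print(has_admin("adm-jsmith"))   # True while the grant is live
print(has_admin("jsmith"))       # False - standard account never elevated
```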
Finally, evaluate your threat profile, considering:
- What motivations would an attacker have for targeting your business?
- In what ways can an attacker benefit by attacking your business, and what goals might they have once on your network?
- How can an attacker manipulate your digital assets?
- How can your business processes be abused using legitimate / intended functionality?
- How will an attacker traverse across your network to achieve their goals?
- What information and technology assets would present a risk to your business if compromised?
- Where are the most likely and pivotal entry points to your network?
- Which users or user types would pose the highest risk if compromised, or if they became malicious?
Once these activities have been completed, the logical methods of segmenting your network resources, administering your user accounts, and protecting your digital assets and technologies will be much clearer. From this point, it will be much easier to move toward implementing the best practice approaches described above, in terms of both technical security and business operations. But remember, change projects of this magnitude will fail without sufficient buy-in from stakeholders across the organisation, no matter how impressive and expensive the service or product is.