The past three years have been challenging in many ways for the retail industry, and this uncertain reality isn’t going anywhere. The industry continues to face challenges: the cost-of-living and energy crisis, and an expected period of economic downturn, are understandably unsettling for all sectors, with retail hit particularly hard in a recession.
Retailers need to adopt smarter technology that not only supports their in-store operations, ensures an excellent customer experience and secures customer payments and business data, but also helps them transition to a truly omnichannel offering: improving customer experience and operational efficiency – all of which are crucial in these uncertain times.
Many retailers are embracing this challenge and using it as an opportunity to rethink the way they do business to ensure long-term survival and profitable growth. One key solution is to implement a Software-Defined Wide Area Network (SD-WAN).
SD-WAN offers a more flexible approach to connectivity and can potentially provide improved network performance, along with more granular visibility and cost savings compared to traditional network technologies such as Multiprotocol Label Switching (MPLS).
SD-WAN can also provide central orchestration and management and, along with that, network simplification. This can help Managed Service Providers deliver an improved service, reducing the time required to deploy sites (shops) and services – critical for retailers, who often need to respond quickly to market requirements. As SD-WAN is an overlay technology positioned on top of the underlay, it can be deployed onto existing connectivity, whether that be traditional MPLS, Direct Internet or even 4G/5G. This gives retailers real freedom of choice when procuring connectivity services.
Enhancing the in-store experience
The use of technology in-store is growing rapidly as consumers demand a far more integrated shopping experience, with the lines increasingly blurred between what happens in a physical retail store and what happens online. It is no longer enough to simply offer products – consumers want a shopping experience: from free charging hubs, streamed TVs, smart mirrors, video and digital signage to the scanning of codes, the pressure on retailers to make a consumer’s experience as engaging and integrated as possible is a clear trend, and one that shows no sign of slowing.
To fulfil customer expectations and utilise these technologies, retailers must first ensure that their networks are fit for purpose, reliable and secure. SD-WAN meets the challenge of enhancing the in-store experience by improving network uptime, performance and redundancy. It also gives the retailer the ability to support modern technologies and the latest cloud-based apps while prioritising business-critical applications such as payments.
SD-WAN gives retailers peace of mind that their payment systems won’t be affected by a lack of network resiliency or by the increased bandwidth demands of in-store digital features such as customer Wi-Fi and digital signage.
Brand protection
It is more important than ever for brands to protect their reputation, as consumers expect excellent customer service each time they shop. Online review sites and social media platforms give consumers an open forum to criticise a single bad experience, however big or small. Whether the issue is slow service or a problem at checkout or payment, a capable competitor could see those consumers shopping elsewhere, ultimately harming the brand’s reputation.
It’s therefore crucial for retailers to build an infrastructure that is resilient and able to prevent service disruptions regardless of circuit availability. SD-WAN can provide automatic failover when a service-impacting event is detected. Multiple circuits can be bonded or utilised at a single location to provide resilience and increased performance by using all available bandwidth. By constantly monitoring the circuits and configuring application service level agreements (SLAs), SD-WAN knows the optimal path for business-critical traffic at any given time. We all know how frustrating it is when the card machine doesn’t work at the till or the webpage crashes mid-checkout – SD-WAN can help ensure that the customer’s experience, in-store or online, isn’t hampered by network disruption or outages.
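To make the idea concrete, here is a minimal sketch in Python of the kind of SLA-driven path selection described above. The circuit names, latency and loss figures, and thresholds are illustrative assumptions, not any vendor’s actual logic:

```python
# Minimal sketch of SLA-driven path selection, as an SD-WAN device
# might apply it. Circuit names and thresholds are illustrative only.

SLA = {"payments": {"max_latency_ms": 100, "max_loss_pct": 0.5}}

circuits = [
    {"name": "mpls",      "latency_ms": 45,  "loss_pct": 0.1, "up": True},
    {"name": "broadband", "latency_ms": 70,  "loss_pct": 0.3, "up": True},
    {"name": "4g",        "latency_ms": 140, "loss_pct": 1.2, "up": True},
]

def best_path(app: str) -> str:
    """Return the healthiest circuit that meets the app's SLA,
    falling back to any circuit that is still up."""
    sla = SLA[app]
    candidates = [
        c for c in circuits
        if c["up"]
        and c["latency_ms"] <= sla["max_latency_ms"]
        and c["loss_pct"] <= sla["max_loss_pct"]
    ]
    pool = candidates or [c for c in circuits if c["up"]]
    return min(pool, key=lambda c: (c["loss_pct"], c["latency_ms"]))["name"]

print(best_path("payments"))  # -> "mpls"; fails over if MPLS degrades
```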
SD-WAN is proving to be an invaluable technology, whether that be for enhancing customer experience, enabling business growth or protecting the retailer and its customers from security threats – with both new and old challenges impacting the industry, it’s time for retailers to think differently throughout 2023.
In November 2020, over £40 million was committed to transforming the UK’s rail networks. In May 2021, Great British Railways: The Williams-Shapps Plan for Rail was presented to Parliament, outlining a thirty-year plan for improving the quality of transport across the country. Digital technology has become increasingly important in the rail industry, from high-speed internet access and mobile entertainment to mobile bookings and live train updates. Passengers want a comfortable and easy experience that improves their travel quality. With thousands of passengers using trains daily, secure and high-performance connectivity is crucial to deliver services, cope with traffic peaks, and connect remote stations.
As the Williams-Shapps Plan is rolled out, many rail providers have discovered that their legacy networks do not deliver the scalability, security, and compliance needed to ensure reliable performance across widely dispersed locations. One rail franchise operator addressed these challenges by implementing a secure Software-Defined Wide Area Network (SD-WAN) to connect its stations, depots, rail operations centre, and data centres.
SD-WAN provides dynamic control of every aspect of a network by decoupling the infrastructure from the service through software definition and network function virtualization. When successfully implemented, SD-WAN allows administrators to integrate multiple access technologies and manage them through a single Graphical User Interface (GUI), providing agility, resilience, and cost control. Daily operations can also benefit from SD-WAN’s capabilities, such as local internet breakout at each site and configuring suitable SLAs to voice and video traffic, improving the user experience when utilizing Microsoft 365 for collaboration.
Implementing SD-WAN can also help with mergers, acquisitions, and divestitures by overcoming the requirement for costly enterprise circuits and adopting almost any internet-based transport mechanism. Centrally orchestrating the corporate network and utilizing features such as application-based overlays and templates can dramatically improve network management quality, allowing IT teams to react more quickly to any restructuring of the organization.
While the benefits of a successful digital transformation with SD-WAN as the foundation are huge, the challenges involved in such projects should not be underestimated, especially for critical services like rail, where even minor disruptions can have serious consequences. Legacy networks are often highly complex, delivered by multiple providers and based on hardware approaching end-of-life. Engaging with a trusted technology partner with a proven track record in similar projects across the sector can help operators throughout the deployment project and beyond, ensuring expected outcomes are delivered and passengers are provided with a fully integrated system that improves the user experience, strengthens security posture and facilitates growth and the adoption of new services.
Looking for a solution to enhance your rail operations and prepare for any challenge?
Xalient has helped national rail operators navigate obstacles with our full turnkey SD-WAN solution. Our approach combines Silver Peak technology with our consultancy, design, monitoring, and management expertise to create an agile, high-performance network. Read our case study to learn how we helped a major rail franchise operator improve performance and stability, add flexibility, and improve the customer experience.
A year of extremes
2022 was a year of extreme complexities. With the post-pandemic and Brexit fallout, cost-of-living rises and inflationary pressures, geo-political issues, the ongoing climate crisis, supply chain shortages and growing cybersecurity and data security threats, it was undoubtedly another unprecedented year. In fact, ransomware set annual records again, with new ransomware strains emerging. Additionally, cloud adoption continued to grow, while the IT jobs market experienced significant skills shortages. As we look forward to the start of a new year, what trends are on the horizon in 2023, and what issues will organisations be grappling with?
Right-sizing multi-cloud for your environment
In the year ahead, moving to the cloud and undergoing digital transformation initiatives will continue to be of the utmost importance to remain agile, modern and competitive. As workforces utilise hybrid working for the long term, environment modernisation will continue to be a priority. However, a challenge many organisations are still working through is how to deal with legacy networks and technologies, and how best to right-size their cloud or multi-cloud environment for their requirements. Some organisations have found themselves dealing with large and unpredictable cloud egress bills and consequently having to look at how best to right-size their cloud infrastructure to combat this.
A shortage in skills means organisations will look to automate
The IT skills shortage has prompted companies to outsource more services as staff attrition continues to be a challenge. Hybrid working means employees have more choice about who they work for, while the general skills shortage is exerting upward pressure on wages. While Forrester predicts that global tech spending will rise, hitting $4.8 trillion in 2023, the current skills shortage may delay some of those IT programmes, which is why talent is a top challenge facing CIOs. This chronic talent shortage is pulling the profession into a wave of change: CIOs must lead their organisations to adopt innovative methods for attracting, hiring, retaining, and developing employees. Organisations will also look at how best to employ automation to drive maximum efficiency and alleviate pressure. Automation is already resolving many daily issues faster than traditional manual approaches. Many mundane tasks require manual input and consume a great deal of IT teams’ time; automation can cut through these, speeding up and streamlining processes, bringing in efficiencies and releasing precious resources for higher-value tasks, so that skills are better utilised and employees feel challenged.
IT budgets are likely to feel the squeeze
IT budgets are likely to come under pressure as businesses tighten their belts amid continued rising costs. IT leaders will need to put forward strong business cases to ensure the value of crucial infrastructure modernisation is heard across the business, and that budget constraints don’t hold such initiatives back. They will need to show how projects can drive efficiencies, competitive advantage, and cost savings. Network security budgets, however, are likely to expand: budget squeezes will not come at the expense of network security as organisations recognise how critical it is that their networks are secure.
Accelerated adoption of Zero Trust technologies and services
Adopting a zero-trust approach to networks and security will continue to be a priority in the year ahead. It will be especially relevant to organisations tackling critical projects such as M&A and divestitures as they grapple with the challenges of economic recession. In these environments, zero-trust can be used as a catalyst to accelerate the benefits of separation or merger so that organisations can get a head start on modernisation and ensure compliance, while also ensuring their security posture is strong. According to Gartner, zero-trust network access security is forecast to grow by 31% in 2023 — up from less than 10% at the end of 2021. However, adversaries will deploy new technologies to overcome zero-trust defences and increase their success rate in future attacks.
According to IBM’s 2022 Cost of a Data Breach Report, the average cost of a breach rose to USD 4.35 million in 2022, climbing 12.7% from USD 3.86 million in the 2020 report. Additionally, a stunning 83% of organisations surveyed reported having incurred more than one data breach. Organisations will therefore need comprehensive threat intelligence, monitoring and alert-detection solutions in place, including endpoint device security. They will also need a holistic approach to zero trust with identity at its core – an end-to-end framework that ensures integration, efficiency, and a network strong enough to withstand cyberattacks.
Observability will become the watchword in 2023
AI, ML and observability solutions that take AI to the next level will be paramount, helping organisations extract more actionable insights and predictions from their data for improved reporting and analysis. This is where true AI solutions will shine – ones that bring intelligence and richer observability rather than simple monitoring. Observability is about correlating multiple aspects, gathering context and analysing behaviour. That correlation enables applications to operate more efficiently and identifies when a site’s operations are sub-optimal, with this context delivered to the right person at the right time. A high volume of alerts is thus transformed into a small volume of actionable insights.
Without a doubt, 2023 will be a challenging year, but there will also be opportunities for innovation and growth in certain sectors. This is where working with a cost-effective partner will be critical; a trusted partner that can rapidly pivot, innovate and adapt as requirements and market conditions evolve.
How do we move from network monitoring to proactive observability? This has been a challenge for most network teams for decades. One of the problems they face is the vast volume of data they deal with, and the lack of time and skills to interpret it.
Two decades ago, that data was generally contained, with the majority of traffic understood by network teams. Today, the explosion of cloud apps and remote working has made understanding traffic a significant challenge. It is not just the volume of data but the complexity of that data that makes the challenge hard to overcome.
So, where do we start? To find out, Enterprise Times talked with Stephen Amstutz, Head of Strategy and Innovation at Xalient. Amstutz believes that the move to software-defined networking offers the chance of greater observability of data. He talks about the gains from having greater granularity into how applications consume bandwidth.
But this is not just about utilisation. Amstutz says, “Not only are we getting utilisation statistics, but we’re also getting all of the metadata that goes along with that, so we know what applications are being used and consumed, and we know what users are consuming those applications. We’re able to much more effectively understand how the network is being used.”
That understanding allows an organisation to set its Quality of Service metrics to prioritise key applications. It also highlights where legacy applications are still in use – a critical consideration when companies are moving to the cloud.
To hear more of what Amstutz has to say, listen to the podcast here: Can AI get you from network monitoring to proactive observability? – (enterprisetimes.co.uk)
Over the past decade, the shift from traditional IT infrastructure to cloud-based computing has been rapid, with many companies embracing cloud migration. In the coming years, cloud services will dominate – they are already quickly overtaking traditional on-premise, in-house IT systems as a reliable, scalable, cost-effective IT solution. Here we look at how cloud transformation can help address some of the challenges businesses face when entering the early stages of growth and scalability.
One of the most significant benefits of cloud transformation is the flexibility it offers businesses, particularly where hard-earned growth is just beginning to take off. Traditional IT infrastructures face challenges that become extremely costly in both time and money as a company grows.
Let’s take an example: as the number of employees increases, so does the volume of traffic on the network and data usage. With a traditional IT infrastructure, the only solution is to rent or purchase additional hardware, which can be costly in terms of both the initial acquisition and ongoing maintenance. Cloud computing, by contrast, offers on-demand access to virtually unlimited storage and server resources, so you can scale up or down depending on the level of demand.
Moreover, it is more cost-effective than traditional IT infrastructure thanks to consumption-based pricing. With cloud-based services, you only pay for what you use – much as you pay your utility bills – and the decreased downtime means enhanced workplace performance and increased profits in the long run. Cloud also allows businesses to support a hybrid working environment, where applications are available from anywhere and employees can be as productive on the train as they are in the office.
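As a rough illustration of consumption-based pricing, the following sketch compares provisioning fixed capacity for an annual peak with paying only for average usage. All figures are hypothetical, chosen purely to show the arithmetic:

```python
# Hypothetical numbers purely to illustrate consumption-based pricing
# versus provisioning fixed capacity sized for the annual peak.

peak_tb, avg_tb = 50, 18          # storage needed at peak vs on average
fixed_cost_per_tb_month = 30      # owning/renting hardware sized for peak
cloud_cost_per_tb_month = 23      # pay only for what is actually used

fixed_annual = peak_tb * fixed_cost_per_tb_month * 12
cloud_annual = avg_tb * cloud_cost_per_tb_month * 12

print(f"Fixed capacity: ${fixed_annual:,}/yr")   # $18,000/yr
print(f"Pay-as-you-go:  ${cloud_annual:,}/yr")   # $4,968/yr
```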
However, the move to the cloud isn’t without its challenges.
Infinite scale brings with it multiple security challenges. As each new virtual server or appliance is deployed, with potentially critical data stored, how do you ensure that access to that data is correctly managed? How do you ensure that the correct security policy is applied to every new cloud workload? With cloud applications inherently accessible from anywhere, how do you police an equally infinite security perimeter? How do you monitor performance and ensure user experience on a platform you no longer manage?
How can a cloud security platform help you transition to the cloud?
A cloud-native platform can scale with demand and is consumed by businesses in the same predictable model as any other SaaS or cloud product. It can provide all the benefits one would expect from a cloud service: centralised management, resilient architecture, global coverage and consistent application of policy, whether the workload is on-premise, cloud or SaaS-delivered.
As with many cloud services, time to value is short, allowing you to deploy quickly and reap the benefits immediately, providing secure access to your cloud workloads from day one, and releasing your teams to concentrate on the task at hand – migrating everything else.
Internet Access secures access to the internet, whether the user is on-premise or working remotely, with policy managed and delivered centrally but service provided via a distributed cloud model to maximise performance and user experience – securing your perimeter at cloud scale.
Cloud Security Posture Management can help you to understand how your cloud platforms have been (and should be) deployed to ensure that your move to the cloud isn’t exposing security flaws, and Private Access enables you to minimise your attack surface and move towards a Zero Trust architecture for access to your cloud applications.
A Digital Experience solution provides detailed information on the end-user experience, giving support teams deep insight into cloud-delivered applications so they can pinpoint issues and resolve them quickly – ensuring your users fully enjoy the benefits of the cloud.
Click here to find out how Xalient can help to deliver your transformation.
Written by Stephen Amstutz, Head of Strategy and Innovation, Xalient
In today’s world, the volume of data and network bandwidth requirements are growing relentlessly. So much is happening in real-time as businesses adapt and advance to become more digital, which means the state of the network is constantly evolving. Meanwhile, users have high expectations of applications: quick loading times, a visually advanced look and feel, feature-rich content, video streaming and multimedia capabilities – all of which devour network bandwidth. With millions of users accessing applications and mobile apps from multiple devices, most companies today generate seemingly unmanageable volumes of data and traffic on their networks.
Networks are dealing with unmanageable volumes of data
In this always-on environment, networks are heavily loaded, yet organisations still need to deliver peak performance to users with no degradation in service. Traffic volumes keep growing and burst networks at peak hours, much like the M25: no matter how many lanes are added to the motorway, there will always be congestion during the busiest periods.
As an example, we’re seeing an increasing need for rail operators’ networks to handle video footage from body-worn cameras, deployed to cut down on anti-social behaviour on trains and at stations. This directly impacts the network: daily uploads of hundreds of video files consume bandwidth at a phenomenal rate, yet the operators still need to go about their day-to-day operations while countless hours of footage are uploaded and processed.
This is a good example of where AI and ML can and are helping organisations take a proactive stance on capacity and analyse whether networks have breached certain thresholds. These technologies enable organisations to ‘learn’ seasonality and understand when the peak times will be, implementing dynamic thresholds based on the time of day, the day of the week and so on. AI helps to spot abnormal activity on the network, but this traditional use of AI/ML is now starting to advance from ‘monitoring’ to ‘observability’.
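A minimal sketch of what ‘learning’ seasonality can look like in practice: compute a dynamic threshold for each (day-of-week, hour) bucket from historical utilisation, then flag readings that exceed it. The data shape and the mean-plus-three-standard-deviations rule are illustrative assumptions; production AIOps systems use far richer models:

```python
# Sketch: per-(weekday, hour) dynamic thresholds from historical data.
from collections import defaultdict
from statistics import mean, stdev

def dynamic_thresholds(samples):
    """samples: iterable of (weekday, hour, utilisation_pct) tuples."""
    buckets = defaultdict(list)
    for weekday, hour, util in samples:
        buckets[(weekday, hour)].append(util)
    # Threshold = mean + 3 standard deviations for that time bucket.
    return {
        key: mean(vals) + 3 * (stdev(vals) if len(vals) > 1 else 0)
        for key, vals in buckets.items()
    }

history = [(0, 9, 62.0), (0, 9, 58.5), (0, 9, 66.1), (0, 3, 11.2), (0, 3, 9.8)]
limits = dynamic_thresholds(history)

def is_anomalous(weekday, hour, util):
    return util > limits.get((weekday, hour), float("inf"))

print(is_anomalous(0, 3, 45.0))   # True: unusual for a quiet overnight slot
print(is_anomalous(0, 9, 64.0))   # False: normal for a Monday morning peak
```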
So, what is the difference between the two?
Monitoring is linear in approach: it informs organisations when thresholds or capacities are being hit, enabling them to determine whether networks need upgrading. Observability, by contrast, is about correlating multiple aspects, gathering context and analysing behaviour.
For example, where an organisation might monitor 20 different aspects of an application to keep it running efficiently and effectively, observability takes those 20 signals, analyses the data and presents a diagnosis with various scenarios. It leverages rich network telemetry to generate contextualised visualisations, automatically initiating predefined playbooks to minimise user disruption and ensure quick restoration of service. The engineer isn’t waiting for a call from a customer reporting that an application is running slow; nor do they need to log in, run a host of tests and painstakingly wade through hundreds of reports – instead, they can quickly triage the problem. Network engineers can also proactively explore different dimensions of these anomalies rather than getting bogged down in mundane, repetitive tasks.
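To illustrate the difference, here is a toy example of correlation: several raw alerts sharing a probable root cause are collapsed into a single actionable insight routed to the right engineer. The alert fields, grouping rule and routing target are assumptions made for the sake of the sketch:

```python
# Toy illustration: many raw alerts are grouped by a shared attribute
# and collapsed into one actionable insight for the right engineer.
from collections import defaultdict

alerts = [
    {"site": "leeds-01", "signal": "wan_latency_high",  "severity": 3},
    {"site": "leeds-01", "signal": "voip_mos_degraded", "severity": 4},
    {"site": "leeds-01", "signal": "app_timeouts",      "severity": 4},
    {"site": "york-02",  "signal": "cpu_spike",         "severity": 2},
]

def correlate(alerts):
    by_site = defaultdict(list)
    for a in alerts:
        by_site[a["site"]].append(a)
    insights = []
    for site, group in by_site.items():
        if len(group) >= 3:  # several symptoms at once suggests one root cause
            insights.append({
                "site": site,
                "summary": f"{len(group)} correlated symptoms, likely a "
                           "single underlying network event",
                "severity": max(a["severity"] for a in group),
                "route_to": "network-oncall",
            })
    return insights

for insight in correlate(alerts):
    print(insight)   # one insight instead of three separate alerts
```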
This delivers clear benefits to the business by reducing the time teams spend manually sifting through and analysing reams of data and alerts. It leads to faster debugging, more uptime, better-performing services, more time for innovation and, ultimately, happier network engineers, end-users and customers. By correlating multiple activities, observability enables applications to operate more efficiently and identifies when a site’s operations are sub-optimal, with this context delivered to the right engineer at the right time. A high volume of alerts is thus transformed into a small volume of actionable insights.
Machines over humans
Automating this process, and using a machine rather than a human, is far more accurate because machines don’t care how many datasets they must correlate. Machines build hierarchies, and when something in that hierarchy impacts something else, the machine spots the behaviour and finds the fault. The more datasets that are added, the fuller the picture that builds for engineers, who can then determine whether further action is required.
Let’s touch on another real-life example. We are currently in discussions with a large management company that owns and manages petrol station forecourts. It has 40,000 petrol stations, each with roughly 10 pumps, equating to 400,000 petrol pumps across the US. Its current pain point is a lack of visibility into the petrol pumps and EV chargers connected to the network. As a result, when a pump or charger is not working, the company might only become aware of it following a customer complaint, which is far from ideal.
The network telemetry we are able to gather, combined with that behavioural analysis, means we could provide business insights, not just network insights. If a petrol pump stops generating traffic, that can trigger a maintenance request to go and fix it. This isn’t a network problem, but the network traffic can be leveraged to detect the business problem. The use case here is fuel pumps and EV chargers, but imagine how many other network-connected devices in factories or production facilities worldwide could be used in a similar way.
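A simple sketch of that pump example, assuming a hypothetical telemetry feed of per-device byte counters: if a pump’s counter stops moving for too long, a maintenance ticket is raised. The device names, the 30-minute window and the ticketing call are placeholders, not a description of any customer’s actual system:

```python
# Sketch: turning network telemetry into a business insight. If a
# device's traffic counter stops moving for too long, raise a ticket.
import time

SILENCE_LIMIT_SECS = 30 * 60   # no traffic for 30 minutes = suspect fault

last_seen_bytes = {}           # device -> (byte_counter, timestamp)

def observe(device: str, byte_counter: int, now: float) -> None:
    prev = last_seen_bytes.get(device)
    if prev is None or byte_counter > prev[0]:
        last_seen_bytes[device] = (byte_counter, now)  # traffic is flowing
    elif now - prev[1] > SILENCE_LIMIT_SECS:
        raise_maintenance_ticket(device)

def raise_maintenance_ticket(device: str) -> None:
    # Placeholder: in practice this would call a field-service API.
    print(f"Ticket raised: {device} has produced no traffic for 30+ min")

observe("pump-0421", 1_000_000, time.time() - 3600)  # baseline reading
observe("pump-0421", 1_000_000, time.time())         # counter unchanged -> ticket
```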
Getting actionable insight quickly
This is where our AIOps solution, Martina, comes in: it predicts and remediates network faults and security breaches before they occur. It also helps to automate repetitive and mundane tasks while proactively presenting a problem to an organisation in a contextualised and meaningful way, instead of simply batting it across to the customer to solve. Martina surfaces issues with recommendations for tackling them, ensuring that organisations always have high-performing, resilient networks. In essence, it makes the network invisible to users by providing customers with secure, reliable, performant connectivity that just works. It provides a single view of multiple data sources and easily configurable reporting, so organisations can get insights quickly.
Executives and boards want their network teams to be proactive. They won’t tolerate poor network performance, and they want any service degradation, however slight, to be swiftly resolved. This means that teams must act on anomalies, not thresholds, understanding behaviour so they can predict and act ahead of time. They need fast mean time to detect (MTTD) and mean time to repair (MTTR), because poor-performing networks and downtime damage brand reputation and ultimately cost money. This is where proactive AI/ML observability really comes into its own.
We are working with the Product and Process Innovation (PAPI) project as part of our Martina expansion. The PAPI project is part-funded by the European Regional Development Fund as part of the European Structural and Investment Funds Growth Programme 2014-2020, in partnership with the Northern Powerhouse, and delivered by the University of York.
How does Identity and Access fit into the Zero Trust Framework?
Identity and Access is a vital component of Zero Trust, crucial to securing business data, keeping customers confident and protecting employees. Any high-level security model really breaks down into a question of trust: who and what can I trust – the employee, the devices, and the applications the employee is trying to connect to? In the middle is the network, but today the corporate backbone is the internet. Identity is the fundamental control over who has access to your company data, from where, and using what device.
With Zero Trust, we assume everything on the internet holds risk, and that no user or application should be trusted regardless of whether the person or entity is “inside” or “outside” an organisation’s perimeter. Instead, we must continuously and rigorously verify anything and everything before granting access.
Most organisations have some sort of Identity solution – especially with cybercrime escalating, and a record-breaking number of data breaches of increasing sophistication and severity taking place year-on-year.
Organisations with less sophisticated tools, or those not making full use of their solution – for example, implementing only Multi-Factor Authentication (MFA) or just using basic credentials to access a VPN (Virtual Private Network) – represent a significant percentage of the victims targeted, especially during the pandemic. As a consequence, the Zero Trust model has quickly become a fundamental security requirement rather than a ‘nice-to-have’.
One would expect this to be high on the list of priorities for an organisation with a vastly distributed workforce. Such a company may have accumulated many tools that do the same thing – VPN clients, Endpoint Detection and Response (EDR), antivirus, remote access and so on – and, as a result, identified a gap in its security posture and policy. Xalient’s Identity and Access module consolidates and manages these tools so that the user has the correct experience from the get-go. Furthermore, the framework utilises identity verification, authentication factors, authorisation controls and other IDAM and cybersecurity capabilities to verify a user before any level of trust is granted.
Organisations are looking for a secure solution for their applications, devices and users, which is why the Zero Trust model becomes a fundamental component, regardless of where those users are located.
The shift to remote working
With remote and hybrid working now commonplace, there has been a mass migration away from the secure perimeter, placing more emphasis on the consumption of cloud services. Trying to extend the secure perimeter to the location of the user and the application means businesses must be ready to implement Zero Trust for all types of users – not only employees but partners, contractors and customers too.
At the same time, organisations need to harness the power of applications. Employees need to be highly productive, with fast and easy access to the applications they need to do their jobs. This is not only essential but fundamental to becoming a modern digitised business. To enable this environment, businesses need reliable network access from the edge to the core, and security based on a Zero Trust framework to ensure robust, efficient and secure access to essential business applications from wherever employees and users are located.
What does Xalient’s Identity and Access Module Encompass?
As part of Xalient’s Zero Trust Framework, the Identity and Access module supports remote and branch/on-prem, cloud, and cross-domain technology. The module focuses on answering the questions of trust – specifically user, device and location. We offer a consultative approach drawing on significant technology expertise and experience, backed by a world-class Managed Services offering. Our dedicated team is experienced with industry-leading IDAM, EDR and NAC (Network Access Control) solution vendors and has the skills required to design, build and manage your global Identity and Access Management solution for you. Our certified consultants and administrators can advise on how to ensure that only the right people access your network, and that they do so efficiently and securely, wherever they are in the world.
Today’s enterprises conduct business and use digital technologies in ways that are constantly evolving. This digital transformation is making traditional perimeter-based cybersecurity infrastructure redundant. The days when every user and every device sitting inside the organization’s premises or firewall could be automatically trusted are over for good.
For decades, the enduring principle in corporate IT policy was the ‘castle and moat’ approach to securing user access to applications. Everything that needed to be accessed securely sat inside the castle and once the drawbridge was up and the castle was protected by its moat (or firewall), nothing unknown could get in or out, and everyone could trust each other. However, over the last 10 years applications and workloads have moved to the cloud, and users are increasingly accessing them remotely via the internet. This means that traffic is going from a user that was sitting inside the castle to an application that now sits outside. The network is no longer a secured enterprise network. Instead, it is the unsecured internet and the solutions employed to keep attackers out are no longer effective.
Megatrends
In addition to the technological changes in the way enterprises operate today, there have also been massive global macroeconomic shifts that have fundamentally changed the way companies hire staff and engage with customers around the world. This globalization of business and trade is an unstoppable trend and has been accelerated by the pandemic, with employees potentially working anywhere. The result is that organizations have been looking carefully at how they solve the problem of allowing employees – wherever they are located physically – to access mission-critical applications securely.
In the pre-Covid era, remote work was not uncommon, but now that working from home has become widespread, security technologies and processes based purely on established geographic locations are becoming irrelevant. Overnight in some countries, tens of thousands of workers have gone from the office to being at home where they are sharing broadband connections with family, friends, and gamers. With a remote workforce, the use of potentially unsecured Wi-Fi networks and devices increases security risks exponentially.
Not only are employees’ work-from-home setups and environments less secure than the office, but their broadband connections are weaker too, so their experience of trying to access office applications is suboptimal. Their Wi-Fi router may not be configured for WPA2; their IoT devices on the home network, like baby monitors or smart thermostats, run a hodge-podge of security protocols, if any; and all of this is funnelled through a corporate VPN that slows traffic down even more. It’s not difficult for a threat actor to work out that an organization is using a centralized firewall and then launch a DDoS attack that threatens to take down the business.
Zero Trust verification
In this environment, more and more enterprises are now adopting a Zero Trust approach. Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside their perimeter, and instead must verify anything and everything trying to connect to their systems before granting access. Without an overarching system like a Zero Trust framework, employees working outside a secured environment can no longer be verified – or controlled. Zero Trust employs least-privilege and “always-verify” principles, offering complete visibility within the network, whether in data centers or the cloud.
CIOs, CISOs, and other corporate executives are increasingly implementing Zero Trust as the technologies that support it move into the mainstream; as the pressure to protect enterprise systems and data grows significantly; and as attacks become more sophisticated. By removing the centralized approach to policy enforcement and moving towards more of a distributed SaaS model where security is delivered via the cloud – coupled with encryption and SD-WAN technology – identifying the user and providing access to the applications they want becomes far more effective and cost-efficient compared to MPLS. This approach enables distributed teams to collaborate and talk to each other without requiring centralized locations and security postures that mandate VPNs, with associated costs and poor performance issues.
Challenge and benefits
It is undoubtedly a challenge for most large enterprises with established IT teams, which have worked on a ‘trust but verify’ basis using corporate firewalls and VPNs, to change direction and move towards a Zero Trust basis; but in our view, adopting this approach brings further benefits. In a Zero Trust environment, security controls are deployed on the assumption that the network is already compromised: no unauthorized processes or applications are allowed to execute, and authentication is required for access to data.
With no network perimeter for the enterprise to manage, users can be anywhere and on any device. The devices that workers use are less likely to be ones assigned by the employer. Employer-owned laptops and phones are traditionally managed, patched, and kept up to date with security tools and policies. However, with everyone working remotely, employees may forget basic cyber hygiene and start to use their own devices to access work networks or apps. They could be using their work laptops to shop online between Zoom calls. Even if Zero Trust security can’t force employees working at home to use work devices only for work, it can contain the potential for a security breach, because the fundamental “trust nobody; verify everything” rule enforces access controls at every point within the network.
If the enterprise moves to a managed cloud or even hybrid cloud platform and all policy is managed from a single point across the whole organization, CISOs can customize and improve the user experience by only giving employees access to the applications they need to work with, thus reducing latency on remote connections. From a user perspective, they get the quickest access to the apps they need the most.
More cost-effective than MPLS
Another benefit for CISOs is a reduction in Capex when compared to MPLS networking. Historically, businesses have made huge investments in centralizing firewalls and maintaining all the software and hardware required to support their security policies. All of this expense falls away as cloud security is delivered through a SaaS platform with on-demand pricing.
SD-WAN is a core component of Zero Trust and also makes managing it easy, allowing IT to avoid complex network-security architectures and removing the convoluted connections between appliances and users, while providing the highest security through a cloud-delivered model. Instead of appliances, all traffic is securely connected through a cloud-delivered service, whatever the connection type – mobile, satellite or home broadband. And because the intelligence of the network is software-driven and orchestrated centrally, it can manage the user’s journey across an insecure internet to the location of the application and optimise application traffic, making the experience vastly more efficient and less costly. Crucially for the enterprise, not only is this all done securely, using encryption that ensures integrity between the user and the application, but SD-WAN also delivers more agility and choice than legacy MPLS.
Without a doubt, in 2022 security will be high on the C-suite agenda. With intensifying trade disputes, an escalating threat landscape, a highly distributed workforce, supply chains stretched to breaking point by the pandemic, and extra pressure exerted by the ongoing effects of Brexit and other escalating geo-political issues, having a secure, productive, agile and cost-effective security framework in place will be paramount.
You did it. You bought Zscaler and now the cloud transformation journey is before you. Now what? More specifically, when you look back a year from now, how will you measure the progress and, more importantly, how well positioned will you be for the years to come? Kevin Peterson, Xalient’s Senior Cyber Security Strategist and former Director of Security & Network Transformation at Zscaler, guides you through Xalient’s best practice guide to a successful Zscaler deployment…
If you have a rather large deployment, then chances are you also paid for Zscaler’s Deployment Advisory Services (DAS). While that’s the obvious next step, it’s really only the very beginning. Where you go beyond the DAS threshold will define just how great your success story will be. To shed some light on what that can look like, and how you might become one of the showcase installations, here’s how Xalient, a top Zscaler partner, covers the entire Zscaler deployment journey.
Top Tip: Even if you don’t use Xalient for your implementation, there’s no harm in mapping this as best you can to your own capabilities.
Phase 1: Zscaler’s Deployment Advisory
Whether you call it baselining, onboarding or orientation (all are fitting), this first phase is all about helping you reach, at the very least, a Minimum Viable Product (MVP) in the shortest time possible. Most estimate this to be about 25% of their core needs, which is a good estimate. The goal is to get you comfortably into that MVP sweet spot for the early stages of your deployment, as dictated by the level of DAS you purchased. It’s as simple as that.
“To DAS or not to DAS?” is not even a question here. If Zscaler and/or one of their elite deployment partners recommends that you add this to your installation, find the budget and do it. You won’t regret it. I personally haven’t seen a project fail when DAS has been at the forefront; in fact, the best major implementations I can recall have been in tight partnership and alignment with the relevant playbooks.
To be extra clear, you will NEVER hear anyone at Xalient say you should forego a well-positioned DAS recommendation and replace it with a similar advisory service. What you will hear and see from our leadership position is that having an overlay professional services consultant (aka a Zscaler coach, managed by Xalient) can provide exponential value in the form of much faster and more thorough implementations.
Xalient Best Practice: Ask your Zscaler sales rep and the deployment partner to agree that DAS is a fit for this new installation (or major upgrade) and then supplement it with a more comprehensive professional services coaching engagement. The next phase will show why it matters.
Phase 2: Growth 25%-75%
As the baseline orientation and onboarding comes to an end, the big rollout is upon us: it’s time to ramp up to 75% of the deployment in partnership with your Xalient Professional Services Coach. Right now, you probably have two questions on your mind:
1) How is 75% calculated and measured?
Honestly, even for the best of us it’s somewhat arbitrary. It could be based solely on the percentage of licenses deployed, or blended with a checklist of features to be implemented. But before things even start, everyone knows what the target looks like. The next question explains why it doesn’t really matter that much.
2) Why just 75%?
Our goal is to get your organization rolled out as fast as possible so that we can move on to the next great customer. As projects near completion, things tend to slow down as people are reassigned to other projects. This hurts for a number of reasons, such as not taking full advantage of the knowledgeable resources available in the first three-quarters of the deployment. By concentrating our core involvement in that first three-quarters, we push you to be ready earlier. That’s what the business wants.
Xalient Best Practice: Avoid ‘staff augmentation’ approaches. You can go and ask an army of recruiters to help you find a Zscaler engineer to join your team and it likely won’t deliver the results you are after. Many organizations think they want the default “contractor”, when what they really need is a program built for speed, depth-of-knowledge, and accuracy. When done right, everyone succeeds faster. And the scalability and resilience offered by a service offering is exponentially more capable than any single person.
Phase 3: 75% – Infinity
This is where it really gets fun! You have been highly successful up to this point and Xalient has you ready to take it across the goal line…yourself. And you absolutely should want that personal and professional satisfaction.
But you are also, quite understandably, nervous at the prospect of losing your daily coaching. Have no fear, there are 3 key choices you can make at this point to assure your future success.
Decision Time
Xalient Best Practice: All customers are destined for a managed service. It could be your own in-house model, outsourced, or a combination of the two. Just don’t think you have to take it all on yourself, as there’s a lot of value in having SLA-driven and backed services to keep things on track. Just find and adopt your best managed services model as soon as possible.
Takeaways
DON’T look for “staff augmentation”; look instead for a team of solution experts who can be both hands-on and outstanding coaches.
DON’T “lose the plot” of your transformation success story (you worked too hard to throw it away a year or two down the road – for any reason).
DO look to managed services for peak over-the-horizon success, continuity, and growth.
About the Author:
Kevin Peterson is Xalient’s Senior Cyber Security Strategist, with over three decades of global information security and analyst experience ranging from leading roles in the Fortune 10 (McKesson) to some of the most game-changing tech companies (Microsoft, Juniper Networks, Zscaler). At Zscaler he served as Director of Security & Network Transformation and was a founding member of the company’s top global Solution Architects team. Since 2013, his focus has been 100% on coaching the largest and most transformational global cloud security programs to deliver success stories for others to follow, thereby shaping an exponentially more capable next generation.
By Kevin Peterson, Senior Cybersecurity Strategist, Xalient
Remote and hybrid working patterns have extended the corporate world into every home and user device, and as the global pandemic recedes, this is a trend that is here for the long term. In fact, it is hard to overstate the pace and extent of digital transformation undergone by the enterprise environment in the past two years. As 2022 unfolds, the daily working experience for employees looks very different to the way it looked before the pandemic.
Why “the network” has become irrelevant
Now that the hybrid environment has evolved, employees can be anywhere: in the office, at home, on a train or in a coffee shop. From a security point of view, locking down the enterprise perimeter and securing network access is no longer what matters; to some extent, the network has become almost irrelevant, and the focus is now on securing applications. At the same time, organisations need to harness the power of applications: employees need to be highly productive, with fast and easy access to the applications they need to do their jobs. This is not only essential, it is foundational to becoming a modern digitised business. To enable this environment, businesses need reliable network access from the edge to the core, and security based on a Zero Trust model to ensure robust, efficient and secure access to essential business applications from wherever employees are located.
As enterprises have accelerated their digital transformation initiatives, the number of possible attack vectors has grown: digital systems need multiple access points for customers, partners and employees, which has created a vastly expanded attack surface. As a result, cybercrime has escalated, and a record-breaking number of data breaches of increasing sophistication and severity are taking place year on year.
Operating on a Zero Trust basis
The stark reality is that this new hybrid workforce brings an increasing level of risk. With work happening at home, the office, and almost anywhere, and cyberattacks surging, security must be the same no matter who, what, when, where and how business applications are being accessed. Now that the security control organisations once had has quite literally left the building, this makes it critical that each and every connection operates on a Zero Trust basis. Cybersecurity leaders have historically called this “default deny”, which it still is. Only now, thanks to cloud platforms that tie user and device identity into the equation, the controls to make it a reality are both scalable and elegant.
What we mean by Zero Trust is that organisations effectively eliminate implicit trust from their IT systems, replacing it with the maxim ‘never trust, always verify’. In practice, this means trusting only those who have the appropriate authority to access a resource. Zero Trust recognises that internal and external threats are pervasive, and the de facto elimination of the traditional network perimeter requires a different security approach. Every device, user, network and application flow should be checked to remove excessive access privileges and any other potential threat vectors.
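As a minimal sketch of what ‘never trust, always verify’ looks like as policy logic, the following example evaluates every request against identity, device posture and a least-privilege entitlement list, with deny as the default. The checks and policy contents are illustrative assumptions, not a specific product’s implementation:

```python
# Sketch of default-deny, per-request Zero Trust authorisation.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_compliant: bool   # patched, EDR running, disk encrypted, etc.
    app: str

# Least-privilege policy: who may reach which application.
ENTITLEMENTS = {
    "finance-app": {"alice"},
    "wiki":        {"alice", "bob"},
}

def authorise(req: Request) -> bool:
    """Default deny: every check must pass on every request."""
    if not req.mfa_passed:
        return False                       # identity not proven
    if not req.device_compliant:
        return False                       # untrusted device posture
    return req.user in ENTITLEMENTS.get(req.app, set())

print(authorise(Request("bob", True, True, "finance-app")))    # False
print(authorise(Request("alice", True, True, "finance-app")))  # True
```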
Nevertheless, working with a remote workforce isn’t a new concept. Plenty of visionary enterprises have been thinking about this issue for a long time, but sophisticated solutions haven’t always been available. In the past, enterprises relied on Virtual Private Networks (VPNs) to solve user trust issues, albeit minimally; now the time is right to rethink enterprise security models in light of the modern security solutions available, which can be implemented easily and cost-effectively.
Rewind to the security backstory
Ultimately, any high-level security model really breaks down into a question of trust: who and what can I trust – the employee, the devices, and the applications the employee is trying to connect to? In the middle is the network, but today, more often than not, the network is the internet. Think about it: employees sit in coffee shops and log onto public Wi-Fi to access their email.
So now what organisations are looking for is a secure solution for their applications, devices, and users.
Every trusted (or would-be trusted) end-user computing device has security software installed on it by the enterprise IT department. That software validates the device and the user on it, so the device becomes the proxy for talking to the applications on the corporate network. The challenge then lies in securing the application itself.
Today’s cloud infrastructure connects the user directly to the application, so there is no need for the user to connect via an enterprise server or network. The client is always treated as an outsider, even while sitting in a corporate office. The servers never even see the client’s real IP address (because they don’t need to), and even data centre firewalls are of far less value, as the Zero Trust model, with expertly applied policies and controls, is now exponentially better.
Death to the VPN!
In this new construct the VPN dies, thanks to Zero Trust Network Access (ZTNA), and networks become simplified with lower operational running costs, thanks to SD-WAN.
So, does the old client VPN truly die? Yes, it does! The reason is that we are now only concerned with what we trust: the user, their device, and the destination. Notice that “the network” isn’t part of that. Why? Because we don’t trust users or their devices any more on the corporate network than we do on public networks. So even when connected to a LAN port at the desk, they have the same seamless security posture and always-on application (not network, but application) access that they would if they were on public Wi-Fi.
Just as film is no longer used for taking pictures, VPNs are no longer the future for application access. Everyone now sees that the real need is not for users to access networks, but rather just to access the applications as though they are all cloud accessible. That’s the Zero Trust-based future for us all.
New thinking
Most enterprises realise that it is time to enhance remote access strategies and eliminate sole reliance on perimeter-based protection, with employees instead connecting on a Zero Trust basis. However, most organisations will find that their Zero Trust journey is not an overnight accomplishment, particularly if they have legacy systems or mindsets that don’t transition well to this model. That said, many companies are moving all or part of their workloads to cloud and thus greenfield environments – the perfect places to start that journey – while larger organisations with complex IT environments and legacy systems might see the road to Zero Trust as a multiphase, multiyear initiative.
This is where organisations can work with partners, like Xalient, to assist with implementing security controls and Zero Trust models in the cloud utilising our Xalient Zero Trust Framework. This framework provides a firm security foundation to underpin digital transformation initiatives, helping organisations take their first steps towards becoming a Zero Trust connected enterprise. It does this by addressing common areas of compromise between a user or device and the application or data source being accessed or consumed. And it does it wherever the users, devices, data and applications are located.
In today’s hybrid environment, implementing a Zero Trust approach enables organisations to start to really drive down the risk factors while ensuring the enterprise is future-proofed for 21st century business. With cyber threats only set to escalate, this peace of mind is essential.