AppNeta | The Path

Posts Tagged ‘Virtualization’

 

The benefits of virtualization are clear-cut: massive financial savings in the long run. Virtualization enables organizations to use inexpensive hardware as terminals for multiple desktops, which lowers energy, hardware, maintenance and licensing costs. From a daily user’s standpoint, the convenience of accessing a personal desktop from any device effectively accelerates time-to-value. However, the transition to virtualization can be costly, and companies have to cough up now to achieve the benefits later.

Virtualization enables users to access distributed enterprise applications securely from any remote client device – when it works. If the network fails to perform against defined standards, end users of virtual applications and desktop sessions experience sluggish performance, system freezes and, often, outright disconnects. Regardless of whether Virtual Desktop Infrastructure (VDI) is hosted in your own data center or remotely, all performance issues – and all the finger-pointing – will come down hard on the IT team.

While occasionally hosted on the LAN, VDI is more commonly reached over a WAN connection. If the performance of this link cannot be ensured, there is no point in virtualizing. Due to the nature of virtualization, a majority of stalls occur when employees are accessing their desktops over the WAN. Critical VDI links can become compromised during peak usage and need to be continuously monitored.

While we all know the frustration that slow applications produce, end users have zero patience for latency or poor performance when it comes to their entire desktop. VDI carries the highest sensitivity level of all applications and its success rating is directly dependent on user satisfaction.

IT professionals who do not pre-assess the network before virtualizing put their jobs on the line. It is common to underestimate how much data cycles through the network weekly until there are attempts to move it, so taking an accurate reading of the WAN link is critical. Virtual software providers publish bandwidth requirements – but can the network guarantee that that bandwidth is available? Even during peak utilization? Is there room left to grow?

Requirements for a Successful VDI Deployment

• Insight into critical links, with continuous monitoring of VDI performance from the perspective of your remote-site end users.

• Measurement from the end user’s connection back to the server, and the ability to compare real-time performance against the key performance indicators VDI services need to perform properly.

• Alerting and reporting on network issues affecting system and virtualized application performance, for proactive troubleshooting.

Current tools used to monitor VDI include Xangati, Liquidware Labs and Lakeside. These solutions are critical for monitoring the health and state of the virtual machines, or the locations where the application is being consumed – which means the connection between end users and the virtualized servers is often left unsupervised. There may be green lights showing for all the devices, yet the phone is bright red with complaints. When polling devices produce a summary every few minutes, seconds of latency can be invisible on a monitoring screen, but not to the engineer who receives the phone call. VDI performance depends on the links between remote users and servers.
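As a toy illustration (not AppNeta code), the snippet below shows how a multi-second stall disappears into a coarse polling average: the five-minute number looks healthy even though individual users felt three-second round trips.

```python
# Simulate five minutes of once-per-second round-trip-time samples,
# including a brief 3-second stall, then compare the polled average
# a monitoring screen would show against the worst sample a user felt.

rtts_ms = [10.0] * 300            # five minutes of healthy 10 ms samples
rtts_ms[120:123] = [3000.0] * 3   # a 3-second stall: three samples at 3000 ms

polled_average = sum(rtts_ms) / len(rtts_ms)
worst_sample = max(rtts_ms)

print(f"5-minute average: {polled_average:.1f} ms")  # still looks healthy
print(f"worst sample:     {worst_sample:.0f} ms")    # what the user experienced
```

The averaged figure stays under 40 ms, so a dashboard polling at this interval would show green while the help desk phone rings.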

PathView Cloud offers the ability to assess, monitor and remotely troubleshoot performance from a virtual or physical system to any other target across LANs, WANs, even segments you don’t own or manage.

Want to learn more? Visit AppNeta or start a free trial on your network today!

Desktop and application virtualization offer a more efficient approach to PC management, with reduced administrative, hardware and energy costs and stronger security. Virtual desktop infrastructure (VDI) can also improve productivity by enabling users to access distributed applications remotely from any client device.

Sign up for a free trial today!

 

The expanding virtual workforce, growing popularity of mobile devices, and ongoing IT belt-tightening have combined to make VDI one of the most critical IT services for organizations of all sizes. But in order to realize the benefits of virtual desktops, you must be able to ensure the performance of the access network that connects remote clients to the virtual infrastructure. Most applications being virtualized were not designed for remote access in the first place. So when network performance drops below acceptable thresholds, applications become sluggish, freeze and finally disconnect, leaving users aggravated and unproductive.

Unfortunately, many VDI deployments are hampered by frustrating and hard-to-identify network performance problems. SNMP-based network management tools can only provide information on the devices that a company manages directly, rendering them unhelpful with troubleshooting network performance issues along the VDI service path. Traditional “netflow analysis” solutions are costly, difficult to deploy and manage, and consume massive amounts of bandwidth, making them unsuitable for remote sites.

What is required for managing network performance for VDI deployments is the ability to assess, continuously monitor and remotely troubleshoot network performance from a virtual or physical system to any other target – across LANs (wired or wireless), WANs and even public networks. Application engineers must be able to:

  1. Pre-assess the customer network to understand whether it is ready for VDI deployment, in order to set expectations and resolve any issues proactively. The assessment needs to account for the specific operational requirements of the selected VDI vendor.
  2. While assessments are a key first step, they are only good for a given point in time, and networks are dynamic by nature. To assure success, continuously monitor the key performance indicators (KPIs) critical to the successful delivery of VDI services: total, consumed and available capacity, network utilization, latency, packet loss and jitter. It is vital that the monitoring understand the network in totality (i.e., the same way the VDI infrastructure will leverage it) and not affect production VDI traffic.
  3. Compare real-time network performance with the KPI values that virtual desktop services need to function properly, in order to ensure overall Quality of Service. When differences do appear, alert key operations staff to any SLA violations and enable intelligent diagnostics that pinpoint the where and the why, reducing mean time to repair. AppNeta has the experience and expertise to know the exact performance thresholds each major vendor requires for VDI success.
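The comparison-and-alert step above can be sketched in a few lines. The threshold values here are illustrative placeholders, not the vendor-specific figures AppNeta maintains:

```python
# Sketch: compare measured network KPIs against the thresholds a VDI service
# needs and return SLA-violation messages for alerting. All limits below are
# made-up examples for illustration only.

VDI_THRESHOLDS = {
    "latency_ms": 150,      # illustrative latency ceiling
    "loss_pct": 1.0,        # illustrative packet-loss ceiling
    "jitter_ms": 30,        # illustrative jitter ceiling
    "capacity_mbps": 2.0,   # illustrative minimum available capacity
}

def check_kpis(measured: dict) -> list:
    """Return a list of SLA-violation messages (empty means all KPIs pass)."""
    violations = []
    for kpi in ("latency_ms", "loss_pct", "jitter_ms"):
        if measured[kpi] > VDI_THRESHOLDS[kpi]:
            violations.append(f"{kpi} = {measured[kpi]} exceeds {VDI_THRESHOLDS[kpi]}")
    if measured["capacity_mbps"] < VDI_THRESHOLDS["capacity_mbps"]:
        violations.append(f"available capacity {measured['capacity_mbps']} Mbps "
                          f"below {VDI_THRESHOLDS['capacity_mbps']} Mbps")
    return violations

print(check_kpis({"latency_ms": 180, "loss_pct": 0.2,
                  "jitter_ms": 12, "capacity_mbps": 5.0}))
```

In a real deployment the measured values would come from continuous probing, and the violation list would feed an alerting and diagnostics pipeline rather than a print statement.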

How can network engineers gain a real-time view into network performance between remote users and virtualized servers – and respond more quickly to their frustrated end users? AppNeta’s PathView Cloud technology offers instant value and immediate insight to identify whether the cause of poor performance resides along the network or within the VDI. By enabling the proactive monitoring of access networks against KPIs, PathView Cloud lets you troubleshoot and pinpoint the performance problems of virtualized applications at remote sites so you can ensure QoS for users.

Want to learn more? Check out the Live Demo or start a FREE trial today!

It is a sign of the times that I need to clearly define the term “cloud services” if I am going to use it as an entry point to this blog. And since I wouldn’t dare assert my position to be expert enough to properly define this term (any attempt would surely bog down this entire effort), I will turn to the main sources of knowledge of our time…

If I type “Cloud services” into Google, the top response is of course a link to Wikipedia.  The Wikipedia search for “cloud services” gets redirected to “cloud computing” which is defined as:

Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smartphones) on demand over the Internet.

It is nice to see my opening premise is not far off the mark. Simply put, a cloud service is a web-based service delivered from a datacenter somewhere, be that on the internet or in a private datacenter, to “computers.” For now, let’s leave the definition of an endpoint alone. I know that is a big reach, but this is my blog, and it really isn’t the point. The point is that all of these services are generally delivered from a small number of centralized datacenters and consumed at some relatively large number of remote offices.

That is where things get interesting.

If we lived in a world where email and simple web page delivery was the state of the art, well, I wouldn’t have anything to write about, but we don’t. The mainstream services being deployed in education, government, and enterprise accounts are ushering in a completely new level of performance requirements on the networks they depend upon. Voice over IP (VoIP), video conferencing, IP-based storage systems for file sharing, backup, and disaster recovery, and recently the deployment of virtual desktop services all bring with them new performance requirements. Yes, that means more bandwidth, but that is just the tip of the iceberg. All of these applications also have very real requirements on critical network parameters such as (packet) loss, end-to-end latency, and jitter. Unlike simple transaction and messaging applications like HTTP delivery and email, when these new “performance sensitive” applications run into excessive loss, latency, and jitter, the result is application failure: dropped calls and video sessions, failed storage services including backup and recovery, and “blue-screens” where virtual desktop sessions belong. What causes seemingly healthy networks to suffer from latency, loss, and jitter issues? More on that in a later blog…

Successful cloud service delivery to remote sites is dependent on managing performance at that remote site.  Not datacenter application performance, or server performance, or network device performance.  Service level performance analysis from a remote site is a new topic, and we call it Remote Performance Management or RPM.

Let’s start with the basics: what do we know about RPM?

First, RPM is a location dependent topic.  Of course, the traditional datacenter performance management issues need to be dealt with.  That is part of datacenter service delivery 101.  No debate.  But if we care about the service quality that the users are experiencing, then we need to understand performance from the perspective of the end user, at the remote site.

Next, we need to address the complete performance management lifecycle. Simply put: Assess remote office performance PRIOR to service deployment; Monitor remote office performance DURING service operations; Troubleshoot issues QUICKLY (like you’re there); and Report on the good, the bad, and the ugly. When you add it all up, you need a broad set of capabilities to meet these needs.

Finally, we need to keep it simple, affordable, and scalable.  The problem with most solutions around the remote office is not the device cost, but rather the administrative cost.

The bottom line is that if you are attempting to deliver today’s critical services for remote site consumption, you need to understand performance, so you’d better check your RPMs…

Apparent Networks recently conducted a survey of network professionals across a variety of industries, from service providers to government to non-profit organizations.

And the findings are clear:  as organizations transition to a more advanced virtualized infrastructure with greater capacity and capability, there are numerous, complex challenges that we must be prepared to identify and troubleshoot within a virtual environment.
 
For example, the survey found that respondents who have virtualized operating systems or applications said their top issue is troubleshooting virtual infrastructure bottlenecks; of those who have virtualized desktops or servers, the top challenge is performance of virtual machines; and for those who have virtualized their networks, the top issue is interoperability of different virtual platforms and overall performance.

Interestingly, while so many face issues with performance and infrastructure bottlenecks, less than ten percent have adopted tools specifically designed to help with these issues in virtualized environments. Of those who have adopted tools specifically for monitoring the performance and availability of their virtual infrastructure, nearly all reported being satisfied (33%) or extremely satisfied (66%) with their transition to a virtual environment. On the other hand, 25% of those who did not adopt tools reported being dissatisfied with their virtualization deployment.

Tools such as PathView Cloud that provide visibility into virtual infrastructure make it much easier to find and diagnose bottlenecks that can impact the usability of this infrastructure. PathView Cloud is unique in its ability to test network performance from any virtual machine to any endpoint, including physical and virtual hardware resources. Other virtual machine management products on the market today focus only on monitoring how VMs interact with their virtual operating systems. Unlike those products, PathView Cloud provides a view of network performance from the perspective of applications running on virtual machines. Specifically, PathView Cloud can be installed on VMware ESX and ESXi hypervisors.

Download a free copy of the survey report and the free PathView Cloud trial today – ensure a quick, easy transition to a virtual environment.

What are the top performance challenges when virtualizing infrastructure and services? As organizations make this transition to virtualization, new complexity arises, especially when managing the networks that connect it all. Is there enough network capacity?  How is the network affecting application performance? Is there adequate network response time? These issues can all affect the success of virtualization initiatives.

When you virtualize systems and applications, it is critical that you can see from core virtualized resources to remote clients around the world. These network paths may consist of virtual NICs, virtual switches, hypervisors, connectivity from virtual desktops, softphones, handsets connected to branch offices, wireless networks, carrier networks, corporate WANs, and more. The question all network managers need to answer is: how is this entire virtual and physical infrastructure performing together, and how do I ensure the performance is enough to support the applications and services that traverse it?

One tool that helps you see through both physical and virtual networks to answer these questions quickly is PathView Cloud.

PathView Cloud is a free hosted network management service that resolves many of these issues by providing clear performance analysis from the point of service delivery to the point of service consumption. The PathView tool analyzes the performance of your network without deploying agents everywhere. In the time it takes to brew a pot of coffee, you can download, install, and configure PathView Cloud to measure total capacity, utilization, latency, jitter, packet loss, and Quality of Service. You can troubleshoot problems with intuitive root cause analysis and monitor the network continuously while comparing performance against pre-set thresholds. All you need to do is specify a target system and PathView quickly measures the performance of the end-to-end network path, whether the target is a virtual server in the same data center, a virtual desktop client around the world, or any place in between.
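To make those metrics concrete, here is a minimal sketch of how latency, jitter and packet loss can be derived from a train of probe round-trip times. The probe data is hard-coded for illustration; a real tool such as PathView sends packets on the wire and infers much more (capacity, utilization, QoS).

```python
# Derive three basic KPIs from a sequence of probe round-trip times.
# None marks a probe that never came back (a lost packet).

rtts_ms = [21.0, 23.5, 22.0, None, 24.0, 21.5]

received = [r for r in rtts_ms if r is not None]
loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
latency_ms = sum(received) / len(received)       # mean round-trip latency

# Jitter here is the mean absolute difference between consecutive
# received samples - a simple variability measure, not a full RFC 3550
# interarrival-jitter estimator.
deltas = [abs(b - a) for a, b in zip(received, received[1:])]
jitter_ms = sum(deltas) / len(deltas)

print(f"latency {latency_ms:.1f} ms, jitter {jitter_ms:.2f} ms, "
      f"loss {loss_pct:.1f}%")
```

Continuous monitoring is just this calculation repeated over a sliding window of probes, with the results compared against thresholds.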

PathView can also tell you if your LAN or WAN will support your virtualization-powered fault tolerant or disaster recovery plan.  Install PathView on a system and measure to your secondary host.  PathView will quickly display the key performance indicators you’ll need to feel confident in your infrastructure.

So go ahead, embrace virtualization and all of its benefits. But embrace it with PathView to increase your chances of success.

 

The marriage of 10 Gigabit Ethernet and virtualization seems a matter of amazingly good timing. 10G did not seem to be offering much new to the networking world apart from being the same as 1G only faster. The urgency of increasing capacity for the most part just wasn’t there – at least not in the LAN or the server.

Oh sure, there will always be applications or parts of the network screaming out for more bandwidth, such as backup systems, database servers, and aggregation points in core networks. But most LANs, services and end hosts seemed to have been sated for the time being by 1G and aggregate 1G connections – at least for now and for the foreseeable future. It seemed like 10G would remain a niche capability for a while longer.

This assessment is supported by the fact that 10G has been well ahead of the server technologies; very few machines have been capable of pushing out enough bytes per second to use even a fraction of that capacity. There are always internal bottlenecks that limit throughput. This is exactly what happened in the early years of 1G: few machines could push out more than 300-400 Mbps, mostly due to driver inefficiencies, limited CPU and slow buses. Nowadays, most solid workstations can get pretty close to 950 Mbps with the right applications pushing out the bytes.

So there wouldn’t be much point in arming most machines with 10G, at least for several years to come. And the number of really high-end systems using them would be few and far between. Or so it seemed.

With virtualization taking off like Mentos in a glass of Coke, it suddenly became apparent that the broad-based need for 10G would become a reality much sooner than anticipated. It will still take the hardier servers to fill the pipe, but more and more services are being pulled off smaller individual servers and pushed into virtual machines. This rush to virtualized consolidation means there are many more really big machines acting as virtual hosts that can ably fill 10G, and more services running out through 10G network interfaces. More switches with 10G ports are following suit. And suddenly the pull for 10G is much higher than it would have been otherwise.

Why does all that matter so much? Well, think of virtualization as a kind of accelerant for networking. For example, consider the recent proclamation by folks at Network World that 10G has caused a significant shift in network design from three-tier to two-tier. They explicitly reference the influence of virtualization on the impact of 10G networks.

All this indicates that, instead of a long, drawn-out transition to 10G over the next decade, we can expect to see prices come down, performance increase, and capacities all through the network shoot up over a relatively short period of time. Well, except maybe at my house. But that’s another story. 40G and 100G may not find as much pent-up demand when they arrive – but it can be assured that Ethernet will continue to dominate networking thanks to this happy convergence of supply and demand.

When companies were using virtualization technology to consolidate IT infrastructure that wasn’t truly real-time in nature, the network performance of these virtual machines was not a major concern. Generally, all they had to do was put a Gig-E card into the host and everything would work itself out fine. All of the VM monitoring and management tools focused on CPU, memory and disk utilization and did not pay much attention to network performance. Sure, the network performance for each VM was still atrocious, but that was overshadowed by all of the other efficiency gains of implementing VM technology.

With VM-based deployments becoming the standard for infrastructure, the performance of virtual networks is becoming more critical. In PathView 2.0 we have filled a critical hole in the tools available for monitoring your VM infrastructure. Because our monitoring techniques traverse the network – both physical and virtual, the same way users and applications accessing these VMs do – we can accurately tell you how well, or how poorly, your VM network is performing.

We spent a considerable amount of time tuning our monitoring and diagnostic capabilities so that our results are as accurate for VMs as they are for physical operating systems, and they are produced in a way that does not impact production application performance. And because we can monitor targets over networks that you do not own or control, these same VM enhancements benefit our monitoring of cloud-based systems, which use virtualization technology at their core.

So for people running production VM environments or responsible for the network in VM-centric IT environments, PathView 2.0 should be a key addition to your monitoring tools. The capability to see through the cloud of your physical network, virtual switches and virtual network cards to monitor the true network performance of VM infrastructure and diagnose network problems anywhere along this path closes a critical gap in your IT toolkit.

