AppNeta | The Path

Posts Tagged ‘Video Conferencing’

Managing Network Performance for Video Conferencing (Including “Video Sprawl”)

Today’s video conferencing and telepresence solutions offer amazing reliability, incredible business value and a huge range of features and price points. No wonder there’s so much video running on enterprise networks!

But like other IP-based applications, from VoIP to SaaS to virtual desktops, video conferencing applications are highly sensitive to network performance. The health of your network is vitally important to your business – and video conferencing can have a major impact on network performance. You need to know how your network is performing to ensure that your phone calls and video conferences don’t degrade or blip out completely.

Managing network performance for video conferencing deployments requires two levels of insight:

  1. The ability to measure and manage network performance metrics like bandwidth, jitter and packet loss to ensure consistent service quality across your growing portfolio of deployed IP-based applications.
  2. Knowledge of the ever-growing network utilization and the impacts associated with all the desktop-to-desktop, browser-based video calls your employees are making for business and personal reasons – so-called “video sprawl.” (Streaming media, personal mobile devices and other ad hoc network usage further adds to “sprawl.”)
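To make the first point concrete, here is a minimal sketch (illustrative only, not AppNeta’s actual measurement method) of how packet loss and RFC 3550-style interarrival jitter can be computed from the send/receive timestamps of lightweight probe packets; the probe record format is hypothetical:

```python
# Minimal sketch: compute packet loss and RFC 3550-style interarrival
# jitter from probe records. Hypothetical data format; not AppNeta's
# actual measurement method.

def loss_and_jitter(probes):
    """probes: list of (seq, sent_s, received_s); received_s is None if lost."""
    expected = max(p[0] for p in probes) - min(p[0] for p in probes) + 1
    received = [p for p in probes if p[2] is not None]
    loss_pct = 100.0 * (expected - len(received)) / expected

    # RFC 3550 interarrival jitter: an exponentially smoothed mean
    # deviation of transit-time differences between consecutive packets.
    jitter = 0.0
    prev_transit = None
    for seq, sent, recv in sorted(received):
        transit = recv - sent
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            jitter += (d - jitter) / 16.0
        prev_transit = transit
    return loss_pct, jitter

# Four probes; sequence 3 never arrived, so loss is 25%.
probes = [(1, 0.00, 0.020), (2, 0.02, 0.045), (3, 0.04, None), (4, 0.06, 0.081)]
loss, jit = loss_and_jitter(probes)
```

The 1/16 smoothing gain comes straight from RFC 3550; real monitoring tools layer a great deal more on top (timestamp clock correction, reordering handling), but the core arithmetic is this simple.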

Many companies are deploying new videoconferencing services and essentially hoping they’ll “just work.” But “hope is not a strategy” – it’s a risky path. To mitigate that business risk, you need a solution to assess the capacity of your network prior to deployment. Further, you need a process for monitoring network performance in real time across your entire extended enterprise, so that you can verify service levels and understand what users are experiencing.

How can you ensure that your network is ready and able to support new video conferencing deployments? AppNeta is sponsoring a 60-minute webinar with No Jitter today (Wednesday, June 15) at 11AM Pacific/2PM Eastern Time on the topic: “Best Practices for Managing the Performance of Your Video Conferencing Deployments (Those You Know About and Those You Don’t).”

If your organization is in the midst of rolling out videoconferencing (and whose isn’t?), this objective, reliable information will be just what you need. Please join us live, or stop back later to hear the recording.

To learn more about AppNeta’s affordable, cloud-based network performance management solutions and how they can help maximize the value and performance of your videoconferencing and other IP-based business services, visit www.appneta.com.

As a pre-sales engineer, I see a lot of interesting network performance management scenarios while working with prospective customers on product trials. I’ve seen everything from a managed switch with a rogue 10-meg port to a problematic WiFi access point located in the basement of a hospital!

On a recent trial, I was working with a network engineer at a video conferencing services provider. Contrary to what I expected, they were not looking to solve a customer’s problem; they were concerned with their own internal Unified Communications platform. There were three core offices on the East Coast and a remote office in the UK. Once I heard we were dealing with UC over the WAN between remote offices, I thought, “Jackpot! This is PathView Cloud’s forte. This is going to be like a Shaquille O’Neal dunk at the Garden.” However, in the words of Lee Corso, “Not so fast, my friend.”

Once we had the PathView Cloud microAppliances deployed in the four offices, we configured the network paths in a full mesh. The spider web was coming together very nicely. But as I looked at the PathView dashboard, I started to see some violations, represented by red bubbles on the interface.

Looking more closely at the results of the hop-by-hop path analysis, it was evident that there were duplex conflicts everywhere. Every phone, video camera, and even PC in this office was showing a duplex conflict. “Great!” I thought. We found the problem. Change the duplex settings. Case closed, let’s go home.
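Duplex mismatches tend to leave a recognizable fingerprint in interface counters: the full-duplex side logs FCS/CRC errors while the half-duplex side logs late collisions. As a rough illustration of that logic (the counter names here are hypothetical, not a PathView or switch API), a detection heuristic might look like:

```python
# Rough heuristic for spotting a likely duplex mismatch from interface
# counters. Field names and thresholds are illustrative, not from any
# real product API.

def likely_duplex_mismatch(iface):
    """iface: dict with 'duplex', 'late_collisions', 'fcs_errors', 'in_packets'."""
    if iface["in_packets"] == 0:
        return False
    error_rate = (iface["late_collisions"] + iface["fcs_errors"]) / iface["in_packets"]
    # Late collisions on a half-duplex port, or a burst of FCS errors on a
    # full-duplex port, are classic symptoms when the link partner
    # disagrees on duplex.
    if iface["duplex"] == "half" and iface["late_collisions"] > 0:
        return True
    if iface["duplex"] == "full" and error_rate > 0.001:
        return True
    return False

port = {"duplex": "full", "late_collisions": 0, "fcs_errors": 250, "in_packets": 100000}
flagged = likely_duplex_mismatch(port)
```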

If it were only that easy…

Once we changed the settings, PathView Cloud continued to detect errors. Some were cable errors, some were rate-limiting errors. I started to scratch my head: what could this possibly be? I ran the results by a couple of other engineers on my team. Adam Edwards, Director of Systems Engineers, thought the switch itself could be bad.

After I shared the findings with the customer, he was a bit hesitant because his SNMP polling device was saying the switch was running as it was supposed to. To humour me, he swapped out the switch. As soon as that happened, performance was drastically better. We still have not been able to definitively determine what caused the issue, though we are pretty sure it’s something tied to the settings that controlled the RTP/RTCP streams.

During the analysis phase, it felt like a twelve-round bout with Mike Tyson. We found duplex conflicts and rate limiting, and eventually uncovered that the whole switch was bad. All of this came right through the PathView Cloud interface within minutes of deploying the microAppliances. PathView Cloud’s performance was like David Ortiz batting in the bottom of the ninth: you know who’s going to win.

High-definition video conferencing from your computer or mobile device out to anyone, anywhere has arrived.

While all tiers of the video conferencing and telepresence marketplace are experiencing strong growth, the biggest leaps are happening with low-cost, desktop- and browser-based “single-codec” systems.

Among the many options in this burgeoning space:

  • Tandberg, now owned by Cisco, offers a range of office, desktop and mobile video conferencing solutions that combine high quality with low cost.
  • Cisco also owns WebEx, which has long combined video conferencing and desktop sharing through a browser given sufficient bandwidth.
  • Skype currently offers “free” high-definition quality video calling on Windows that requires only an HD webcam and 512kbps connectivity.
  • The feature of the iPad 2 that’s creating the most buzz among executives is probably FaceTime video conferencing, which works quite well over Wi-Fi. The BlackBerry PlayBook will also include a video conferencing app.

For business as well as personal reasons, a skyrocketing number of ad hoc, browser-based video conferences will be going out over your network – sooner than you think!  Likewise, more and more organizations are installing affordable telepresence technology in executive offices and conference rooms.

The question you, and your IT team, need to ask is: Can your network handle this massive influx of extra traffic? How big of an impact will it have on the performance of all the other network-based services your business now relies on – from VoIP to SaaS/cloud applications to virtual desktops to online backups?

Every one of these critical systems will falter and fail abruptly if network performance degrades even slightly below a specific threshold. The greater the volume of traffic converging on the network, the greater the likelihood of service quality problems resulting in dropped calls, disrupted meetings, failed backups and reduced overall productivity.

While everyone is discussing the massive growth of video-conferencing, we are failing to talk about a key component – how will we deploy and manage the performance of this sensitive and now critical application?

Many businesses are expecting their new videoconferencing services to “just work.” But do they? Do you have a way to assess the capacity of your network prior to deployment? Can you successfully monitor network performance in real-time, both at the home office and at remote sites? What are your employees, partners and customers experiencing on the phone, in the conference room or at their computer?

To monitor and troubleshoot the performance of videoconferencing, VoIP, Unified Communications and Collaboration (UC&C) and other IP-based applications, companies must look beyond traditional performance management solutions like SNMP tools. These systems aren’t designed to measure the quality of network-dependent services from the standpoint of distributed users, particularly when delivered over third-party and public networks.

Addressing the dynamic performance challenges associated with today’s converged IP networks requires Remote Performance Management capabilities. Remote Performance Management lets you pre-assess, monitor and troubleshoot how remote and co-located users are experiencing video conferencing, UC&C and other IP-based services, end to end, in real time, from anywhere.

Available as a cloud-based service, PathView Cloud Remote Performance Management is easy to configure and manage, uses almost no network bandwidth and is cost-effective for organizations of any size. If you’re rolling out a video conferencing application anytime soon, visit www.apparentnetworks.com for more information.

It is a sign of the times that I need to clearly define the term “cloud services” if I am going to use it as an entry point to this blog. And since I wouldn’t dare assert my position to be expert enough to properly define this term (any attempt would surely bog down this entire effort), I will turn to the main sources of knowledge of our time…

If I type “Cloud services” into Google, the top response is of course a link to Wikipedia.  The Wikipedia search for “cloud services” gets redirected to “cloud computing” which is defined as:

Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smartphones) on demand over the Internet.

It is nice to see my opening premise is not far off the mark. Simply put, a cloud service is a web-based service delivered from a datacenter somewhere, whether over the internet or from a private datacenter, to “computers.” For now, let’s leave the definition of an endpoint alone. I know that is a big reach, but this is my blog, and it really isn’t the point. The point is that all of these services are generally delivered from a small number of centralized datacenters and consumed at some relatively large number of remote offices.

That is where things get interesting.

If we lived in a world where email and simple web page delivery was the state of the art, well, I wouldn’t have anything to write about. But we don’t. The mainstream services being deployed in education, government, and enterprise accounts are ushering in a completely new level of performance requirements on the networks they depend upon. Voice over IP (VoIP), video conferencing, IP-based storage systems for file sharing, backup, and disaster recovery, and, recently, virtual desktop services all bring new performance requirements. Yes, that means more bandwidth, but bandwidth is just the tip of the iceberg. All of these applications also have very real requirements on critical network parameters such as packet loss, end-to-end latency, and jitter.

Unlike simple transaction and messaging applications like HTTP delivery and email, when these new “performance sensitive” applications run into excessive loss, latency, and jitter, the result is application failure: dropped calls and video sessions, failed storage services (including backup and recovery), and “blue screens” where virtual desktop sessions belong. What causes seemingly healthy networks to suffer from latency, loss, and jitter issues? More on that in a later blog…
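Commonly cited guidelines (for example, ITU-T G.114’s roughly 150 ms one-way latency budget for voice) give a feel for how tight these requirements are. As a toy illustration, here is a check of measured path metrics against representative per-application thresholds; the specific numbers are illustrative planning guidance, not a formal standard:

```python
# Toy comparison of measured network metrics against representative
# per-application thresholds. Values are illustrative guidelines
# (e.g., ~150 ms one-way latency for voice per ITU-T G.114), not a
# formal standard.

THRESHOLDS = {
    # app: (max one-way latency ms, max loss %, max jitter ms)
    "voip":            (150, 1.0, 30),
    "video_conf":      (150, 0.5, 30),
    "virtual_desktop": (100, 0.5, 20),
}

def check_path(app, latency_ms, loss_pct, jitter_ms):
    """Return the list of metrics that violate the app's thresholds."""
    max_lat, max_loss, max_jit = THRESHOLDS[app]
    problems = []
    if latency_ms > max_lat:
        problems.append("latency")
    if loss_pct > max_loss:
        problems.append("loss")
    if jitter_ms > max_jit:
        problems.append("jitter")
    return problems  # empty list means the path looks usable for this app

issues = check_path("voip", latency_ms=180, loss_pct=0.2, jitter_ms=45)
```

Note that a path can have plenty of raw bandwidth and still fail this kind of check, which is exactly the point about bandwidth being only the tip of the iceberg.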

Successful cloud service delivery to remote sites is dependent on managing performance at that remote site.  Not datacenter application performance, or server performance, or network device performance.  Service level performance analysis from a remote site is a new topic, and we call it Remote Performance Management or RPM.

Let’s start with the basics: what do we know about RPM?

First, RPM is a location-dependent topic. Of course, the traditional datacenter performance management issues need to be dealt with. That is part of datacenter service delivery 101. No debate. But if we care about the service quality that users are experiencing, then we need to understand performance from the perspective of the end user, at the remote site.

Next, we need to address the complete performance management lifecycle. Simply put: Assess the remote office performance PRIOR to service deployment; Monitor the remote office performance DURING service operations; Troubleshoot issues QUICKLY (like you’re there); and Report on the good, the bad, and the ugly. When you add it all up, you need a broad set of capabilities to meet these needs.

Finally, we need to keep it simple, affordable, and scalable.  The problem with most solutions around the remote office is not the device cost, but rather the administrative cost.

The bottom line is that if you are attempting to deliver today’s critical services for remote site consumption, you need to understand performance, so you’d better check your RPMs…

