AppNeta | The Path

Archive for the ‘Web Performance’ Category

Today’s business services are increasingly dependent on predictable network performance and availability. At the same time, changes to network infrastructure – such as the addition of IP-based services like VoIP, video conferencing and Software-as-a-Service (SaaS), the proliferation of Wi-Fi-connected devices, and unmanaged elements such as streaming audio – mean that your customers’ critical applications demand a much higher level of service delivery and quality assurance.

For MSPs, your ability to provide continuous insight into customers’ dynamic network infrastructure is critical to delivering your existing services and ensuring a quick, effective response to customer performance problems. Some key elements of assuring performance for your customers’ networks while building your business at the same time include:

  • Providing monthly reporting and immediate performance alerts when issues arise.
  • Differentiating your business by moving from a traditional break-fix engagement model based on remediating failures to a proactive, strategic service tailored to customer needs.
  • Reducing trouble tickets and eliminating truck rolls and on-site engineers.

How can your business transition from break-fix troubleshooting to continuously and proactively managing your customers’ network infrastructure?

PathView Cloud offers remote-site network and application performance monitoring that delivers exceptional network insight, alerts, reporting and troubleshooting from one integrated network performance management solution.

Here are three easy ways MSPs can use PathView Cloud to get started with new remote, continuous network performance monitoring services:

#1: Network health assessments

Proactive network assessments are a great way to highlight network issues, while showing your customers the value of continuous network performance monitoring. PathView Cloud lets you deliver not only comprehensive, point-in-time assessment reports, but also continuous assessments over a business cycle (e.g., seven to fourteen days). This information can lead directly to a managed services discussion.

#2: “Top talker” reporting at remote sites

Who is doing what on your customer’s network? PathView Cloud enables you to give customers periodic (e.g., monthly) NetFlow reports that deliver valuable insight into who and what is consuming network bandwidth. Like a network assessment, this information can illuminate network issues while also presenting an ideal opportunity to discuss managed service options.
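Under the hood, a top-talker report is just a by-source aggregation of flow records. A minimal sketch in Python (the record fields here are illustrative, not an actual PathView Cloud export format):

```python
from collections import Counter

def top_talkers(flow_records, n=10):
    """Aggregate NetFlow-style records into per-source byte totals
    and return the n heaviest consumers of bandwidth."""
    usage = Counter()
    for rec in flow_records:
        usage[rec["src_ip"]] += rec["bytes"]
    return usage.most_common(n)

# Hypothetical flow records for illustration:
flows = [
    {"src_ip": "10.0.0.5", "bytes": 4_200_000},
    {"src_ip": "10.0.0.9", "bytes": 900_000},
    {"src_ip": "10.0.0.5", "bytes": 1_100_000},
]
print(top_talkers(flows, n=2))
# [('10.0.0.5', 5300000), ('10.0.0.9', 900000)]
```

The same aggregation extends naturally to per-application or per-destination keys.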

#3: Cloud services readiness assessment

Assessing a customer network’s readiness for cloud services is a simple undertaking with PathView Cloud. AppNeta recommends running such assessments for seven days to capture a full business cycle. A pre-deployment assessment can save your customers significant time and money by providing a holistic view of their network’s ability to handle additional traffic, as well as identifying ongoing and transient network issues. It also facilitates a discussion of the value of continuous monitoring services post-deployment to validate and ensure compliance with SLAs.

More and more MSPs are using PathView Cloud to realize ongoing revenue streams from unmatched remote-site performance visibility and continuous network assessment offerings. Check out the live PathView Cloud demo, or try PathView Cloud on your network today with a free 14-day trial!


The cost reduction and infrastructure consolidation benefits of Unified Communications and Collaboration (UC&C) are so compelling that many organizations are rolling out these services without a solution to manage the user experience.

UC&C aims to converge telephony, messaging, mobile communications, video conferencing and presence-enabled applications onto a common, IP-based network. But chances are that network is already burdened with a host of services, from e-mail to SaaS applications to Internet media streams to online storage to virtual desktops.

The more traffic is loaded onto the network – and the greater the distance between the users and the services consumed – the higher the risk of poor network performance and application failure. UC&C applications, like many of today’s complex, network-dependent applications, falter and crash abruptly as soon as network performance degrades below a specific threshold. Even minor performance issues often result in degraded VoIP call quality, faltering videoconferences, or complete service failure.

Companies that hope to simply “drop in” UC&C services and expect they’ll work with no significant hiccups are taking a major business risk. To assure service delivery, UC&C systems demand stable and dependable network performance. This means not just sufficient bandwidth, but also minimal latency, packet loss and jitter.
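Of the three metrics just mentioned, jitter is the least intuitive but easy to quantify: RFC 3550 (the RTP specification) defines a smoothed interarrival jitter estimator that VoIP monitoring commonly uses. A minimal sketch, assuming per-packet one-way transit-time estimates in milliseconds:

```python
def rfc3550_jitter(transit_ms):
    """Smoothed interarrival jitter estimator from RFC 3550 (RTP).

    Each new sample moves the estimate 1/16 of the way toward the
    latest transit-time difference, so isolated spikes are damped
    but sustained variation drives the estimate up.
    """
    jitter = 0.0
    for prev, cur in zip(transit_ms, transit_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0
    return jitter

# A stream that is mostly steady except for one late packet:
print(round(rfc3550_jitter([20, 25, 21, 40, 22]), 2))  # 2.72
```

Toll-quality VoIP is generally held to need jitter in the low tens of milliseconds or less, which is why even modest network degradation is audible.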

Traditional, SNMP-based network management tools aren’t capable of monitoring the experience of remote users accessing IP-based services, which are entirely dependent on real-time network performance. NetFlow analysis tools can help bridge the performance management gap, but they generally require significant bandwidth and are expensive to deploy and manage. Few organizations can afford to deliver NetFlow analysis at the remote sites where the capability is most needed.

Network engineers and CTOs are well aware that in many cases their tools lack the “intelligence” needed to manage service levels across an ever-growing range of IP-based applications, including UC&C. Problem resolution becomes a time-consuming crapshoot, and capacity planning is simply a question of “how much bandwidth can we afford?”

To understand what’s happening with UC&C at remote sites, you need integrated network performance management capabilities that enable you to continuously monitor service levels end-to-end across any network infrastructure running IP-based services – even those hosted by third parties. You need to be able to:

  • Quickly and accurately assess a network’s readiness for a new or expanded UC&C deployment
  • Continuously monitor the performance of on-premise, hosted SIP or fully hosted UC&C services over any network
  • Cost-effectively measure and report on specific SLAs to meet and ensure the performance needs of your users

AppNeta’s cloud-based PathView Cloud network performance management solution delivers instant value through actionable insight into the network performance metrics that are vital to the success of your UC&C deployments.

To learn more about how PathView Cloud technology can enable you to successfully manage the performance of your UC&C services, visit


Santa Claus.  The Easter Bunny.  Good tasting, fat-free snack foods.  Myths?  Maybe so.

But one absolute myth that is 100% untrue, and that 99% of network performance solution vendors have been perpetuating for years (and whose users have been gobbling it up like zero-calorie french fries…), is the myth of bandwidth; more specifically, the myths of utilized and available bandwidth.

Before I do my online version of “MythBusters”, let’s take a minute to define two key terms: bandwidth and throughput.

Although often used interchangeably (and used differently outside the world of networking), in IP networking these terms refer to two very different things.

  • Bandwidth speaks to the capacity of a given network and
  • Throughput speaks to how many bits per second actually traveled across the network.


OK, think of your network as a water pipe. At a given fixed water pressure, the diameter of the pipe determines the maximum amount of water that can flow through it. That is the bandwidth. A bigger pipe means more capacity (bandwidth); a smaller-diameter pipe means less.

If we stood at the far end of the pipe and measured how much water arrived, we would know the pipe’s actual throughput. If the pipe had a perfectly consistent diameter along its entire length, zero leaks, and the water only had to travel in one direction at the same speed all of the time, then the throughput and the bandwidth of the pipe would be the same.

Of course, even in your home there are often small leaks, and changes in the size and back-pressure of the pipes happen all the time as different faucets open and close. Very seldom does any system of pipes (even a small one like your house’s) achieve throughput close to 100% of bandwidth (capacity). It gets worse with complexity: across municipal water systems in the U.S., the average system loss is 16% (more than 800 billion gallons a year), and many larger cities are dealing with losses of 20 to 30% or more. Yikes!

How does this relate to IP networks?

Well, if the bandwidth (capacity) were exactly the same along the entire length of the network service delivery path (source IP to destination IP), the packets only traveled in one direction all the time, the distance the packets traveled remained constant (no route changes), and there was no cross traffic to deal with, zero packet loss or other slowdowns (including duplex mismatches, MTU misalignment, QoS bits being stripped or remapped, serialization or processing delays, etc.), then network throughput and network bandwidth would be the same.
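A well-known way to see how just two of those factors – latency and loss – cap throughput far below raw capacity is the Mathis et al. TCP throughput model, throughput ≤ (MSS/RTT)·(1/√p). This is a back-of-the-envelope rule of thumb, not PathView’s measurement method:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Rough single-flow TCP throughput ceiling from the Mathis model:
    throughput <= (MSS / RTT) * (1 / sqrt(p)).
    An estimate for intuition, not a substitute for measurement."""
    rtt_s = rtt_ms / 1000.0
    bps = (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)
    return bps / 1e6

# A 45 Mbps T3 with an 80 ms round trip and just 0.1% packet loss:
# a single TCP flow tops out far below the provisioned capacity.
print(round(mathis_throughput_mbps(1460, 80, 0.001), 1))  # 4.6
```

Even a “lightly loaded” link can therefore deliver only a fraction of its nominal bandwidth to any one application.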

But we all know that on a complex WAN (or even a moderately complex LAN or wireless LAN), there are many conditions that prevent throughput from equaling bandwidth (capacity). Since the primary determining factor of throughput is the actual bandwidth (capacity), getting your arms around this figure is the first step to understanding your actual throughput – and this is where the myth of bandwidth is most often passed along by vendors today. Solutions that measure the “what is” bits-per-second (bps) values – regardless of whether they get those values by asking the network elements themselves via SNMP, WMI or NetFlow, or by sniffing and counting actual packets “on the wire” – all chart those values against the provisioned (or theoretical) capacity of the network, based on a value the user enters. Have a GigE network interface on your server? BAM! Your maximum bandwidth is 1000 Mbps. Leasing a T3 from your carrier? Whammo! Your maximum bandwidth is 45 Mbps. The vendors then chart the measured “what is” bps values against the user-entered total bandwidth values, and you in turn get a mythical utilization and available-bandwidth result. Lions, tigers and bears – oh my!
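The arithmetic behind the myth is easy to sketch. With hypothetical numbers, charting measured traffic against the provisioned capacity instead of the true achievable capacity understates how close you really are to the edge:

```python
def charted_utilization_pct(measured_bps, assumed_capacity_bps):
    """The utilization figure most tools chart: measured traffic
    divided by whatever capacity value the user entered."""
    return measured_bps / assumed_capacity_bps * 100

provisioned = 45_000_000  # "it's a T3, so 45 Mbps" (user-entered)
achievable  = 30_000_000  # hypothetical true end-to-end capacity
measured    = 27_000_000  # traffic actually observed

print(round(charted_utilization_pct(measured, provisioned)))  # 60 -- looks comfortable
print(round(charted_utilization_pct(measured, achievable)))   # 90 -- the real story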

There are other commercial and open-source solutions that attempt to measure network throughput via packet flooding. However, many of these propagate the reverse myth: that throughput equals bandwidth (capacity). Of course, a packet flooder can only give you an accurate throughput value when nothing else is running on the network (when is that, again?), and these kinds of solutions tend to really annoy the application owners because they completely fill up the network. But from a pure performance measurement perspective, their throughput results tell you nothing about bandwidth (capacity) – they tell you the throughput of your water pipes, but you have no idea if the diameter is in fact what you’re paying for. The myth goes both ways, unfortunately.

Yet the biggest danger of the bandwidth myth is that you don’t really have an accurate and timely understanding of the true capacity of your service delivery paths. If you operate your network (and support the applications that in turn rely on it) based on the mythical figures produced by your SNMP tool, you may in fact be operating FAR closer to the point of application failure than you realize. Far worse, you may ALREADY be experiencing application failure or other application delivery quality issues, having looked at your bandwidth chart and said, “Well, I’ve got plenty of available capacity, so that’s not it” – when that was PRECISELY the problem, producing the high loss or irregular jitter patterns that made your application delivery suffer. You were the victim of a false negative, which is often the hardest thing to deal with when troubleshooting.

The path-based technology in PathView Cloud is in fact a real-time myth buster. Through a patented methodology that measures the true end-to-end service delivery path, we determine the layer 3 network’s true maximum achievable capacity (bandwidth) and the utilized capacity, and can therefore paint the true picture of available capacity. This works over any IP-based network – be it LAN, WAN, Wi-Fi or satellite – and you can measure true network capacity across third-party networks and even to endpoints you have no access to, cloud-based or otherwise. The best part is that we do this every 60 seconds with such a low touch (around 20 packets per minute) that your applications won’t even know we’re standing guard.

On a complex network it’s pretty rare for bandwidth (capacity) to equal throughput. In fact, I’m pretty sure that if such a network does exist, you’ll find the Easter Bunny on it, having a fully immersive video conference with Santa Claus, each watching the other enjoy their delicious zero-calorie french fries.

“Would it help if I connected to the school’s wireless?” asked our guest speaker. “No, that’s even slower than the open wireless” was the response from students in my class, which is why we spent a third of the Creative Director’s presentation time waiting for his commercials to load. My Advertising, Media and Society class was fortunate enough to have a professional ad executive share his expertise with us, but the presentation was hindered by a slow internet connection.

As a student at a top-ranked business school, I often think in business terms. Waiting on a slow internet connection, then, is opportunity cost. Simply put, my time could be spent more productively elsewhere if only the internet worked like it should. My parents are paying a lot of money for my undergraduate education and I expect the network to work 24/7. I admit that I often take the internet for granted because I grew up with it. Because of this, though, I understand how it should work and then become frustrated when it doesn’t.

My frustrations peak when I am on campus and cannot access my school email. To put things into perspective, students at my school send more emails than text messages; you can imagine how much an 18-22 year old texts (hint: a lot). At my school, it is an unwritten rule that you are expected to send and receive emails at any time. For me, not being able to connect to the internet means that I am out of the loop and thus, falling behind on important communications from professors and colleagues.

The issue of internet connectivity is brought up every year as student government elections roll around, then ignored as the semester progresses and students get wrapped up in group projects, papers and presentations. I was reminded of this fact last night as my Facebook page was bombarded with the request: “Vote for me and I promise to fix the internet!” If only my school’s IT team could experience these problems with the same visibility as the students affected, they could really make a difference.

Think back to February 3rd, 2011.

For most of us, it was a day much like any other. The headlines were full of news about an Egypt in turmoil as it struggled on its eventual march to democracy, and there were massive demonstrations and violence in Yemen.

One seemingly tiny bit of news wasn’t picked up by the mainstream press (this, despite the fact that it will undoubtedly affect billions of us, regardless of where we call home or what form of government we live under): the internet ran out of addresses – or, more specifically, IPv4 addresses.

The Number Resource Organization (NRO) – the group responsible for coordinating the efforts of the five Regional Internet Registries: AfriNIC, APNIC, ARIN, LACNIC and RIPE NCC – issued the following news release on February 3rd, 2011.

That’s a pretty big deal – no more IPv4 addresses means that we’ll all be dealing with IPv6 and its “funny looking” IP addresses (the familiar IPv4 address 192.168.1.1 would look like fe80:0:0:0:0:0:c0a8:101 in IPv6) sooner than we’d probably like. The good news is that the new IPv6 address space is just a bit larger (approximately 340 undecillion addresses) than the previous-generation IPv4 address space – large enough, in fact, that if each and every cell and bacterium in every human being (about 200 trillion total per person) on the planet (about 6.9 billion of us so far) had its own unique “static IPv6 address”, we could repeat that over 44 trillion times before we’d run out of IPv6 addresses.
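For the curious, both the hex groups and the address-space arithmetic are easy to check. A quick sketch (note that the fe80 prefix in the example above is the author’s; only the last two groups, c0a8:101, encode the IPv4 address):

```python
# Each pair of IPv4 octets becomes one 16-bit hex group in IPv6 notation.
def ipv4_to_hex_groups(addr):
    a, b, c, d = (int(x) for x in addr.split("."))
    return f"{(a << 8) | b:x}:{(c << 8) | d:x}"

print(ipv4_to_hex_groups("192.168.1.1"))  # c0a8:101

# Sanity-check the address-space claim using the figures above:
cells_per_person = 200e12   # cells and bacteria per human
people = 6.9e9              # world population
repeats = 2**128 / (cells_per_person * people)
print(f"{repeats:.2e}")     # about 2.47e+14 -- comfortably over 44 trillion
```

So the “over 44 trillion” claim is, if anything, conservative.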

I think we’re OK for awhile, at least when it comes to IP addresses.

However, today, April 1st, 2011, is an even bigger day to remember.

This morning at 1:08 AM EDT, the Internet actually ran out of bandwidth. Using our patented path-based technology to measure all the key internet pipes, we recorded 100% network utilization for the entire public Internet for approximately 2 minutes and 13 seconds. The pipes were 100% utilized at about 1,918,344 Mbps – no more room.

Things cleared up pretty quickly after that – but it happened once, and it’s likely to happen again. Waiting in line for the latest dancing-baby video will soon feel like trying to buy an iPad 2 at your local retailer. You’ll be able to do it, but you’ll just have to be patient.

No doubt more bandwidth will be brought online soon – but only time will tell if supply can keep up with demand.

Happy April!
