AppNeta | The Path


Don’t you hate when service providers refuse to acknowledge the issue is on their end?

It really annoys me when a support team fobs you off with generic answers showing their thinly veiled disinterest.  For this last tale I chose one very close to home for us here at Apparent Networks, because I want to show that we practice what we preach!  We use a hosted VoIP provider that offers multiple servers throughout North America.  We started seeing sporadic voice loss to one particular server, Server X.  When we raised the issue with our provider, they suggested we simply use one of the other servers.  We did, but we wanted to prove a point, and here's how we did it: whenever we saw an issue on Server X, we opened a support case.  At first we received canned responses suggesting a possible issue with our network.

Ok, I thought, they don't know how to read the diagnostic… Let's help!  I proceeded to set up microAppliances across three ISP networks here in Vancouver.  The next time we saw an issue, I forwarded path monitoring and diagnostic data from all three sites showing the same issue at the exact same time.  I provided a brief write-up explaining what it all meant, and this time they listened!  The case was escalated to the senior engineering team straight away, and a big thank-you was sent back for bringing the issue to light!

So there you go; nothing but the straight goods.  Got a question?  Ping support.  We’re here to help.

You can reach me or the rest of the Apparent Networks Support Team at


Part of the support role is to reach out to new customers and provide some general product training – 'onboarding' as we like to call it.  It's not every day that we uncover network issues while providing this training on a client setup, but it happens often enough not to faze me.

The other week I was working with a new client whose network was connected to the internet via a T1.  We were casually adding both dual-ended and single-ended WAN paths to demonstrate the differences when we saw data loss!  Yes, we stumbled across a rogue network issue the client had long suspected, but could never quite prove.  What a perfect demo!  We had a look at the dual-ended path and could clearly see loss on the upstream.  Of course, I was getting asked where this loss was being introduced.  Yep, you guessed it – D.I.A.G.N.O.S.T.I.C.! 
One look at the single-ended path diagnostic told me the gateway was introducing the loss.  We actually had to end the call at this point because the client had another meeting but I took a cheeky look later to see if the issue had been resolved.  Indeed it had; I could clearly see a blip where the microAppliance sequencer lost connectivity while network changes were being made, and zero data loss after connectivity was restored.  PathView – 1,  Data Loss – 0! Zing!
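For the curious, the kind of inference a single-ended diagnostic makes can be sketched in a few lines. The idea: if probe reply rates are healthy hop after hop and then drop sharply at one device, that device is your suspect. This is just an illustrative sketch, not PathView's actual algorithm, and the hop reply rates below are made up for the example:

```python
def localize_loss(per_hop_reply_rates, threshold=0.05):
    """Return the first hop (1-indexed) whose reply rate drops by more
    than `threshold` relative to the previous hop -- the suspect device
    introducing loss. Returns None if no hop shows a significant drop."""
    prev = 1.0
    for hop, rate in enumerate(per_hop_reply_rates, start=1):
        if prev - rate > threshold:
            return hop
        prev = rate
    return None

# Hypothetical per-hop probe reply rates: hops 1-2 are clean, then the
# gateway at hop 3 starts dropping probes.
rates = [0.99, 0.98, 0.80, 0.79]
print(localize_loss(rates))  # -> 3
```

In the real world the per-hop data comes from the path diagnostic itself; here it is hand-fed so the logic is easy to follow.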

Isn't it an awful feeling when you're told something by someone who has a vested interest in you believing them?  Let's face it, who has ever wholeheartedly believed what a salesman told them?  Yeah, there are cries of 'snake oil!' in my head when I'm approached by these types.  So, I'm here to give you the goods.  I work on the Support team here at Apparent Networks, and over the next three days I'll bring you a few tales of performance management woe, and how Apparent Networks solutions came to the rescue… Really!

The first case involves an MSP responsible for a client's local data network. The site consisted of a business cable internet uplink with two LANs onsite: one for data, the other for voice.

The challenge was this: the client's VoIP call quality was suffering, and it was up to the MSP to prove whether the fault lay in the LAN, the ISP, or the VoIP system.  You guessed it – everyone was pointing fingers at each other!  Ok, so how do you tackle this problem?  The client installed PathView and the FlowView Plus switch in the data LAN, as they didn't yet have permission to touch the VoIP LAN.  We set up single-ended and dual-ended paths to one of the public Apparent Networks responders.  Why both?  The dual-ended path gave us a two-way view of UDP packet loss, but it couldn't provide full diagnostics because mid-path devices don't respond to UDP.  Couple this with a single-ended path to supply the diagnostic data – and voila, a complete picture!
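The reason a dual-ended measurement can attribute loss to a direction boils down to this: each end sequence-numbers its own probe stream, so gaps seen by the far end pin loss on the upstream direction, while gaps seen locally pin it on the downstream. A minimal sketch with made-up probe logs (not AppNeta's implementation):

```python
def loss_rate(sent_seqs, received_seqs):
    """Fraction of probes sent that never arrived at the other end."""
    missing = set(sent_seqs) - set(received_seqs)
    return len(missing) / len(sent_seqs)

# Hypothetical probe logs: 10 sequence-numbered probes in each direction.
upstream_sent = list(range(10))            # probes we sent to the responder
upstream_received = [0, 1, 2, 4, 5, 7, 8, 9]  # responder reports these arrived
downstream_sent = list(range(10))          # probes the responder sent back
downstream_received = list(range(10))      # we received all of them

print(loss_rate(upstream_sent, upstream_received))      # 0.2 -> upstream lossy
print(loss_rate(downstream_sent, downstream_received))  # 0.0 -> downstream clean
```

Because each direction is measured independently, "20% loss upstream, 0% downstream" is exactly the kind of directional verdict that ends the finger-pointing.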

What did we find?  We found crazy oversubscription of the link.  Because we used a dual-ended path, we knew upstream was the saturated direction.  Great, step one done!  The next question I'm always asked is 'what is this data, and who is causing it?'  FlowView Plus to the rescue!  We ran a capture, and within minutes we knew we had computers on the network uploading loads of mail to a hosted mail provider!  One firewall rule later and our MSP client was very happy to have restored voice service and fixed a previously unreported slow-internet issue!  It's always nice to uncover problems you didn't know you had!
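The "who is causing it?" step is, at heart, a top-talkers report: sum bytes per source over the flow records and sort. Here's a minimal sketch with fabricated flow records (the IPs and hostnames are invented for illustration; real FlowView data would obviously be richer):

```python
from collections import defaultdict

def top_talkers(flows, n=3):
    """Sum bytes per source IP and return the n heaviest uploaders,
    largest first, as (source_ip, total_bytes) pairs."""
    totals = defaultdict(int)
    for src, dst, nbytes in flows:
        totals[src] += nbytes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical flow records: (source, destination, bytes).
flows = [
    ("10.0.0.5", "mail.example.com", 50_000_000),
    ("10.0.0.5", "mail.example.com", 45_000_000),
    ("10.0.0.9", "www.example.com",   2_000_000),
]
print(top_talkers(flows, 2))
# -> [('10.0.0.5', 95000000), ('10.0.0.9', 2000000)]
```

One glance at a report like this and the offending hosts (here, the mass mail uploader at 10.0.0.5) jump out, which is what makes the firewall-rule fix a two-minute job.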

Ok, stay tuned for The Straight Goods Part II…
