AppNeta | The Path

Archive for the ‘Cloud Performance Center’ Category

“What exactly is going on at that remote office?” Sound familiar? I know this is a question many of you ask yourselves regularly, if not daily.

Few people would debate the value of flow data: deep, near real-time insight into the applications and users consuming resources on your network. Flow analysis has been built into a wide array of network analysis tools, so you have more options than ever to understand your network usage. And flow export no longer drags down the performance of routers and firewalls, because mid-range and high-end hardware is now powerful enough to absorb the load.

But until now, getting flow visibility to and from remote sites has been too expensive and cumbersome. Despite the evident value of flow data, getting this insight into remote offices requires a solution that is:

  • Easy – admins don’t want to manage and maintain another server or software sensors at the remote site.
  • Inexpensive – it needs to be affordable enough to roll out to every office without breaking the bank.
  • Universally available – it needs to work at every office, regardless of which network gear is installed there.

There are solutions that hit one or two of these, but until now all three have been elusive. Today, Apparent Networks is officially integrating FlowView into the PathView Cloud service, bringing all three benefits to network managers in a simple, cloud-based package.

PathView Cloud was built from the ground up to give you real-time performance visibility to and from remote offices. With a cloud-based service enabled by zero-administration microAppliances, organizations with remote sites can finally get true end-to-end performance visibility. But the first thing we hear out of people’s mouths is: “That’s great, but what exactly is going on at that remote office?”

With PathView Cloud and the new FlowView module, organizations have an inexpensive option that not only shows real-time performance to and from all remote offices, but also lets them drill down from those reports to see the users and applications in use at each office.

FlowView differs from most flow analysis systems in that it does not take flow feeds from existing network devices; instead, it creates its own flow records. A Gigabit Ethernet switch-tap, included free with the service, runs at line speed and sends a copy of all network traffic to the PathView Cloud microAppliance. We then analyze that traffic and generate flow records that are securely sent to the cloud service for reporting, and that can also be streamed to a local flow analysis tool for companies that have already invested in one.
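Conceptually, building flow records from a copy of the traffic means grouping packets by the classic 5-tuple and accumulating counters, which are then exported for reporting. Here is a minimal sketch of that aggregation step in Python; the field names and sample packets are illustrative assumptions, not FlowView’s actual internals:

```python
from collections import defaultdict

def build_flows(packets):
    """Aggregate packets into flow records keyed by the classic 5-tuple."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt["src_port"], pkt["dst_port"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)

# Two packets from the same TCP conversation collapse into one flow record.
packets = [
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
     "src_port": 51000, "dst_port": 443, "proto": "TCP", "length": 1500},
    {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
     "src_port": 51000, "dst_port": 443, "proto": "TCP", "length": 400},
]
flows = build_flows(packets)
```

A real collector would also track start and end timestamps and expire idle flows before exporting them, but the grouping idea is the same.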

FlowView was designed to be self-sufficient: it does not require a SPAN or tap port, or network devices capable of generating flow records, at the remote offices. At a total price of $499 (including one year of service, all hardware, training and customer support), FlowView is dramatically more cost effective than other flow-only solutions on the market and, quite simply, it can’t get any easier to use. In the web user interface, a single switch turns on flow analysis; collection, streaming to the cloud through a secure, compressed SSL tunnel, and in-line reporting with real-time performance analytics all happen automatically.

Having issues with performance to or from remote offices? You now have a tailor-made solution to finally answer “exactly what is going on at that remote office?” in an easy, cost-effective and universal package.

Sign up for a free demo, or learn more about FlowView here!

Software-as-a-Service (SaaS) deployments are irresistibly appealing to organizations of all sizes, especially those with multiple sites. From browser-based access, to painless installations and upgrades, to minimal IT overhead, seamless scalability, and lower TCO, SaaS is a smart, cost-effective option.

SaaS differs from “cloud” in that SaaS applications can be hosted inside a company’s firewall as well as remotely. More and more businesses are using SaaS for mission-critical applications like e-mail, financial management, ERP, backup and disaster recovery, HR management, collaboration and CRM. But, by definition, SaaS solutions are accessed over a network – and therefore run into performance challenges, risking service degradation and, all too often, outright failure.

Most SaaS deployments are significantly more sensitive to network performance and availability than traditional, on-premise software or even transaction-based network applications like e-mail. This is equally true for a wide range of network-dependent services not generally considered “SaaS,” such as VoIP, video conferencing and virtual desktop infrastructure.

These mission-critical services depend on high-bandwidth, low-latency networks to deliver an acceptable user experience. Issues such as packet loss, latency and jitter can make today’s performance-dependent networked services fail – abruptly. When calls and video sessions are dropped, backups don’t happen, and business software disappears into the “cloud,” business stops.
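To get a feel for why even modest packet loss is so damaging, the widely used Mathis approximation estimates steady-state TCP throughput from segment size, round-trip time and loss rate. This is a textbook rule of thumb, not a figure from this post, and the 1% loss and 80 ms RTT below are hypothetical inputs:

```python
import math

def mathis_throughput_mbps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. rule of thumb: throughput ~ (MSS/RTT) * 1.22/sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate)) / 1e6

# A typical 1460-byte segment, an 80 ms cross-country RTT, and 1% loss.
cap = mathis_throughput_mbps(1460, 0.080, 0.01)
print(f"TCP is capped near {cap:.1f} Mbps")  # ≈ 1.8 Mbps
```

At those (hypothetical) numbers, TCP stalls well below 2 Mbps regardless of link capacity, which is why a “working” connection can still make VoIP, video and backups unusable.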

The quality of users’ experience is a defining factor in the success of a SaaS deployment. But how do you measure service quality or adherence to SLAs from a remote user’s viewpoint? How do you know when service is degraded or what to do about it? This lack of visibility into the remote user’s experience of hosted services is a major stumbling block preventing organizations, from SMBs to large enterprises, from enjoying the benefits of SaaS.

Apparent Networks’ “State of Cloud Services” survey of network managers, released in March 2010, found that:

  • 50% of cloud consumers have no performance measurement SLAs.
  • 75% of cloud users are unable to measure network performance between their users and the cloud provider.

Remote Performance Management enables remote, real-time monitoring of whatever network(s) your SaaS applications or other IP-based services are running on, whether hosted remotely, in-house, or a combination of both.

With Remote Performance Management you can measure end-to-end service quality, troubleshoot performance degradation and pinpoint network problems. Available as a cloud-based service, it takes minutes to install, is highly scalable and cost-effective for organizations of any size.

Visit www.apparentnetworks.com for more information.

With the growing frenzy over cloud-based services and the recent announcements about managed cloud services, it is clear that we are looking at the future of business technology, one that merges critical business services with the cost efficiency, speed and easy integration of the internet.

But, as one article aptly noted, “The cloud just might be the biggest thing to hit the internet in years, but that doesn’t mean it’s a cinch to keep it running perfectly.”

Companies of all sizes are making the move to the cloud – 20%–35% of end users have already adopted SaaS solutions – whether that means infrastructure in Amazon EC2, SaaS like SalesForce.com, or hosted Exchange. It is critical not only to monitor the availability and performance of those hosted servers, but also to ensure that employees and customers can access them continuously and efficiently.

Let’s take a look at a timely example. If any of you have tried to access your LinkedIn account in the past three days, you know how important it is to manage the performance of your network, web-based applications, and business-dependent websites from any remote location.

Many of us here at Apparent noticed that LinkedIn was experiencing frequent outages, so we decided to take a look at what was going on. Here is what we found.

LinkedIn was apparently expanding its infrastructure with a major new datacenter in Los Angeles. While this should improve website performance in the long term, the network changes and application deployments impacted the end-user experience from a variety of locations – the performance of LinkedIn varied drastically from Boston, to London, to New Jersey and Maine. One interesting detail is that each of these remote sites was accessing LinkedIn.com over a different circuit. And when LinkedIn fixed the AT&T circuit, latency increased 50% from all other locations accessing LinkedIn.com over Verizon.

While we don’t want to single out LinkedIn, it is critical for all organizations to have direct insight into their networks and their web-based applications, from any location and via any cloud-service provider.

As the move to the cloud expands across all business services – from hosted email to CRM to backup and recovery – we are all becoming more and more vulnerable to service degradation and failure, and many companies, like LinkedIn, cannot afford this impact on end-user experience.

Want to learn more about how to monitor your websites and web-based applications like www.salesforce.com, www.netsuite.com, or any internal custom site? Schedule a free 1:1 demo here.

The cloud is coming, the cloud is coming! Well, guess what: if it’s not here now, it’s damn close. And if you think you can avoid it or beat it, you’re dead wrong. As they say, if you can’t beat ’em, join ’em. So what can you do?

1. Accept it. The 90’s economy is gone. No, we aren’t in a recession; we’re in a correction, and the cloud is a perfect example of that. Technology has become dramatically less expensive, the internet works, and the hot stuff – productivity-enhancing technology that was once enjoyed only by the F500 – is now utilized by the SMB. And it’s going to be delivered by the cloud. It all started with VARs becoming MSPs, providing network management and services remotely; now it’s the technology itself that is delivered remotely. No-brainer.

2. Look in the mirror. What do you see? First, don’t sell yourself short. Don’t tell yourself you are simply a VAR providing technology to the SMB, or an MSP providing remote performance management. As critically important as these “technologies” are to your customers, you do more than that. You are the trusted advisor helping their business stay afloat and thrive. You have expertise they don’t have but need. It’s that simple. Translating your IP (intellectual property) into their business needs is the key. So what is your organization’s true value, your core competency – the “something” you provide your customers that no one else can do as well? In the cloud world you’ll need to compete, sell, market, deliver value and exceed customer expectations like never before. To do all that, you need to learn your organization’s strengths and weaknesses like never before, capitalize on those strengths, and turn those weaknesses into more strengths. It all starts in the mirror.

3. Adjust your business model. You will never see a bigger business-model-changing event in your lifetime. Big-ticket items, perpetual licenses, 18% maintenance, etc. – poof! – eventually they all go away. Your new business model will be based on recurring revenue, and you’ll need to adjust virtually every aspect of your business: your cost structure, your sales and marketing model (including your cost of sales), your staff, your accounting practices and more. But it’s not a light switch; this must all be done with a dimmer-switch approach.

4. Protect your customers. At all costs, protect your customers! If you’re a VAR with no or limited recurring-revenue managed services, get into the MSP game ASAP and start signing up your installed base to some form of multi-year managed services.

If you are an MSP, load up every customer with any and all managed services available. As soon as a competitor comes in and signs up one of your customers for even minimal managed services, their footprint can spread like wildfire, and the next thing you know, you’re out.

5. Transform your sales and marketing from an art to a science. The 90’s were nice, weren’t they? Back then you could be completely sales-and-marketing illiterate and still blow the cover off the revenue ball. Now? That’s a different story. You need to build a very low-COS (cost of sales) sales and marketing machine: a tip-of-the-arrow value prop that keeps your COS low, pricing and packaging that promote value, scarce resources focused on target markets you know extremely well, a sales and marketing team whose middle name is “accountability,” a lead-generation process that drives a quantity of high-quality leads, and a self-qualifying sales process managed by simple MTM (metrics that matter) and RTM (reports that matter).

As companies turn to the cloud as a low-cost, highly scalable and agile way to save on infrastructure or to host new applications, a number of cloud performance testing and monitoring solutions are appearing. Applications are popping up to perform load testing against your cloud infrastructure, some of which are even developed and hosted by the providers themselves, such as Grinder in the Cloud and Pylot, as well as a number of commercial services.

While these tools are needed and can be quite valuable, they all miss the most critical aspect of cloud performance: the physical distance between your users and the cloud infrastructure.

The majority of applications run from the cloud run on top of TCP, and because TCP performance is location dependent, the physical distance between your users and the cloud datacenter has a dramatic impact on the maximum achievable performance you will see from the cloud. As this example shows, the latency of a network connection can matter more than its raw capacity for end-to-end performance – so much so that a 1 Gbps WAN connection from New York to Chicago with 30 ms of latency has a maximum achievable throughput of 17.4 Mbps. That distance between New York and Chicago just cost you 98% of the capacity you are paying for. Finding out where a provider’s datacenters are should be one of the critical questions answered before signing on with any cloud provider.
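The 17.4 Mbps figure follows from TCP’s windowing: a sender can have at most one window of unacknowledged data in flight per round trip, so maximum throughput is window size divided by RTT, whatever the link speed. A quick check, assuming the classic 64 KB window of a TCP stack without window scaling:

```python
# Max TCP throughput is capped at window size / round-trip time,
# no matter how fast the underlying link is.
def max_tcp_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6

window = 65535   # classic TCP receive window without window scaling
rtt = 0.030      # ~30 ms New York <-> Chicago
mbps = max_tcp_throughput_mbps(window, rtt)
print(f"{mbps:.1f} Mbps of a 1000 Mbps link")  # ≈ 17.5 Mbps, about 2%
```

Window scaling raises the ceiling by allowing larger windows, but the window/RTT bound itself never goes away, which is why datacenter distance remains a first-order question.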

There is now a way to see the performance to each of these cloud providers, both in terms of maximum achievable performance and performance from major cities around North America. The Cloud Performance Center measures performance from 10 major cities to leading cloud providers, providing real-time information including total and available bandwidth, latency, packet loss and round-trip time. All of this information is available free, and you can subscribe to be notified of performance issues for any provider you are interested in.

The low cost and high performance of the cloud is changing the way IT is delivered, and now you are armed with the information needed to choose the right cloud provider for you. Check out how your provider is performing at www.cloudperformancecenter.com.

