AppNeta | The Path

Posts Tagged ‘Cloud Computing’

Organizations making the move to cloud services are well aware of the many benefits like lower startup costs, lower total cost of ownership and on-demand scalability. But many companies fail to take a hard enough look at the corresponding challenges associated with application availability and performance.

For many cloud services deployments, network performance will be the key to application performance. Why? Because every business service accessed over cloud infrastructure is by definition a remote, network-dependent service. Every user becomes a remote user and every office, even the corporate headquarters, becomes a remote office.

Even services-based applications will stumble, freeze and eventually disconnect users when network bandwidth falls below acceptable levels, or when jitter, latency or packet loss exceed acceptable thresholds. Many of the applications being moved to cloud infrastructure were not originally designed for remote access, making them even more susceptible to changes in network performance.
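
To make that concrete, the kind of threshold check a monitoring tool might apply can be sketched as follows; the metric names and limits here are purely illustrative, not AppNeta's actual thresholds:

```python
# Hypothetical sketch: flag a path whose measured metrics cross acceptable limits.
# Threshold values are illustrative only, not vendor recommendations.
THRESHOLDS = {
    "available_bandwidth_mbps": ("min", 10.0),   # below this, apps starve
    "latency_ms":               ("max", 150.0),  # above this, interactivity suffers
    "jitter_ms":                ("max", 30.0),   # above this, voice/video degrade
    "packet_loss_pct":          ("max", 1.0),    # above this, TCP throughput collapses
}

def violations(sample: dict) -> list[str]:
    """Return the metrics in `sample` that violate the thresholds above."""
    problems = []
    for metric, (kind, limit) in THRESHOLDS.items():
        value = sample.get(metric)
        if value is None:
            continue
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            problems.append(f"{metric}={value} (limit {kind} {limit})")
    return problems

print(violations({"latency_ms": 210.0, "jitter_ms": 12.0, "packet_loss_pct": 2.5}))
```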

In short: a successful cloud services deployment depends on your ability to guarantee application performance from the user’s perspective. That means you need to manage network performance.

What are the key steps for managing network performance for cloud service deployments?

One: Network assessment

The first step in ensuring a successful cloud services deployment is to perform a comprehensive baseline assessment of network performance. When rolling out cloud services – or other IP-based services like VoIP, video conferencing, virtual desktop infrastructure (VDI) or IP storage – many organizations underestimate the operational and business risk associated with unplanned network impacts.

A network assessment is the only way to accurately know the scope of the project and its costs. It’s also the only way to know if your network is up to the task of carrying the extra traffic.
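
To give a sense of what a baseline looks like in practice, here is a minimal sketch that samples round-trip latency to a placeholder target and summarizes it; a full assessment would also cover loss, jitter and achievable bandwidth:

```python
import socket, statistics, time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Measure one TCP connect round trip to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def baseline(host: str, samples: int = 20, interval_s: float = 1.0) -> dict:
    """Collect a simple latency baseline: min / median / p95 over `samples` probes."""
    rtts = []
    for _ in range(samples):
        try:
            rtts.append(tcp_rtt_ms(host))
        except OSError:
            rtts.append(float("inf"))  # treat failed probes as unusable samples
        time.sleep(interval_s)
    good = sorted(r for r in rtts if r != float("inf"))
    return {
        "samples": len(rtts),
        "failures": len(rtts) - len(good),
        "min_ms": good[0] if good else None,
        "median_ms": statistics.median(good) if good else None,
        "p95_ms": good[int(0.95 * (len(good) - 1))] if good else None,
    }

print(baseline("example.com"))  # placeholder target; point this at your provider's endpoint
```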

Two: Continuous monitoring

Users need the performance of cloud-based services to be at least as good as what their current infrastructure delivers. To guarantee acceptable performance, you need real-time awareness of the ever-changing status of the networks connecting remote users to applications. Latency, jitter and other key network performance metrics can fluctuate continuously in response to changes in traffic and other factors.

Continuous monitoring capability is also required to give you real-time visibility into service levels. This is the basis for troubleshooting network problems, as well as for ensuring that cloud and other third-party providers are meeting their contractual commitments.

Further, the physical distance that data must travel between the cloud and the user has a huge impact on performance, especially for TCP-based applications. This distance varies among public cloud providers, making monitoring performance over time an important step in evaluating cloud hosting options.
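
One way to see why distance and loss matter so much for TCP is the well-known Mathis approximation, which bounds steady-state TCP throughput by segment size, round-trip time and loss rate. A quick sketch with illustrative numbers:

```python
import math

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation: throughput is roughly bounded by MSS / (RTT * sqrt(p))."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = mss_bytes / (rtt_s * math.sqrt(loss_rate))
    return bytes_per_s * 8 / 1_000_000  # convert to megabits per second

# Same 0.1% loss, different distances to the cloud provider (RTTs are illustrative):
for rtt in (10, 40, 120):  # ms: metro, coast-to-coast, intercontinental
    print(f"RTT {rtt:>3} ms -> ~{tcp_throughput_mbps(1460, rtt, 0.001):.1f} Mbps per flow")
```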

Three: Proactive troubleshooting

Today’s cloud-based services are business-critical, and ensuring their availability is vital. Traditional break-fix approaches to network management are not adequate to meet many of today’s SLAs.

Network engineers therefore need a way to pinpoint the exact cause and location of performance degradation, even within the virtual network and servers making up the cloud. This makes it possible to proactively address network issues before they impact users. Likewise, if cloud services crash and a plethora of service providers start finger-pointing, you need to know, quickly and decisively, who is responsible for what.
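
As a rough illustration of why hop-by-hop visibility matters, the sketch below probes a hypothetical list of hops along a path and flags where added delay jumps; a real tool discovers the path itself and measures far more than ICMP round trips:

```python
import subprocess, re

# Hypothetical hop list for a branch-office-to-cloud path; a real tool derives this automatically.
HOPS = ["192.0.2.1", "198.51.100.1", "203.0.113.10", "app.example.com"]

def ping_avg_ms(host: str, count: int = 5) -> float | None:
    """Average ICMP round trip to `host`, parsed from the system ping summary (Linux/macOS format)."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    m = re.search(r"= [\d.]+/([\d.]+)/", out)  # captures the avg field of min/avg/max
    return float(m.group(1)) if m else None

previous = 0.0
for hop in HOPS:
    avg = ping_avg_ms(hop)
    if avg is None:
        print(f"{hop:<20} unreachable  <-- likely trouble spot (or ICMP filtered)")
        continue
    jump = avg - previous
    flag = "  <-- large added delay" if jump > 50 else ""
    print(f"{hop:<20} {avg:6.1f} ms (+{jump:.1f} ms){flag}")
    previous = avg
```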

Maximizing the business benefits of cloud deployments

How can organizations ensure that the performance of their network enables them to derive maximum benefits from cloud computing? AppNeta’s PathView Cloud technology enables you to guarantee application performance across all cloud, data center, remote office and mobile environments today. In addition to providing unmatched insight into network performance from the perspective of remote users, AppNeta’s cloud-based PathView Cloud service delivers immediate time-to-value, affordable licensing and on-demand scalability.

Want to learn more? Join AppNeta on Thursday, September 8th, 2pm ET for the Preparing Your Enterprise Network for Cloud Services Deployment Webinar, or start your own free trial today!


Are you considering moving business-critical applications like e-mail, financial management, backup/recovery and CRM to the cloud? Or maybe you’re already on your way? 

It’s clear that cloud computing has arrived, and businesses of all sizes are taking advantage of it even faster than initially predicted. Gartner rates it as the top strategic technology for 2011, and surveys from MarketBridge, IBM and many others show that the great majority of small to midsize businesses (SMBs) plan to move key applications to the cloud in 2011. Cost savings, improved security and greater scalability are just a few of the reasons why transitioning to the cloud is appealing to so many organizations.

But the cloud is not a cure-all. While providing all of these benefits, it does not eliminate your application performance challenges! Cloud-based application delivery depends entirely on high-bandwidth, low-latency networks. When those networks experience packet loss, jitter or excessive latency, these services will degrade rapidly and fail abruptly.

Isn’t that your cloud provider’s problem? Probably not. In most cases the provider is not responsible for anything beyond their data center and network. They are not obliged to monitor the delivery of hosted services end-to-end to your users, across networks they don’t control.

Traditional network management tools for monitoring data center infrastructure cannot measure end-to-end performance of distributed, IP-based services.

For example: most cloud-based applications run on TCP, whose performance is highly location-dependent. The distance that packets of data must traverse between your cloud provider and your users makes a significant contribution to overall performance.
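
Another way to quantify that location dependence: a single TCP connection can only fill a link if its window is at least the bandwidth-delay product, so longer round trips demand proportionally larger windows. The numbers below are illustrative:

```python
def required_window_kb(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    bits_in_flight = bandwidth_mbps * 1_000_000 * (rtt_ms / 1000.0)
    return bits_in_flight / 8 / 1024  # kilobytes

for rtt in (10, 40, 120):  # ms: nearby, coast-to-coast, intercontinental provider (illustrative)
    print(f"100 Mbps at {rtt:>3} ms RTT needs a ~{required_window_kb(100, rtt):.0f} KB window")
```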

This is also one reason why performance management is entirely relative to location. To understand what level of service your distributed users are experiencing, and what to do if it’s degraded, you need the perspective of performance at and from that location.

Remote Performance Management lets you remotely and continuously monitor whatever network infrastructure (WAN, remote LAN, VPN and/or web-based) is supporting the IP-based applications your users are consuming, whether those services are hosted in the cloud, in your data center, or a combination of both.

With Remote Performance Management you can accurately measure packet loss, jitter, latency and available bandwidth hop-to-hop and end-to-end, so you can monitor service quality and identify network problems before they impact your end users! Delivered as a cloud-based service, the PathView Cloud service can be installed in minutes, uses almost no network bandwidth, and is scalable, secure and cost-effective for SMBs and enterprises alike.

To learn how your organization can remotely manage and monitor its cloud-based services from any remote location, visit www.apparentnetworks.com.

The siren song of “The Cloud” draws the over-worked and under-appreciated I.T. leaders from around the globe thanks to the phenomenal, inherent benefits of scale (both up and down), cost and flexibility.  Yet, despite the obvious appeal, the practical realities of meshing internal, production-level networks and their related management systems with services offered from the public cloud are daunting to most — and feel completely out of reach to many.

Enter the hybrid cloud.

Ok, I can almost hear the “whatevers…” coming through my iPad’s speaker as I write this post – but this is real. Similar to hybrid automobiles, hybrid cloud architectures tie together two different “engines” for one common result. In the case of hybrid cloud, it’s a matter of intelligently and securely combining on-premise solutions with services that run in the public cloud. This integration enables the local context and perspective of your own private network to be centrally managed from anywhere, anytime, without having to locally deploy complex and expensive management infrastructure.

PathView Cloud is one example of a successful hybrid cloud model. How so? From day one, it has leveraged a zero-remote-administration appliance that sits at your location and knows how to securely “phone home” to the public cloud-based, distributed, multi-tenant back end. Our first PathView appliance – the PathView microAppliance – has already exceeded our wildest dreams, with over 3000 of these tiny wonders serving several hundred of our customers’ daily remote performance management needs. The microAppliance’s zero remote administration, very small size, super-low power consumption and unmatched performance have clearly resonated with our customer base.

Building on the same core elements that have made the microAppliance a hit, today we’re announcing the immediate availability of the microAppliance’s big brother – the PathView rackAppliance. Take a look at how they compare here: http://www.apparentnetworks.com/products/pathview-appliances/

The rackAppliance takes all of the goodness of the microAppliance and steps things up a notch by offering even higher capacity and scale, along with dedicated built-in Gigabit Ethernet ports (complete with full auto-bypass!) for our FlowView Plus packet capture add-on module. This means that a single PathView appliance can run all of our integrated modules (PathView, AppView and FlowView Plus) and enable end-to-end Layer 1 through Layer 7 network performance visibility without the need for any external taps or access to span/mirror ports. In fact, there is no re-configuration of your existing network required at all – simply decide where in the network you want visibility and the rackAppliance can drop right in. With the built-in auto-bypass functionality, should the rackAppliance be powered off or suffer any other unplanned interruption (power, hardware or software related…), it automatically removes itself from the network path and your network behaves as if the rackAppliance were never there. Of course, you’ve probably already figured out that all this goodness comes in a standard 19″ wide, rack-mount form factor. It’s not “micro” but it’s not very big either – it’s 1″ high, 11.5″ deep and consumes about 45 watts of power on average (psssst, it’s not really about the hardware…).

So, like your favorite hybrid automobile, the PathView Cloud service (as delivered via a hybrid cloud architecture) will net you far more mileage per I.T. dollar spent and it is greener too.

We are all witnessing a dramatic shift from traditional network infrastructures to web-based, cloud-hosted services. A rapidly growing number of organizations are taking the leap: to hosted email, disaster recovery, Exchange, and CRM. By leveraging economies of scale, cloud-based services can offer significant cost savings, versatile capabilities and low maintenance compared to traditional on-premise solutions.

With the growing number of IT services that can now be processed and delivered via the cloud, there is also an increased sense of insecurity within organizations looking to make the move. The lack of visibility into cloud services and the infrastructure they run on creates a level of risk that many organizations are not ready to manage. However, this also creates a particular niche for service providers and IT resellers who can lead these organizations through the transition, and make a profit in the process! Traditional IT outsourcers and managed service providers are in a unique position to fill this requirement. MSPs have been serving as IT experts and advisors for small to medium-sized businesses for some time. And because many companies lack traditional IT departments, they need a cloud services expert to guide the transition to cloud-based services by addressing risk and assuring performance.

So, how do you become a provider of remote cloud service performance management?

  1. You need to have the right solutions and capabilities in place: A solution that can remotely manage WAN performance of your customers’ infrastructure is vital to creating a scalable managed services business. Another key feature to look for in a solution is the capability to remotely manage a multitude of customer sites from any location.
  2. You need to have a pricing model and service structure: Organizations looking to utilize cloud services do so because of the ease of use and low cost of these services. To properly price a cloud assurance service, you should follow these same guidelines. A simple and scalable pricing model that provides a fixed annual cost to your customer works best. We recommend you create a user-based pricing model; many organizations are accustomed to paying for services on a per-user basis (e.g. software licensing, phone lines, bandwidth). The service structure can be an added line item for continuous monitoring with bundled service hours, or an ‘all you can eat’ service contract (which many MSPs and end users are moving towards).
  3. You need to beat out your competition: This is all happening NOW. Organizations are starting to research, re-budget and implement cloud-based services while simultaneously moving away from traditional on-premise devices and services. You need to bring your service to them first to get the new business and recurring revenue.

Please visit http://www.apparentnetworks.com/partners/build-your-business-with-cloudsmart/ to learn more about implementing a service around Remote Performance Management, or email partners@apparentnetworks.com to set up a call to discuss how to start to implement this service into your business.

It is a sign of the times that I need to clearly define the term “cloud services” if I am going to use it as an entry point to this blog. And since I wouldn’t dare assert my position to be expert enough to properly define this term (any attempt would surely bog down this entire effort), I will turn to the main sources of knowledge of our time…

If I type “Cloud services” into Google, the top response is of course a link to Wikipedia.  The Wikipedia search for “cloud services” gets redirected to “cloud computing” which is defined as:

Web-based processing, whereby shared resources, software, and information are provided to computers and other devices (such as smartphones) on demand over the Internet.

It is nice to see my opening premise is not far off the mark. Simply put, a cloud service is a web-based service that is delivered from a datacenter somewhere, whether on the internet or in a private datacenter, to “computers.” For now, let’s leave the definition of an endpoint alone. I know that is a big reach, but this is my blog, and it really isn’t the point. The point is that all of these services are generally delivered from a small number of centralized datacenters and consumed at some relatively large number of remote offices.

That is where things get interesting.

If we lived in a world where email and simple web page delivery were the state of the art, well, I wouldn’t have anything to write about, but we don’t. The mainstream services that are being deployed in education, government, and enterprise accounts are ushering in a completely new level of performance requirements on the networks they depend upon. Voice over IP (VoIP), video conferencing, IP-based storage systems for file sharing, backup, and disaster recovery, and recently the deployment of virtual desktop services all bring with them new performance requirements. Yes, that means more bandwidth, but that is just the tip of the iceberg. All of these applications also have very real requirements on critical network parameters such as (packet) loss, end-to-end latency, and jitter. Unlike simple transaction and messaging applications like HTTP delivery and email, when these new “performance sensitive” applications run into unacceptable loss, latency, and jitter, the result is application failure: dropped calls and video sessions, failed storage services including backup and recovery, and “blue-screens” where virtual desktop sessions belong. What causes seemingly healthy networks to suffer from latency, loss, and jitter issues? More on that in a later blog…
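
To make “jitter” concrete: real-time protocols such as RTP estimate it as a smoothed average of the variation in packet transit times, roughly as RFC 3550 describes. A simplified sketch with made-up transit times:

```python
def interarrival_jitter(transit_times_ms: list[float]) -> float:
    """Simplified RFC 3550-style jitter: smoothed mean of |transit-time difference|."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16.0   # exponential smoothing with gain 1/16, per the RFC
    return jitter

# Hypothetical one-way transit times (ms) for a stream of voice packets:
steady   = [40.0, 40.2, 39.9, 40.1, 40.0, 40.3]
unstable = [40.0, 55.0, 38.0, 70.0, 41.0, 90.0]
print(f"steady path jitter   ~{interarrival_jitter(steady):.2f} ms")
print(f"unstable path jitter ~{interarrival_jitter(unstable):.2f} ms")
```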

Successful cloud service delivery to remote sites is dependent on managing performance at that remote site.  Not datacenter application performance, or server performance, or network device performance.  Service level performance analysis from a remote site is a new topic, and we call it Remote Performance Management or RPM.

Let’s start with the basics: what do we know about RPM?

First, RPM is a location-dependent topic. Of course, the traditional datacenter performance management issues need to be dealt with. That is part of datacenter service delivery 101. No debate. But if we care about the service quality that the users are experiencing, then we need to understand performance from the perspective of the end user, at the remote site.

Next, we need to address the complete performance management lifecycle. Simply put: Assess the remote office performance PRIOR to service deployment; Monitor the remote office performance DURING service operations; Troubleshoot issues QUICKLY (like you’re there); and Report on the good, the bad, and the ugly. When you add it all up, you need a broad set of capabilities to meet these needs.
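
As a rough skeleton of how those lifecycle stages fit together (assess a baseline, monitor against it, escalate to troubleshooting, and report), consider the sketch below; the measurement functions and the site name are placeholders, not a real product API:

```python
import time

def assess(site: str) -> dict:
    """Pre-deployment baseline for a remote site (placeholder numbers, not real measurements)."""
    return {"latency_ms": 42.0, "jitter_ms": 3.0, "loss_pct": 0.1}

def measure(site: str) -> dict:
    """One live measurement cycle (placeholder numbers, not real measurements)."""
    return {"latency_ms": 95.0, "jitter_ms": 18.0, "loss_pct": 0.8}

def monitor(site: str, cycles: int = 3, tolerance: float = 2.0, interval_s: float = 1.0):
    """Assess once, then repeatedly compare live metrics against the baseline and report."""
    baseline = assess(site)
    for _ in range(cycles):                      # a real monitor runs indefinitely
        current = measure(site)
        degraded = [m for m, v in current.items() if v > tolerance * baseline[m]]
        if degraded:
            print(f"{site}: troubleshoot now, degraded metrics: {degraded}")
        print(f"{site}: report baseline={baseline} current={current}")
        time.sleep(interval_s)

monitor("branch-office-42")                      # hypothetical site name
```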

Finally, we need to keep it simple, affordable, and scalable.  The problem with most solutions around the remote office is not the device cost, but rather the administrative cost.

The bottom line is that if you are attempting to deliver today’s critical services for remote site consumption, you need to understand performance, so you’d better check your RPMs…….

The cloud is coming, the cloud is coming!!! Well guess what, if it’s not here now, it’s damn close. And if you think you can avoid it or beat it you’re dead wrong. As they say, if you can’t beat ‘em join ‘em. So what can you do?

1. Accept it. The 90’s economy is gone. No, we aren’t in a recession, we’re in a correction, and the cloud is a perfect example of that. Technology has become dramatically less expensive, the internet works reliably, and hot stuff – productivity-enhancing technology that was once enjoyed only by the F500 – is now utilized by the SMB. And it’s going to be delivered by the cloud. It all started with VARs becoming MSPs providing network management and services remotely; now it’s the technology itself that is delivered remotely. No-brainer.

2. Look in the mirror. What do you see? First, please don’t sell yourself short. Don’t tell yourself you are simply a VAR providing technology to the SMB or an MSP providing technical Remote Performance Management. As critically important as these “technologies” are to your customer, you do more than that. You are the trusted advisor helping your customers’ businesses stay afloat and thrive. You have expertise they don’t have but need. It’s that simple. Translating your IP (Intellectual Property) to their business needs is the key. So what is your organization’s true value, your true core competency, the “something” you provide your customers that no one else can deliver as well as you can? In the cloud world you’ll need to learn how to compete like never before, sell and market like never before, deliver value like never before and exceed customer expectations like never before. To do all that, you need to learn your organization’s strengths and weaknesses like never before and capitalize on those strengths; turn those weaknesses into more strengths. It all starts in the mirror.

3. Adjust your business model. You will never see a more business-model-changing event in your life. Big-ticket items, perpetual licenses, 18% maintenance, etc. – poof! Eventually they all go away. Your new business model will be recurring-revenue based, and you’ll need to morph and adjust virtually every aspect of your business, from your cost structure, to your sales and marketing model including your COS (cost of sales), to your staff and your accounting practices. But it’s not a light switch; this must all be done with a dimmer-switch approach.

4. Protect your customers. At all costs, protect your customers! If you’re a VAR with no or limited recurring-revenue managed services, get into the MSP game ASAP, and start signing up your installed base to some form of multi-year managed services.

If you are an MSP, load up every customer with any and all managed services that are available. As soon as your competitor comes in and signs up one of your customers with even minimal managed services, their footprint can spread like wildfire and the next thing you know, you’re out!

5. Transform your sales and marketing from an art to a science. The 90’s were nice, weren’t they? Back then you could be completely sales and marketing illiterate and still blow the cover off the revenue ball. Now? Well, that’s a different story. You need to create a very low-COS sales and marketing machine: start with a tip-of-the-arrow value prop that keeps your COS low, pricing and packaging that promote value, scarce resources focused on selected target markets you know extremely well, a sales and marketing team whose middle name is “accountability”, a lead generation process that drives a quantity of high-quality leads, and a self-qualifying sales process managed by simple MTM (Metrics-that-Matter) and RTM (Reports-that-Matter).

If you’re suffering from slow application performance, chances are you’re suffering from some degree of latency. As everyone knows, each computer has its own performance limits. With one too many applications running, lag results from the computer’s inability to process all of its inputs. If the application relies on connectivity with some type of remote device, such as a back-up service, an Exchange e-mail server or even just video chatting with a colleague over the internet, the performance disruption or failure is a result of the network rather than the hardware.

The latency on your network defines the minimum wait time before the person or service on the other end receives the packets you send. For a connection between New York and Los Angeles, the minimum possible latency is typically 40ms. In many cases, network traffic and misconfigurations can dramatically increase this time.
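
That 40 ms figure follows from physics: light in fiber travels at roughly two-thirds of its speed in a vacuum, and the New York to Los Angeles fiber path is on the order of 4,000 km each way, so even a perfect network cannot do much better. A back-of-the-envelope check (the distance and fiber factor are approximations):

```python
SPEED_OF_LIGHT_KM_S = 300_000
FIBER_FACTOR = 0.66               # light in fiber travels at roughly 2/3 of c
path_km = 4_000                   # approximate NY-to-LA fiber route, one way

one_way_ms = path_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000
print(f"one-way propagation ~{one_way_ms:.0f} ms, round trip ~{2 * one_way_ms:.0f} ms")
# ~20 ms one way, ~40 ms round trip, before any queuing or processing delay is added
```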

However, when it comes to cloud-based applications, such as the CRM application we use here at Apparent Networks, both of these factors come into play, and response time climbs sharply as they suffer performance losses. The local client for cloud-based applications is usually a thin client, or even a web-browser client, that only relays inputs to the actual application running on a machine in the cloud. In this case, the response time is the sum of the time for a signal to reach the server, the processing time for the server to create a response, and the time for the response to travel back to the client machine (for the NY-to-LA example, the network portion alone is at least the 40 ms round trip).

This increase in response time is exacerbated when the application is an ‘on-demand’, or live, application that requires packets to be sent upstream and then received downstream whenever an action is taken. An example of this is a virtualized desktop. Because even the smallest action requires data to be sent, each individual action is subject to the round-trip latency of your network.
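
The cost compounds quickly for chatty, per-action applications like a virtual desktop session. A rough illustration with made-up numbers:

```python
rtt_ms = 40          # coast-to-coast round trip from the earlier example
actions = 200        # hypothetical count of screen updates / keystroke echoes in one task

network_wait_s = actions * rtt_ms / 1000
print(f"{actions} round trips at {rtt_ms} ms each = {network_wait_s:.0f} s spent waiting on the network")
# 200 x 40 ms = 8 seconds of pure propagation delay, before any server or client processing
```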

Latency is a problem, and one that we can’t afford to ignore. So how do we ensure that latency stays at a minimum? For managed devices and services, optimization is up to the service provider. For localized devices and services, there are a plethora of tools that can determine and alert you on service quality drops. However, in an increasingly network-dependent world, how do we detect and address increases in response time on our carrier networks and externally managed services?

Without a network monitoring tool that can determine exactly what the problem is and pinpoint where on the network it is happening, troubleshooting is virtually impossible. If the performance issue is occurring because of a lack of bandwidth, searching for lagging devices will not help. If the latency is occurring across the WAN, you have no power to make optimizations without proof. PathView Cloud is one tool that manages network performance, including latency, up the path to remote applications and back to the source to detect real-time changes in network performance. Learn more about PathView Cloud’s reporting capabilities here!

