AppNeta | The Path


I am sure that question keeps you awake at night.  For some of us, getting the most from 10G-connected hosts is a hot item on the agenda.  And, it is not entirely clear yet what we can reasonably expect.

Back when 1G was just becoming available for end-hosts, two things seemed certain: the NICs were expensive and filling the pipe was highly unlikely.  Certainly 1G was faster than Fast Ethernet – you could easily get 300-400 Mbps – which was a definite improvement.  However, in general the end-hosts were not able to put enough packets on the wire to use the full capacity.  Often it was the CPU or the size of the bus that was the capacity limiter.  Sometimes it was simply that the drivers were not sufficiently mature.  

Now that 1 Gigabit is mainstream, everybody and their laptop has a 1G NIC – probably a Broadcom.  They generally work well and, with a bit of tweaking, they can typically fill an end-to-end 1G path.

With 10G though, we are back where we were with 1G some years ago.  Many things are the same – but many things are different too…

For starters, a number of performance optimization mechanisms have become quite commonplace in the typical 10G NIC.  Interrupt coalescence is a good example.  When a packet arrives, it is held briefly in case others follow right after.  Once enough packets have arrived, or enough time has passed, a single interrupt is generated – instead of one for each packet.  This reduces the load on the CPU and bus in cases where very large flows would otherwise generate a storm of interrupts.
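
On Linux hosts, these coalescing parameters are usually inspected and tuned with ethtool.  A minimal sketch of that, assuming a Linux host with ethtool available and root privileges, is below; the interface name eth0 and the specific values are placeholders, and which parameters a given driver actually honours varies.

```python
import subprocess

def show_coalescing(ifname: str) -> str:
    # "ethtool -c <if>" prints the driver's current interrupt-coalescing settings.
    return subprocess.run(["ethtool", "-c", ifname],
                          capture_output=True, text=True, check=True).stdout

def set_coalescing(ifname: str, rx_usecs: int = 50, rx_frames: int = 64) -> None:
    # "ethtool -C" asks the driver to raise one interrupt per rx_frames packets,
    # or after rx_usecs microseconds, whichever comes first.
    # The values here are illustrative, not recommendations.
    subprocess.run(["ethtool", "-C", ifname,
                    "rx-usecs", str(rx_usecs),
                    "rx-frames", str(rx_frames)], check=True)

if __name__ == "__main__":
    print(show_coalescing("eth0"))  # eth0 is a placeholder interface name
```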

Another example is segmentation offload.  It also helps decrease the load on the CPU by transferring large amounts of outbound data straight to the NIC, where the data is subsequently broken down into chunks of the appropriate size to be sent as packets.  Traditionally the CPU has been responsible for this segmentation – so offloading it to the NIC makes the transmission more efficient overall.  This mechanism is sometimes referred to as “large send offload” (LSO), as “TCP segmentation offload” (TSO) when applied specifically to the TCP stack, or as “generic segmentation offload” (GSO) when used for all IP packets.

While LSO and TSO are quite common in 10G NICs, “large receive offload” or LRO is less typical, although it is starting to be offered as well.
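
Again assuming a Linux host with ethtool (and with eth0 as a placeholder interface name), a rough sketch of how these offloads are typically listed and toggled looks like this; exactly which features a particular NIC and driver expose will differ, and the on/off choices below are only an example.

```python
import subprocess

# Example settings only: send-side segmentation offloads on, LRO off.
# The right mix depends on the NIC, driver, and workload.
OFFLOADS = {"tso": "on", "gso": "on", "gro": "on", "lro": "off"}

def show_offloads(ifname: str) -> str:
    # "ethtool -k <if>" lists the offload features the driver supports
    # and whether each one is currently enabled.
    return subprocess.run(["ethtool", "-k", ifname],
                          capture_output=True, text=True, check=True).stdout

def set_offloads(ifname: str, settings: dict = OFFLOADS) -> None:
    # "ethtool -K <if> <feature> on|off ..." toggles the features.
    args = ["ethtool", "-K", ifname]
    for feature, state in settings.items():
        args += [feature, state]
    subprocess.run(args, check=True)
```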

Jumbo packets (or rather frames) are quite often part of the 10G picture too.  “Jumbo” refers to a maximum transmission unit (MTU) larger than the standard Ethernet maximum of 1500 bytes.  Jumbo sizes are not standardized – so different NICs have different maxima, typically ranging from about 8,000 to 16,000 bytes.  By convention though, 9000 bytes is often used when a network is designed for jumbo packets.  The benefit of jumbo is that there are fewer IP packets to handle because each payload is so much larger.  This reduces the stress on the NIC and on mid-path devices that inspect IP headers.  The requirement for jumbo use is that the MTU has to be at least that large along the entire end-to-end path.
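
For a quick sense of where an interface actually sits, here is a tiny sketch, assuming a Linux host, that reads the configured MTU from sysfs and flags it against the 9000-byte jumbo convention mentioned above; the interface name is a placeholder.

```python
from pathlib import Path

JUMBO_MTU = 9000  # the conventional jumbo size discussed above

def read_mtu(ifname: str) -> int:
    # On Linux the configured MTU is exposed at /sys/class/net/<ifname>/mtu
    return int(Path(f"/sys/class/net/{ifname}/mtu").read_text())

if __name__ == "__main__":
    for ifname in ("eth0",):  # placeholder interface name
        mtu = read_mtu(ifname)
        print(f"{ifname}: MTU {mtu} ({'jumbo' if mtu >= JUMBO_MTU else 'standard'})")
```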

Jumbo packets have been around since 1G – but they haven’t been as broadly used, simply because using them requires that all the mid-path networks support them as well.   In addition, most 1G NICs today can reach full capacity using just the smaller 1500-byte packets.  So 1G never really needed the extra benefit of jumbo.

10G on the other hand needs all the help it can get.  And since the prospect of jumbo has been around since 1G, network engineers are ready to work with the larger packets.  It is doubtful that there exists a 10G interface anywhere in the typical 10G mid-path that cannot support jumbo.  So jumbo will very likely be a consideration in a 10G end-to-end path.
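
One common way to sanity-check that end-to-end requirement, sketched here under the assumption of a Linux host with the standard iputils ping, is to send a full-size probe with fragmentation prohibited: if any hop has a smaller MTU, the probe fails.  An 8972-byte payload plus 8 bytes of ICMP header and 20 bytes of IP header adds up to a 9000-byte packet; the target hostname is a placeholder.

```python
import subprocess

def path_supports_jumbo(host: str, mtu: int = 9000) -> bool:
    # Send one ICMP echo with the "don't fragment" bit set (-M do).
    # Payload size = desired MTU minus 20 bytes of IP header and 8 bytes of ICMP header.
    payload = mtu - 28
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload), "-c", "1", host],
        capture_output=True, text=True)
    return result.returncode == 0

if __name__ == "__main__":
    print(path_supports_jumbo("remote-10g-host.example.com"))  # placeholder host
```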

Finally, with all of these mechanisms and features in place, the NIC and driver need to be implemented with extra-large buffers to hold all of the data that is being operated on.   It is not unusual to find that default settings are much too low for efficient 10G operation.  So those have to be built out as well.  Not complicated but not something to overlook.
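
At the socket level, much of this comes down to asking for larger send and receive buffers and making sure the kernel’s ceilings allow them.  The sketch below assumes Linux, and the 16 MB figure is only an illustrative request, not a recommendation.

```python
import socket
from pathlib import Path

def kernel_buffer_ceilings() -> dict:
    # net.core.rmem_max / wmem_max cap whatever a socket asks for via setsockopt.
    return {name: int(Path(f"/proc/sys/net/core/{name}").read_text())
            for name in ("rmem_max", "wmem_max")}

def bulk_transfer_socket(bufsize: int = 16 * 1024 * 1024) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request large buffers; the kernel silently clamps these to the ceilings above,
    # so efficient 10G operation usually means raising those sysctls as well.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bufsize)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
    return s

if __name__ == "__main__":
    print(kernel_buffer_ceilings())
```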

With all of this properly implemented, it is quite reasonable to see 98% of capacity in a one-way TCP flow between two 10G end-hosts (assuming a LAN path and zero loss).  Duplex operation (flows going in both directions simultaneously) may not reach full capacity in each direction simply due to the limitations of NIC design – this is one of the most severe tests of a NIC’s performance – however 70-75% of two-way capacity is relatively easy to achieve.
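
For illustration only, a bare-bones one-way TCP bulk flow between two such hosts might look like the sketch below (a purpose-built measurement tool would normally be used instead); the port number and duration are arbitrary.

```python
import socket
import time

CHUNK = 4 * 1024 * 1024   # large writes keep the per-syscall overhead low
PORT = 5001                # arbitrary port for the illustration

def receiver() -> None:
    # Accept one connection, drain it, and report the achieved rate.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    total, start = 0, time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    elapsed = time.monotonic() - start
    print(f"{total * 8 / elapsed / 1e9:.2f} Gbps over {elapsed:.1f} s")

def sender(host: str, seconds: float = 10.0) -> None:
    # Push zero-filled data at the receiver for a fixed duration.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    buf = b"\x00" * CHUNK
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        s.sendall(buf)
    s.close()
```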

But all of this assumes that what you care about most is bulk data throughput.  For some of us, network performance is measured in fractional reductions in overall packet latencies.  

More on the low latency implementations of 10G to come…

For more information about end-to-end network visibility, visit www.apparentnetworks.com.

Today, Apparent Networks introduced “drop-ship” network management. The PathView microAppliance makes it dramatically easier for network managers to check the performance of their networks — especially for a number of increasingly common network issues. There isn’t an easier or more convenient approach on the market.
 
Imagine the following set of circumstances. A company is going to install a server at a remote office for a few of its workers. The IT infrastructure for this office is fairly straightforward; however, administrators at the main office are concerned about network bandwidth — especially for mission-critical applications delivered via SaaS.
 
In order to monitor network bandwidth, the administrator typically would need to set up and manage a server at the remote location on which to run monitoring software. 

With the PathView microAppliance, the administrator can eliminate the need to provision and manage a server.  The administrator can simply ship a PathView microAppliance to the remote office. Someone in the remote office plugs the device into the wall and hooks up an Ethernet cable.  Within seconds, the administrator can see the full picture of the remote office’s network performance, including portions of the network that traverse third-party or public infrastructure.
 
The PathView microAppliance requires only a power source and a network connection. That’s it. No configuration or software is required.
 
A second scenario involves locations where installing a server is not a good idea – for example, a retail location. Many points of sale have purpose-built computing equipment that connects to offsite infrastructure. Installing a server in these settings could lead to problems, especially if there is not enough physical space at the retail location. In addition, there are costs associated with setting up the server and maintaining it. The PathView microAppliance eliminates these concerns. A device can be sent to each location to easily determine the performance of the network leading to the purpose-built equipment.
 
Ok, one final scenario – consider a managed service provider who wants to test a customer’s network before rolling out an offering. As of today, engineers from the service provider can install a PathView microAppliance at the customer site, or they can simply ship the device to the customer site and have them install it. The microAppliance eliminates the need to travel to multiple sites for each client. What previously may have taken weeks or months can be completed in a day. Service providers can then use PathView or PathView Cloud to evaluate the customer network and the connection between the service and the customer.
 
No doubt there are many other situations and network management challenges where the PathView microAppliance saves time, money and resources, while establishing methods to quickly locate and remedy network concerns. In fact, if you have any you would like to share, drop us a line or post a comment and let us know!

I’m barely off the plane from Portland and my head is still spinning.  A couple of us just returned from the SuperComputing 2009 conference where we were part of the SCinet team.  SCinet is a group of professionals who volunteer to run the NOC and networks at SC – the membership changes somewhat for each event but many people readily return year after year.

This year there were over 120 network engineers, system administrators and a wide variety of other network professionals.  SCinet members come from academic networks and institutions such as Internet2, ESnet, and the University of Illinois; federal institutions such as NOAA, Sandia, and NCSA; the military, including the Army and the Air Force; and commercial companies such as Apparent Networks, InMon, Gigamon, and Solera Networks.  And this is nowhere close to a comprehensive list.  The diversity of talent on the SCinet team was impressive.

And what SCinet builds for SuperComputing each year is similarly impressive.  Within a week, 300+ Gbps of data transfer capacity was piped into one building and distributed to hundreds of booths to support an amazing array of live technology demonstrations.  The connections included links out to academic networks, the commodity networks, and dedicated links out to various sites literally around the world – notable end-points included locations in Slovenia, Brazil, Japan, and the LHC facility at CERN.

With a name like “Super Computing”, it would seem that the exhibitors, the technology, and the talks would all be esoteric and exotic.  But almost every major name in computing hardware, software, operating systems, networking gear and communications attends.  The focus is definitely on next-generation and bleeding-edge systems – but the relevance to everyday technology is obvious.   And high-performance networks in particular are clearly key to the supercomputing paradigm.  In fact, the history of the Internet is rooted in the push by the NSF in the 80s and 90s to inter-connect supercomputers around the world.

Besides the aggregate capacity of the show floor, the carefully architected core networks included a wide variety of technologies that supported everything from 802.11 a/b/g/n to 1G and 10G Ethernet to InfiniBand.  Although talked about, 40G and 100G connections were not present and will have to wait for another year.

We participated in the Measurement team – a group within SCinet made up of about 20 people, around 12-14 of whom were present at the show.  Our job was to provide visibility into the LAN and WAN networks for the purposes of ensuring and troubleshooting performance, as well as for the historical record.  PathView Cloud, the new free hosted service for network management developed by Apparent Networks, was deployed on a number of measurement hosts within the core with 1G and 10G interfaces, providing views in both IPv4 and IPv6 along LAN paths within the core, across wireless connections, and out along WAN paths.   As well, the new PathView microAppliance was used to drop PathView Cloud into various locations that needed monitoring – particularly booths, such as those used by the Bandwidth Challenge participants, that were experiencing network issues.

Nearly 400 paths were continuously monitored, with several thousand diagnostic tests run.  Paths were bundled into distinct groups according to the networks used, their capacities, and the projects they were associated with.  Graphs of the cumulative performance of the path bundles were shown in regular rotation (along with other views such as InMon’s Traffic Sentinel, Graphite, Bro wireless security, and router reports) on the dozens of large-screen monitors that dotted the enormous show floor.

The show itself was impressive – both in terms of its size and the projects and technologies that were presented. It was a pleasure to be behind the scenes to see it built from the ground up and run with such diligence and commitment – and an even greater pleasure to have been a part of it.

