Tag Archives: sdn

NFD8 Recap: Nuage Networks – One to Watch

Last fall, I attended the Tech Field Day NFD8 event, and one of the presenting companies was Nuage Networks. This was actually the second time I’d seen Nuage present at an NFD event, the first being NFD6 a year earlier. Upon my return from NFD8, I did a short write-up on each presenting sponsor for my coworkers at H.A. Storage Systems to keep them informed. The following is my recap of Nuage Networks after their presentation, in which I explain why I think Nuage is really on target with their SDN solution and is definitely one to keep an eye on.



Managing the Network as a Fabric — About Time!

Earlier this September, I attended the Tech Field Day Networking Field Day 8 event. Over the course of three days, we saw presentations from many very interesting vendors including a mix of startups and established market leaders. One trend that really stuck out to me more this time around than at any previous NFD event was a nearly ubiquitous emphasis on data center network fabric management. In other words, truly managing an entire data center network (or at least a sub-block of it) as a single unit.



Server Brawn + Switch Brains = Infrastructure Fabric

Last week I attended Networking Field Day 7, and was introduced to Pluribus Networks. Pluribus is taking an interesting approach to building the data center fabric, by combining high-performance data center top-of-rack (ToR) switching with powerful server internals in a platform they’ve dubbed the Freedom Server-Switch.

Source: pluribusnetworks.com


The Freedom platform can be loaded up with RAM and storage along with some pretty powerful CPUs (this data sheet provides all the details), which enables embedding various network (and not-so-network) services right in the network at every edge. The platform runs the NetVisor operating system, based on BSD, and the software is available in several feature tiers:
Source: pluribusnetworks.com


Beyond typical L2/L3 network services, the services that can be enabled include DHCP, DNS, PXE, load balancing, CDN functions, NAT, NAS (yes, really), and traffic analytics. Since these switches are designed for deployment as leaf nodes in leaf-spine data centers, those services live right at the network ingress point for each connected device.
You may be thinking about the potential administrative overhead of running advanced network services on each ToR switch, but that burden is eased by fabric-wide management features: an administrator can interact with any node in the fabric and issue commands that affect a subset of fabric nodes, or the entire fabric at once.
During the NFD7 demonstration, Pluribus Networks CTO Sunay Tripathi showed us the ease with which the entire fabric (the Fabric Cluster, as they call it) could be programmed to single out a specific traffic flow (based on any number of parameters) and perform some operation on it, such as redirecting it to a specific port or to a service running on the Freedom platform, or copying the traffic to local storage. With a couple of commands, he was able to intercept and store traffic matching the flow parameters from anywhere on the network the flow might appear. This was powerful stuff.
And of course, since Pluribus exposes APIs for accessing these features, one can imagine automating various network service functions from external applications. In fact, Pluribus provides an SDK for “bare metal” access to the switch, so future applications could extend functionality beyond anything that’s been thought up so far. Additionally, VMs can run directly on the platform, so perhaps other functions traditionally centralized in the network (IDS/IPS, anyone?) can be embedded right at the network edge.
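To make that automation idea concrete, here is a minimal sketch of what an external application might assemble before handing a fabric-wide flow-capture rule to a management API. The field names, the "fabric" scope value, and the rule structure are purely my own illustrative assumptions, not the actual NetVisor CLI or API syntax.

```python
import json

def build_capture_rule(name, src_ip=None, dst_ip=None, dst_port=None,
                       action="copy-to-storage"):
    """Assemble a hypothetical flow-match rule to be applied fabric-wide.

    Only the match criteria actually supplied are included, mirroring the
    "any number of parameters" flexibility shown in the NFD7 demo.
    """
    match = {k: v for k, v in {
        "src-ip": src_ip,
        "dst-ip": dst_ip,
        "dst-port": dst_port,
    }.items() if v is not None}
    return {"name": name, "scope": "fabric", "match": match, "action": action}

# Single out one flow and copy it to local storage, wherever it appears:
rule = build_capture_rule("suspect-flow", dst_ip="10.0.0.5", dst_port=443)
payload = json.dumps(rule)  # would be POSTed to a (hypothetical) REST endpoint
```

The point of the sketch is the scope field: because the rule is addressed to the fabric rather than to one switch, the controller side, not the operator, decides which leaf nodes need the flow entry.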
Something that really struck me about the Pluribus NetVisor software was that the fabric was equally manageable from a Unix command line, a rich switch CLI (although the syntax looked quite different from anything I’ve used before, so there’d be a learning curve), a web-based GUI called vManage, and a variety of API interfaces. Lately, the industry has been laser-focused on APIs, APIs, APIs. I thought Pluribus struck a good balance, recognizing that the CLI is not dead and that APIs provide another, but not exclusive, vector for network management. These tools can be leveraged by network administrators who are comfortable and adept with different administration models, and none appears to be handicapped by their choice.
More than that, though, what I saw in Pluribus’ platform was a bold attempt to move toward what may well be an inevitable future. I’ve been thinking for some time that in the not-so-distant future, as network, compute, and storage facilities converge, we’ll not have many “network engineers” or “server engineers”, but rather “infrastructure engineers” who know how to work everything. Sure, we may still retain a focus or specialty, but it’s going to become very difficult to claim “I’m a network engineer. I just provide the network. Servers and storage aren’t my thing.” At least, it will be difficult if you want to stay relevant and keep your job.
The Pluribus Freedom Server-Switch really embodied that notion. Rather than building a high-speed switching fabric that has services blocks hanging off of it to provide network services, application services, storage, security, monitoring, and even applications themselves, the Pluribus solution struck me as an infrastructure fabric, providing many of those services right in the fabric, at every point of ingress and egress. Surely Pluribus is not trying to replace enterprise or tenant servers themselves, but moving the various utility services into that infrastructure fabric consolidates the deployment, administration, and management of those infrastructure support services allowing the servers and storage attached to the fabric to be used for what they’re intended for — applications.
While I saw a lot of promise in the Pluribus Networks offering, I do think they will have a bit of an uphill battle in many shops that have not yet moved to a more consolidated “infrastructure team” approach (which is most environments I see), as the server and storage teams may feel threatened by the idea of “the network” running various services and even hosting storage. I suspect this technology will be a better fit in more agile environments that have embraced a holistic approach to infrastructure services.
I strongly recommend watching these videos from Networking Field Day 7 as they really demonstrate the fascinating approach Pluribus Networks has brought to the table. Pluribus Networks also has some good whitepapers sprinkled around their site that are worth a read as they present some good technical detail rather than just marketing fluff.
Disclaimer:
Pluribus Networks was a sponsor of Networking Field Day 7. At no time did they ask for, nor were they promised, any kind of consideration in the writing of this review. The opinions and analysis provided within are my own, and any errors or omissions are mine and mine alone.

Cisco Application Centric Infrastructure: Nexus 9000

On November 6, I was fortunate to attend the Cisco Application Centric Infrastructure launch event in New York City as part of the Tech Field Day blogger delegation. This event was the much-anticipated unveiling (and acquisition announcement) of Insieme Networks, Cisco’s “SDN spin-in”, which maintained a pretty impressive amount of secrecy over its relatively short existence. The main keynote/announcement event consisted of a lot of flashy marketing videos and various tech executives praising each other’s companies. The tech press has been atwitter with coverage of Cisco’s ACI strategy and its various components. I’m not going to try to recap the entire announcement, as others have done a much better job of that than I could, but I will provide my take on each of what I considered to be four related, but somewhat distinct, announcements that day. In this post, I cover the Nexus 9000 line of switches.

Big Switch Networks and the (possible) Future of Networking Hardware


Over the last couple of years, two major philosophies for SDN have evolved, which I will call the overlay model and the flow programmability model. The overlay model is the notion of building multiple virtual networks in parallel on top of a physical network fabric, using some means of separating the virtual networks from one another — typically an encapsulation method like VXLAN or NVGRE. The flow programmability model, by contrast, is based on the idea of programming SDN behaviors on a flow-by-flow basis into your existing (or new) physical and virtual network switches using a protocol like OpenFlow.
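To make the overlay model concrete, here is a minimal sketch of the VXLAN header defined in RFC 7348: an 8-byte shim (carried inside an outer UDP/IP packet, omitted here) whose 24-bit VXLAN Network Identifier (VNI) is what keeps the parallel virtual networks separate on the shared physical fabric.

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

# Two tenants on the same wire, distinguished only by VNI:
hdr_a = vxlan_header(5000)
hdr_b = vxlan_header(5001)
```

The 24-bit VNI is the key scaling argument for overlays: roughly 16 million virtual segments versus the 4094 VLANs a traditional 802.1Q tag allows.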



Those Slow-Poke Network Engineers

This year (and especially in the past few months) there have been a lot of new solutions announced in the network virtualization and network overlay platform arenas. These solutions hold great potential, but in my opinion the vendors of these solutions need to get on board with a team approach to IT and avoid marketing to server engineers by throwing the networking team under the bus.

