
With VMworld quickly approaching, there are a lot of hot technologies out there to wrap your head around, far beyond the staple of server virtualization. Attendees will see and hear a lot about the Software-Defined Datacenter, Software-Defined Networking, and VMware NSX at the show. To get a jump start on learning more about these technologies, I went to one of the experts, Gregg Holzrichter, Chief Marketing Officer at Big Switch Networks.
VMblog: What is Big Cloud Fabric (BCF) and how is it
differentiated from traditional data center networking solutions?
Big Switch: In order to meet business demands for faster, more flexible and agile infrastructure for applications, the concept of a Software-Defined Datacenter (SDDC) is very appealing. However, traditional box-by-box, hardware-defined networks have proven to be a complete mismatch for modern SDDC requirements. With traditional gear and traditional ways of networking, network admins still have to log in and administer physical hardware manually, using proprietary CLIs and management consoles on a per-switch basis, because legacy network management tools have not evolved beyond the CLI over the past 20 years. Network designs are also often overly complicated at both the physical and logical levels. This is both inefficient and inflexible.
Big Cloud Fabric (BCF), in contrast, is a highly differentiated data center switching fabric that delivers zero-touch operations and network automation via an agile, SDN-based approach to physical network management. BCF incorporates the network design principles that hyperscalers like Google and Facebook pioneered to build agile and flexible network architectures: open networking hardware, core-and-pod design, and SDN-based management, achieving dramatic improvements in network operational efficiency. BCF offers the same order-of-magnitude gains in network agility and ease of network operations to any organization, regardless of scale. BCF software is deployed on industry-standard open networking hardware from Dell EMC and HPE (Altoline).
BCF excels in ease of use and manageability: per ACG Research analysis, initial setup of VMware networks is 8x faster, configuring, deploying applications and troubleshooting is 12x faster, and upgrades are 30x faster than with traditional networks. BCF is the first solution that delivers networking at the speed of virtualization.
VMblog: How does BCF
integrate into a VMware environment?
Big Switch: In VMware environments, BCF connects with
the VMware vCenter API to provide physical network automation and end-to-end
network visibility for VMware vSphere. BCF is an ideal SDN-based fabric underlay
for VMware NSX network virtualization deployments, and provides networking
automation and simplicity to VMware vSAN environments.
The BCF controller acts as a single point of API integration with vCenter, versus the many per-box API integrations required in traditional box-by-box networking. This highly optimized approach ensures responsive API performance at scale, and the BCF controller integration scales to many vCenters simultaneously without conflict. This unique capability enables vSphere, NSX, vSAN and VIO environments to coexist within the same BCF pod, allowing multiple logical tenants to be managed by a single SDN controller and multiple orchestrators to share a single physical network.
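To give readers a concrete sense of what "a single point of API integration with vCenter" looks like in practice, here is a minimal, illustrative Python sketch using the open-source pyvmomi client (not Big Switch code): a controller-style process authenticates to vCenter once and walks the inventory of hosts and their port groups, rather than logging into each switch individually. The vCenter hostname and credentials are placeholders.

```python
# Illustrative only: a single vCenter API session enumerating hosts and
# port groups, the kind of inventory a fabric controller would consume.
# The hostname and credentials below are placeholders, not real endpoints.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        # Standard vSwitch port groups and their VLAN IDs on this host
        for pg in host.config.network.portgroup:
            print("  portgroup:", pg.spec.name, "vlan:", pg.spec.vlanId)
finally:
    Disconnect(si)
```

The point of the sketch is the shape of the integration: one API session against vCenter yields the fabric-relevant inventory, instead of one management session per physical switch.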
With BCF installed on the physical hardware creating a modern leaf-spine network, both network and virtualization admins benefit from unprecedented visibility and advanced analytics, which enable fabric-wide troubleshooting and offer operational simplicity compared to legacy approaches.
Big Cloud Fabric manages all L2 and L3 networking components that constitute the fabric as "one logical Big Switch." In this way, network resources are seamlessly delivered at the same speed and in the same vCenter workflow vAdmins use to provision VMs, all with high performance and scale across orchestration environments.
VMblog: Why would someone
use BCF in conjunction with NSX and do they need both? What are the benefits?
Big Switch: When deploying an NSX-v based overlay for network virtualization and/or micro-segmentation, network teams are often concerned about box-by-box physical networks being opaque to overlays. Architecturally, NSX, an SDN overlay operating as one logical v-switch, is best served by an SDN underlay operating as one logical p-switch, like BCF. When BCF is deployed with vCenter and VMware NSX for network virtualization, all the advanced automation benefits from the BCF integration with vCenter are available to the network. When NSX creates a virtual switch port-group with an assigned transport VLAN for the VTEPs on each of the ESXi hosts, BCF automates the provisioning of the corresponding logical segment for the transport VLAN to enable VTEP communication. It also auto-learns all the VTEP endpoints and the VMs behind the VTEPs. BCF is the ideal SDN underlay and physical networking layer for all VMware workloads.
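As a rough illustration of the transport plumbing described above (not Big Switch's implementation), the following pyvmomi sketch lists each ESXi host's vmkernel interfaces and the port groups they attach to. In an NSX-v deployment, the VTEP vmknics and their transport VLAN show up in exactly this kind of inventory, which is the information an underlay fabric needs in order to provision the matching logical segment. Hostnames and credentials are again placeholders.

```python
# Illustrative sketch: enumerate vmkernel NICs per ESXi host via vCenter.
# In an NSX-v environment the VTEP vmknics (and the port group carrying
# the transport VLAN) appear in this list; an SDN underlay can use this
# data to provision the matching logical segment. Placeholders throughout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        print(host.name)
        for vnic in host.config.network.vnic:      # vmkernel interfaces
            print("  vmk:", vnic.device,
                  "ip:", vnic.spec.ip.ipAddress,
                  "portgroup:", vnic.portgroup or "(dvPortgroup)")
finally:
    Disconnect(si)
```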
VMblog: What is the
advantage to deploying BCF in vSAN environments?
Big Switch: With the distributed nature of hyper-converged solutions and the increase in east-west traffic between storage nodes, the role of the physical network becomes critical. During deployment and operation, vSAN and network admins face cross-silo interaction challenges that ultimately slow the project down. If you're an admin trying to deploy vSAN, the last thing you want to worry about is your network configuration. Typical tasks like attaching vSAN nodes (ESXi hosts in the vSAN cluster) to leaf switches, configuring VLANs and enabling multicast take time and are error-prone when done manually. If you are a network admin, you are probably tired of responding to tickets asking you to provision the "plumbing." We often hear that network admins just want to respond with "don't come to me for VLANs!" While some of these tasks can be automated using scripts or templates inside a bolted-on management solution, those scripts take time to validate and have to be maintained.
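For readers who have not lived this, here is a minimal sketch of the kind of box-by-box scripting the answer refers to, the approach a fabric is meant to replace, using the open-source netmiko library to push the same VLAN to a list of leaf switches one at a time. The switch addresses, platform type, credentials, and VLAN number are placeholders; the point is that every switch is touched individually, which is where manual errors and maintenance burden creep in.

```python
# Illustrative only: box-by-box VLAN provisioning with netmiko, the kind
# of per-switch scripting a fabric controller is meant to replace.
# Switch addresses, platform, credentials, and VLAN are placeholders.
from netmiko import ConnectHandler

LEAF_SWITCHES = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]
VSAN_VLAN = 200

for ip in LEAF_SWITCHES:                     # one SSH session per switch
    conn = ConnectHandler(device_type="cisco_nxos", host=ip,
                          username="admin", password="secret")
    conn.send_config_set([
        f"vlan {VSAN_VLAN}",
        "  name vsan-transport",
    ])
    print(ip, "configured")
    conn.disconnect()
```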
With vSAN running on top of traditional box-by-box networks, the vSAN admin is left to hope that the network admin did not misconfigure anything. And how does a network admin prove that the network is not the culprit? It often takes days of back-and-forth communication between network and storage teams to root-cause the problem. VMware recently released vSAN 6.2 with many useful cluster troubleshooting tools. The BCF + vSAN solution builds on these tools to further reduce troubleshooting time. With the controller providing full network visibility, the vSAN admin can use the BCF plug-in for the vSphere Web Client to zero in on the exact problem area instead of simply knowing that there is a problem.
VMblog: What does the future look like for
Big Switch and the networking industry?
Big Switch: Big Switch recently announced its fiscal year results, growing over 100% year over year and closing an additional $30M in funding, and it announced a new reseller partnership with HPE to deliver next-generation SDN software on the Altoline series of switches. This strategic relationship will dramatically expand the reach of these production-ready, modern networking solutions. Big Switch has also continued to drive significant revenue growth with both enterprises and service providers through its partnership with Dell EMC, inked two and a half years ago and continuing to gain momentum globally.
Big Switch has also continued to build a channel to resell a "Google Switch" version of its solution, powered by Edge Core white box switches and Big Switch software. All three partnerships provide additional validation for the strengthening trend of network disaggregation: the separation of industry-standard switching hardware from differentiated network OS and SDN controller software. Big Switch also partners closely with VMware and Red Hat to support OpenStack and container networking with OpenShift.
Big Switch plans to continue to disrupt the traditional datacenter switching market as an open alternative to Cisco ACI that is simple to deploy and operate, while taking additional market share from the traditional network packet broker (NPB) monitoring market with its next-generation visibility and security fabrics, which provide per-VM and tap-every-rack visibility for the SDDC.
##