Compare and contrast vPC options

Here’s some good info about vPC options/topologies as found in Cisco’s vPC design guide.

vPC Basics 

The fundamental concepts of vPC are described at http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-516396.html.

vPCs consist of two vPC peer switches connected by a peer link. Of the vPC peers, one is primary and one is secondary. The system formed by the switches is referred to as a vPC domain.

Following is a list of some possible Cisco Nexus vPC topologies:

● vPC on the Cisco Nexus 7000 Series (topology A): This topology consists of access layer switches dual-homed to the Cisco Nexus 7000 Series with a switch PortChannel with Gigabit Ethernet or 10 Gigabit Ethernet links. This topology can also consist of hosts connected with virtual PortChannels to each Cisco Nexus 7000 Series Switch.

● vPC on the Cisco Nexus 5000 Series (topology B): This topology consists of switches dual-connected to the Cisco Nexus 5000 Series with a switch PortChannel with 10 Gigabit Ethernet links, with one or more links to each Cisco Nexus 5000 Series Switch. Like topology A, topology B can consist of servers connected to each Cisco Nexus 5000 Series Switch via virtual PortChannels.

● vPC on the Cisco Nexus 5000 Series with a Cisco Nexus 2000 Series Fabric Extender single-homed (also called straight-through mode) (topology C): This topology consists of a Cisco Nexus 2000 Series Fabric Extender single-homed with one to eight 10 Gigabit Ethernet links (depending on the fabric extender model) to a single Cisco Nexus 5000 Series Switch, and of Gigabit Ethernet or 10 Gigabit Ethernet-connected servers that form virtual PortChannels to the fabric extender devices. Note that each fabric extender connects to a single Cisco Nexus 5000 Series Switch and not to both, and that the virtual PortChannel can be formed only by connecting the server network interface cards (NICs) to two fabric extenders, where fabric extender 1 depends on Cisco Nexus 5000 Series Switch 1 and fabric extender 2 depends on Cisco Nexus 5000 Series Switch 2. If both fabric extender 1 and fabric extender 2 depend on switch 1, or both of them depend on switch 2, the PortChannel cannot be formed.

● Dual-homing of the Cisco Nexus 2000 Series Fabric Extender (topology D): This topology is also called Cisco Nexus 2000 Series Fabric Extender (FEX for short) active/active. In this topology, each FEX is connected to each Cisco Nexus 5000 Series device with a virtual PortChannel. With this topology, the server cannot create a PortChannel split between two fabric extenders. The servers can still be dual-homed with active-standby or active-active transmit-load-balancing (TLB) teaming.

Note: Topologies B, C, and D are not mutually exclusive. You can have an architecture that uses these three topologies concurrently.

Figure 1 illustrates topologies A and B. Figure 2 illustrates topologies C and D.

Figure 3 illustrates the main vPC components. Switches 1 and 2 are the vPC peer switches. The vPC peer switches are connected through a link called a peer link, also known as a multichassis EtherChannel trunk (MCT). Figure 3 shows devices (switch 3, switch 4, and server 2) that are connected to the vPC peers (which could be Cisco Nexus 7000 or 5000 Series Switches). Switches 3 and 4 are configured with a normal PortChannel configuration, while switches 1 and 2 are configured with a virtual PortChannel.

vPC Peer Link

The vPC peer link is the most important connectivity element in the vPC system. This link is used to create the illusion of a single control plane by forwarding Bridge Protocol Data Units (BPDUs) or Link Aggregation Control Protocol (LACP) packets to the primary vPC switch from the secondary vPC switch.

The peer link is used to synchronize MAC addresses between aggregation groups 1 and 2 and to synchronize IGMP entries for the purpose of IGMP snooping; it also provides the necessary transport for multicast traffic and for the communication of orphaned ports. The term “orphaned ports” refers to switch ports connected to single-attached hosts, or to vPC ports whose members are all connected to a single vPC peer. In the case of a vPC device that is also a Layer 3 switch, the peer link also carries Hot Standby Router Protocol (HSRP) frames.

For a vPC to forward a VLAN, that VLAN must exist on the peer link and on both vPC peers, and it must appear in the allowed list of the switch port trunk for the vPC itself. If any of these conditions is not met, the VLAN is not displayed when you enter the command show vpc brief, nor is it a vPC VLAN.
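
As a minimal sketch of these conditions (the VLAN and PortChannel numbers are hypothetical), the VLAN must exist on both peers and be allowed on the peer-link trunk:

vlan 100
!
interface port-channel10
  switchport mode trunk
  switchport trunk allowed vlan 100
  vpc peer-link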

When a PortChannel is defined as a vPC peer link, Bridge Assurance is automatically configured on the peer link.

vPC Peer-Keepalive or Fault-Tolerant Link

A routed “link” (it is more accurate to say “path”) is used to resolve dual-active scenarios in which the peer link connectivity is lost. This link is referred to as the vPC peer-keepalive or fault-tolerant link. The peer-keepalive traffic is often transported over the management network through the management 0 port of the Cisco Nexus 5000 Series Switch or the management 0 ports on each Cisco Nexus 7000 Series supervisor. The peer-keepalive traffic is typically routed over a dedicated Virtual Routing and Forwarding (VRF) instance (which could be the management VRF, for example).

The keepalive can be carried over a routed infrastructure; it does not need to be a direct point-to-point link, and, in fact, it is desirable to carry the peer-keepalive traffic on a different network instead of on a straight point-to-point link.
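
As a hedged example (the addresses are hypothetical), the peer-keepalive is typically configured under the vPC domain and carried in the management VRF:

vpc domain 1
  peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management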

vPC Ports and Orphaned Ports

A vPC port is a port that is assigned to a vPC channel group. The ports that form the virtual PortChannel are split between the vPC peers and are referred to as vPC member ports. A non-vPC port, also known as an orphaned port, is a port that is not part of a vPC.

Figure 4 shows the different types of ports connected to a vPC system. Switch 1 and Host 3 connect via vPCs. The ports connecting devices in non-vPC mode to a vPC topology are referred to as orphaned ports. Switch 2 connects to the Cisco Nexus Switch with a regular spanning-tree configuration: thus, one link is forwarding, and one link is blocking. These links connect to the Cisco Nexus Switch on orphaned ports. Server 6 connects to a Cisco Nexus Switch with an active-standby teaming configuration. The ports that server 6 connects to on the Cisco Nexus Switch are orphaned ports.

vPC Topology with Fabric Extenders

Figure 5 illustrates another vPC topology consisting of Cisco Nexus 5000 Series Switches and Cisco Nexus 2000 Series Fabric Extenders (in straight-through mode: that is, each fabric extender is single-attached to a Cisco Nexus 5000 Series Switch).

Figure 5 shows devices that are connected to the vPC peer (Cisco Nexus 5000 Series Switches 5k01 and 5k02) with a PortChannel (a vPC); for example, server 2, which is configured for NIC teaming with the IEEE 802.3ad option.

Servers 1 and 3 connect to orphaned ports.


To summarize, a vPC system consists of the following components:

● Two peer devices: the vPC peers, of which one is primary and one is secondary; both are part of a vPC domain
● A Layer 3 Gigabit Ethernet link called a peer-keepalive link to resolve dual-active scenarios
● A redundant 10 Gigabit Ethernet PortChannel called a peer link, which is used to carry traffic from one system to the other when needed and to synchronize forwarding tables
● vPC member ports forming the virtual PortChannel
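
To illustrate the member ports, here is a minimal sketch of a vPC member PortChannel as it would be configured identically on both peers (channel and port numbers are hypothetical):

interface port-channel20
  switchport mode trunk
  vpc 20
!
interface ethernet1/20
  switchport mode trunk
  channel-group 20 mode active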


NX-OS Features

Quickly describing features seen on NX-OS, Cisco’s next generation data center network operating system:

NX-OS uses a feature-based license model.  With this model, you buy only the feature packages your device needs.

There is a 120-day grace period for testing features – you can try it all for about 4 months.  This is as problematic as you are probably imagining ;-)

Modular features (you can enable/disable protocols like OSPF, BGP, LACP, and CDP) allow you to run only the portions of code/processes you need.
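
As a minimal sketch (prompts abbreviated), features are enabled and disabled with the feature command, and only enabled features load code and expose CLI:

switch(config)# feature ospf
switch(config)# feature lacp
switch(config)# no feature bgp
switch(config)# show feature | include ospf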

Interfaces are all “Ethernet”; there are no longer speed designations associated with interface names in NX-OS.

SSHv2 is enabled by default.

NX-OS uses a kickstart and a system image.

Rapid PVST+ is the default spanning-tree mode.

Configuration rollback allows you to take a snapshot or “checkpoint” of the config before applying new configuration.

Cut-through switching is standard except where transmission speeds vary (in which case store-and-forward is used).

NX-OS supports Fabric Extension.  Fabric extension allows the collapsing of switching layers and results in fewer management points.

NX-OS supports vPCs – virtual PortChannels / multichassis link aggregation – which allow active-active forwarding (no spanning-tree blocked ports).

Native FCoE

Cisco NX-OS Innovations featured on Cisco.com:

Cisco FabricPath

Get a highly scalable, high-bandwidth, and resilient Layer 2 multipath network without Spanning Tree Protocol.

Cisco Overlay Transport Virtualization

Extend Layer 2 networks across geographically distributed data centers for transparent workload mobility.

Locator/ID Separator Protocol (LISP)

Enable a new routing architecture with global IP address portability.

Fibre Channel over Ethernet (FCoE) and multi-hop FCoE

Converge LAN and SAN traffic over a single Ethernet unified fabric.

Virtual Device Context (VDC)

Virtualize a physical switch into multiple logical switches for infrastructure consolidation and segmentation (PCI certified).

Cisco Fabric Extender Link

Manage your network from a single point.

Unified Ports

Configure a single port for various data center connectivity requirements.

Cisco In-Service Software Upgrade (ISSU)

Deploy new features and services without any network outage or service disruption.

Cisco VN-Link

Extend network security policies and quality of service to the virtual machine level, based on 802.1Q standards.

I/O Accelerator

Accelerate replication and backup applications over extended distances.

Cisco Storage Media Encryption (SME)

Encrypt storage media (tape or disk) to meet compliance and regulatory requirements.

Cisco Data Mobility Manager (DMM)

Deploy online migration of data across heterogeneous storage methods to get workload mobility in a cloud.

Virtual Storage Area Networks (SANs)

Isolate traffic and management to consolidate and segment SANs.

 

These are only some of the many features that immediately come to mind.  Most of these I will eventually cover in greater detail.

-Gabe@networkdojo.net

Cisco NX-OS Architecture

This CCIE Data Center topic is pretty much glossed over in most sources, including the Cisco Press book “NX-OS and Cisco Nexus Switching”.

There is a Cisco white paper that covers the OS architecture in much better detail.  This content is right out of the white paper “Cisco NX-OS Software: Business-Critical Cross-Platform Data Center OS”; I’m sharing it here for educational purposes.  It’s a lengthy read.  Enjoy!

What You Will Learn

Modern data centers power businesses through a new generation of applications and services. Virtualization, cloud computing, high-performance computing, data warehousing, and disaster recovery strategies, among others prevalent in the current environment, are prompting a whole new set of requirements for network infrastructure. To meet the needs of the modern data center, a network device – or, more particularly, the operating system that powers that device – must be:

• Resilient: To provide critical business-class availability

• Modular: To be capable of extension to evolve with business needs and provide extended lifecycles

• Portable: For consistency across platforms

• Secure: To protect and preserve data and operations

• Flexible: To integrate and enable new technologies

• Scalable: To accommodate and grow with the business and its requirements

• Easy to use: To reduce the amount of learning required, simplify deployment, and ease manageability

Cisco® NX-OS Software is designed to meet all these criteria across all Cisco platforms that run it. This document describes the features of the Cisco NX-OS operating system to help you understand how it can meet the needs of your organization.

Building on a Proven Foundation

Cisco NX-OS is a highly evolved modular operating system that builds on more than 15 years of innovation and experience in high-performance switching and routing. Cisco NX-OS has its roots in the Cisco SAN-OS operating system used worldwide in business-critical, loss-intolerant SAN networks. As a direct result of nearly a decade of deployment and evolution in the extremely critical storage area networking space, NX-OS delivers the performance, reliability, and lifecycle expected in the data center.

Cisco NX-OS is built on a Linux kernel. By using a version 2.6-based Linux kernel as its foundation, Cisco NX-OS gains the following benefits:

• Established and widely field-proven core kernel code

• Efficient multithreaded preemptive multitasking capabilities

• Native multiprocessor and multicore support

The choice of Linux over other operating systems and of the Linux 2.6 kernel over other versions of Linux was strategic. The following are some of the reasons for this choice:

• By inheritance, this implies that NX-OS shares the largest installed base of any UNIX-like operating system in existence. The community development model and widespread use of Linux provide the benefits of rigorous code review, rapid community-based defect resolution, and exceptional real-world field testing.

• The kernel is a near-real-time OS kernel, which is actually preferred over a true real-time OS for applications such as networking, which may have many parallel critical activities running on a platform.

• This particular kernel version currently provides the best balance of advanced features and maturity and stability. It is currently the most widely deployed version of the Linux kernel.

• Version 2.6 of the kernel introduced an advanced and scalable kernel architecture leveraging multiple run queues for handling multicore and multiple-CPU system configurations.

These characteristics provide the solid foundation of resilience and robustness necessary for any network device OS powering the mission-critical environment of today’s enterprise-class data centers.

The multithreaded preemptive multitasking capability provides protected fair access to kernel and CPU resources. This approach helps ensure that critical system processes and services are never starved for processor time. This feature, in turn, helps preserve system and network stability by helping ensure that routing protocols, spanning tree, and internal service processes get access to the CPU cores as needed.

Scalability for Future Growth

With Cisco NX-OS, scalability is integral and effectively built-in. As environments grow, the software’s native support for multiprocessor and multicore hardware platforms helps simplify scalability through the effective use of current and future hardware.

With its preemptive multitasking, multithreaded kernel, Cisco NX-OS also provides advanced multicore and multiple-CPU processing. NX-OS incorporates a highly scalable CPU queue and process management architecture that employs multiple processor thread run queues. This in turn enables more efficient use of modern multicore CPUs. Combined with its memory-mapping techniques and path to 64-bit, NX-OS provides simplified scalability both upward and downward to accommodate both control-plane growth and multiple platforms.

Modular Code Base

Several categories of modular system code are built on top of the Linux kernel (Figure 1). These can be generally described as:

• Platform-dependent hardware-related modules

• System-infrastructure modules

• Feature modules

Figure 1. Cisco NX-OS Employs a Highly Granular Modular Architecture

The platform-dependent hardware-related modules consist of subsystems such as hardware and chipset drivers specific to a particular hardware platform on which Cisco NX-OS runs. This portion of the OS is the part that must be ported across hardware platforms, and it allows the other subsystems within the OS to communicate with and tie into the specific hardware features of a platform. The platform-dependent modules typically provide standardized APIs and messaging capabilities to upper-layer subsystems. The modules essentially constitute a hardware abstraction layer to enable consistent development at higher layers in the OS, improving overall OS portability. The defined nature of the platform-dependent modules enables the overall reduction of the code base that specifically requires porting to deliver Cisco NX-OS on other hardware platforms. The result is greater consistency in implementation, reduced complexity in defect resolution, and faster implementation of cross-platform features across the various Cisco NX-OS platforms.

The system infrastructure modules provide essential base system services that enable system process management, fault detection, fault recovery, and interservice communication. The system management component of the system infrastructure provides service management for other features of the OS. It is also the component responsible for fault detection for the feature services, and it is fully capable of performing fault recovery of a feature service as needed. Working together with other infrastructure module services, it can provide stateful fault recovery of a feature, enabling recovery of a fault within a specific feature in less than a second, while preserving the runtime state of that feature. This capability enables transparent and nondisruptive fault recovery within the system, increasing overall network stability and service uptime.

The individual feature modules consist of the actual underlying services responsible for delivering a particular feature or protocol capability. Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP), Intermediate System-to-Intermediate System (IS-IS) Protocol, Border Gateway Protocol (BGP), Spanning Tree Protocol, Fibre Channel over Ethernet (FCoE), the routing information base (RIB), Overlay Transport Virtualization (OTV), and NetFlow export are all examples of system-level features embodied in modular components.

Each feature is implemented as an independent, memory-protected process spawned as needed based on the overall system configuration. This approach differs from that of traditional network operating systems in that only the specific features that are configured are automatically loaded and started. This highly granular approach to modularity enables benefits such as:

• Compartmentalization of fault domains within the OS and its services, resulting in significantly improved overall system resiliency and stability

• Simplified portability for cross-platform consistency through reusable components, or building blocks, and little use of platform-specific code

• More efficient defect prioritization and repair through the isolation of specific functions to particular modules

• Improved long-term platform extensibility through the capability to easily integrate new feature modules into the OS infrastructure through established and consistent OS interfaces

• More efficient resource utilization because only features specifically enabled through configuration are loaded into memory, present command-line interface (CLI) elements, and consume CPU cycles

• Improved security because features that are not configured or enabled do not run, thus reducing the exposure of the OS to attacks

Intelligent Fault Detection and Recovery

In addition to the resiliency gained from architectural improvements, Cisco NX-OS provides internal hierarchical and multilayered system fault detection and recovery mechanisms. No software system is completely immune to problems, so an effective strategy for detecting and recovering from faults quickly and with as little effect as possible is essential. Cisco NX-OS is designed from the start to provide this capability.

Individual service and feature processes are monitored and managed by the Cisco NX-OS System Manager, an intelligent monitoring service with integrated high-availability logic. The system manager can detect and correct a failure or lockup of any feature service within the system. The system manager is in turn monitored and managed for health by the Cisco NX-OS kernel. A specialized portion of the kernel is designed to detect failures and lockups of the Cisco NX-OS System Manager. The kernel itself is monitored through hardware. A hardware process constantly monitors the kernel health and activity. Any fault, failure, or lockup at the kernel level is detected by hardware and will trigger a supervisor switchover. Figure 2 shows the fault detection and recovery process.

Figure 2. Cisco NX-OS Provides Multilevel Hierarchical Fault Detection and Recovery

The combination of these multilevel detection and health-monitoring systems creates a robust and resilient operating environment that can reduce the overall effect of internal faults and, more importantly, preserve the stability of the overall network by internalizing these types of events.

Continuous Operation and High Availability

Cisco NX-OS is designed from the start to provide consistent, predictable, and reliable high availability. The design goal for the data center is continuous operation: no service disruption. Cisco NX-OS provides a high-availability architecture that moves toward this goal with fully nondisruptive stateful supervisor switchover (SSO) for control-plane redundancy in modular platforms, and nondisruptive In-Service Software Upgrade (ISSU) for all Cisco Nexus® platforms.

When running on platforms that offer redundant control-plane hardware, Cisco NX-OS is designed to provide efficient event-based state synchronization between active and standby control-plane entities. This approach allows the system to rapidly perform a fully stateful control-plane switchover with little system disruption and no service disruption.

For platforms without redundant control planes, Cisco NX-OS can implement ISSU by retaining the software state throughout the upgrade process and retaining packet-forwarding intelligence through its hardware subsystems, preventing service disruption.

Cisco NX-OS is also designed to take full advantage of the distributed environment on platforms with distributed hardware forwarding, so that data-plane forwarding is not affected during redundant control-plane switchover. This architecture effectively delivers true nondisruptive control-plane failover that has been verified to date by several independent third-party sources.

At its core, Cisco NX-OS is designed to take advantage of distributed platforms to reduce any effects on the data plane during control-plane operations, including software upgrades. Again, on platforms designed to be highly distributed, it uses this same high-availability infrastructure and distributed architecture to deliver fully nondisruptive ISSU. The control planes of a distributed system are upgraded without affecting data-plane forwarding, and after the control planes are successfully upgraded, the control-plane portions of any forwarding hardware that can be nondisruptively upgraded are serviced. This approach effectively transforms planned maintenance windows so that they no longer automatically imply a service outage. This increased level of continuous operation successfully accommodates business-critical environments, in which avoiding downtime or degradation of service is essential.

Enhanced Usability and Familiar Operation

Cisco IOS® Software is already the recognized leader in internetworking device operating systems. For decades, Cisco IOS Software has been the foundation for routing and switching configuration in all environments. The Cisco IOS CLI has essentially become the standard for configuration in the networking industry.

To reduce the amount of time needed to learn Cisco NX-OS and to accelerate adoption, Cisco NX-OS maintains the familiarity of the Cisco IOS CLI. Users comfortable with the Cisco IOS CLI will find themselves equally comfortable with Cisco NX-OS. In addition, Cisco NX-OS has integrated numerous user interface enhancements on top of the familiar Cisco IOS CLI to make configuration and maintenance more efficient. These are just some of the simple but effective UI enhancements found in Cisco NX-OS:

• Nonhierarchical CLI: Almost any command can be run from any mode. You can run show commands from the interface and global configuration modes. Global commands can be run from the interface configuration mode. The system is intelligent enough to determine the nature of a command and process it regardless of whether the current mode is a configuration or execution mode (see the sketch after this list).

• Configuration mode contextual CLI history: Separate CLI command histories are maintained for each configuration mode. Reentry of commands in a given configuration mode is simplified; you can use the up and down arrow keys to cycle through the command history stored for that particular configuration mode.

• Advanced multilevel and multicommand output piping: The capability to stream CLI output through advanced filters and parsing commands enables complex formatting and manipulation of information for easier parsing and processing.

• More verbose and descriptive status output: The show command output tends to be more informative and less obscure or opaque in Cisco NX-OS, allowing more effective troubleshooting and status monitoring.
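
As a quick sketch of the nonhierarchical CLI and multilevel output piping (interface and VLAN values are incidental; prompts abbreviated):

switch(config-if)# show vpc brief
switch# show running-config | include vlan
switch# show interface brief | grep up | count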

Cisco IOS Software users will quickly find themselves familiar with the Cisco NX-OS CLI and its enhancements. Typically, most networking professionals will also quickly find themselves seeking those functional enhancements in other operating systems.

Virtual Device Contexts

Cisco NX-OS also provides the capability to virtualize the platform on which it is running. Using Cisco NX-OS virtual device contexts (VDCs), a single physical device can be virtualized into many logical devices, each operating independently, effectively approximating separate physical devices (Figure 3).

Figure 3. VDCs Enable One-to-Many Virtualization of Logical Devices from a Single Platform

Cisco NX-OS VDCs differ from other implementations of virtualization found in most networking devices by applying in-depth virtualization across multiple planes of operation:

• Virtualization at the data plane: Physical interfaces are associated with a specific VDC instance. Data-plane traffic transiting the physical device can be switched only from one interface in a VDC to another within the same VDC. This virtualization is internalized in the switching system and is not subject to external influence, providing very strong data-plane separation between VDCs. The only means of integrating traffic between VDCs is through a physical cross-connection of ports between two or more VDCs.

• Virtualization at the control plane: All control-plane functions are virtualized within the operating system at the process level. This approach effectively creates separate failure domains between VDCs, reducing the fate-sharing between them. Network or system instability within the domain of a single VDC does not affect other VDCs or the network domain in which they are operating.

• Virtualization at the management plane: Virtualization of the management plane is where VDCs truly stand out compared to other network device virtualization solutions. VDCs virtualize the configuration of each logical device, and they also virtualize all supporting management environment services and capabilities. Each VDC maintains separate configurations and operational relationships with typical common support and security services such as:

– Separate independent syslog servers configurable per VDC

– Separate independent authorization, authentication, and accounting (AAA) servers configurable per VDC

– Independent addressable management IP addresses per VDC

– Separate independent NetFlow export targets per VDC

– Independent per-VDC local authentication user lists with per-VDC role-based access control (RBAC)

The end result is extremely effective separation of data traffic and operational management domains suitable for cost-effective infrastructure consolidation in security-sensitive environments.
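
A hedged sketch of VDC provisioning on a Cisco Nexus 7000 (the VDC name and interface range are hypothetical):

switch(config)# vdc Prod
switch(config-vdc)# allocate interface ethernet 2/1-8
switch(config-vdc)# exit
switch# switchto vdc Prod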

Security

Cisco NX-OS provides the tools required to enable the advanced security features needed to protect the network infrastructure as well as the actual platform on which it is running.

Security is designed using a two-pronged approach: security at the data plane and security at the control plane. This approach effectively secures transient traffic passing through a Cisco NX-OS device as well as the traffic destined for the device itself.

Cisco NX-OS currently enables the deployment of both common and more advanced data-plane and infrastructure-level security features, including:

• IEEE 802.1AE (MACsec) and the Cisco TrustSec platform

• IP source guard (IPSG)

• Dynamic Host Configuration Protocol (DHCP) snooping

• Unicast reverse-path forwarding (uRPF)

• Hardware-based IP packet validity checking

• Port, router, and VLAN access control lists (ACLs)

• IEEE 802.1x

• Bridge Protocol Data Unit (BPDU) guard

These controls enable effective protection against most man-in-the-middle, common resource, and spoofing attacks.

At the control plane, Cisco NX-OS provides a robust security toolset to prevent attacks directed at the Cisco NX-OS device itself or active sessions on the device. Tools include capabilities such as:

• Control-plane policing (CoPP)

• RBAC with RADIUS and TACACS+ integration

• Strong password checking

• Secure Shell (SSH) Protocol and Secure FTP (SFTP)

More important, the overall Cisco NX-OS system architecture provides the interfaces and structures necessary to easily implement future security features consistently and cleanly.

Unified I/O and Unified Fabric

Cisco NX-OS delivers unified I/O and unified fabric architecture capabilities, introducing a new model of data center design. Cisco Unified Fabric is a critical building block for traditional and virtualized data centers, unifying storage networking, data networking, and services to achieve transparent multiprotocol convergence, multidimensional scale, and distributed intelligence and enabling customers to derive greater value from their network platform investments. Complementing the Cisco Unified Computing and Unified Network Services, Cisco Unified Fabric is a foundational element of the Cisco Data Center Business Advantage architectural framework.

Cisco Unified Fabric provides the flexibility to run Fibre Channel, IP-based storage such as network-attached storage (NAS) and Small Computer System Interface over IP (iSCSI), or FCoE, or a combination of these technologies, on a converged network. Providing the best of both LAN and SAN capabilities, Cisco Unified Fabric enables storage network users to take advantage of the economies of scale, robust vendor community, and aggressive roadmap of Ethernet while providing high-performance, lossless characteristics of a Fibre Channel storage network. Cisco Unified Fabric deployment can easily be implemented through a phased approach; because FCoE is fully interoperable with Fibre Channel, existing networks can gradually evolve to unified fabrics.

Cisco Nexus 5000 and 4000 Series Switches and Cisco Nexus 2000 Series Fabric Extenders enable a single-hop FCoE architecture at the access layer. This capability combined with Cisco Fabric Extender Link (FEX-Link) technology provides a nearly immediate, low-cost high-density entry point into unified I/O with relatively few overall network design changes.

Cisco Nexus 7000 Series Switches and Cisco MDS 9000 Family products are also integral parts of this architecture, with the introduction of the Cisco Nexus F-Series modules enriching the platforms that support Cisco Unified Fabric and expanding the design possibilities, including multihop FCoE. The FCoE capabilities on the Cisco Nexus 7000 Series support a number of flexible designs, enabling unified network fabric deployment benefits from the access layer all the way through the aggregation layer and core of the data center network.

Cisco Unified Fabric delivers reliable, agile, and cost-effective network services to servers, storage, and applications while improving the user experience across the distributed enterprise. It provides many benefits for users, reducing capital expenditures (CapEx) through infrastructure reduction and operating expenses (OpEx) through network simplification, saving power, cooling, and space costs while protecting the organization’s existing investment in tools, training, and infrastructure.

Cisco FabricPath

The ability to build larger, more scalable, Layer 2 domains while preserving stability, resiliency, and robustness is becoming a crucial criterion for modern data center network design. Another key demand on the network infrastructure, stemming from the drive towards large-scale virtualization, is flexibility. Business applications require flexibility of provisioning any workload anywhere in the data center; private “cloud” type environments require the flexibility of transparent resource allocation.

Cisco NX-OS is architecturally designed and ready to support the evolution in Ethernet topology management and forwarding logic needed to deliver these capabilities. This evolution encompasses an industry-wide shift away from the traditional Spanning Tree Protocol to link-state routing at the link layer for topology and forwarding management. By building on the key benefits offered by technologies based on decades of research and development in IP routing, Cisco FabricPath seeks to deliver the infrastructure required to form the foundation of the data center “cloud” that powers all enterprise applications.

Multiple components contribute to this capability:

• An advanced approach to topology management that leverages link-state routing at the Ethernet link layer to provide advanced multi-pathing and network resilience, providing some of the benefits obtained in IP routing.

• The introduction of hierarchical abstraction in the Layer 2 infrastructure to reduce the amount of state that needs to be stored across devices in order to improve scalability.

• More intelligent source and destination MAC address learning and forwarding – allowing greater scalability while utilizing resources more efficiently.

• Effective traffic load balancing and engineering capabilities to enable more efficient utilization of already deployed bandwidth in the infrastructure.

Figure 4. Cisco FabricPath Architecture Provides Freedom from Typical Layer 2 Spanning-Tree Design Constraints, Enabling Network, Application, and Business Flexibility Across the Data Center
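
A hedged configuration sketch of enabling FabricPath on NX-OS (the switch ID, VLAN, and interface values are illustrative, not from the source):

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11
vlan 100
  mode fabricpath
interface ethernet1/1
  switchport mode fabricpath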

Cisco vPC and vPC+ Technology

Cisco vPC technology enables the deployment of a link aggregation from a generic downstream network device to two individual and independent Cisco NX-OS devices (vPC peers). This diverse, multichassis link aggregation path provides both link redundancy and active-active link throughput scaling with high-performance failover characteristics. vPC is delivered in the form of an industry-standard IEEE 802.3ad Link Aggregation Control Protocol (LACP) PortChannel interface. No special handling or intelligence is required by the generic downstream device other than support for IEEE 802.3ad LACP. The logical link bundling across two nodes is handled by Cisco NX-OS at the pair of upstream Cisco NX-OS devices that provide vPC capabilities.

A significant benefit to vPC is reduced reliance on the Spanning Tree Protocol to provide topology redundancy and loop management. Since the virtual PortChannels are presented as a single logical link, the actual spanning-tree topology is logically loop free, thereby reducing the number of links that are blocked by spanning tree. All link failures are rerouted to redundant active paths using PortChannel hashing logic instead of spanning tree, which results in much faster failover times. The reduction in logical looping also reduces the complexity of the overall spanning-tree domain.

vPC, when coupled with Configuration Sync, a feature that allows intelligent synchronization of configurations between the vPC peers, drastically reduces management complexity.

Cisco FEX-Link Technology

Cisco FEX-Link technology enables data center architects to gain new design flexibility while simplifying cabling infrastructure and management complexity. Cisco FEX-Link uses the Cisco Nexus 2000 Series Fabric Extenders to extend the capacities and benefits offered by upstream Cisco Nexus switches.

Fabric extenders are essentially extensions of the parent Cisco Nexus switch fabric, with the fabric extenders and the parent Cisco Nexus switch together forming a distributed modular system. This architecture enables flexible physical topologies, combining the flexibility and benefits of both top-of-rack (ToR) and end-of-row (EoR) deployments.

Cisco FEX-Link provides a technology platform for highly scalable unified server access across a range of 100 Megabit Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, unified fabric, copper, and fiber connectivity rack and blade server environments. The platform is well suited to support today’s Gigabit Ethernet and 10 Gigabit Ethernet environments and allows transparent migration to 10 Gigabit Ethernet and virtual machine-aware unified fabric technologies (Figure 5).

Figure 5. Cisco FEX-Link Architecture Provides Highly Scalable Unified Server Access Connectivity

Cisco FEX-Link architecture provides the following benefits:

• Architecture flexibility: A common, scalable, and adaptive architecture across data center racks and points of delivery (PoDs) supports a variety of server options, connectivity options, physical topologies, and evolving needs.

• Simplified operations: Simplified operations are provided through a single point of management and policy enforcement using Cisco Nexus switches. This feature eases the commissioning and decommissioning of server racks with zero-touch installation and automatic configuration of fabric extenders.

• Breakthrough business benefits: Scalable 10 Gigabit Ethernet provides 10 times the bandwidth for approximately twice the price of Gigabit Ethernet. Consolidation, cabling reduction, rack-space reduction, reduced power and cooling, investment protection through feature inheritance from the parent switch, and the capability to add functions without the need for a major equipment upgrade of server-attached infrastructure all contribute to reduced OpEx and CapEx.

Virtualization Ready

A network infrastructure that is more integrated and aware in environments that use server virtualization is a critical element of the modern data center. Cisco NX-OS incorporates several technologies and an effective roadmap for support of virtualization-aware networking. This approach allows Cisco NX-OS to enable tighter integration of the network into virtualized server environments to simplify the management, orchestration, and provisioning of data center resources.

The Cisco VN-Link technology for virtualization awareness in the network delivers several features to provide flexible methods of coupling the network configuration with virtualized endpoints. The portfolio of Cisco VN-Link products provides a variety of options that satisfy a range of customer needs. Cisco VN-Link provides advanced hypervisor switching as well as high-performance hardware switching; it is flexible, extensible, and service enabled (Figure 6).

Figure 6. Cisco VN-Link Architecture Provides Virtualization-Aware Networking and Policy Control

The technologies that constitute the building blocks of Cisco VN-Link include:

• Port profiles: These intelligent configuration templates allow dynamic provisioning of relevant configuration parameters to affected interfaces or ports.

• Virtual Ethernet interfaces: These soft interfaces can be associated with a virtualized endpoint, allowing interface parameters associated with that endpoint to fluidly move with the endpoint as needed.

• Virtual Ethernet module (VEM): This lightweight software component represents the data plane and runs inside the hypervisor. It enables advanced networking and security features and performs switching between directly attached virtual machines.

• Virtual supervisor module (VSM): This Cisco NX-OS software-based physical or virtual appliance provides command and control along with management and monitoring of virtual Ethernet interfaces through a traditional network CLI. The VSM also provides integration with the hypervisor management tools.

• VN-tag: Virtualized endpoints need to be identified with a tag when an external hardware switch is used for virtual machine traffic forwarding instead of the software switch within the hypervisor. Such an identifier is used inline to granularly identify specific traffic destined for or originated by a given virtualized endpoint. This approach allows specific streams of traffic within an aggregate to be identified and associated with a specific virtualized endpoint for more granular and efficient application of network-based services and policies.

While Cisco actively provides contributions and feedback to standards and industry bodies to improve virtualization awareness in the network, Cisco NX-OS remains a proactive leader in this space by providing capabilities today across the Cisco Nexus Family of products to help provide customers with solutions that satisfy immediate near-term requirements.

Overlay Transport Virtualization

OTV enables high-performance, simplified, and scalable multipoint extension of Layer 2 domains across underlying segments of Layer 3 routed networks. This feature allows the network to retain the fault isolation and scalability characteristics of Layer 3 routing in the core of the network, but still enables Layer 2 adjacency for applications that require it (Figure 7).

Figure 7. Cisco Overlay Transport Virtualization Provides Simplified Yet Scalable Layer 2 Extension for Applications That Require Layer 2 Adjacency

OTV intelligently connects Layer 2 domains without unnecessarily extending and joining spanning tree domains. This feature preserves individual fault domains and improves overall scalability. Additionally, OTV can provide intelligent conversation-based learning of link layer addresses (MAC addresses) without flooding participating domains, reducing the amount of traffic sent to and from the core and the amount of traffic that must be processed by OTV participating nodes.
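
A hedged sketch of a basic OTV edge device configuration (interfaces, VLAN ranges, and multicast groups are hypothetical; newer releases also require a site identifier):

feature otv
otv site-vlan 99
interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown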

Comprehensive IPv6 Support

From the start, Cisco NX-OS was designed to comprehensively support IPv6. Its broad portfolio of IPv6 features and extensible architecture allows it to integrate flexibly into world-class enterprise and service provider IPv6 environments.

Support for common IPv6 routing protocols, features, and functions and the capability to easily add more make Cisco NX-OS an excellent OS for IPv6 critical deployments.

For detailed IPv6 feature support information, please refer to the data sheet for the specific device supported by the Cisco NX-OS platform.

Advanced System Management

The operation, administration, and monitoring (OAM) of network elements are critical to long-term data center infrastructure sustainability. Cisco NX-OS includes a number of traditional and advanced features to ease OAM in the data center.

• Simple Network Management Protocol (SNMP): Traditional SNMP Versions 1, 2c, and 3 are supported for read operations, allowing integration into existing systems for effective monitoring. Cisco NX-OS on the Cisco Nexus Family is also certified in the latest versions of the EMC Ionix management platform.

• NETCONF and XML: Cisco NX-OS provides integration with IETF NETCONF-compliant systems through XML transactions over a secure SSH interface.

• Cisco Generic Online Diagnostics (GOLD): Cisco NX-OS supports Cisco GOLD online diagnostics for active component and subsystem testing.

• Cisco Embedded Event Manager: Cisco NX-OS provides a scripted interface that enables the configuration of automated event-triggered actions to be run by the system autonomously.

• Single-image download: Because of its simplified licensing and image structure, every image of Cisco NX-OS contains all the features of Cisco NX-OS available at that release. Individual features are loaded, enabled, and made available to the system platform based on Cisco NX-OS electronic licensing. Therefore, only a single image is available for download for a given version on a given platform. No decoder or version and feature chart is required to determine which image is appropriate for a given environment.

• Scheduler: Cisco NX-OS includes a generic system scheduler that can be configured to run CLI-based system commands at a given time, on a one-time or recurring basis.

• CallHome: The CallHome feature in Cisco NX-OS allows an administrator to configure one or more contact email addresses that are notified when the CallHome function is triggered. The notification process is triggered during certain events that are configurable, such as Cisco GOLD test results and scripts that run based on Cisco EEM events. CallHome enables rapid escalation of events and proactive prevention of a pending failure.

• Configuration checkpoint and rollback: Cisco NX-OS incorporates an advanced configuration and rollback facility to preserve and protect the configuration state. Configuration snapshots, or checkpoints, can be created manually at the CLI or initiated automatically by the system at major configuration events (such as the disabling of a feature). Checkpoints can be stored locally on the device in the local checkpoint database or in a file in integrated or removable storage. Using the rollback capability, the current running configuration can be restored to a particular state stored in a checkpoint.
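
A brief sketch of the checkpoint and rollback workflow described above (the checkpoint name is hypothetical):

switch# checkpoint pre-change
switch# configure terminal
switch(config)# ! ...apply changes...
switch(config)# end
switch# show checkpoint summary
switch# rollback running-config checkpoint pre-change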

Conclusion

The architecture and features discussed here are only some of the characteristics that make Cisco NX-OS the most advanced data center device OS available. The reliability, resiliency, availability, and extensibility of Cisco NX-OS provide a solid foundation on which to build business-critical data center environments. Using that foundation to advance technologies in the data center to meet the requirements of current and future generations of applications and services effectively positions Cisco NX-OS as the internetworking device operating system for the next decade.

Adios!

-Gabe@networkdojo.net

Implement jumbo frames end-to-end in a data center

==================================================================

Configuring Jumbo Frames on a Catalyst switch
==================================================================

Configure in CatOS

Cat6509> (enable) set port jumbo 
Usage: set port jumbo <mod/port> <enable|disable>
Cat6509> (enable) set port jumbo 1/1 enable 
Jumbo frames enabled on port  1/1. 
Cat6509> (enable) 2002 May 29 12:34:35 %PAGP-5-PORTFROMSTP:
Port 1/1 left bridge port 1/1 
2002 May 29 12:34:38 %PAGP-5-PORTTOSTP:Port 1/1 joined bridge port 1/1

Verify in CatOS

Cat6509> (enable) show port jumbo 
Jumbo frames MTU size is 9216 bytes. 
Jumbo frames enabled on port(s) 1/1,9/1.

Configure in Native IOS

7609(config)#int gigabitEthernet 1/1 
7609(config-if)#mtu ? 
  <1500-9216>  MTU size in bytes 

7609(config-if)#mtu 9216

Verify in Native IOS

7609#show interfaces gigabitEthernet 1/1 
GigabitEthernet1/1 is up, line protocol is up (connected) 
  Hardware is C6k 1000Mb 802.3, address is 0007.0d0e.640a (bia 0007.0d0e.640a) 
  MTU 9216 bytes, BW 1000000 Kbit, DLY 10 usec, 
  reliability 255/255, txload 1/255, rxload 1/255

======================================================================

Configuring Jumbo Frames on the Nexus switch
======================================================================

!-- You can enable the jumbo MTU
!-- for the whole switch by setting the MTU
!-- to its maximum size (9216 bytes) in
!-- the policy map for the default
!-- Ethernet system class (class-default).

switch(config)#policy-map type network-qos jumbo
switch(config-pmap-nq)#class type network-qos class-default
switch(config-pmap-c-nq)#mtu 9216
switch(config-pmap-c-nq)#exit
switch(config-pmap-nq)#exit
switch(config)#system qos
switch(config-sys-qos)#service-policy type network-qos jumbo

======================================================================

Enable Jumbo frames in ESX and ESXi
======================================================================

1. Enable jumbo frames on the virtual switch (set the MTU on the uplinks/physical NICs)

  • For a vSS (standard vSwitch), you need to use the vSphere CLI.  For example, this command sets the MTU to 9000 bytes for the vSS named “vswitch0”:
    vicfg-vswitch -m 9000 vswitch0
    Use “vicfg-vswitch -l” to list the vSwitches and their properties.
  • For a vDS (vNetwork Distributed Switch), you can set the MTU via the vSphere Client UI. From the Networking inventory menu, select the vDS and then “Edit Settings”. Set the “Maximum MTU” to the desired value (e.g., 9000 bytes is typical for jumbo frames).

2.  Enable jumbo frames on the vmkernel ports

  • Use the esxcfg-vmknic command to delete and then re-add a vmkernel interface with an MTU of 9000. On ESXi, there seems to be a glitch in creating a vmkernel port on a vDS through the vCLI, so the workaround is to create a vmkernel interface with MTU 9000 on a standard switch and then migrate it over to the vDS through the vSphere Client. You can get the status (name/address/mask/MAC address/MTU) of the vmkernel interfaces via
    esxcfg-vmknic -l
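
A hedged sketch of the delete-and-recreate approach (the port group name and addresses are hypothetical):

esxcfg-vmknic -d VMkernel-vMotion
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 VMkernel-vMotion
esxcfg-vmknic -l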


=====================================================================
Configuring MTU in the UCS GUI
=====================================================================

1) Configure System Classes

2) Configure the MTU – it is a QoS System Classes property

Procedure

 

Step 1 In the Navigation pane, click the LAN tab.
Step 2 In the LAN tab, expand LAN > LAN Cloud.
Step 3 Select the QoS System Class node.
Step 4 In the Work pane, click the General tab.
Step 5 Update the following properties for the system class you want to configure to meet the traffic management needs of the system:

Note    Some properties may not be configurable for all system classes.
• Enabled check box: If checked, the associated QoS class is configured on the fabric interconnect and can be assigned to a QoS policy. If unchecked, the class is not configured on the fabric interconnect, and any QoS policies associated with this class default to Best Effort or, if a system class is configured with a CoS of 0, to the CoS 0 system class.
  Note: This field is always checked for Best Effort and Fibre Channel.

• CoS field: The class of service. You can enter an integer value between 0 and 6, with 0 being the lowest priority and 6 being the highest priority. We recommend that you do not set the value to 0 unless you want that system class to be the default system class for traffic if the QoS policy is deleted or the assigned system class is disabled.
  Note: This field is set to 7 for internal traffic and to “any” for Best Effort. Both of these values are reserved and cannot be assigned to any other priority.

• Packet Drop check box: If checked, packet drop is allowed for this class. If unchecked, packets cannot be dropped during transmission. This field is always unchecked for the Fibre Channel class, which never allows dropped packets, and always checked for Best Effort, which always allows dropped packets.

• MTU drop-down list: The maximum transmission unit for the channel. This can be one of the following:
  – An integer between 1500 and 9216. This value corresponds to the maximum packet size.
  – fc: a predefined packet size of 2240.
  – normal: a predefined packet size of 1500.
  Note: This field is always set to fc for Fibre Channel.

• Multicast Optimized check box: If checked, the class is optimized to send packets to multiple destinations simultaneously.
  Note: This option is not applicable to Fibre Channel.
Step 6 Click Save Changes.

Enabling a QoS System Class

The Best Effort and Fibre Channel system classes are enabled by default.

Procedure

Step 1 In the Navigation pane, click the LAN tab.
Step 2 In the LAN tab, expand LAN > LAN Cloud.
Step 3 Select the QoS System Class node.
Step 4 In the Work pane, click the General tab.
Step 5 Check the Enabled check box for the QoS system that you want to enable.
Step 6 Click Save Changes.

 

Example CLI configuration of a class-based policy enabling jumbo MTU:

policy-map type network-qos system_nq_policy
  class type network-qos class-platinum
    mtu 9000
    pause no-drop
  class type network-qos class-gold
    mtu 9000
  class type network-qos class-fcoe
    pause no-drop
    mtu 2158
  class type network-qos class-default
    mtu 9000
system qos
  service-policy type network-qos system_nq_policy

 

=========================================================================== 

Configuring Jumbo Frames on the Nexus 1000v Virtual Distributed Switch
===========================================================================

MTU can be configured only on uplink (Ethernet-type) port profiles.

Per Cisco bug CSCtk05901: if you configure MTU for an Ethernet port profile, your ESX host may generate the following error:

2010 Nov 15 04:35:27 my-n1k %VEM_MGR-SLOT3-1-VEM_SYSLOG_ALERT: vssnet : 
sf_platform_set_mtu: Failed setting MTU for VMW port with portID 33554475.

In this case, the MTU value you have set is not supported by the VEM physical NIC. See your VMware documentation for more information about the supported MTU for the PNIC.

Creating a System Port Profile

You can use this procedure to configure a system port profile for critical ports.

BEFORE YOU BEGIN

Before beginning this procedure, you must know or do the following:

You are logged in to the CLI in EXEC mode.

The VSM is connected to vCenter server.

You have configured the following:

Port admin status is active (no shutdown).

Port mode is access or trunk.

VLANs that are to be used as system VLANs already exist.

VLANs are configured as access VLANs or trunk-allowed VLANs.

A system port profile must be of the Ethernet type because it is used for physical ports. This procedure configures the Ethernet type.

In an installation where multiple Ethernet port profiles are active on the same VEM, it is recommended that they do not carry the same VLAN(s). The allowed VLAN list should be mutually exclusive. Overlapping VLANs can be configured but may cause duplicate packets to be received by virtual machines in the network.

Once a port profile is created, you cannot change its type (Ethernet or vEthernet).

The MTU size you set must be less than or equal to the fixed system jumbomtu size of 9000.

For more information, see the Cisco Nexus 1000V Interface Configuration Guide, Release 4.2(1)SV1(4a).

The MTU configured on an interface takes precedence over the MTU configured on a port profile.

For more information, see the Cisco Nexus 1000V Interface Configuration Guide, Release 4.2(1)SV1(4a).

SUMMARY STEPS

1. config t

2. port-profile type ethernet profilename

3. description profiledescription

4. switchport mode trunk

5. switchport trunk allowed vlan vlan-id-list

6. no shutdown

7. system vlan vlan-id-list

8. (Optional) mtu mtu-size

9. show port-profile [brief | expand-interface | usage] [name profilename]

10. copy running-config startup-config
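
Putting those steps together, a hypothetical system uplink port profile with jumbo MTU might look like this (the profile name and VLAN IDs are examples, not from the source):

n1kv# config t
n1kv(config)# port-profile type ethernet system-uplink
n1kv(config-port-prof)# description uplink profile with jumbo MTU
n1kv(config-port-prof)# switchport mode trunk
n1kv(config-port-prof)# switchport trunk allowed vlan 10,20
n1kv(config-port-prof)# no shutdown
n1kv(config-port-prof)# system vlan 10,20
n1kv(config-port-prof)# mtu 9000
n1kv(config-port-prof)# show port-profile name system-uplink
n1kv(config-port-prof)# copy running-config startup-config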

Not too bad, right? 


– Gabe@networkdojo.net

Describe the products used in the Data Center Architecture

I am fortunate enough to be going to Cisco Live this year!  So I will jump on the frantic studying bandwagon for the CCIE Data Center beta.

I’ll skip right to one of the easy knowledge bullets – the data center products.  I won’t describe each of these, but will instead provide perfectly functional linkage.  Happy clicking!

Unified Computing System (UCS) Products

 

Application Networking Services (ANS) Products

Application-Oriented Networking


Data Center Application Services

Wide Area Application Services

 

Storage Networking Products

Cisco MDS 9000 Multilayer Directors and Fabric Switches

 

Data Center Switches

 

I leave you with an encouraging quote from who may possibly be the coolest guy ever.

“He was so learned that he could name a horse in nine languages; so ignorant that he bought a cow to ride on.”  – Benjamin Franklin

Gabe@networkdojo.net

CCIE Data Center Written Exam Topics are out!

Here they are – very exciting times indeed 🙂

CCIE ® Data Center Written Exam Topics

The topic areas listed are general guidelines for the type of content that is likely to appear on the exam. Please note, however, that other relevant or related topic areas may also appear.

All exam materials are provided and no outside reference materials are allowed.

Exam Sections and Sub-task Objectives

Cisco Data Center Architecture

  • Describe the Cisco Data Center Architecture
  • Describe the products used in the Cisco Data Center Architecture
  • Describe Cisco unified I/O solution in access layer
  • Determine which platform to select for use in the different data center layers

Cisco Data Center Infrastructure—NX-OS

  • Describe NX-OS features
    Describe the architecture of NX-OS
    Describe NX-OS Process Recovery
    Describe NX-OS Supervisor Redundancy
    Describe NX-OS Systems file management
    Describe Virtual Output Queuing (VoQ)
    Describe Virtual Device Contexts
    Configure and Troubleshoot VDCs
    Describe fabric extension via the Nexus family
  • Design and implement NX-OS Layer 2 and Layer 3 functionality
    Describe VLANs
    Describe PVLANs
    Describe Spanning-Tree Protocols
    Describe Port-Channels and Virtual Port Channels
    Compare and contrast vPC options
    Describe basic features of routing protocols in a data center environment
    Implement jumbo frames end-to-end in a data center
    Describe FabricPath
    Describe VRF lite in a data center environment
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Describe Multicast
    Describe Multicast Operation in a data center environment
    Describe Basic PIM configuration
    Describe IGMP operation and configuration on the Nexus Platform
    Validate Configurations and troubleshoot problems and failures using command line, show and debug commands
  • Describe basic NX-OS Security features
    AAA Services
    RBAC, SSH, and SNMPv3
    Control Plane Protection and Hardware Rate Limiting
    IP ACLs, MAC ACLs, and VLAN ACLs
    Port Security
    DHCP Snooping, Dynamic ARP Inspection, and IP Source Guard
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Implement NX-OS high availability features
    Describe First-Hop Routing Protocols
    Describe Graceful Restart and nonstop forwarding
    Describe OTV
    Describe the ISSU process
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Implement NX-OS management
    Describe DCNM LAN features
    Implement SPAN and ERSPAN
    Implement embedded Ethernet analyzer and Netflow
    Describe XML for network management and monitoring
    Describe SNMP for network management and monitoring
    Describe and implement Embedded Event Management
    Describe configuration management in Data Center Network Manager
    Describe Smart Call Home
    Detail connectivity and credentials required for Data Center Network Manager
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands

Cisco Storage Networking

  • Describe standards-based SAN protocols
    Describe Fibre Channel standards and protocols
    Describe SCSI standards and protocols
    Describe iSCSI standards and protocols
    Describe FCIP standards and protocols
  • Implement Fibre Channel protocol features
    Describe Port Channel, ISL, trunking and VSANs
    Design basic and enhanced zoning
    Describe FC domain parameters
    Describe Cisco Fabric Services and benefits
    Design and implement proper oversubscription in an FC environment
    Validate proper configuration of FC storage based solutions
  • Implement IP storage-based solutions
    Implement FC over IP (FCIP)
    Describe iSCSI and its features
    Validate proper configuration of IP Storage based solutions
  • Design and describe NX-OS Unified Fabric features
    Describe Fibre Channel features in the NX-OS environment
    Describe Fibre Channel over Ethernet protocol and technology
    Design and implement data center bridging protocol and lossless Ethernet
    Design and implement QoS features
    Describe NPV and NPIV features in a Unified Fabric environment
    Describe FCoE NPV features
    Describe Unified Fabric Switch different modes of operations
    Describe multihop FCoE
    Describe and configure universal ports
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands
  • Design high availability features in a standalone server environment
    Describe server-side high availability in the Cisco Unified I/O environment
    Describe Converged Network Adapter used in FCoE topologies
    Configuring NIC teaming
  • Implement SAN management
    Describe Device Manager for element management
    Describe configuration management in Data Center Network Manager
    Describe connectivity and credentials required for DCNM-SAN
    Describe how to monitor and trend utilization with DCNM Dashboard

Cisco Data Center Virtualization

  • Implement Data Center Virtualization with Nexus1000v
    Describe the Cisco Nexus1000v and its role in a virtual server network environment
    Describe Virtual Ethernet Module (VEM) on Nexus1000v
    Describe Virtual Supervisor Module (VSM)
    Describe the Cisco Nexus 1010 physical appliance and components
    Describe Port Profiles and use cases in Nexus1000v
    Describe QoS, Traffic Flow and IGMP Snooping in Nexus1000v
    Describe Network monitoring on Nexus1000v
    Explain the benefits of DHCP snooping in a VDI environment
    Describe how to intercept traffic using vPath and its benefits
    Describe and implement Nexus1000v port channels
    Describe Virtual Service Domain
    Validate configurations and troubleshoot problems and failures using command line, show and debug commands

Cisco Unified Computing

  • Unified Computing System components and architecture
    Describe Cisco Unified Computing System components and architecture
    Describe the Cisco Unified Computing server deployment and implementation model
    Describe Cisco UCS Management features
    Describe Cisco UCS Connectivity from both LAN and SAN perspective
    Describe Cisco UCS High Availability
    Describe what the capability catalog is and how it is used
    Describe Cisco UCS C Series Integration
    Describe the functional differences between physical and virtual adaptors
  • Describe LAN connectivity in a Cisco Unified Computing environment
    Describe Fabric Interconnect for LAN connectivity
    Implement server and uplink ports
    Describe End Host Mode
    Implement Ethernet Switching Mode
    Implement VLANs and port channels
    Implement Pinning and PIN groups
    Describe Disjoint Layer 2 and design consideration
    Describe Quality of Service (QoS) options and configuration restrictions
    Design and verify scalable Cisco Unified computing systems
  • Describe and implement SAN connectivity in a Cisco Unified Computing environment
    Describe Fabric Interconnect for SAN connectivity
    Describe End Host Mode
    Implement NPIV
    Implement FC Switch mode
    Implement FC ports for SAN connectivity
    Implement Virtual HBA (vHBA)
    Implement VSANs
    Implement SAN port channels
    Describe and implement direct attach Storage connectivity options
    Describe and implement FC trunking and SAN pinning
  • Describe Cisco Unified Computing Server resources
    Describe Service Profiles in Cisco UCS including templates and contrast with cloning
    Describe Server Resource Pools
    Implement updating and initial templates
    Describe Boot From remote storage
    Detail best practices for creating pooled objects
    Explain how to use the Cisco UCS KVM with vMedia and session management
    Describe local disk options and configuration protection
    Describe power control policies and their effects
  • Describe role-based Access Control Management Groups
    Understand Cisco UCS Management Hierarchy using ORG and RBAC
    Describe roles and privileges
    Implement integrated authentication
  • Cisco Unified Computing troubleshooting and maintenance
    Understand backup and restore procedures in a unified computing environment
    Manage high availability in a Cisco Unified Computing environment
    Describe monitoring and analysis of system events
    Implement External Management Protocols
    Analyze statistical information
    Understand Cisco Unified Computing components system upgrade procedure
    Describe how to manage BIOS settings
    Describe memory extension technology

Cisco Application Networking Services—ANS

  • Data center application high availability and load balancing
    Describe standard ACE features for load balancing
    Describe different server load balancing algorithms
    Describe health monitoring and use cases
    Describe Layer 7 load balancing
    Describe sticky connections
    Understand SSL offload in SLB environment
    Describe Protocol Optimization
    Describe Route Health Injection (RHI)
    Describe Server load balancing Virtual Context and HA
    Describe Server load balancing management options
  • Global load balancing
    Describe basic DNS resolution process
    Describe the benefits of the Cisco Global Load Balancing Solution
    Describe how the Cisco Global Load Balancing Solution integrates with local Cisco load balancers
    Implement a Cisco Global Load Balancing Solution into an existing network infrastructure