Recently, a customer asked me for a quick how-to for plumbing and configuring northbound port-channels on their UCS B-series setup. The basic install including management access had been completed some time ago, but as projects sometimes go, this one had been back-burnered for some time so we were just getting around to making it work.
I spun up my copy of the UCSPE (obtainable here) and grabbed a few screenshots to provide a quick walkthrough. The customer was able to follow my instructions quickly and with no further follow-up questions, so I figured I’d toss this into a quick blog post for anyone else looking to do the same.
Setup:
Each 6200-series UCS Fabric Interconnect will have a single 4-member port-channel that goes up to the pair of Nexus using a vPC.
Procedure:
1. Preconfigure the ports on the Nexus switches. Since this is a vPC arrangement, each Nexus will require identical configuration:
interface port-channel5
  description To ucs6248-a
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  spanning-tree port type edge trunk
  vpc 5
!
interface Ethernet1/5
  description To ucs6248-a
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  channel-group 5 mode active
  no shutdown
!
interface Ethernet1/6
  description To ucs6248-a
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  channel-group 5 mode active
  no shutdown
!
interface port-channel7
  description To ucs6248-b
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  spanning-tree port type edge trunk
  speed 10000
  vpc 7
!
interface Ethernet1/7
  description To ucs6248-b
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  channel-group 7 mode active
  no shutdown
!
interface Ethernet1/8
  description To ucs6248-b
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  channel-group 7 mode active
  no shutdown
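This walkthrough assumes the vPC domain and peer-link between the two Nexus switches are already up and running. If you're starting from scratch, the prerequisite config looks roughly like the following (the domain ID, keepalive addresses, and peer-link ports are just examples; swap the keepalive source and destination on the second switch):

feature lacp
feature vpc
!
vpc domain 10
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
!
interface Ethernet1/1-2
  description vPC peer-link members
  switchport mode trunk
  channel-group 1 mode active
  no shutdown
!
interface port-channel1
  description vPC peer-link
  switchport mode trunk
  spanning-tree port type network
  vpc peer-link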
2. Select the links in the UCS LAN Uplink Manager. To get there, go to the Equipment tab in UCSM, then navigate to Equipment > Fabric Interconnects > Fabric Interconnect A, and select “LAN Uplinks Manager” in the General tab of the main pane. The Uplinks Manager will come up and you can see your ports:
If the ports you are looking to port-channel already show up under the “Uplink Eth Interfaces” list of the main pane of the LAN Uplinks Manager, then the ports have already been assigned a personality. If not, then as shown above, you may need to expand “Unconfigured Ethernet Ports” in the left pane of the Uplinks Manager, select the appropriate physical ports, right-click them, and select “Configure as Uplink Port.” They will then show up under the “Uplink Eth Interfaces” list in the main pane. Repeat this on Fabric Interconnect B.
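If you prefer the UCSM CLI to the GUI for this step, configuring a port as an uplink goes roughly like this (prompts omitted; slot 1, port 5 is just an example, and you would repeat it for each port and again under “scope fabric b”):

scope eth-uplink
scope fabric a
create interface 1 5
commit-buffer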
3. Next, click the “Create Port Channel” button under the list of uplink interfaces and, in the small pop-up that appears, select “Fabric A”. The location of the button is shown here:
4. Next, you will go through the port-channel wizard. First, select a port-channel ID; this can be any number you want. 100 for Fabric A and 200 for Fabric B, or 10 and 20, or 101 and 102 are all common choices, and you could also make them match the Nexus-side config. The name is just a label field, so make it anything you want (though I don’t believe spaces are allowed). Hit “Next” and then select the 4 ports that should show up (or the 4 ports you’re assigning to the port-channel if more are available in the list). Hit the “>>” button to assign them to the port-channel, and hit “Finish“.
When done with the A fabric, do the same for Fabric B (starting with the “Create Port Channel” button again).
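For the CLI-inclined, the GUI wizard has a UCSM CLI equivalent that, from memory, goes roughly like this (prompts omitted; the port-channel ID of 100 and the slot/port numbers 1/1 through 1/4 are just examples, and you would repeat the whole thing under “scope fabric b” with its own ID):

scope eth-uplink
scope fabric a
create port-channel 100
create member-port 1 1
exit
create member-port 1 2
exit
create member-port 1 3
exit
create member-port 1 4
exit
enable
commit-buffer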
5. When finished, the LAN Uplinks Manager should look like this:
Hit the OK button, and then cable the links up based on the diagram: FI-A ports 1-4 to ports 5 and 6 of both Nexus, and FI-B ports 1-4 to ports 7 and 8 of both Nexus.
6. Check status of the port-channel from the Nexus side with “show port-channel summary” and/or “show vpc 5” and “show vpc 7”. If everything is up, you should be good to go.
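If the LACP negotiation worked, the summary output should look something along these lines (heavily trimmed; your member interfaces will match whatever you cabled):

nexus-a# show port-channel summary
Group Port-       Type     Protocol  Member Ports
      Channel
------------------------------------------------------------------
5     Po5(SU)     Eth      LACP      Eth1/5(P)    Eth1/6(P)
7     Po7(SU)     Eth      LACP      Eth1/7(P)    Eth1/8(P)

You’re looking for SU (switched, up) on the port-channel and P (up in port-channel) on each member, and in the “show vpc” output each vPC should report a status of up with a successful consistency check. If a vPC refuses to come up, “show vpc consistency-parameters vpc 5” will usually point at the mismatch.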
Hi Bob,
Quick question/comment on your design. Bear in mind, I’m coming at this from the viewpoint of a sysadmin, and one with no blade experience.
IMO, I would not want my links between the fabric interconnect and the Nexus 5K spread across two switches. Meaning instead of having a vPC that’s spread across Nexus A *and* B, I would prefer all 4 links go from A to A and from B to B.
Here is why, and perhaps you’ll have some comments as to why that’s not a problem, or something else I’m not considering. Think about where your storage uplinks are going to be located. It’s highly likely a SAN will have uplink 1 in switch A and uplink 2 in switch B. In turn, a typical Windows/VMware host will also have uplink 1 in FI A and uplink 2 in FI B. The reason for this is that iSCSI, through multipathing, has built-in HA and intelligently load balances the links. When you throw in a vPC that spans multiple switches, you’re now throwing a cog/layer of complexity into the iSCSI MPIO, IMO, for several reasons:
1. You’re cutting the effective bandwidth in half, or if nothing else adding an extra hop between the SAN and host uplinks. While it’s quite possible that the path of a packet may go host > FI A > Nexus A > SAN link A, it’s also possible that the packet will need to take the path host > FI A > Nexus B > Nexus vPC > SAN link A. In a world where flash is being added to the SAN in hopes of adding performance, every hop counts IMO.
2. As mentioned before, the SAN has native link HA built in. If host link A can’t access SAN port A, host link B will still be working. So there’s no need to span switches.
3. The design I’m debating would provide more predictable performance/outcomes. If anything in “A” dies, then the A side is down and everything fails over to the B side.
4. Things like vMotion have similar design considerations, although not as big of a deal.
5. Most servers nowadays are natively able to aggregate links, so why not put the redundancy back on them?
The only issue that I can see which might occur is either the FI or the Nexus becoming a black hole in the event that its parent switch goes down, or vice versa. Then again, I’d like to think perhaps the inter-Nexus vPC could help with that (perhaps I’m wrong).
What are your thoughts? Hopefully I outlined my concern well enough.
Hi Eric, good to hear from you and sorry for not replying for far too long! I kept waiting until I had time to really digest your comment and compose a reply, and you know how that goes…
Anyway, your concern has some validity, and to be sure, Cisco’s recommendation when the upstream switch is not MLAG/vPC-capable is to do just as you suggest: FI-A to upstream switch A, and B to B.
The benefit of using a vPC or MLAG when possible is avoiding a disruption in the fabric in event of a failure on the upstream switch. In the A-to-A/B-to-B model, if the upstream switch fails, you have a fabric failover on your hands. That *should* work fine, but it still means fate is shared between two devices that don’t really have any tight coupling, and you’re hoping that everything happens correctly. It also means that an upstream device failure has removed redundancy within the UCS environment — this is what I mean by fate-sharing, but I wanted to specifically highlight the impact.
In the MLAG model, a failure of the upstream switch reduces bandwidth across the board (as you recognized) but does not cause a fabric failover. Is this definitely better? Well, it very much depends on your situation, and there are few “rights” and “wrongs” in design, but generally avoiding a fabric failover is desirable, and avoiding unnecessary fate-sharing is also highly desirable. However, the A-to-A model is a valid design. If you feel it is better for your environment and would rather endure the chain-reaction failover in exchange for full bandwidth on a single fabric, I don’t think anyone would tell you your design is “wrong” just because you chose it despite MLAG being an option.
Briefly to your other comments, yes vMotion is very much something that should be considered here. You want to avoid a vMotion from blade 1 to blade 2 in the same chassis traversing the upstream switch. Usually you want to set the vMotion vSwitch up with explicit failover order (not an active/active distribution) and have them all prefer one fabric. I’ve also seen a recipe where only one vMotion vNIC is created on the UCS and fabric failover at the UCS layer is associated with it, to remove the decision from vSphere and guarantee that in any given chassis all vSphere hosts are using the same fabric at the same time. I’ve never implemented that method, but it’s interesting to me.
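If you go the explicit failover order route, it can be set per host in the vSphere client or scripted. Here’s a minimal sketch with esxcli, assuming a standard vSwitch portgroup named “vMotion” and assuming vmnic2 and vmnic3 are the vNICs pinned to fabric A and B respectively (all of those names are made up for the example):

esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion --active-uplinks vmnic2 --standby-uplinks vmnic3

That keeps every host preferring the fabric A path for vMotion while leaving the B-side vmnic as standby.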
Finally, your question about storage connection optimality (comments #1 and #2): The vast majority of UCS installs I work on are a FlexPod-style architecture (since I work at a FlexPod Premier Partner!), and in that case we always vPC both NetApp controller heads into both Nexus switches, so both of your SAN controllers are linked to both Nexus. The vPC rules state that a frame will never cross the vPC peer-link if both “sides” of the vPC are up, so the suboptimal case you refer to will never happen unless a link to a SAN head is down. If FI-A hashes onto the link to Nexus B, then Nexus B will switch the frame out its local vPC member link to SAN head A if that’s what is needed. If we’re talking about an FC environment, then this is all out the window and you definitely do NOT cross-connect the FIs to the alternate SAN fabric switches.
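For a concrete picture, the NetApp-facing config on each Nexus looks much like the UCS uplink config in step 1, just pointed at a controller instead of an FI. A rough sketch, with the vPC ID, interface numbers, and VLANs made up for illustration (the controller side would be an LACP ifgrp spanning its links to both Nexus):

interface port-channel11
  description To NetApp controller A
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  vpc 11
!
interface Ethernet1/11
  description To NetApp controller A
  switchport mode trunk
  switchport trunk allowed vlan 1-50
  channel-group 11 mode active
  no shutdown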
Hope all this helps. If you ever need any assistance with your UCS environment, feel free to reach out. We’d love to work with you.
Thank you so much for putting this together. I am new to UCS and still learning. I was able to follow your instructions without any issue.
Glad it was helpful!