Cisco ACI Multi-Site

By John Gardner | 46 Min Video | Technical Level: Intermediate-Advanced

In this topic, John will be discussing how the APICs at each site communicate and how policies are applied across all sites. After attending this session, you will be able to understand the communication and policy instantiation between Cisco ACI sites. Key topics include what ACI is, the decoupling of endpoint location and policy, ACI evolution, Multi-Site components, the Multi-Site Orchestrator, and policy distribution.

Watch more videos like this on our YouTube Channel.

View our Data Center Courses.

Agenda

My name is John Gardner, and thank you for joining us for the Cisco ACI Multi-Site Tech Talk webinar. The first thing I want to do is a quick introduction. We're going to talk about what exactly application-centric infrastructure is, and then about what an Application Policy Infrastructure Controller is. The reason we want to cover those two first is that they're the linchpins that Multi-Site operates on top of; we must understand those two things before we get into Multi-Site. I also want to talk about the evolution of application-centric infrastructure to show how we got here. Lastly, I want to talk about Multi-Site itself. What are the different components? What's a Multi-Site Orchestrator? What are the benefits? Why do we have this whole Multi-Site thing to begin with? As for the target audience, I would say anybody who's interested in cloud or data center, or anybody who already has an application-centric infrastructure and is looking to expand out. Again: what exactly is application-centric infrastructure, what are some of the benefits, and what are some of the components?

What is Cisco ACI?

What exactly is ACI? The first thing I want to talk about is that application-centric infrastructure is a fabric that we can connect devices to. I'm sure you're used to seeing the core/distribution/access design. We've modified that (I like to think we've improved it) into what we call a spine-leaf architecture. The spine-leaf architecture is fantastic because of its predictable hop count: any leaf is only a single spine hop away from any other leaf. With that being said, in this Clos architecture, as it's commonly referred to, we have what are called leaves and then we have our spines. Keep in mind that anything you plug into ACI, you plug into the leaves. The whole point of building this fabric is so that we can start supporting things like multi-tier applications, and I want to take a side step here and explain what exactly a multi-tier application is.

A great example is checking your bank account. When we go to check our bank account, the first thing you're going to do is go to a web server. So we've got our web server over here, let's say fifththird.com. That's when the website pops up; you're at an actual web server. But what happens if you want to log in and check your account? You're going to need some sort of authentication. If you're familiar with Lightweight Directory Access Protocol (LDAP), that could be Active Directory or it could be Linux, but what it allows us to do is authenticate that user account. But what about all those transactions? Well, they're not held on the web server, because a lot of people have to hit that web server every day. They're not held on the LDAP server either, because that's pretty much just for authentication. So we usually have a third tier called the database tier. This could be a SQL Server or Oracle database, but it holds all of your information. Traditionally, getting this to work was very arduous, because there were a lot of VLANs, firewalls, load balancers, and SSL offloaders in the way. We had to point traffic toward the firewall and the load balancer, and this was all done with things like routing protocols, static routes, and VLANs. The problem is that if we continued down that path, our routing and switching infrastructure would be bogged down by that large configuration. Not to mention, building out that configuration is error-prone, and how often do you actually remove a configuration once that client or that server is gone?

ACI

Let's insert application-centric infrastructure. What application-centric infrastructure allows us to do is build these multi-tier applications anywhere on this fabric, and that's the beauty of it. I'm not tying a specific server to an interface anymore. It can live anywhere on the fabric. That is the number one advantage here. We can also support traditional applications, like a standard web server, alongside virtualized applications. We definitely support things like Kubernetes, Nutanix, and containerized systems like Docker. The big thing here is that it is open, and what we mean by open is that it has the ability to communicate and connect with any vendor. I don't care if it's an F5 load balancer, a Citrix NetScaler, or a Palo Alto firewall; you want to plug an HP ProCurve switch into one of these leaves, go for it. All we care about here in application-centric infrastructure is the application; we're moving away from certain traditional paradigms. Note that we also support both physical and virtual endpoints. What are virtual endpoints? Those are going to be things like VMware and Hyper-V, so I can support the hypervisors that you folks have already implemented. The biggest thing is policy abstraction. That is the number one goal of application-centric infrastructure. We want to get away from what we call the box-by-box configuration methodology.

Differences

Let me show you the difference here. In a traditional architecture, we would have a server, and it was probably plugged in redundantly. But here's the problem. When you configured that server, you had to configure those two interfaces to be in a VLAN, then you had to configure all of the upstream interfaces for trunking. You also had to allow the VLANs. And what if the device wanted to talk to another subnet? We would have to insert routing and then configure the router for routing between those different subnets. What application-centric infrastructure does for us now, instead of configuring all of those interfaces, is let us create what's called an application network profile. Instead of being configured on every single interface, imagine that application network profile just floating around. When we plug something into our infrastructure, let's say we plug in our server here, the infrastructure has the ability to identify what application is going across these interfaces. It looks at the application profile and asks, what should I do with this traffic? It configures it on demand, and if you remove that server, the configuration is gone, so we're running lean and clean. We're trying to abstract the configuration from the interfaces. We want this configuration living somewhere else so that we don't have to configure everything every single time something touches our fabric.
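
To make that concrete, here is a minimal sketch of what creating an application network profile through the APIC's REST API could look like in Python. The APIC address, credentials, and tenant/app/EPG names are placeholders, and error handling and certificate verification are omitted for brevity:

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
# Log in to the APIC; the returned session cookie authenticates later calls
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH, verify=False)

# One application network profile (fvAp) with a single web EPG (fvAEPg),
# defined purely as policy, not tied to any physical interface
anp = {
    "fvTenant": {
        "attributes": {"name": "Example_Tenant"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "Banking_App"},
                "children": [{
                    "fvAEPg": {"attributes": {"name": "Web_EPG"}}
                }]
            }
        }]
    }
}

# POST the profile into the policy universe; the fabric renders it onto
# a leaf only when a matching endpoint actually attaches
session.post(f"{APIC}/api/mo/uni.json", json=anp, verify=False)
```

Notice that nothing in the profile references an interface: the configuration lives "somewhere else," exactly as described above.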

 

Application Policy Infrastructure Controller (APIC)

With that being said, where is all of that done? The application policy infrastructure controller (APIC) is truly the brains of the operation. The APIC sits inside the fabric and any time one of these leaves sees a traffic pattern, it asks the APIC, what should I do with this traffic? The APIC is where we’ve configured our application network profile so it simply tries to match the traffic pattern. For instance, let’s say that it’s coming from the outside. If it comes from the outside and it’s trying to go to a web server, do we have a policy that permits that? Yes. Do whatever that policy says. So if we say permit that traffic, it will permit that traffic. The only thing I did here was create an application network profile and let ACI do its thing.
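
Conceptually, the decision the leaf asks the APIC-delivered policy to make looks something like this toy lookup. This is just a mental model of the whitelist behavior, not actual APIC code; the EPG names are made up:

```python
# Toy model of contract lookup: policy is keyed by
# (consumer EPG, provider EPG) pairs.
contracts = {
    ("Outside", "Web_EPG"): "permit",
    ("Web_EPG", "DB_EPG"): "permit",
}

def resolve(src_epg: str, dst_epg: str) -> str:
    """Whitelist model: traffic with no matching contract is dropped."""
    return contracts.get((src_epg, dst_epg), "deny")

print(resolve("Outside", "Web_EPG"))  # permit
print(resolve("Outside", "DB_EPG"))   # deny (no contract exists)
```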

 

So here's our spine-leaf architecture. If you're familiar with FabricPath, it's similar. We've created a high-speed fabric with 40-gig and 100-gig connections back here on the integrated overlay, so it is a very fast backplane. You have a minimum of two spines, and there are a couple of reasons for that. The spines are the backbone; you don't want just one and have that backbone go offline, or you're going to have a bad day. Leaves never connect to each other. There are two things, LLDP and the Mis-Cabling Protocol (MCP), that watch for that, and if they see it, they'll shut down those ports. As for devices: since we're talking about Multi-Site this isn't entirely true anymore, but up until the 2.0 series, you could only connect devices (servers, routers, and other switches) into the leaves. Today, we're going to talk about the 3.0 series of ACI, which does support Multi-Site.

Again, the Cisco APIC is the brains of the operation, all the way from creating the application network profile to building the actual fabric itself. Whenever you kick off a discovery of the fabric (and I want to mention this just so you know), you don't have to configure these links. All you do is discover the devices, and they build the fabric themselves with the information they get from the APIC. What is the APIC holding? Management information. Think of it this way: if you've ever copied and pasted a configuration onto an interface, imagine that instead of you copying it from a file server to the interface, the file server asked, do you need this? That's exactly what the APIC does. It holds management information, but it is not a control plane. The leaves and spines still do their own thing. They still have their own control plane, and you can still connect to these devices and configure them from the CLI, for instance. Also keep in mind that if you have one controller that controls the entire environment, that's a single point of failure. You're going to want multiple, so generally speaking we have three or more, though it could run on one. The biggest thing here is that it instantiates policy changes. Let's say a new customer jumps into our environment. All we have to do is create a new application network profile, and it's instantiated onto the fabric as soon as it's needed.

Decoupling of Endpoint Location and Policy

This is one of the biggest things for application-centric infrastructure, whether it's single-site, multi-site, single-pod, or multi-pod; it does not matter. We're trying to remove the need to map a device to a specific interface. That is the biggest point I would emphasize for ACI. Endpoints: we're going to identify them based upon their IP and MAC address information. Just like a normal switch, what's the whole point of a switch? It's not to switch; it's to learn endpoint location. That was the big difference between hubs and switches. With these switches, the first thing they're going to do is learn the IP address and MAC address information. Think of them as multi-layer switches: they can do Layer 2 switching and Layer 3 routing all day. They learn the IP and MAC information, then they send that information up to the spines, so the spines know the location of everything.
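
If you want to see what the fabric has learned, the APIC exposes those endpoints over the same REST API through the fvCEp (client endpoint) class. A small sketch; the APIC address and credentials are placeholders, as before:

```python
import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin",
                                              "pwd": "password"}}})

# Ask the APIC for every endpoint the fabric has learned (class fvCEp)
resp = session.get(f"{APIC}/api/class/fvCEp.json", verify=False)
for obj in resp.json()["imdata"]:
    ep = obj["fvCEp"]["attributes"]
    # Endpoints are identified by MAC and IP, not by a physical interface
    print(ep["mac"], ep["ip"], ep["dn"])
```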

 

But we're going to do something different from what you've probably seen in traditional architecture. We're going to map each endpoint to a virtual tunnel endpoint (VTEP), which is essentially a VXLAN source and destination. If you've ever done GRE or IPsec tunnels, you know that you have a source and a destination you've got to configure. That's exactly what the VTEPs are. Our virtual tunnel endpoints are the source and destination of the traffic, and we simply build a tunnel between them. Forwarding occurs between the VTEPs. Transport is based on VXLAN header information, which includes things like your VXLAN tunnel ID, and there's some ACI-specific information in there that gives us a good idea of the policy we're trying to instantiate.
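
The standard VXLAN header itself is small: 8 bytes carrying a flags field and a 24-bit VXLAN Network Identifier, which is the tunnel ID mentioned above (per RFC 7348; ACI adds its own policy fields on top, which are not shown here). A quick sketch of packing and unpacking it:

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 8 flag bits (0x08 = VNI valid),
    24 reserved bits, the 24-bit VNI, then 8 reserved bits."""
    return struct.pack("!II", 0x08 << 24, vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8

hdr = pack_vxlan_header(vni=10011)
print(len(hdr), unpack_vni(hdr))  # 8 10011
```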

 

The last thing here is that a distributed reachability database maps endpoints to VTEP locations. Let's decipher that. Whenever we learn a new endpoint, we're going to report it up to the spines. The spines maintain a database through the Council of Oracle Protocol (COOP). Any time a leaf learns a new device location, it updates, or synchronizes, that location across all of the spines. This is fantastic, especially if you're looking for a device: let's say you're ARPing for it and you don't know where it is. You can ask the spine, and the spine probably knows where it's located.
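
As a mental model, the COOP database is just an endpoint-to-VTEP mapping replicated across the spines. A toy sketch of that idea (this illustrates the concept, not the actual protocol):

```python
# Toy model of the COOP reachability database: endpoint -> VTEP.
# Every spine holds a synchronized copy of the same mapping.
spines = [dict(), dict()]  # two spines, each with its own copy

def learn_endpoint(mac: str, ip: str, vtep: str) -> None:
    """A leaf reports a newly learned endpoint; all spines synchronize."""
    for spine_db in spines:
        spine_db[(mac, ip)] = vtep

learn_endpoint("00:50:56:aa:bb:cc", "10.1.1.10", "vtep-leaf101")

# An ARP miss can be resolved by asking any spine for the location
print(spines[0][("00:50:56:aa:bb:cc", "10.1.1.10")])  # vtep-leaf101
```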

 

ACI Evolution

That's a quick primer on what application-centric infrastructure is, what the fabric is, and what the APIC is for. The reason I talk about it is that, notice, the only things you see here on the slide are the fabrics and the APICs. When we first started, back in the day when dinosaurs roamed the Earth, we had ACI 1.0, and this is actually where I got started. When we started with the leaf-and-spine architecture, we only had a single pod. Like any other fabric, you usually have some routing protocol, or some protocol, that keeps track of it. That's what a single pod means: we have only one instance of that protocol. For your information, that protocol is IS-IS (Intermediate System to Intermediate System). Well, we started noticing that most companies and customers didn't have just one location. They had multiple locations, and the problem was that we had to start looking at configuration consistency. I needed the configurations and policies to be the same across multiple data centers. This was super important, so we created what was called a stretched fabric.

Again, this is just a single pod running IS-IS to keep track of where all the locations are. I also want you to note these APIC clusters. Remember those three APICs we were talking about that control the fabric? We were maybe putting two APICs at one location and another APIC at another location. We were stretching the administration, that cluster, across two data centers. That was fantastic, because at this point all I have to do is create a configuration on one APIC and it will instantiate across the board. Back in those days, what we had between the data centers was what we called an Inter-Pod Network (IPN). You still see that here in Multi-Pod and Multi-Site. But it was all usually plugged into the leaves. Again, it was just one fabric and one cluster. The problem with one fabric and one cluster is that it doesn't provide redundancy. What happens if one of these pods or data centers fails? One data center failed. So in 2.0, we started supporting Multi-Pod. Multi-Pod means that the two pods act independently of each other from a fabric perspective but have a consistent policy model between them. I also want to mention that we are plugging things into the spines now. We're going to plug in maybe a couple of ASR 9Ks at each location, build some OSPF networking between them, and then you can plug in your inter-pod network.

Again, keep in mind that these are now two separate fabrics, not one, so I could make it to where if this one fails, we simply fail over to another pod. I also want to mention that this is in one data center. This is a singular data center in Multi-Pod. If we go over here to 3.0, 4.0, and now 5.0, it supports what we call multiple availability zones. Basically, each site is its own fabric with its own APIC cluster. They are completely independent of each other, so we're not inside one building with a stretched cluster. "But John, that means I have two clusters that are independent of each other. How do I keep that configuration consistent? Isn't that the number one thing we want here?" Absolutely, it is! That's why we have what's called the Multi-Site Orchestrator (MSO). This is a fantastic utility that allows us to hook in. By the way, all you need is the IP address and credential information, and you add the two sites in the Multi-Site Orchestrator. At this point, whenever I create a policy, it pushes down to both, so they're mirror twins of each other, which is fantastic.

Multi-Site Components

What are some of the components we're going to be seeing? The biggest thing is the inter-site network and how the sites communicate back and forth. This can be any Layer 2 or Layer 3 environment, but what we're going to be sending across it is Ethernet VPN, a Layer 2 Multiprotocol BGP EVPN. The reason we need Multiprotocol BGP is that, in the event you add something new to either one of these fabrics, Multiprotocol BGP will send that information between the locations so that we know about all of our available subnets. Another component is the inter-site control plane between the sites. That's all handled by the Multi-Site Orchestrator. I want to mention that normally we connect to it via a graphical user interface, but if you're familiar with something like Postman or Python, we can also drive the Multi-Site Orchestrator programmatically. This is great because now not only can we replicate policy through the Multi-Site Orchestrator, I can also use the REST API to talk to multiple MSO instances. As you can see, this consistency keeps building on top of itself.
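
Since the MSO exposes a REST API, the same style of scripting shown earlier for the APIC works here too. A hedged sketch in Python: the address and credentials are placeholders, and the endpoint paths and response fields shown are illustrative, since they vary across MSO/Nexus Dashboard Orchestrator versions, so check your version's API reference:

```python
import requests

MSO = "https://mso.example.com"  # placeholder orchestrator address

# Log in and grab a bearer token (path and payload are illustrative)
login = requests.post(
    f"{MSO}/api/v1/auth/login",
    json={"username": "admin", "password": "password"},
    verify=False,
)
headers = {"Authorization": f"Bearer {login.json()['token']}"}

# List the ACI sites the orchestrator manages; adding a site is a
# similar POST carrying the APIC's address and credentials
sites = requests.get(f"{MSO}/api/v1/sites", headers=headers, verify=False)
for site in sites.json().get("sites", []):
    print(site.get("name"))
```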

 

In this example, this is a pure Layer 3 transport. This can be anything Layer 3. What we're doing across it, if you're familiar with VXLAN, is using the Layer 3 version; VXLAN has both a Layer 2 and a Layer 3 flavor, and here in this Multi-Site design we're using the Layer 3 version. As a quick primer, I should probably explain what VXLAN is.

Let's go ahead and draw a router and a couple of switches. We have VLAN 11 on one side and VLAN 11 on the other. Now, anybody who's been doing routing and switching for a while probably already knows that things like broadcasts are not going to cross here. Why? Because they stop at the router's interface. What VXLAN allows us to do is create a tunnel between these two interfaces on the Layer 3 device, so that we can enter the tunnel and leave the tunnel, and it feels like these are just one big switch. What's neat is, if you look at the leaves here in ACI, that's exactly what they are. There's a little router and a little switch inside of them, so whenever these two leaves are talking, what they're actually doing is creating a VXLAN tunnel between them. They're both VLAN 11 now. At this point, this Layer 3 environment can be anything. We're simply creating a tunnel, similar to a GRE tunnel if you're familiar with Generic Routing Encapsulation. It just has a tunnel ID; call it a day and go.
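
To see what that encapsulation looks like on the wire, here is a minimal sketch using Scapy. All addresses and the VNI are made up; the inner frame represents the original VLAN 11 host-to-host traffic:

```python
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Outer headers: leaf-to-leaf (VTEP-to-VTEP) across any Layer 3 transport.
# VXLAN rides on UDP port 4789.
outer = (
    Ether(src="aa:aa:aa:00:00:01", dst="bb:bb:bb:00:00:02")
    / IP(src="10.0.0.1", dst="10.0.0.2")    # source/destination VTEPs
    / UDP(sport=49152, dport=4789)          # 4789 = VXLAN
    / VXLAN(vni=10011)                      # tunnel ID standing in for VLAN 11
)

# Inner frame: the original Ethernet traffic, carried untouched
inner = Ether(src="00:50:56:aa:bb:cc", dst="00:50:56:dd:ee:ff") / IP(
    src="192.168.11.10", dst="192.168.11.20"
)

packet = outer / inner
packet.show()  # both hosts believe they share one big switch
```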

 

Multi-Site Orchestrator

Now, the Multi-Site Orchestrator. I just wanted to give you a quick glimpse into what the Multi-Site Orchestrator looks like. One of the things you're going to have is a dashboard. It shows you everything and is very in-depth. This one is showing site status, so you can see there are a couple of sites that look like they have some critical errors and a couple that look fine. The thing to note here is that these sites are still up; this is simply showing you the sites and whether any of them have issues. If I clicked on Sites here, all I have to do is put in the information for the APIC. The Multi-Site Orchestrator then uses an API to talk down to your APIC. You can also create schemas (your policies) and tenants. That's the basic administration of the Multi-Site Orchestrator.

Policy Distribution

I did want to address this, because I know everybody's sitting there going, "Oh no, my policy, my data, is going across the series of tubes, the interwebs? That's not good. We want something like IPsec in there to provide some security and integrity." Well, let's start over here with our VTEP information. We define the source and destination VTEPs so we know where to send information. Then we also have our VXLAN header that gives us the tunnel information. And then we have CloudSec (cloud security) and MACsec. I know you don't see it here, but yes, we do have Layer 2 encryption and Layer 3 encryption: MACsec and CloudSec. Just a quick heads-up: the APIC is in charge not of the actual encryption itself, but of holding the certificates, encryption keys, and so on. The leaves and spines do the encryption themselves; in this case, it would be the spines.

I did want you to see that, again, whenever we update information here, whenever we add a device, that leaf sends a COOP message, a Council of Oracle Protocol message, up to the spines. Before, that information would just stay in its local site, but we're in Multi-Site now. What's important at this point is to be able to update that information over to the other site, and yes, that's done with COOP. It's a COOP message encapsulated in a VXLAN header and sent across that inter-site network. That means that at this point, each one of these spines knows about all of the devices connected to either site.
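
Extending the earlier toy reachability model, cross-site learning just means the update is also forwarded to the remote site's spines. Again a conceptual sketch, not the actual protocol:

```python
# Toy extension of the reachability model to two sites. In reality the
# update travels as a COOP message inside a VXLAN header across the
# inter-site network; here it is just a function call.
sites = {
    "site1": [dict(), dict()],  # each site's spines hold a copy
    "site2": [dict(), dict()],
}

def learn_endpoint(local_site: str, mac: str, vtep: str) -> None:
    # Synchronize the local site's spines first...
    for spine_db in sites[local_site]:
        spine_db[mac] = vtep
    # ...then forward the update to every remote site's spines
    for name, spine_dbs in sites.items():
        if name != local_site:
            for spine_db in spine_dbs:
                spine_db[mac] = vtep

learn_endpoint("site1", "00:50:56:aa:bb:cc", "vtep-site1-leaf101")
print(sites["site2"][0]["00:50:56:aa:bb:cc"])  # known at the remote site
```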

 

Now, at this point, since I know where all of my devices are located, I can start building a policy that says, "do I want this endpoint to talk to that endpoint?" That leads to the next thing, which is contracts for Multi-Site. Contracts are similar to what you're familiar with as ACLs, your access control lists. Now, instead of you sitting down at each one of these devices and pounding out ACLs all day like we used to do, if the fabric sees this endpoint talking across here to the other endpoint, it installs that ACL for me automatically. What's even better is that it creates an inbound and outbound ACL right here on this interface and on endpoint two's interface. You didn't do anything; it just allowed that traffic because the application profile we created permitted it. Again, another neat part about application-centric infrastructure is the fact that if I remove this endpoint, it removes all of that mapping on the leaf for me. That conserves memory and CPU for us, which I think is pretty awesome.
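
In APIC object-model terms, a contract is a vzBrCP object containing subjects that reference filters. A minimal sketch of posting one; the tenant, contract, and filter names are placeholders, and the referenced filter is assumed to already exist:

```python
import requests

APIC = "https://apic.example.com"  # placeholder APIC address
session = requests.Session()
session.post(f"{APIC}/api/aaaLogin.json", verify=False,
             json={"aaaUser": {"attributes": {"name": "admin",
                                              "pwd": "password"}}})

# Minimal contract: one subject referencing an existing filter
# (for example, a filter matching the database port)
contract = {
    "vzBrCP": {
        "attributes": {"name": "Web_to_DB"},
        "children": [{
            "vzSubj": {
                "attributes": {"name": "db-traffic"},
                "children": [{
                    "vzRsSubjFiltAtt": {
                        "attributes": {"tnVzFilterName": "sql-filter"}
                    }
                }]
            }
        }]
    }
}

# Contracts live under a tenant; the fabric renders them as zoning
# rules (the "ACLs") on leaves only where endpoints actually attach
session.post(f"{APIC}/api/mo/uni/tn-Example_Tenant.json",
             json=contract, verify=False)
```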

 

What about inserting things like firewalls? I want to plug a firewall into my environment. There's this thing we do called service graph insertion. What this means is, let's say endpoint one wants to talk to endpoint two. I can point to a firewall here in the contract. This term actually shows up in a couple of different places: it's used here for policy distribution, but it's also used for policy enforcement for L4-7 devices. It creates what's called a shadow EPG. What that means is that when, say, endpoint two needs to talk through a firewall, we create a shadow EPG between them to keep track of the source and destination.

Otherwise, if you think about it, we'd just be saying, "oh yeah, go over here, here's the firewall," and not know what to do with the traffic next. By creating the shadow EPG where the firewall is located, the fabric can see where the inbound interface is and where the outbound interface is, thus keeping track of where the traffic from endpoint two came from and where it's going on endpoint one. This is something you don't have to configure and will never see. It runs under the hood; it's that secret sauce inside the APIC that allows communication across L4-7 devices, so your firewalls, load balancers, and SSL offloaders, which are all important devices that we want to connect to the network.
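
As a conceptual model, the shadow EPGs simply add intermediate hops to the resolved path so the fabric can track traffic through the service device. A toy sketch, with made-up EPG and device names:

```python
# Toy model of service graph insertion: if a contract names an L4-7
# device, the path is expanded through shadow EPGs on its interfaces.
service_graphs = {("EPG1", "EPG2"): "firewall"}  # contract -> L4-7 device

def resolve_path(src: str, dst: str) -> list[str]:
    device = service_graphs.get((src, dst))
    if device is None:
        return [src, dst]  # direct leaf-to-leaf forwarding
    # Shadow EPGs represent the device's inbound/outbound interfaces,
    # preserving where traffic came from and where it is going
    return [src, f"shadow:{device}-in", f"shadow:{device}-out", dst]

print(resolve_path("EPG1", "EPG2"))
# ['EPG1', 'shadow:firewall-in', 'shadow:firewall-out', 'EPG2']
```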

 

Instructor Bio:


As a specialist in the data center space, John provides consulting, implementation, and support of Cisco data center infrastructures. In addition to his CCSI, he holds CCNP certifications in the Cisco Data Center, Cloud, and Service Provider spaces, as well as CCNP Cyber Ops, and can deliver the FP200 course for Cisco HTD and Firepower security training. John has developed full data center labs for Cisco Nexus and ACI products, created data center derivative works courseware, and has recorded several data center videos for Cisco eLearning products. His broad experience will help us continue to grow and deliver outstanding product options to our customers.
