GNS3 Arista

GNS3 is a great tool to visualize your (home-)lab environment and simulate all kinds of network topologies using different virtualization and isolation technologies. It has been widely used to create environments using vEOS-lab, but because vEOS-lab requires a fair amount of resources (at least 2GB of RAM per instance), the scale of these labs was often quite limited, especially on low-memory devices.


Arista’s cEOS-lab is a new way of packaging the EOS-lab suite. Using the Docker container daemon, it is possible to use the kernel of the host machine and to only run the EOS processes that are required on the machine, making it possible to vastly reduce the memory footprint of each node.

With cEOS-lab 4.22.0F and above, it is also possible to simulate a Management port, providing for full support of scale-testing your configuration using exactly the same automation flow you would use in your production environment.


Importing cEOS-lab as a Docker image

To be able to add cEOS-lab nodes into a GNS3 topology, we first have to import the cEOS-lab archive into the Docker daemon on the machine running GNS3. The cEOS-lab software can be obtained through the Arista Software Download portal, listed under the “cEOS-lab” folder. In this example we’ll be using version cEOS-lab-4.22.0.1F.tar.xz, which has been copied to the working folder when executing these commands.
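Assuming the archive sits in the current working directory, the import looks something like this (the ceosimage:4.22.0.1F repository:tag is just an example name, pick whatever you prefer):

    # Import the cEOS-lab archive into the local Docker daemon.
    # The repository:tag "ceosimage:4.22.0.1F" is an example; use any name you like.
    docker import --change 'VOLUME /mnt/flash/' cEOS-lab-4.22.0.1F.tar.xz ceosimage:4.22.0.1F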

Please note the --change 'VOLUME /mnt/flash/' parameter, which provides persistence of the flash contents across cEOS-lab instance restarts.

After this, we can verify that the image has been correctly added by running the following command:
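Listing the local Docker images should now show the freshly imported repository and tag:

    # The imported image should appear with the repository and tag used above.
    docker images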

Adding cEOS-lab as a GNS3 appliance

Then, we add cEOS-lab to GNS3 through the GUI. Go to “GNS3 -> Preferences” to open the settings page. Then move to the “Docker Containers” view. Click “New” to add the entry; if the only image available in Docker is ceosimage:4.22.0.1F, it will be selected by default.

Click “Next” and enter the desired name.

Select the number of Adapters that you would like cEOS-lab to have. The first adapter will be linked to the Management0 port, so in this example we will have 8 Ethernet ports available in EOS by using a total of 9 adapters.

Select which command has to be executed when the container boots. This has to be the Linux init process which in turn launches the rest of EOS.
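For releases before 4.23.0F, simply pointing the start command at systemd's init is typically sufficient:

    /sbin/init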

As of version 4.23.0F, we need to supply the variables to systemd by changing the start command to the following:
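A start command along these lines (one long line) is commonly used; the variable set mirrors the environment variables entered in the next step:

    /sbin/init systemd.setenv=INTFTYPE=eth systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker systemd.setenv=MGMT_INTF=eth0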


Select the console type that GNS3 uses to expose the node. The telnet console shows the status of the init process, while the auxiliary console can be used when there is a need to enter the CLI.

Enter the environment variables. These are of vital importance for cEOS-lab to work. Note that when the “MGMT_INTF” variable is not passed, eth0 will not be linked to Management0, and the first port visible from cEOS will be Ethernet1 (mapped to eth1) instead.
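A typical set of variables, entered as name=value pairs in the GNS3 environment field, looks like this:

    CEOS=1
    EOS_PLATFORM=ceoslab
    container=docker
    ETBA=1
    SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1
    INTFTYPE=eth
    MGMT_INTF=eth0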


Click “Finish” and don’t forget to click “OK” (make sure not to hit “Cancel”) on the Preferences page.


Adding cEOS-lab node(s) to your GNS3 topology

Drag the node to the canvas, then start it. Right-click and click “Console” to see the boot process of the node.

Wait until the boot process has completed (it will tell you that it has reached the target “Graphical Interface”).

Right-click again and click “Auxiliary Console” to open a second console, then run the command “FastCli” to enter the CLI of the node.



Verifying connectivity

To test if this all works we can add another node, then connect the two ends together and do a ping test.
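As a minimal sketch (the addresses are arbitrary), give each node an address on the connected port and ping across the link; the peer node would use 10.0.0.2/30 on its own Ethernet1:

    configure
    interface Ethernet1
       no switchport
       ip address 10.0.0.1/30
    end
    ping 10.0.0.2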

MLAG Functionality

In order for MLAG to function properly, each cEOS-lab node must have a unique system MAC which helps specify the correct system-id. You can accomplish this by creating a file named “ceos-config” that defines the system MAC address along with a serial number (optional). First, we’ll drop into the bash shell and create the file and modify permissions so we can write to the file:
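From the EOS CLI, something like the following does the trick (the switch# prompt is illustrative; the file lives on the flash volume we made persistent earlier):

    switch#bash
    $ cd /mnt/flash
    $ sudo touch ceos-config
    $ sudo chmod a+w ceos-config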

We’ll then define our SERIALNUMBER and SYSTEMMACADDR variables and write them to ceos-config:
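For example (the serial number and MAC address shown are placeholders; use your own unique values per node):

    $ SERIALNUMBER=cEOS-lab-host1
    $ SYSTEMMACADDR=001c.7300.0001
    $ echo "SERIALNUMBER=$SERIALNUMBER" > ceos-config
    $ echo "SYSTEMMACADDR=$SYSTEMMACADDR" >> ceos-config
    $ cat ceos-config
    SERIALNUMBER=cEOS-lab-host1
    SYSTEMMACADDR=001c.7300.0001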

Again, each node will need a unique system MAC address (e.g. 001c.7300.0001, 001c.7300.0002, etc.). The serial number can be any arbitrary value (e.g. cEOS-lab-host1) but should also be unique per node.


MLAG configuration will now function properly using your cEOS-lab nodes.

OSPF/ISIS

In order for OSPF and/or ISIS to work with cEOS-lab, the INTFTYPE needs to be set to ‘et’ rather than ‘eth’. We’ll change ‘INTFTYPE=eth’ to ‘INTFTYPE=et’ in our start command (as of 4.25.1F):
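The updated start command then looks something like this (everything else stays the same, only INTFTYPE changes):

    /sbin/init systemd.setenv=INTFTYPE=et systemd.setenv=ETBA=1 systemd.setenv=SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1 systemd.setenv=CEOS=1 systemd.setenv=EOS_PLATFORM=ceoslab systemd.setenv=container=docker systemd.setenv=MGMT_INTF=eth0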

We’ll also update our environment variables to include ‘INTFTYPE=et’:
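The variable list becomes:

    CEOS=1
    EOS_PLATFORM=ceoslab
    container=docker
    ETBA=1
    SKIP_ZEROTOUCH_BARRIER_IN_SYSDBINIT=1
    INTFTYPE=et
    MGMT_INTF=eth0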

After the node boots, you’ll notice that the CLI only recognizes the management interface (Ma0):

However, running ‘bash ifconfig’ shows all interfaces:

We’re going to create a script that converts the Linux interface names from ‘eth(x)’ to ‘et(x)’. Here are the commands:
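A sketch of the steps (the script name and location are arbitrary choices; keeping it under /mnt/flash makes it persistent):

    switch#bash
    $ sudo touch /mnt/flash/intf-rename.sh
    $ sudo chmod 0755 /mnt/flash/intf-rename.sh
    $ sudo vi /mnt/flash/intf-rename.sh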

We’ll use the following text in the script to change the names of the interfaces:
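A minimal version of such a script could look like this (eth0 is skipped because it stays mapped to Management0):

    #!/bin/bash
    # Rename the data-plane interfaces from ethX to etX so that cEOS,
    # started with INTFTYPE=et, recognizes them. eth0 (Management0) is skipped.
    for intf in $(ls /sys/class/net/ | grep -E '^eth[1-9][0-9]*$'); do
        idx="${intf#eth}"
        ip link set dev "$intf" down
        ip link set dev "$intf" name "et${idx}"
        ip link set dev "et${idx}" up
    done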

Now that we have our script created, we’ll setup an Event-Handler to run the script on-boot:
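For example (the handler name and delay are arbitrary; the script path matches the one used above):

    event-handler INTF-RENAME
       trigger on-boot
       action bash sudo /mnt/flash/intf-rename.sh
       delay 60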

The next time you start your cEOS-lab node, the interfaces will be recognized via the CLI:

You can then setup OSPF and/or ISIS on your cEOS-lab nodes:
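A minimal OSPF example for one node (addresses, router-id and area are placeholders; the peer would mirror this on its side of the link):

    interface Ethernet1
       no switchport
       ip address 10.0.0.1/30
    !
    router ospf 1
       router-id 1.1.1.1
       network 10.0.0.0/30 area 0.0.0.0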

Testing eAPI

Last but not least, we’ll verify that eAPI is working so that we can do some automation testing. First, we’ll put an IP address on the management port:
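For example (the address is a placeholder for one reachable from your browser):

    interface Management0
       ip address 192.168.122.10/24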

Then we’ll enable the API on the node:
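eAPI is enabled under management api http-commands:

    management api http-commands
       no shutdown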

Make sure that you can reach the GNS3 cloud instance from your browser. This can be done by bridging it directly to your network, or, if you are running GNS3 on another server, by using an SSH tunnel. For more information on setting up tunnels through SSH, visit https://www.ssh.com/ssh/tunneling/example
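As an example, a simple local port forward through the GNS3 server could look like this (hostnames, user and ports are placeholders for your environment):

    # Forward local port 8443 to the node's eAPI/HTTPS port via the GNS3 server;
    # the explorer is then reachable at https://localhost:8443/explorer.html
    ssh -L 8443:<Management0-IP>:443 user@<gns3-server>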

After we set up the connectivity to the GNS3 cloud instance, we can test eAPI through the web GUI (located at https://<Management0-IP>/explorer.html):

Checking that the commands have succeeded, we can show the Management0 interface through eAPI (using json output):
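The same check can also be scripted from the command line; this is the JSON-RPC call the explorer issues under the hood (address and credentials are placeholders for your own):

    curl -sk -X POST https://admin:admin@<Management0-IP>/command-api \
      -H 'Content-Type: application/json' \
      -d '{"jsonrpc": "2.0", "method": "runCmds",
           "params": {"version": 1, "cmds": ["show interfaces Management0"], "format": "json"},
           "id": "1"}'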

You’ve now got cEOS-lab up and running in GNS3. Enjoy creating virtualized labs which are much more scalable than when using regular vEOS-lab instances!

How to Run vEOS 4.16.6M in GNS3 1.5


This document will go over how to install a vEOS VM instance on both Windows 7 and Mac OS X. The steps are exactly the same on both OSes. We will start with the Windows 7 installation and then show a few screenshots from the Mac. Finally, we will conclude this post by going over the steps to run vEOS entirely locally on your machine (although this isn't recommended, since running vEOS inside the GNS3 VM is much more efficient than running it locally).

  • Aboot-veos-8.0.0.iso
  • vEOS-lab-4.16.6M.vmdk
  • GNS3 version 1.5
  • GNS3 VM
  • Have your local GNS3 client able to ping the GNS3 vm

First, preferably use VMware Workstation or vSphere to host your GNS3 VM. In older GNS3 0.8.3 versions, you could run everything locally on your workstation. With the new GNS3 1.x, we have the ability to offload running the VMs by placing these instances in a separate VM called the “GNS3 VM”. On a laptop or PC with enough resources, you can run both the GNS3 app and the GNS3 VM on one box. For the Windows implementation, I’m personally using VMware Workstation with the GNS3 VM on the same subnet as my GNS3 app.


You could use VirtualBox, but the issue, per the GNS3 VM documentation page, is: “We highly recommend VMware because VirtualBox doesn’t support nested virtualization, this means any VM running inside the GNS3 VM will be slow because the guest VM cannot access to your CPU virtualization instructions (VT-x or AMD-V).”

So for this setup, run either VMware Workstation, VMware Player, vSphere, or another hypervisor that allows nested virtualization. You can follow this feature request here.

Make sure your GNS3 app can reach the IP address shown in the GNS3 VM setup screen. To add this server to your GNS3 app, go to Preferences > Server > Remote Servers and type in the IP address and username (gns3/gns3 by default). Then make sure you enable the GNS3 VM under the GNS3 VM server tab and select the remote server you entered.

Images

Now that you’ve added the GNS3 VM to the GNS3 app, let’s create the vEOS template. You can either import a pre-configured template from the GNS3 Marketplace, or create one from scratch. I personally prefer the “from-scratch” method as it’s quicker to do than downloading a gns3 appliance.

To create a vEOS template, go to Preferences > QEMU VMs > New.

Select Run the QEMU VM on a remote computer and select the GNS3 VM we defined before. Next, name the template “vEOS” or whatever naming convention you’d like.


Next, change the QEMU binary to the x86_64 one and increase the RAM from 1GB to 2GB. vEOS requires at least 2GB of RAM to work properly. On the next page, select the Aboot ISO file; if you haven’t uploaded it to the GNS3 VM yet, it will be uploaded here. Click Finish.

Now let’s click Edit on this template and go to the HDD tab. For the HDB disk (primary slave), choose the vEOS vmdk file. Also, under the Network tab, increase the number of NICs to however many you’d like on the vEOS VM.

Finally, open the Advanced settings tab of the template and, under Additional settings, type in “-enable-kvm”.

Once completed, you can drag & drop the vEOS VM from the All Devices dock. Because this is now a template, a new VM instance is created every time you drag it out. From here you can configure each VM in the workspace individually.

From here, right-click a VM (or select all VMs) and click Start. Now you can right-click and select Console to get into the console of your QEMU VM!

The steps above are exactly the same as you would do on a Mac. Here is a screenshot of the final product: we still have our GNS3 VM as well as our GNS3 app running locally on the Mac.

The last way to run vEOS and GNS3 is all on one box. If you have a beefy PC or server, you can keep all roles on a single machine. To do this, follow the same steps as before; the only difference is to omit the part where we check off Use GNS3 Server, and to choose the local machine when creating the QEMU VM. To do this, under Preferences > GNS3 VM server, untick Enable the GNS3 VM.

Then, we go under QEMU VMs and select New. This time, we choose to run the QEMU VM on the local computer.

Notice that the path to the QEMU binaries changes from the /usr/bin location (where we originally used the resources of the GNS3 VM), since we are now running the processes locally. Ensure you’ve selected the x86_64w.exe QEMU binary.


From here, follow the same steps we did when we configured the vEOS vm in Windows.


Start off the vm just like we did previously by right clicking the vm and selecting Start.


An issue with running this locally is the delay: it takes noticeably longer for the VM to fully boot up compared to running it off the GNS3 VM. Even entering commands is delayed, so it's best to run this from the GNS3 VM. Testing on my PC, booting took almost 40 minutes (see output below).

If you’d like to run vEOS without using GNS3, you can also do this in VMware vSphere or VirtualBox. There is already an article on this here.