
Solving a Common Beginner’s Problem When Pinging from an OpenStack Instance

Sep 15th, 2014 10:59pm
Feature image via Flickr Creative Commons

Trouble pinging the outside world from an OpenStack instance is a common problem for people who are just getting started with the open cloud platform.

Here’s why and how I figured out a way to communicate with my instances in OpenStack.

Being a CentOS fan, I began my journey into the OpenStack space just as many other developers would: by disabling SELinux and iptables. In my experience, these tend to create problems.

However, after days of struggling through the architecture of OpenStack Networking, I realized that OpenStack needs iptables to be running. I also figured out that I needed to set SELinux to permissive mode instead of disabling it completely.
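
For reference, here is roughly what that looks like on a CentOS box. The commands assume a SysV-style CentOS 6 setup; on CentOS 7 you would use systemctl and the iptables-services package instead.

# Switch SELinux to permissive mode immediately (lasts until reboot)
setenforce 0

# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Make sure iptables is running and enabled at boot
service iptables start
chkconfig iptables on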

It turns out that OpenStack uses iptables rules on TAP devices such as vnet0 to implement security groups, and Open vSwitch is not compatible with iptables rules that are applied directly on TAP devices that are connected to an Open vSwitch port. I’ll walk you through what that means in detail.
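
You can see this for yourself on a compute node. Chain and device names vary by deployment and release, so treat the following as an illustration rather than exact output:

# Filter rules that reference a particular TAP device (vnet0 is an example name)
iptables-save | grep vnet0

# Per-port chains created by Neutron's Open vSwitch hybrid firewall driver,
# typically named neutron-openvswi-i... (ingress) and neutron-openvswi-o... (egress)
iptables -S | grep neutron-openvswi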

The diagram below shows how OpenStack uses iptables. It’s a busy diagram — try not to get intimidated!

[Diagram: under-the-hood-scenario-1-ovs-compute, showing the virtual networking devices on an OpenStack compute node. Source: OpenStack documentation.]

To start off, focus on the legend on the left-hand side, where there are four virtual networking devices (a short sketch of how to create and inspect each one by hand follows the list):

  1. TAP devices, such as vnet0, are how hypervisors such as KVM and Xen implement a virtual network interface (VIF, or vNIC) card. An ethernet frame sent to a TAP device is received by the guest operating system.
  2. Veth pairs are like a virtual network/ethernet cable for the virtual network devices that they connect. An ethernet frame sent to one end of a veth pair is received by the other end of the veth pair.
  3. Linux bridges, indicated by the blue box at the base of the diagram, act like hubs. A bridge is a networking device that connects two network segments at the link layer; here, we are connecting the virtual network to the physical one and hence need a bridge. You can connect multiple (physical or virtual) network interface devices to a Linux bridge. Any ethernet frame that comes in from one interface attached to the bridge is transmitted to all of the other devices.
  4. Open vSwitch bridges behave like a virtual switch: Network interface devices connect to Open vSwitch bridge’s ports and the ports can be configured much like a physical switch’s ports, including VLAN configurations.
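
Here is that sketch. The device names (tap0, veth0, veth1, br-demo, ovs-demo) are made up purely for illustration; in a real deployment Nova and Neutron create and name these for you.

# TAP device (normally created by the hypervisor, e.g. as vnet0)
ip tuntap add dev tap0 mode tap
ip link show tap0

# veth pair: a frame sent into veth0 comes out of veth1
ip link add veth0 type veth peer name veth1

# Linux bridge: attach interfaces with brctl
brctl addbr br-demo
brctl addif br-demo tap0
brctl show br-demo

# Open vSwitch bridge: ports are managed with ovs-vsctl
ovs-vsctl add-br ovs-demo
ovs-vsctl add-port ovs-demo veth1
ovs-vsctl show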

It’s also important to understand what libvirt is. Wikipedia defines it as “an open source API, daemon and management tool for managing platform virtualization. This is what is used to manage KVM, Xen, VMware ESX, QEMU and other virtualization technologies.”

It would have been quite straightforward for me to connect vnet0 to br-int (the integration bridge, implemented using Open vSwitch). However, a Linux bridge has to be brought in for the following reason.

Libvirt supports Open vSwitch bridges. OpenStack also has security groups, which are where we indicate which ports should be open or closed and who can access them. Security groups are implemented by manipulating iptables rules, and, as noted above, those rules cannot be applied directly to a port on an Open vSwitch bridge. Hence, Neutron uses a Linux bridge, the so-called hybrid bridge, to connect each VM to br-int, the integration bridge.

Looking at the diagram, vnet0 is not connected directly to an Open vSwitch bridge; instead, it is connected to a Linux bridge, qbrXXX. This bridge is connected to the integration bridge, br-int, via the (qvbXXX, qvoXXX) veth pair.
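
You can trace this chain yourself on a compute node. The XXX portions of the names are per-port identifiers, so the exact names will differ on your system:

# Linux bridges, with the vnet/tap and qvbXXX interfaces attached to each
brctl show

# Open vSwitch bridges, including br-int and its qvoXXX ports
ovs-vsctl show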

This arrangement is what is used for forwarding packets to and from instances on a compute node, forwarding floating IP traffic, and enforcing security group rules.

The packet that leaves the instance's VIF passes through nine virtual devices to make it to the outside network. Using tcpdump, you can monitor the ARP requests along the way, as sketched below. However, one of the most essential things that gets overlooked is this: how are the packets going to make it back to the VM or instance?
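
Here is roughly what that monitoring looks like. The interface names are assumptions; substitute whatever devices your deployment actually uses:

# ARP traffic on the device closest to the instance
tcpdump -n -e -i vnet0 arp

# Repeat further along the chain (qbrXXX, qvbXXX, the physical NIC) to see where replies stop
tcpdump -n -e -i eth0 'arp or icmp'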

For the moment, let's ignore all the other virtual network devices and just assume that the packets from the VM reach the compute node over what we'll call the internal network, and then leave the compute node for your office or home LAN. Let's call that the external network.

Now suppose the internal network is 192.168.1.0/24 and the external network is 10.1.0.0/16. The dnsmasq service in OpenStack assigned the instance an IP of 192.168.1.5, so when a packet leaves the VM, its source IP is 192.168.1.5. With the help of several virtual networking devices, this packet makes it to the external network. If there is a reply to this packet, addressed back to 192.168.1.5, it would never make it back, as the external network knows nothing about the 192.168.1.0/24 range.

This is where IP masquerading comes to our rescue. IP masquerading was used in the past by those who had several machines to connect to the Internet but only one gateway. The trick is to source-NAT the packet leaving the network so that it carries the host's or gateway's IP address. This is done in the POSTROUTING chain, just before the packet is finally sent out. When this is done correctly, the reply is addressed to a valid, routable IP, makes it back to the gateway, and from there is routed to the appropriate machine.
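
As a minimal sketch of that classic setup, the gateway would carry a rule like the one below. The 192.168.1.0/24 subnet is the LAN from the example above; eth0 and the 203.0.113.1 gateway address are made-up placeholders:

# Rewrite the source address of anything leaving eth0 from the LAN to the gateway's own address
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j SNAT --to-source 203.0.113.1

MASQUERADE is the same idea for the case where the outgoing address is assigned dynamically, which is why it is what you will usually see.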

Similarly, in the OpenStack world, the packet leaving the VM is masqueraded so that it appears to originate from the compute host. With the following settings in place on the compute node, the packet is forwarded on to the Linux network namespace.

Edit /etc/sysctl.conf to contain the following:

net.ipv4.ip_forward=1
# Controls source route verification
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
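
Then reload the settings so they take effect without a reboot:

# Apply /etc/sysctl.conf and confirm forwarding is on
sysctl -p
sysctl net.ipv4.ip_forward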

This packet will be forwarded to the virtual router or switch within OpenStack, which will be able to NAT it back to the VM. Masquerading rules in iptables look like the following:

-A POSTROUTING -s 192.168.1.0/24 -j MASQUERADE

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
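
On deployments where Neutron runs the router inside a network namespace, the equivalent rules live in that namespace, and the packet counters tell you whether traffic is actually hitting the MASQUERADE rule. The router ID below is a placeholder for whatever ip netns list reports:

# Rules and packet counters in the nat table
iptables -t nat -L POSTROUTING -n -v

# The same check inside a Neutron router namespace
ip netns list
ip netns exec qrouter-<router-id> iptables -t nat -S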

With this in place, you should have no more trouble pinging your new OpenStack instances and also pinging the outside world from them.
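
A quick sanity check in both directions never hurts. The addresses here are examples only:

# From inside the instance: reach the outside world
ping -c 3 8.8.8.8

# From your workstation: reach the instance's floating IP, if one is assigned
ping -c 3 10.1.0.25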

Venu works for ThoughtWorks where he is responsible for building crucial cloud apps. He manages and monitors the cloud apps by automating most of the processes using the language he is passionate about — Python!
