OpenStack deployment via packstack from RDO
This article covers the steps involved in deploying OpenStack using "packstack" from RDO.
- Deployment assumptions:
- OS: CentOS 7.1 (64-bit; 7.1.1503 Core)
- OpenStack release: "Kilo" (April 2015)
- RDO release: rdo-release-kilo-1 (noarch; 12 May 2015)
Single node
Note: Using neutron with a flat network driver.
$ sysctl -a | grep ip_forward   #=> 1
$ sestatus                      #=> set to "permissive"
$ systemctl stop NetworkManager.service
$ systemctl disable NetworkManager.service
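If either of the first two checks fails, a minimal way to set them persistently on CentOS 7 (standard file locations):

$ setenforce 0    # SELinux permissive for the running system
$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config    # persist across reboots
$ echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
$ sysctl -p /etc/sysctl.d/99-ip-forward.conf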
$ yum update -y
$ yum install -y https://rdoproject.org/repos/rdo-release.rpm
$ yum install -y openstack-packstack
$ packstack --allinone --provision-demo=n
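For repeatable deploys, packstack can also generate an answer file that you edit and feed back in; a minimal sketch:

$ packstack --gen-answer-file=/root/answers.txt
$ vi /root/answers.txt    # e.g., set CONFIG_PROVISION_DEMO=n
$ packstack --answer-file=/root/answers.txt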
$ cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/    # backup
$ cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=FF:FF:FF:FF:FF:FF
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
EOF
$ cat << EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
MACADDR=FF:FF:FF:FF:FF:FF
BOOTPROTO=static
IPADDR=10.1.100.15
#PREFIX=23
NETMASK=255.255.254.0
GATEWAY=10.1.100.1
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes
EOF
- Modify the following config parameters to define a logical name, "extnet", for our external physical L2 segment (this will be referenced as a provider network when we create the external network below), and to support more than just the vxlan driver (since this deploy uses a "flat" driver):
$ openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
    ovs bridge_mappings extnet:br-ex
$ openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
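To verify the changes took effect (openstack-config also supports a --get mode):

$ openstack-config --get /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
    ovs bridge_mappings    #=> extnet:br-ex
$ openstack-config --get /etc/neutron/plugin.ini ml2 type_drivers    #=> vxlan,flat,vlan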
- Restart all related network services:
$ service network restart
$ service neutron-openvswitch-agent restart
$ service neutron-server restart
- Bug fix:
$ ovs-vsctl br-set-external-id br-ex bridge-id br-ex
$ service neutron-openvswitch-agent restart
- The following credentials, endpoints, etc. will be used for all OpenStack utilities (e.g., `nova`, `neutron`, etc.) or API calls for the remainder of this section:
$ cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://10.1.100.15:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
$ . keystonerc_admin # source the admin environment
- Set up networks:
$ neutron net-create --provider:network_type flat \
    --provider:physical_network extnet \
    --router:external \
    --shared external_network
$ neutron subnet-create --name public_subnet \
    --enable_dhcp=False \
    --allocation-pool start=10.1.100.16,end=10.1.100.20 \
    --gateway=10.1.100.1 external_network 10.1.100.0/23
$ neutron net-create private_network
$ neutron subnet-create --name private_subnet \
    --allocation-pool start=10.10.1.100,end=10.10.1.200 \
    --gateway=10.10.1.1 private_network 10.10.1.0/24
$ neutron router-create router1
$ neutron router-interface-add router1 private_subnet
$ neutron router-gateway-set router1 external_network
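Sanity-check what was just created:

$ neutron net-list
$ neutron subnet-list
$ neutron router-port-list router1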
- Create a new (non-admin) tenant and user:
$ keystone tenant-create --name demo --description "demo tenant" --enabled true
$ keystone user-create --name demo --tenant demo --pass "password" \
    --email demo@example.com --enabled true
- Populate glance with an initial image:
$ CIRROS_IMAGE_NAME=cirros-0.3.4-x86_64
$ CIRROS_IMAGE_URL="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
$ curl ${CIRROS_IMAGE_URL} | \
    glance image-create --name="${CIRROS_IMAGE_NAME}" \
      --is-public=true \
      --container-format=bare \
      --disk-format=qcow2
$ glance image-list
- Create basic security groups/rules (to allow basic networking traffic in/out of VMs):
$ nova secgroup-create all "Allow all tcp ports"
$ nova secgroup-add-rule all TCP 1 65535 0.0.0.0/0
$ nova secgroup-create base "Allow Base Access"
$ nova secgroup-add-rule base TCP 22 22 0.0.0.0/0
$ nova secgroup-add-rule base TCP 80 80 0.0.0.0/0
$ nova secgroup-add-rule base ICMP -1 -1 0.0.0.0/0
- Create keypair:
$ nova keypair-add admin > /root/admin.pem
$ chmod 600 /root/admin.pem
$ nova keypair-list
- Create a very small ("nano") flavor for use in testing (it spins up faster, uses fewer resources, etc.):
$ nova flavor-create m1.nano 42 64 0 1    # <name> <id> <ram> <disk> <vcpus>
$ nova flavor-list
- Set up environment variables to keep track of UUIDs, etc.:
$ INSTANCE_NAME=rdo-test-01
$ GLANCE_IMAGE_ID=$(glance image-list | \grep ${CIRROS_IMAGE_NAME} | awk '{print $2}')
$ PRIVATE_NET_ID=$(neutron net-list | \grep private_network | awk '{print $2}')
- Spin up a nova instance (VM):
$ nova boot --flavor m1.nano --image ${GLANCE_IMAGE_ID} --nic net-id=${PRIVATE_NET_ID} \
    --key-name admin --security-groups base ${INSTANCE_NAME}
$ INSTANCE_ID=$(nova list | \grep ${INSTANCE_NAME} | awk '{print $2}')
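The instance takes a moment to build; wait for its status to reach ACTIVE, and optionally peek at the boot output:

$ nova list    # wait for status ACTIVE
$ nova console-log ${INSTANCE_NAME} | tail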
- Associate a floating IP with the new instance (this "floating IP" is how the instance communicates with the Internet):
$ neutron floatingip-create external_network
$ FLOATINGIP_ID=    # set to the "id" field from the floatingip-create output above
$ NEUTRON_COMPUTE_PORT_ID=$(neutron port-list -c id -c device_owner -- \
    --device_id ${INSTANCE_ID} | \grep compute | awk '{print $2}')
$ neutron floatingip-associate ${FLOATINGIP_ID} ${NEUTRON_COMPUTE_PORT_ID}
$ neutron floatingip-show ${FLOATINGIP_ID}
$ neutron floatingip-list
$ nova floating-ip-list
- Log into instance:
$ ssh -i /root/admin.pem cirros@10.1.100.x # use associated floating IP
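Note: if key-based login fails, CirrOS 0.3.x images also accept password authentication (user "cirros", password "cubswin:)"):

$ ssh cirros@10.1.100.x    # password: cubswin:)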
- Direct access to Nova metadata:
$ SHARED_SECRET=$(crudini --get /etc/nova/nova.conf neutron metadata_proxy_shared_secret)
$ META_SIGNATURE=$(python -c 'import hmac,hashlib;print hmac.new("'${SHARED_SECRET}'","'${INSTANCE_ID}'",hashlib.sha256).hexdigest()')
$ ADMIN_TENANT_ID=$(keystone tenant-list | \grep admin | awk '{print $2}')
$ ENDPOINT=http://10.1.100.15:8775
$ curl -s -H "x-instance-id:${INSTANCE_ID}" \
    -H "x-tenant-id:${ADMIN_TENANT_ID}" \
    -H "x-instance-id-signature:${META_SIGNATURE}" \
    ${ENDPOINT}/latest/meta-data
# RESPONSE:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
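For comparison, the same metadata is reachable from inside the instance via the standard link-local metadata address:

cirros$ curl -s http://169.254.169.254/latest/meta-data/
cirros$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4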
Troubleshooting
- Make sure the "
openvswitch
" kernel module is installed and configured properly:
$ lsmod | grep openvswitch
$ modinfo openvswitch
- Poke around your Open vSwitch setup:
$ ovs-vsctl show
$ ovs-vsctl list-br
$ ovs-vsctl list-ports br-ex    # => em1,phy-br-ex
$ ovs-ofctl dump-flows br-ex
$ ovs-ofctl dump-ports br-ex
$ ovs-ofctl show br-int
$ ovs-vsctl list bridge
$ ovs-vsctl list port
$ ovs-vsctl list interface
$ ovs-appctl fdb/show br-int
- Look through the OVS DB log to see what/how/when ports, bridges, interfaces, etc. were created for a given instance:
$ ovsdb-tool show-log | grep ${INSTANCE_ID}
# example output:
record 467: 2015-10-06 23:37:05.589
  "ovs-vsctl: /bin/ovs-vsctl --timeout=120 \
     -- --if-exists del-port qvo55bfa719-2b \
     -- add-port br-int qvo55bfa719-2b \
     -- set Interface qvo55bfa719-2b \
        external-ids:iface-id=55bfa719-2b39-4793-823e-4a79405d9256 \
        external-ids:iface-status=active \
        external-ids:attached-mac=fa:16:3e:b8:dc:e2 \
        external-ids:vm-uuid=0e6c8821-22f3-4c6a-95b9-0fbf855e82e0"
$ brctl show
$ brctl showmacs qbrc648c3ca-76
$ ps afux | grep [d]nsmasq
$ cat /proc/$(pidof dnsmasq)/cmdline | tr '\0' '\n'
$ ps afux | grep qemu
$ QROUTER=qrouter-$(neutron router-list | \grep router1 | awk '{print $2}')
$ ip netns exec ${QROUTER} ip a
$ ip netns exec ${QROUTER} ip r
$ ip netns exec ${QROUTER} ping -c3 10.10.1.1    # gateway
$ ip netns exec ${QROUTER} iptables -S -t nat
$ ip netns identify $(pidof dnsmasq)
#~OR~
$ QDHCP=qdhcp-$(neutron net-list | \grep private_network | awk '{print $2}')
$ ip netns exec ${QDHCP} ip r
- Check the host's iptables rules related to our instance:
$ NEUTRON_COMPUTE_PORT_ID=$(neutron port-list -c id -c device_owner -- \
    --device_id ${INSTANCE_ID} | \grep compute | awk '{print $2}')
$ iptables -L neutron-openvswi-i$(echo ${NEUTRON_COMPUTE_PORT_ID} | awk '{print substr($0,1,10)}')
# example output (trimmed):
RETURN  tcp  --  anywhere  anywhere  tcp dpt:http
RETURN  tcp  --  anywhere  anywhere  tcp dpt:ssh
RETURN  icmp --  anywhere  anywhere
The above rules should match the Nova secgroup "base" rules we defined previously, i.e.:
$ nova secgroup-list-rules base
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 80        | 80      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
- Capture traffic to/from host and/or instance:
$ tcpdump -nni eth0 \( dst host 10.0.0.19 and port 22 \)
$ tcpdump -nni eth0 icmp
$ tcpdump -i any -n -v \
    'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'
- See what virsh knows about our instance:
$ VIRSH_INSTANCE_ID=$(nova show ${INSTANCE_ID} | \grep OS-EXT-SRV-ATTR:instance_name | awk '{print $4}')
$ virsh dumpxml ${VIRSH_INSTANCE_ID} | grep -A3 bridge    # e.g., instance-00000003
$ grep -A4 'type="bridge"' /var/lib/nova/instances/${INSTANCE_ID}/libvirt.xml
$ xmllint --xpath '//devices/interface[@type="bridge"]/source[@bridge]' \
    /var/lib/nova/instances/${INSTANCE_ID}/libvirt.xml
Open vSwitch (OVS)
 +--------------------------------------+
 |                cirros                |
 |               +------+               |
 |               | eth0 | (10.10.1.102) |
 +---------------+------+---------------+
                    |
            +-----------------+
            | tap-55bfa719-2b |   Virtual NIC (tap)
            +-----------------+
                    |
            +-----------------+
            | qbr-55bfa719-2b |   Linux bridge
            +-----------------+
                    |
            +-----------------+
            | qvb-55bfa719-2b |
            +-----------------+
                    |              veth pair
            +-----------------+
            | qvo-55bfa719-2b |
 +----------+-----------------+------------------+
 | 10.10.1.0/24         br-int                   |   OVS Bridge
 +---+----------------+------+----------------+--+
     | qr-b24d8155-69 |      | tap0d8a8773-84 |
     +----------------+      +----------------+
     |   Namespace:   |      |   Namespace:   |
     |   qrouter-     |      |   qdhcp-       |
     |   9f36....     |      |                |
     |                |      +----------------+
     |  10.1.100.16   |
     +----------------+
     | qg-20ffa8ce-2f |
 +---+----------------+---+
 | 10.1.100.0/23         |
 |         br-ex         |   OVS Bridge
 +-----------------------+
The names for the OVS ports and interfaces are made from a prefix (qr-, qg-, or tap) followed by the first 11 characters of the UUID of the Neutron port they represent.
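For example, an illustrative way (assuming bash and the router1/private_subnet names used above; the grep pattern relies on port-list printing the fixed IP) to derive the router's private-side device name from its Neutron port ID:

$ ROUTER_ID=$(neutron router-list | \grep router1 | awk '{print $2}')
$ ROUTER_PORT_ID=$(neutron port-list -- --device_id ${ROUTER_ID} | \grep '10.10.1.1"' | awk '{print $2}')
$ echo "qr-${ROUTER_PORT_ID:0:11}"    # => qr-b24d8155-69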
- Get a list of network namespaces:
$ ip netns list
qrouter-9f3646b2-cfa0-435b-9708-5854656b797f    # router1
qdhcp-7f8821c4-7f8a-4a48-ac17-15f27a32a60c      # private_network
$ QROUTER=qrouter-$(neutron router-list | \grep router1 | awk '{print $2}')
$ ip netns exec ${QROUTER} ip a
$ ip netns exec ${QROUTER} ip r
$ ip netns exec ${QROUTER} ping -c3 10.10.1.1    # gateway
$ ip netns exec ${QROUTER} iptables -S -t nat
$ ip netns exec ${QROUTER} ip r
default via 10.1.100.1 dev qg-20ffa8ce-2f
10.1.100.0/23 dev qg-20ffa8ce-2f  proto kernel  scope link  src 10.1.100.16
10.10.1.0/24 dev qr-b24d8155-69  proto kernel  scope link  src 10.10.1.1
- Check interfaces configured within the network namespace:
$ ip netns exec ${QROUTER} ip a | grep -E 'state|inet'
...
32: qr-b24d8155-69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    inet 10.10.1.1/24 brd 10.10.1.255 scope global qr-b24d8155-69
36: qg-20ffa8ce-2f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    inet 10.1.100.16/23 brd 10.1.101.255 scope global qg-20ffa8ce-2f
    inet 10.1.100.19/32 brd 10.1.100.19 scope global qg-20ffa8ce-2f
The agent has bound 10.10.1.1 (the gateway_ip of our private_subnet) to qr-b24d8155-69 (router1's interface for private_subnet). DHCP pushes this IP to instances connected to private_subnet as their default gateway. The agent has also bound the IP addresses 10.1.100.16 (for gateway SNAT) and 10.1.100.19 (the floating IP) to the gateway interface qg-20ffa8ce-2f.
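A quick sanity check that the subnet's gateway_ip matches what the agent bound:

$ neutron subnet-show private_subnet | grep gateway_ip    #=> 10.10.1.1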
- Verify the agent has also enabled IP forwarding within the namespace:
$ ip netns exec ${QROUTER} sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
- Look at the Network Address Translation (NAT) rules within the network namespace:
$ ip netns exec ${QROUTER} iptables-save | grep -i nat
...
-A neutron-l3-agent-OUTPUT -d 10.1.100.19/32 -j DNAT --to-destination 10.10.1.102
-A neutron-l3-agent-PREROUTING -d 10.1.100.19/32 -j DNAT --to-destination 10.10.1.102
-A neutron-l3-agent-float-snat -s 10.10.1.102/32 -j SNAT --to-source 10.1.100.19
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-20ffa8ce-2f -j SNAT --to-source 10.1.100.16
Thus, a connection initiated from within the instance to the outside will hit this rule in the NAT table (POSTROUTING):
-A neutron-l3-agent-float-snat -s 10.10.1.102/32 -j SNAT --to-source 10.1.100.19
And a connection initiated from the outside to the instance's floating IP will hit this rule:
-A neutron-l3-agent-PREROUTING -d 10.1.100.19/32 -j DNAT --to-destination 10.10.1.102
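One illustrative way to watch these rules fire is to generate some traffic and then list the NAT table with packet/byte counters:

$ ping -c3 10.1.100.19    # run from an external host; hits the DNAT rule
$ ip netns exec ${QROUTER} iptables -t nat -L -n -v | \grep 10.1.100.19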
$ QDHCP=qdhcp-$(neutron net-list | \grep private_network | awk '{print $2}')
$ ip netns exec ${QDHCP} ip r
$ grep tap /var/lib/neutron/dhcp/${PRIVATE_NET_ID}/interface
tap0d8a8773-84
$ cat /var/lib/neutron/dhcp/${PRIVATE_NET_ID}/opts
tag:tag0,option:router,10.10.1.1
tag:tag0,option:dns-server,10.10.1.100
cirros$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.10.1.1       0.0.0.0         UG    0      0        0 eth0
10.10.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
cirros$ cat /etc/resolv.conf
search openstacklocal
nameserver 8.8.8.8
External links
- xtof-openstack-rdo-packstack on GitHub