OpenStack deployment via packstack from RDO
From Christoph's Personal Wiki
This article covers the steps involved in deploying OpenStack using "packstack" from RDO.
WARNING: This article is very much under construction. Proceed with caution until this banner has been removed.
- Deployment assumptions:
- OS: CentOS 7.1 (64-bit; 7.1.1503 Core)
- OpenStack release: "Kilo" (April 2015)
Single node
Note: Using neutron with a flat network driver.
$ sysctl -a | grep ip_forward  #=> 1
$ sestatus  #=> set to "permissive"
$ systemctl stop NetworkManager.service
$ systemctl disable NetworkManager.service
$ yum update -y
$ yum install -y https://rdoproject.org/repos/rdo-release.rpm
$ yum install -y openstack-packstack
$ packstack --allinone --provision-demo=n
$ cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/  # backup
$ cat << EOF > /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=FF:FF:FF:FF:FF:FF
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
EOF
$ cat << EOF > /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
MACADDR=FF:FF:FF:FF:FF:FF
BOOTPROTO=static
IPADDR=10.1.100.15
#PREFIX=23
NETMASK=255.255.254.0
GATEWAY=10.1.100.1
DNS1=8.8.8.8
DNS2=8.8.4.4
ONBOOT=yes
EOF
$ openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex
$ openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan
$ service network restart
$ service neutron-openvswitch-agent restart
$ service neutron-server restart
- Bug fix:
$ ovs-vsctl br-set-external-id br-ex bridge-id br-ex
$ service neutron-plugin-openvswitch-agent restart
$ cat keystonerc_admin
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://10.1.100.15:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
$ . keystonerc_admin # source the admin environment
- Set up networks:
$ neutron net-create --provider:network_type flat \
    --provider:physical_network extnet \
    --router:external \
    --shared external_network
$ neutron subnet-create --name public_subnet \
    --enable_dhcp=False \
    --allocation-pool start=10.1.100.16,end=10.1.100.20 \
    --gateway=10.1.100.1 external_network 10.1.100.0/23
$ neutron net-create private_network
$ neutron subnet-create --name private_subnet \
    --allocation-pool start=10.10.1.100,end=10.10.1.200 \
    --gateway=10.10.1.1 private_network 10.10.1.0/24
$ neutron router-create router1
$ neutron router-interface-add router1 private_subnet
$ neutron router-gateway-set router1 external_network
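Note that the allocation pool passed to subnet-create must lie inside the subnet's CIDR (here, 10.1.100.16-10.1.100.20 inside 10.1.100.0/23). A quick sanity check can be sketched in plain shell; the ip2int helper below is ad hoc for illustration, not part of any OpenStack tooling:

```shell
# ad-hoc helper: dotted quad -> 32-bit integer
ip2int() { local IFS=.; set -- $1; echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 )); }

NETWORK=$(ip2int 10.1.100.0)                        # external_network base address
MASK=$(( (0xFFFFFFFF << (32 - 23)) & 0xFFFFFFFF ))  # /23 netmask
POOL_START=$(ip2int 10.1.100.16)
POOL_END=$(ip2int 10.1.100.20)

# both ends of the pool must land in the same /23 as the network address
[ $(( POOL_START & MASK )) -eq $(( NETWORK & MASK )) ] && \
[ $(( POOL_END   & MASK )) -eq $(( NETWORK & MASK )) ] && \
echo "allocation pool is inside 10.1.100.0/23"
```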
- Create new (non-admin) tenant and user:
$ keystone tenant-create --name demo --description "demo tenant" --enabled true
$ keystone user-create --name demo --tenant demo --pass "password" --email demo@example.com --enabled true
- Populate glance with initial image:
$ CIRROS_IMAGE_NAME=cirros-0.3.4-x86_64
$ CIRROS_IMAGE_URL="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
$ curl ${CIRROS_IMAGE_URL} | \
    glance image-create --name="${CIRROS_IMAGE_NAME}" \
    --is-public=true \
    --container-format=bare \
    --disk-format=qcow2
$ glance image-list
- Create basic security groups/rules (to allow basic networking traffic in/out of VMs):
$ nova secgroup-create all "Allow all tcp ports"
$ nova secgroup-add-rule all TCP 1 65535 0.0.0.0/0
$ nova secgroup-create base "Allow Base Access"
$ nova secgroup-add-rule base TCP 22 22 0.0.0.0/0
$ nova secgroup-add-rule base TCP 80 80 0.0.0.0/0
$ nova secgroup-add-rule base ICMP -1 -1 0.0.0.0/0
- Create a very small ("nano") flavor for use in testing (spins up faster, uses fewer resources, etc.):
$ nova flavor-create m1.nano 42 64 0 1  # <name> <id> <ram> <disk> <vcpus>
$ nova flavor-list
- Set up environment variables in order to keep track of UUIDs, etc.:
$ INSTANCE_NAME=rdo-test-01
$ GLANCE_IMAGE_ID=$(glance image-list | \grep ${CIRROS_IMAGE_NAME} | awk '{print $2}')
$ PRIVATE_NET_ID=$(neutron net-list | \grep private_network | awk '{print $2}')
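The `\grep ... | awk '{print $2}'` pipelines above simply pull the UUID out of the second whitespace-separated column of the CLI's ASCII tables (the leading `|` border is field 1). A sketch against canned output; the UUID below is made up:

```shell
# simulated `neutron net-list` table; the UUID is a placeholder, not real
SAMPLE='+--------------------------------------+------------------+
| id                                   | name             |
+--------------------------------------+------------------+
| 5a9c1c7a-1111-2222-3333-444455556666 | private_network  |
+--------------------------------------+------------------+'

# same extraction as above: match the row, print the 2nd field
PRIVATE_NET_ID=$(echo "$SAMPLE" | grep private_network | awk '{print $2}')
echo "$PRIVATE_NET_ID"   # → 5a9c1c7a-1111-2222-3333-444455556666
```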
- Spin up a nova instance (VM):
$ nova boot --flavor m1.nano --image ${GLANCE_IMAGE_ID} --nic net-id=${PRIVATE_NET_ID} \
    --key-name admin --security-groups base ${INSTANCE_NAME}
$ INSTANCE_ID=$(nova list | \grep ${INSTANCE_NAME} | awk '{print $2}')
- Associate a floating IP with the new instance (this "floating IP" is how the instance communicates with the Internet):
$ neutron floatingip-create external_network
$ FLOATINGIP_ID=<id>  # use the "id" field from the output of the previous command
$ NEUTRON_COMPUTE_PORT_ID=$(neutron port-list -c id -c device_owner -- \
    --device_id ${INSTANCE_ID} | \grep compute | awk '{print $2}')
$ neutron floatingip-associate ${FLOATINGIP_ID} ${NEUTRON_COMPUTE_PORT_ID}
$ neutron floatingip-show ${FLOATINGIP_ID}
- Direct access to Nova metadata:
- see: for details
$ SHARED_SECRET=$(crudini --get /etc/nova/nova.conf neutron metadata_proxy_shared_secret)
$ META_SIGNATURE=$(python -c 'import hmac,hashlib;print hmac.new("'${SHARED_SECRET}'",\
    "'${INSTANCE_ID}'",hashlib.sha256).hexdigest()')
$ ADMIN_TENANT_ID=$(keystone tenant-list | \grep admin | awk '{print $2}')
$ ENDPOINT=http://10.1.100.15:8775
$ curl -s -H "x-instance-id:${INSTANCE_ID}" \
    -H "x-tenant-id:${ADMIN_TENANT_ID}" \
    -H "x-instance-id-signature:${META_SIGNATURE}" \
    ${ENDPOINT}/latest/meta-data
# RESPONSE:
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
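The signature the metadata proxy expects is just HMAC-SHA256 over the instance ID, keyed with the shared secret. If python is not handy, the same digest can be sketched with openssl; the secret and instance ID below are placeholders:

```shell
# placeholder values, for illustration only
SHARED_SECRET=dummysecret
INSTANCE_ID=11111111-2222-3333-4444-555555555555

# HMAC-SHA256 over the instance ID, keyed with the shared secret;
# awk keeps only the last field, stripping openssl's "(stdin)= " prefix
META_SIGNATURE=$(printf '%s' "${INSTANCE_ID}" | \
    openssl dgst -sha256 -hmac "${SHARED_SECRET}" | awk '{print $NF}')
echo "${META_SIGNATURE}"  # 64 hex characters
```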
Troubleshooting
- Make sure the "openvswitch" kernel module is installed and configured properly:
$ lsmod | grep openvswitch
$ modinfo openvswitch
- Poke around your Open vSwitch setup:
$ ovs-vsctl show
$ ovs-vsctl list-br
$ ovs-vsctl list-ports br-ex  # => em1,phy-br-ex
$ ovs-ofctl dump-flows br-ex
$ ovs-ofctl dump-ports br-ex
$ ovs-ofctl show br-int
$ ovs-vsctl list interface
$ ovs-appctl fdb/show br-int
$ brctl show
$ brctl showmacs qbrc648c3ca-76
$ ps afux | grep [d]nsmasq
$ cat /proc/$(pidof dnsmasq)/cmdline | tr '\0' '\n'
$ ps afux | grep qemu
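The `tr '\0' '\n'` trick above is needed because `/proc/<pid>/cmdline` separates arguments with NUL bytes rather than spaces, so a plain `cat` prints everything run together. A sketch with a fabricated command line:

```shell
# fabricated NUL-separated command line, mimicking /proc/<pid>/cmdline
printf 'dnsmasq\0--no-hosts\0--bind-interfaces\0' | tr '\0' '\n'
# → dnsmasq
# → --no-hosts
# → --bind-interfaces
```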
$ QROUTER=qrouter-$(neutron router-list | \grep router1 | awk '{print $2}')
$ ip netns exec ${QROUTER} ip a
$ ip netns exec ${QROUTER} route -n
$ ip netns exec ${QROUTER} ping -c3 10.10.1.1  # gateway
$ ip netns exec ${QROUTER} iptables -S -t nat
$ QDHCP=qdhcp-$(neutron net-list | \grep private_network | awk '{print $2}')
$ ip netns exec ${QDHCP} ip r
$ tcpdump -nni eth0 \( dst host 10.0.0.19 and port 22 \)
$ tcpdump -nni eth0 icmp
$ tcpdump -i any -n -v \
    'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'
$ virsh dumpxml instance-00000003 | grep -A3 bridge
$ grep -A4 'type="bridge"' /var/lib/nova/instances/${INSTANCE_ID}/libvirt.xml
$ xmllint --xpath '//devices/interface[@type="bridge"]/source[@bridge]' libvirt.xml
Open vSwitch (OVS)
cirros instance, eth0 (10.10.1.102)
  |
tap-55bfa719-2b        virtual NIC (tap device)
  |
qbr-55bfa719-2b        Linux bridge
  |
qvb-55bfa719-2b  \
                  |    veth pair
qvo-55bfa719-2b  /
  |
br-int                 OVS bridge (10.10.1.0/24)
  |-- qr-b24d8155-69   (in namespace qrouter-9f36....)
  |-- tap0d8a8773-84   (in namespace qdhcp-...)
  |
qg-20ffa8ce-2f         router gateway port (10.1.100.16, in the qrouter namespace)
  |
br-ex                  OVS bridge (10.1.100.0/23)
External links
- xtof-openstack-rdo-packstack on GitHub