Back to Black

I am back to using Puppet with my home lab, after just about a year running SaltStack.

I enjoyed using SaltStack, with its light-weight orchestration and fairly simple syntax. However, I am planning to get my RHCA by the end of next year, and one of the exams I am planning to take to satisfy the requirement (EX405) is based on open-source Puppet.

Essentially, in order to pass the exam, I must be able to:

  • Install and configure Puppet.
  • Create and maintain Puppet manifests.
  • Create Puppet modules.
  • Use facter to obtain system information.
  • Work with Git repositories.
  • Implement Puppet in a Red Hat Satellite 6 environment.

With the exception of the last one, I am pretty comfortable with the objectives. However, I won’t make the same mistake I did with the RHCE exam by taking it for granted; so for the next 12-18 months, I will be heavily managing my home lab with Puppet.

On that note, even though it had been over a year since I last touched a Puppet install, it was surprisingly easy to get back up to speed (yay for muscle memory). I was able to quickly get Puppet Server installed, configure r10k and deploy PuppetDB.

I did not put MCollective on, though – and I am not planning to anytime soon. After months of using SaltStack, I found MCollective to be quite limited for the amount of resources it consumes. Instead, for orchestration, I’ll leverage Ansible to kick off my Puppet runs – which, incidentally, is the subject of another upcoming Red Hat expertise exam.
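
For example, a lab-wide Puppet run can be kicked off with a single Ansible ad-hoc command. This is only a sketch – the inventory group name ("lab") is made up:

```shell
# Trigger a one-off Puppet agent run on every node in the
# (hypothetical) "lab" inventory group, with privilege escalation
ansible lab -b -m command -a "puppet agent --test"
```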

At this point, now that my environment has been set up, it is time to start planning some Puppet-based projects. Stay tuned!

It’s a bold strategy, Cotton. Let’s see if it pays off

A month ago, I took the RHCE exam, fairly confident I would pass after spending hours studying and practicing.

It did not happen. I was not even able to complete the exam on time. I was hoping when I came home that I had somehow squeaked through, but then came the exam notification:

Passing score for the exam: 210
Your score: 206

Result: NO PASS

For the next few hours, I was pretty depressed. I actually studied for the test far more extensively than the last time I took the RHCE, so it was a big blow to my confidence. At one point, I thought about not continuing on the RHCA path.

Then I decided to re-group and give it another go.

After signing up for the exam again (which, I will add, came at considerable cost, as Red Hat does not offer free re-takes), I took another look at the exam objectives and realized that in order to pass, I needed to complete all of them in 3 1/2 hours (210 minutes). So I consolidated the list of objectives as follows:

  • Configure a caching-only name server
  • Configure a system to forward all email to a central mail server
  • SSH Key Configuration with ACL
  • Synchronize time using other NTP peers
  • Apache – Configure a virtual host – with acl
  • Apache – Configure private directories
  • Apache – Configure group-managed content
  • Apache – Deploy a basic CGI application
  • Apache – Configure TLS security
  • Produce and deliver reports on system utilization (processor, memory, disk, and network)
  • Configure a system to authenticate using Kerberos
  • NFS – Provide network shares to specific clients
  • NFS – Provide network shares suitable for group collaboration (multi-user)
  • NFS – Use Kerberos to control access to NFS network shares
  • Samba – Provide network shares to specific clients
  • Samba – Provide network shares suitable for group collaboration
  • Use firewalld and associated mechanisms such as rich rules, zones and custom rules, to implement packet filtering and configure network address translation (NAT)
  • Route IP traffic and create static routes
  • Use /proc/sys and sysctl to modify and set kernel runtime parameters
  • Configure IPv6 addresses and perform basic IPv6 troubleshooting
  • Use network teaming or bonding to configure aggregated network links between two Red Hat Enterprise Linux systems
  • Install and configure MariaDB
  • Use shell scripting to automate system maintainance tasks
  • Configure a system as either an iSCSI target or initiator that persistently mounts an iSCSI target

Then I put them in a spreadsheet and started logging the time it took me to complete each task over the course of the week. The results were not pretty – it took about 162 minutes to complete most of them.

(Actually, some of the tasks – in particular, the Apache ones – took far longer than I expected, and some others I gave up on after 10-15 minutes.)
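
For anyone curious, the tracking itself needs nothing fancier than a text file and awk. A minimal sketch – the file name and the entries here are made up for illustration:

```shell
# Hypothetical per-task timing log: one "task,minutes" entry per line
cat > task-times.csv <<'EOF'
caching-only DNS,8
mail forwarding,6
Apache TLS vhost,25
EOF

# Total the minutes to compare against the 210-minute exam window
awk -F, '{ sum += $2 } END { print sum " of 210 minutes used" }' task-times.csv
# prints "39 of 210 minutes used"
```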

The important thing, though, is that after that practice run, I knew where my areas of weakness were. So I reviewed the material on my way to and from work, did some quick practice sessions and then went through the tasks again.

As a result, the following week was a different story: I was able to cut my time down by over 30 minutes – to 128 minutes.

Again, I looked at the areas where I was weak, practiced and reviewed. By the Sunday before the exam, I was able to cut my time to under two hours. Then I did some final review of a few parts on Sunday and Monday.

As a result, when I re-took the exam on Tuesday afternoon, I was able to breeze through all the items and complete them with an hour to spare. At that point, I was able to spend the remaining time validating the setup and going back to correct things I had missed.

Later on that evening, I received the results:


Passing score for the exam: 210
Your score: 271

Result: PASS

Boom, baby.

Hold your nose and close your eyes

Aggregating interfaces can be a pain, but it doesn’t have to be. With Red Hat Enterprise Linux 7 and above, you can team your interfaces with very little effort. Frankly, it is pretty awesome.

There is one catch, though. You will have to learn to use Network Manager. Specifically, nmcli.

Much can be said about whether Network Manager is necessary on a server, but after working with nmcli, I can at least see how useful it is when it comes to persistently setting teaming configurations. I mean, the setup goes something like this:

Create a team configuration file, using one of the examples in the documentation directory:

cd /usr/share/doc/teamd-1.17/example_configs/
cp activebackup_ethtool_1.conf tmp.json
cat tmp.json

Then create the master, using the above configuration:

nmcli con add type team con-name team0 ifname team0 config tmp.json

Then add the slaves:

nmcli con add type team-slave con-name ens8 ifname ens8 master team0
nmcli con add type team-slave con-name ens9 ifname ens9 master team0

Re-start the interfaces and you are done!
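
To verify the result, teamd ships a small control utility. Something like the following should show the runner and both ports as active (interface names as in the example above):

```shell
# Bring the team up and inspect its runner and port state
nmcli con up team0
teamdctl team0 state
ip addr show team0
```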

It does smell a bit, but after fighting with Network Manager for the last decade, maybe it is time to at least give it a chance.

I should put something here

Some days, keeping up with technology can be a mix of frustration and excitement.

I am currently working on getting back my RHCE (Red Hat Certified Engineer) credentials (I had them before but, for reasons I won’t get into, they expired). From there, I will be able to avail myself of a suite of certificates from Red Hat, eventually getting an RHCA (Red Hat Certified Architect) in Cloud or DevOps (or, if time does not permit, just a plain RHCA). I will do this using existing resources (books and documentation, supplemented by inexpensive online training) rather than taking the rather pricey ROLE courses.

That is the idea, at least.

Case in point: Samba. Now, I don’t use Samba that much, but it is a key objective in the RHCE exam – not just using it, but configuring it and setting up the appropriate access controls. From reading the RHCE books, it seems pretty straightforward. For example:

  • Provide network shares to specific clients
  • Provide network shares suitable for group collaboration

Which means you need to do the following on the server:

1) Install Samba on the server.

yum -y install samba samba-client

2) Add a group that will be used for collaboration:

groupadd -g 8888 shared

3) Modify existing users so they are part of the group:

usermod -aG shared amy
usermod -aG shared rory

4) Create Samba users:

smbpasswd -a amy
smbpasswd -a rory

5) Set the appropriate permissions on the directory you want to share:

chmod 770 /srv/directory_to_be_shared
chown nobody:shared /srv/directory_to_be_shared

6) Set the SELinux context as follows:

semanage fcontext -a -t samba_share_t '/srv/directory_to_be_shared(/.*)?'
restorecon -Rv /srv/directory_to_be_shared

7) Create an entry in /etc/samba/smb.conf:

comment = "shared directory"
path = /srv/directory_to_be_shared
writable = yes
browsable = yes
write list = +shared
hosts allow =

8) Run testparm to validate the configuration.

9) Enable and start Samba:

systemctl enable smb
systemctl start smb

10) Open the firewall:

firewall-cmd --add-service=samba
firewall-cmd --add-service=samba --permanent

While on the client:

1) Install samba and cifs-utils:

yum -y install cifs-utils samba

2) Create a directory to mount the share:

mkdir /mnt/shared

3) Create a file that contains the credentials used to mount the share, and secure the file:

echo 'username=amy' > /etc/samba/secret
echo 'password=doctor!' >> /etc/samba/secret
chmod 0400 /etc/samba/secret

4) Update fstab to mount the directory:

// /mnt/shared cifs _netdev,credentials=/etc/samba/secret 0 0

5) Finally, mount the share:

mount /mnt/shared

As you can tell, I got it down cold. Why? Because until today, I couldn’t do step 5. I kept getting permission errors:

mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

Now, I was able to mount if I removed the hosts allow entry:

comment = "shared directory"
path = /srv/directory_to_be_shared
writable = yes
browsable = yes
write list = +shared

But that would mean that I wouldn’t be able to use ACL controls.

After some searching, I found that I could restrict by IP instead, which is sort of better – but I still wasn’t satisfied.
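
For reference, restricting by address instead of hostname looks something like this in the share definition (the subnet here is just an example, not the one from my lab):

```
path = /srv/directory_to_be_shared
writable = yes
write list = +shared
# allow only this example subnet - no DNS lookups needed
hosts allow = 192.168.1.
```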

I looked at the walkthroughs in all the RHCE books (van Vugt, Ghori, Jang, Tecmint) and, from what I can tell, it should work. I mean, surely the authors have all figured it out, right?

Well, today, I gave it one more try, and it occurred to me that perhaps Samba doesn’t do hostname lookups by default. Sure enough, after some searching, I found:

In order for hosts allow entries using hostnames to work, you need to enable

hostname lookups = yes

in the global configuration of smb.conf.

And sure enough, adding that to smb.conf:

hostname lookups = yes

allowed me to mount while using host controls based on hostname.

It turns out that hostname lookups are quite expensive, resource-wise, so Samba has them turned off by default.

I am not sure why all the major RHCE prep books missed this. At first, I thought it might be an editing problem, which I could understand for one book.

But all four?


Oh, I hate Pollen

It’s the end of April and pollen season is in full bloom here in Chicago. I am on the verge of scratching my eyes out.

It could be worse. I could be living in Atlanta, where everything turns yellow.

Anyway, continuing my OpenStack learning, I went to upload additional images into my lab. While I could build my own images, the nice thing about OpenStack is that I can upload AMI images. I went to the following page to retrieve additional images:

There are other free AMI images out there, but I chose these because I can use Ubuntu’s built-in tool to upload the images:

openstack@openstackstorage:~/openstack/images$ cloud-publish-tarball opensuse-12.2-x86_64-emi.tar.gz images x86_64
Mon Apr 29 17:06:21 CDT 2013: ====== extracting image ======
kernel : kvm-kernel/vmlinuz-3.4.11-2.16-default
ramdisk: kvm-kernel/initrd-3.4.11-2.16-default
image  : opensuse-12.2-x86_64-emi.img
Mon Apr 29 17:07:12 CDT 2013: ====== bundle/upload kernel ======
Mon Apr 29 17:07:19 CDT 2013: ====== bundle/upload ramdisk ======
Mon Apr 29 17:07:23 CDT 2013: ====== bundle/upload image ======
Mon Apr 29 17:13:11 CDT 2013: ====== done ======
emi="ami-0000000b"; eri="ari-0000000a"; eki="aki-00000009";

Of course, I could always use euca2ools to upload individual images (in fact, that is what cloud-publish-tarball is – a wrapper around some of the euca2ools commands). However, the nice thing about cloud-publish-tarball is that it takes care of uploading the images as well as the associated manifests.

In any event, once the images are uploaded, you will see them listed via the nova image-list command:

stardust:openstack rilindo$ nova image-list
| ID                                   | Name                                                | Status | Server |
| 36acf23a-07e4-4253-8183-da60253d919a | images/centos-6.3-x86_64.img                        | ACTIVE |        |
| 1bba647b-111b-4ea7-a614-76229fd63c8c | images/initrd-2.6.32-279.14.1.el6.x86_64.img        | ACTIVE |        |
| 054b1f8b-1051-4fdd-b8a1-0efb442ab127 | images/initrd-3.4.11-2.16-default                   | ACTIVE |        |
| 756ee7b6-831b-4e30-899c-4b6aa2f3fafd | images/opensuse-12.2-x86_64-emi.img                 | ACTIVE |        |
| 6e63d0cd-4e24-4766-ad48-3a01670a607e | images/precise-server-cloudimg-i386-vmlinuz-virtual | ACTIVE |        |
| bedf0e78-c7d4-414e-85fb-291a0ccd851d | images/precise-server-cloudimg-i386.img             | ACTIVE |        |
| 490a92a6-5741-4485-8465-df9fc2c19a5c | images/vmlinuz-2.6.32-279.14.1.el6.x86_64           | ACTIVE |        |
| b9886335-e04a-4086-a860-852240430d53 | images/vmlinuz-3.4.11-2.16-default                  | ACTIVE |        |

The Name column shows the original file names of the images; the files themselves are stored by ID in the following directory:

root@openstack1:/var/lib/glance/images# ls -la
total 6125848
drwxr-xr-x 2 glance glance       4096 Apr 29 16:40 .
drwxr-xr-x 4 glance glance       4096 Apr 29 16:42 ..
-rw-rw-r-- 1 glance glance    5943048 Apr 29 16:18 1bba647b-111b-4ea7-a614-76229fd63c8c
-rw-rw-r-- 1 glance glance 4781506560 Apr 29 16:42 36acf23a-07e4-4253-8183-da60253d919a
-rw-rw-r-- 1 glance glance    3988752 Apr 29 16:18 490a92a6-5741-4485-8465-df9fc2c19a5c
-rw-rw-r-- 1 glance glance    5017344 Apr 22 18:24 6e63d0cd-4e24-4766-ad48-3a01670a607e
-rw-rw-r-- 1 glance glance 1476395008 Apr 22 18:26 bedf0e78-c7d4-414e-85fb-291a0ccd851d

And they are registered by glance-registry.

An important point: when you upload an image, CPU utilization for nova-api will temporarily sky-rocket. On server hardware, the spike would probably be fairly brief; on my desktop “server”, it lasted about 10-15 minutes:

top - 16:34:51 up 3 days, 25 min,  1 user,  load average: 1.83, 1.30, 0.85
Tasks: 116 total,   2 running, 114 sleeping,   0 stopped,   0 zombie
Cpu(s): 97.0%us,  3.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   3791440k total,  3641552k used,   149888k free,   145980k buffers
Swap:  3928060k total,     3232k used,  3924828k free,  2797332k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                     
24048 nova      20   0  247m  86m 5764 R 95.2  2.3  10:09.51 nova-api                                                                                    
 1251 nova      20   0  205m  58m 4588 S  2.0  1.6  21:57.61 nova-network                                                                                
    3 root      20   0     0    0    0 S  0.3  0.0   2:06.05 ksoftirqd/0                                                                                 
   22 root      20   0     0    0    0 D  0.3  0.0   0:00.15 kswapd0                                                                                     
 1250 nova      20   0  197m  50m 4588 S  0.3  1.4  12:52.87 nova-scheduler                                                                              
 1252 nova      20   0  195m  49m 4588 S  0.3  1.3  12:08.12 nova-cert                                                                                   
 1522 mysql     20   0  870m  58m 7820 S  0.3  1.6  11:18.90 mysqld                                                                                      
 1757 rabbitmq  20   0  568m  29m 2284 S  0.3  0.8   5:44.25 beam                                                                                        
27080 root      20   0     0    0    0 S  0.3  0.0   0:00.05 kworker/0:1             

I couldn’t use the nova commands in the meantime, as they hang until nova-api finishes. The first time it happened, I restarted nova-api, which killed the image registration, forcing me to delete the image and restart the upload. 😦 But eventually it finished, and after some inspection, I was able to build my Fedora, CentOS 6 and openSUSE instances:

stardust:openstack rilindo$ ssh -i mykey.pem -lroot uname -an
Linux vmi012 3.4.11-2.16-default #1 SMP Wed Sep 26 17:05:00 UTC 2012 (259fc87) x86_64 x86_64 x86_64 GNU/Linux
stardust:openstack rilindo$ ssh -i mykey.pem -lroot uname -an
Linux vmi013.novalocal 2.6.32-279.14.1.el6.x86_64 #1 SMP Tue Nov 6 23:43:09 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Next: Mac OS X and maybe Keystone

The Dangers of Skipping

Long time, no write. This will change immediately.

Recently, I have been getting familiar with OpenStack by way of the book OpenStack Cloud Computing Cookbook. For the most part, it was an easy read, and I got my home OpenStack environment up and running, albeit with some issues.

The book starts by asking the reader to set up a couple of Ubuntu 12.04 machines using VirtualBox. I decided to be clever and set up KVM instances instead, with fairly mixed results. (At some point during the exploration, I decided to start over and went with some spare physical hardware – just as well, I needed to redo my private lab anyway.)

Next, I installed the following prerequisites, per the instructions, on a single server:

sudo apt-get -y install rabbitmq-server nova-api nova-objectstore nova-scheduler nova-network nova-compute nova-cert glance qemu unzip

This installed OpenStack Essex, which is the default on Ubuntu 12.04.

Then I set up preseed parameters for the MySQL server. Originally, the book had MySQL 5.1:

cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.1 mysql-server/root_password password openstack
mysql-server-5.1 mysql-server/root_password_again password openstack
mysql-server-5.1 mysql-server/start_on_boot boolean true
MYSQL_PRESEED

However, Ubuntu 12.04.2 (which is what I am using) apparently installs 5.5 by default, so some quick changes were in order:

cat <<MYSQL_PRESEED | debconf-set-selections
mysql-server-5.5 mysql-server/root_password password openstack
mysql-server-5.5 mysql-server/root_password_again password openstack
mysql-server-5.5 mysql-server/start_on_boot boolean true
MYSQL_PRESEED

Then I installed MySQL and changed the default config:

sudo apt-get update
sudo apt-get -y install mysql-server
sudo sed -i 's/' /etc/mysql/my.cnf

Then I created the nova database and set up its credentials:

mysql -uroot -p$MYSQL_PASS -e 'CREATE DATABASE nova;'
mysql -uroot -p$MYSQL_PASS -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%'"
mysql -uroot -p$MYSQL_PASS -e "SET PASSWORD FOR 'nova'@'%' = PASSWORD('$MYSQL_PASS');"

Then I updated /etc/nova/nova.conf with the MySQL credentials:


The IP .105 being the IP of the Ubuntu box openstack1.

Then I added the next set of parameters:


(Incidentally, I am not going into detail on the parameters – you can look them up in the OpenStack documentation.)

Then I synced the configs into the database:

sudo nova-manage db sync

Then I set up the network ranges:

sudo nova-manage network create vmnet --fixed_range_v4= --network_size=64 --bridge_interface=eth0
sudo nova-manage floating create --ip_range=

The first range is for the VMs to talk to each other, as well as to OpenStack. The other is the public-facing IP range (or at least public from a user standpoint).

Then I stopped and started the following services:

By way of brief explanation: nova-compute creates and destroys the instances, while nova-network assigns and creates the IPs and VLANs. nova-api provides access to services for the various application components, and nova-scheduler runs the commands submitted by the various services.

I am not sure about nova-objectstore, but I believe nova-cert handles the certificates used to secure connections between the various services. libvirt-bin, of course, is the wrapper that provides access to the KVM hypervisor. Finally, glance-registry and glance-api register and manage the images.

At this point, I went and created a user, gave it admin access, and then created a project called "cookbook":

sudo nova-manage user admin openstack
sudo nova-manage role add openstack cloudadmin
sudo nova-manage project create cookbook openstack

Then I zipped up the project credentials:

sudo nova-manage project zipfile cookbook openstack

Then I installed the tools necessary to manage OpenStack on my Ubuntu client:

sudo apt-get install euca2ools python-novaclient unzip

(Later on, I used the Mac OS X version, which I will get to.)

I copied the zip file from openstack1, unzipped it into a directory called openstack, cd’d into it, then sourced the environment file to get the parameters into my shell:

. novarc

Finally, I generated a key and inserted it into the database:

nova keypair-add openstack > openstack.pem  

chmod 0600 *.pem

This sets up an SSH key for the default user of whatever instance I create, so that I can simply run:

ssh -i openstack.pem username@instance_name

(The book had chmod 0600.pem, BTW, which is an obvious typo.)

Finally, I was ready to upload an image, so I downloaded a cloud version of Ubuntu Server:


I installed the cloud-utils tools:

sudo apt-get install cloud-utils

And then I attempted to upload the image …

cloud-publish-tarball ubuntu-12.04-server-cloudimg-i386.tar.gz images i386

and I ran into problems – it kept running out of space. I thought the destination was the issue, so I re-did the server (at the time, it was a virtual server, not a physical one). Eventually, it turned out that it was running out of space at the source – I had installed the Ubuntu server partitioned with separate file systems for /, /usr, /var and /tmp. By default, the tool extracts to /tmp, which was too small. So I changed the defaults in the shell:

export TMPDIR ; export TEMPDIR
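
The variable values did not survive the migration of this post, but the idea is simply to point both variables at a file system with room to spare – along these lines (the /var/tmp path is my choice for illustration, not necessarily what I used back then):

```shell
# Point the extraction temp directories at a larger file system;
# cloud-publish-tarball honors TMPDIR/TEMPDIR when unpacking
export TMPDIR=/var/tmp
export TEMPDIR=/var/tmp
```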

And afterwards, I was able to upload the image and view it with:


| ID                                   | Name                                                | Status | Server |
| 6e63d0cd-4e24-4766-ad48-3a01670a607e | images/precise-server-cloudimg-i386-vmlinuz-virtual | ACTIVE |        |
| bedf0e78-c7d4-414e-85fb-291a0ccd851d | images/precise-server-cloudimg-i386.img             | ACTIVE |        |


So I was ready to build an instance.

I added the appropriate access to the ports:

nova secgroup-add-rule default tcp 22 22 
nova secgroup-add-rule default icmp -1 -1

And then I went to "boot" (create) my instance:

nova boot myInstance --image 0e2f43a8-e614-48ff-92bd-be0c68da19f4 --flavor 2 --key_name openstack

And… it didn’t quite work. I mean, I can see the console output with:

nova console-log myInstance

But no IP was assigned, so I couldn’t log in. After much research, I found that the firewall rules were preventing DHCP requests from going through, so I added:

iptables -A POSTROUTING -t mangle -p udp --dport 68 -j CHECKSUM --checksum-fill

And from that point on, DHCP requests were going through and I was able to log in.

At this point, I felt confident and decided to skip ahead and add another node to the group.

Big mistake. But I am getting ahead of myself.

I went to chapter 11 and, per the instructions, installed just the following:

sudo apt-get -y install nova-compute nova-network nova-api

I copied nova.conf to the new node, updated the IPs, and verified on the original node that the services were up:

root@openstack1:~# nova-manage service list

2013-04-27 22:38:46 DEBUG nova.utils [req-9dd11667-7967-4221-807f-98ddaf9371b3 None None] backend  from (pid=18080) __get_backend /usr/lib/python2.7/dist-packages/nova/
Binary           Host                                 Zone             Status     State Updated_At
nova-cert        openstack1                           nova             enabled    :-)   2013-04-28 03:38:45
nova-scheduler   openstack1                           nova             enabled    :-)   2013-04-28 03:38:44
nova-network     openstack1                           nova             enabled    :-)   2013-04-28 03:38:45
nova-compute     openstack1                           nova             enabled    :-)   2013-04-28 03:38:38
nova-compute     openstack2                           nova             enabled    :-)   2013-04-28 03:38:39
nova-network     openstack2                           nova             enabled    :-)   2013-04-26 16:58:30

Feeling pretty sure that it would work as intended, I attempted to create more instances – and failed. IPs were not being assigned, once again.

First of all, I hadn’t set up my switch properly to handle the private network between the nodes. I had forgotten that my switch can only separate ports into separate broadcast domains using VLAN tagging (which OpenStack uses by default by way of VLAN Manager – something I hadn’t paid much attention to, and which will become significant shortly). After a while, I gave up and plugged a crossover cable between openstack1 and openstack2 (later on, I remembered how to set up the switch properly and got the packets tagged appropriately).

At that point, I was able to create the instances. But then I couldn’t log in using my private keys. Reviewing my console log, I found this (example pulled from Google):

‘’ failed [50/120s]:  url error [timed out]

When an instance is built, it pulls its credentials from this loopback address, which in turn is supposed to be routed to the API on the controller (which is on openstack1). I corrected that problem by adding the route:

sudo ip route add metric 1000 dev eth1

openstack@openstack2:~$ ip route show

default via  dev eth1  metric 100
 dev eth1  scope link  metric 1000
 dev eth1  proto kernel  scope link  src
 dev virbr0  proto kernel  scope link  src

But the problem persisted.

Finally, after several long looks at the following pages:

And going back and reading chapter 10 in the book (the chapter I skipped), I uninstalled nova-network on the new node. And suddenly, the instances were able to reach the API.

Remember how the default OpenStack setup uses VLAN Manager? That is useful if you need to separate tenants using IP ranges and VLANs. More importantly, it means that nova-network is only needed on the controller side, since the controller handles the routing and IP assignment. It is only with the other networking setups (Flat Networking or Flat Networking with DHCP, where tenants are isolated using security groups) that nova-network is necessary on the additional nodes. Otherwise, the firewall rules set up by nova-network block the instances’ access to the API.

So that was resolved, and I was able to build instances at will – well, mostly. I just tried to create a new instance and got this:

ERROR: Quota exceeded: code=InstanceLimitExceeded (HTTP 413) (Request-ID: req-82aa560f-0318-4e55-b5bd-98b10d1b9c60)

Heh. Removing one instance now:

stardust:openstack rilindo$ nova delete vmi001

stardust:openstack rilindo$ nova list


| ID                                   | Name   | Status | Networks                       |
| 7a6a4912-71ce-45e2-8a38-51537a4c7ffb | vmi001 | ACTIVE | vmnet=,  |
| 2fbaad51-35f4-4ac4-b6a3-d338af5905d8 | vmi002 | ACTIVE | vmnet=,  |
| c70c7522-c8fb-4196-8161-6c6153549729 | vmi003 | ACTIVE | vmnet=,  |
| 9899b0ee-4cac-4588-891b-705a6cc95512 | vmi004 | ACTIVE | vmnet=,  |
| 41db335c-97a2-4335-9e91-7a0a4345b70a | vmi005 | ACTIVE | vmnet=, |
| 4d2698e2-88ba-42f6-9107-0380d89c3e89 | vmi006 | ACTIVE | vmnet=, |
| 9cf60a41-1a58-4dbd-a47c-b4f2c42e6117 | vmi007 | ACTIVE | vmnet=, |
| de257ab1-cf03-4f08-816f-c9e16ce2793a | vmi008 | ACTIVE | vmnet=, |
| 0c67d3bc-1ca1-4cfe-9653-eb3a5c94350f | vmi009 | ACTIVE | vmnet=, |
| 9b5601bf-969f-447f-9e2a-8727dd3d45e2 | vmi010 | ACTIVE | vmnet=, |


stardust:openstack rilindo$ nova list


| ID                                   | Name   | Status | Networks                       |
| 2fbaad51-35f4-4ac4-b6a3-d338af5905d8 | vmi002 | ACTIVE | vmnet=,  |
| c70c7522-c8fb-4196-8161-6c6153549729 | vmi003 | ACTIVE | vmnet=,  |
| 9899b0ee-4cac-4588-891b-705a6cc95512 | vmi004 | ACTIVE | vmnet=,  |
| 41db335c-97a2-4335-9e91-7a0a4345b70a | vmi005 | ACTIVE | vmnet=, |
| 4d2698e2-88ba-42f6-9107-0380d89c3e89 | vmi006 | ACTIVE | vmnet=, |
| 9cf60a41-1a58-4dbd-a47c-b4f2c42e6117 | vmi007 | ACTIVE | vmnet=, |
| de257ab1-cf03-4f08-816f-c9e16ce2793a | vmi008 | ACTIVE | vmnet=, |
| 0c67d3bc-1ca1-4cfe-9653-eb3a5c94350f | vmi009 | ACTIVE | vmnet=, |
| 9b5601bf-969f-447f-9e2a-8727dd3d45e2 | vmi010 | ACTIVE | vmnet=, |


And adding a new one:

stardust:openstack rilindo$ nova boot vmi011 --image bedf0e78-c7d4-414e-85fb-291a0ccd851d --flavor 2 --key_name mykey


| Property                            | Value                                                    |
| status                              | BUILD                                                    |
| updated                             | 2013-04-28T04:02:23Z                                     |
| OS-EXT-STS:task_state               | scheduling                                               |
| OS-EXT-SRV-ATTR:host                | openstack2                                               |
| key_name                            | mykey                                                    |
| image                               | images/precise-server-cloudimg-i386.img                  |
| hostId                              | e1693fb6dfb89d758273a4312096678745f8f568dbdc3fbe279e286b |
| OS-EXT-STS:vm_state                 | building                                                 |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000046                                        |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None                                                     |
| flavor                              | m1.small                                                 |
| id                                  | 52cc6451-1c5c-462e-8d06-6f9465ce1a94                     |
| user_id                             | openstack                                                |
| name                                | vmi011                                                   |
| adminPass                           | RVeaJd7j7Kmu                                             |
| tenant_id                           | cookbook                                                 |
| created                             | 2013-04-28T04:02:22Z                                     |
| OS-DCF:diskConfig                   | MANUAL                                                   |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| progress                            | 0                                                        |
| OS-EXT-STS:power_state              | 0                                                        |
| metadata                            | {}                                                       |
| config_drive                        |                                                          |


stardust:openstack rilindo$ nova list


| ID                                   | Name   | Status | Networks                       |
| 2fbaad51-35f4-4ac4-b6a3-d338af5905d8 | vmi002 | ACTIVE | vmnet=,  |
| c70c7522-c8fb-4196-8161-6c6153549729 | vmi003 | ACTIVE | vmnet=,  |
| 9899b0ee-4cac-4588-891b-705a6cc95512 | vmi004 | ACTIVE | vmnet=,  |
| 41db335c-97a2-4335-9e91-7a0a4345b70a | vmi005 | ACTIVE | vmnet=, |
| 4d2698e2-88ba-42f6-9107-0380d89c3e89 | vmi006 | ACTIVE | vmnet=, |
| 9cf60a41-1a58-4dbd-a47c-b4f2c42e6117 | vmi007 | ACTIVE | vmnet=, |
| de257ab1-cf03-4f08-816f-c9e16ce2793a | vmi008 | ACTIVE | vmnet=, |
| 0c67d3bc-1ca1-4cfe-9653-eb3a5c94350f | vmi009 | ACTIVE | vmnet=, |
| 9b5601bf-969f-447f-9e2a-8727dd3d45e2 | vmi010 | ACTIVE | vmnet=, |
| 52cc6451-1c5c-462e-8d06-6f9465ce1a94 | vmi011 | ACTIVE | vmnet=,  |


stardust:openstack rilindo$ ssh -i mykey.pem ubuntu@
The authenticity of host ' (' can't be established.
RSA key fingerprint is 85:01:59:17:8d:11:4b:7c:60:72:c8:09:be:2d:45:73.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (RSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-40-virtual i686)

 * Documentation:

  System information as of Sun Apr 28 04:17:07 UTC 2013

  System load:  0.0              Processes:           61
  Usage of /:   6.9% of 9.84GB   Users logged in:     0
  Memory usage: 1%               IP address for eth0:
  Swap usage:   0%

  Graph this data and manage this system at
  Get cloud support with Ubuntu Advantage Cloud Guest:
  Use Juju to deploy your cloud instances and workloads:

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.


URLs I used as reference:

Note: The book was written for OpenStack Essex. Since then, Folsom and now Grizzly have been released, which means that much of what I found in my searches applied to the later releases, making things more difficult for me, as I wasn't sure whether a given solution was applicable to Essex or not. (For example, nova-network was replaced by Quantum, which apparently does networking a bit differently.)

Automatically set the hostname during Kickstart Installation

I hate having to manually set the hostname in a kickstart file, so when I found a fix, I was very happy. I wish I could take credit, but it originally came from somebody who was trying to figure out a way to automatically set the hostname for VMware ESX machines. Unfortunately, I lost that link, so I can't point to the original page for credit. The best I can do is explain how it is done, and hopefully I will find that link later and update this post so that the right person is properly attributed.

To explain how the solution works, it's good to understand how Linux boots a system, which this article does a very good job of explaining. However, if you are impatient, here is the short version:

  1. Computer turns on (DUH!)
  2. The BIOS kicks in, performs POST, enumerates and initializes local devices, and then searches for active, bootable devices.
  3. Stage 1 (the MBR) kicks in and looks for the boot loader (in our case, GRUB).
  4. GRUB (Stage 2) then loads the kernel with an optional ramdisk.
  5. The kernel boots and initializes, then starts init (or some other process), which in turn starts up other processes.

Now with that in mind, let's take a look at our GRUB configuration on jenkins:

[root@jenkins chef]# cat /etc/grub.conf 
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/vg_centos6-lv_root
#          initrd /initrd-[generic-]version.img
title CentOS (2.6.32-220.2.1.el6.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.32-220.2.1.el6.x86_64 ro root=/dev/mapper/vg_centos6-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_centos6/lv_swap rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb rd_LVM_LV=vg_centos6/lv_root  KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto rhgb quiet rd_NO_DM
	initrd /initramfs-2.6.32-220.2.1.el6.x86_64.img

As you can see, it boots the kernel and sets parameters such as the root file system, language, keyboard, and other things needed for the system to boot up properly. That information is still available from the running kernel by viewing the following file:

[root@jenkins chef]# cat /proc/cmdline 
ro root=/dev/mapper/vg_centos6-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_centos6/lv_swap rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb rd_LVM_LV=vg_centos6/lv_root  KEYBOARDTYPE=pc KEYTABLE=us  rhgb quiet rd_NO_DM
[root@jenkins chef]

Notice that this file holds the same parameters you find in grub.conf. In some ways, if init (at least on System-V systems) is the mother of all processes, the kernel is the grandmother, quietly hidden in the background.

What if you were to pass a parameter that the kernel doesn't recognize? In most cases, it will simply ignore it, but the parameter will still show up in the command line. So let's insert:

FOO=BAR

into the kernel line, right between "crashkernel=auto" and "rhgb" (either in grub.conf or on the kernel line at the boot loader screen during Stage 2):

kernel /vmlinuz-2.6.32-220.2.1.el6.x86_64 ro root=/dev/mapper/vg_centos6-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_centos6/lv_swap rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb rd_LVM_LV=vg_centos6/lv_root  KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto FOO=BAR rhgb quiet rd_NO_DM

Now lets view /proc/cmdline again:

[root@jenkins ~]# cat /proc/cmdline 
ro root=/dev/mapper/vg_centos6-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_centos6/lv_swap rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb rd_LVM_LV=vg_centos6/lv_root  KEYBOARDTYPE=pc KEYTABLE=us  FOO=BAR rhgb quiet rd_NO_DM
[root@jenkins ~]# 

As we can see, FOO=BAR is in there, with no ill effects on the system boot.

So why would we want to pass a value that the kernel doesn’t use? So that we can do this:

[rilindo@jenkins ~]$ for x in `cat /proc/cmdline`
> do
> case $x in FOO*)
> eval $x
> echo "${FOO}" 
> ;;
> esac
> done
BAR
[rilindo@jenkins ~]$ 

This loop takes the contents of /proc/cmdline as a series of positional elements (think of it as a list or an array) and iterates through them. Each element is tested in a case statement, and if it matches (in this case, anything starting with FOO), it is evaluated into a variable. We then echo that variable, which returns the value. In other words, we look for the element that starts with "FOO" and get "BAR" out of it.
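As an aside, the same extraction can be done without eval, which is a bit safer since eval will happily execute whatever lands on the kernel command line. This is just a sketch; get_cmdline_value is my own name for the helper, not part of the original trick:

```shell
# Pull the value of a single KEY=VALUE parameter out of a kernel
# command line: split on spaces, then strip the "KEY=" prefix.
get_cmdline_value() {
    key=$1
    cmdline=$2
    echo "$cmdline" | tr ' ' '\n' | sed -n "s/^${key}=//p"
}

# Against a live system you would pass "$(cat /proc/cmdline)" instead:
get_cmdline_value FOO "ro root=/dev/mapper/vg_centos6-lv_root FOO=BAR rhgb quiet"
# prints: BAR
```

Unmatched keys simply produce no output, so it is easy to test for "parameter absent" as well.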

That is essentially how we automatically set the hostname during installation. Using this technique, we put this script in the %pre section of our kickstart:

for x in `cat /proc/cmdline`; do
        case $x in SERVERNAME*)
                eval $x
                echo "network --device eth0 --bootproto dhcp --hostname ${SERVERNAME}" > /tmp/network.ks
                ;;
        esac
done

Here, we look for a parameter called SERVERNAME and evaluate it into a variable. We then echo the network setup line (using the variable as the hostname) and redirect it into a file under /tmp. That file is then pulled into the main installation section:

%include /tmp/network.ks
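You can exercise this logic outside of a kickstart run by feeding it a fake command line. A minimal sketch (writing to ./network.ks instead of /tmp/network.ks, with the case/loop closed out):

```shell
# Simulate the %pre step: scan a fake kernel command line for
# SERVERNAME= and emit the network line that anaconda will %include.
fake_cmdline="ro root=/dev/vda quiet SERVERNAME=jenkins rhgb"

for x in $fake_cmdline; do
    case $x in
        SERVERNAME*)
            eval $x
            echo "network --device eth0 --bootproto dhcp --hostname ${SERVERNAME}" > network.ks
            ;;
    esac
done

cat network.ks
# prints: network --device eth0 --bootproto dhcp --hostname jenkins
```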

At this point, we are essentially done. To use it, we just need to pass SERVERNAME=X (where X is the hostname you want to set) when kicking off the install. In our case, we build virtual machines with KVM via virt-install, so we pass it on the following line:

virt-install --name jenkins --disk path=/home/vms/jenkins,size=50,bus=virtio --vnc --noautoconsole --vcpus=1 --ram=512 --network bridge=br0,mac=52:54:00:91:95:30 --location= -x "ks= SERVERNAME=jenkins"
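Since the hostname now rides in on the kernel command line, kicking off several builds is just a loop over names. A sketch that only prints the commands as a dry run (the URLs here are hypothetical placeholders, not the ones from my environment); drop the echo to actually launch them:

```shell
# Hypothetical placeholder URLs -- substitute your own install tree
# and kickstart locations.
TREE="http://mirror.example.com/centos/6/os/x86_64"
KS="http://ks.example.com/centos6.ks"

for name in jenkins builder01; do
    # echo makes this a dry run; note the quotes around the -x argument
    # are consumed by the shell, so the printed line shows it unquoted
    echo virt-install --name "$name" \
        --disk path=/home/vms/"$name",size=50,bus=virtio \
        --vnc --noautoconsole --vcpus=1 --ram=512 \
        --network bridge=br0 \
        --location="$TREE" \
        -x "ks=$KS SERVERNAME=$name"
done
```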

Here is my entire kickstart file:

url --url
lang en_US.UTF-8
keyboard us
%include /tmp/network.ks

rootpw  --iscrypted PUTPASSWORDHERE
firewall --service=ssh
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
selinux --enforcing
timezone --utc America/New_York
bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
clearpart --all --drives=vda --initlabel

part /boot --fstype=ext4 --size=500
part pv.EPlgaf-h1b4-YqDI-2wfs-3C7I-SPPt-Agk5O7 --grow --size=1

volgroup vg_centos6 --pesize=4096 pv.EPlgaf-h1b4-YqDI-2wfs-3C7I-SPPt-Agk5O7
logvol / --fstype=ext4 --name=lv_root --vgname=vg_centos6 --grow --size=1024 --maxsize=51200
logvol swap --name=lv_swap --vgname=vg_centos6 --grow --size=1008 --maxsize=2016

repo --name="Local CentOS 6 - x86_64"  --baseurl=
repo --name="Local CentOS 6 - x86_64 - Updates"  --baseurl=
repo --name="Local Custom Installs" --baseurl=


%pre
for x in `cat /proc/cmdline`; do
        case $x in SERVERNAME*)
                eval $x
                echo "network --device eth0 --bootproto dhcp --hostname ${SERVERNAME}" > /tmp/network.ks
                ;;
        esac
done

%post --log=/root/my-post-log

setsebool -P use_nfs_home_dirs on
mkdir /home/users
mkdir /etc/chef

curl ${URLPOSTCONF}/6.2/repos/CentOS-Custom.repo -o /etc/yum.repos.d/CentOS-Custom.repo
curl ${URLPOSTCONF}/6.2/autofs/auto.master -o /etc/auto.master
curl ${URLPOSTCONF}/6.2/autofs/auto.home -o /etc/auto.home
curl ${URLPOSTCONF}/keys/cacert.pem -o /etc/openldap/cacerts/cacert.pem

curl ${URLPOSTCONF}/chef/validation.pem -o /etc/chef/validation.pem
curl ${URLPOSTCONF}/chef/client.rb -o /etc/chef/client.rb
curl ${URLPOSTCONF}/chef/first-run.json -o /etc/chef/first-run.json
rpm --import ${URLPOSTCONF}/keys/legacy.key
rpm --import ${URLPOSTCONF}/keys/custom.key

authconfig --enablesssd --enableldap --enableldaptls --ldapbasedn="dc=monzell,dc=com" --enableldapauth --update

echo "nameserver" >> /etc/resolv.conf
echo "nameserver" >> /etc/resolv.conf

gem install chef
chef-client -j /etc/chef/first-run.json
chkconfig chef-client on
chkconfig rpcbind on
chkconfig sssd on
chkconfig ntpd on



Let me know if this is useful. And again, I didn't originally come up with this, so I plead innocent to any charges of plagiarism. 🙂