racattack, meet ansible-oracle!

A while back I was approached by Jeremy Schneider, one of the original contributors to the RAC Attack project, who wanted to know if I was interested in integrating ansible-oracle with the RAC Attack automation project, and of course I was!

The idea was to provide a completely hands-off installation of an Oracle RAC cluster, from VM creation to a fully configured cluster, and that is what we’re happy to be able to provide.

So Alvaro Miranda and I have been working on getting racattack-ansible-oracle going. Alvaro wrote the original Packer/Vagrant code for the Vagrant version of RAC Attack, and we have integrated ansible-oracle with that.

The actual integration worked straight away, so we’ve been weeding out edge cases and making it as easy and flexible to use as possible.
As of now it is possible to install 11.2.0.4, 12.1.0.1 & 12.1.0.2, which are the releases currently supported by ansible-oracle. As more releases are supported by ansible-oracle, they will also work with racattack-ansible-oracle.

Setup

For this to work, you first need to clone the repository, and then you need to install and download a few things.

I’m not going to show the installation of VirtualBox or Vagrant, as they’re both pretty straightforward. Just remember to add the ‘insecure ssh keypairs’ to your Vagrant installation, or the vagrant up command will hang and eventually fail.

Vagrantfile

After the repository has been cloned there are a couple of directories and files present. The file to look at first is the one called Vagrantfile, which describes the VM(s) you want to run. There are three types of machines you can configure.

  • HUB nodes. These nodes always run GI and a database. They are configured with shared storage and an interconnect network.
  • LEAF nodes. These nodes are used as leaf nodes in a Flex Cluster configuration. They are part of the interconnect network but have no shared storage.
  • APP nodes. These are just normal nodes, available on the same public network as the hub/leaf nodes. They can be used as application servers if needed.

The following are the customizations you can make for ‘your’ configuration:

#############################
#### BEGIN CUSTOMIZATION ####
#############################
#define number of nodes
num_APPLICATION = 0    <-- Application nodes
num_LEAF_INSTANCES = 0 <-- GI Leaf nodes
num_DB_INSTANCES = 2   <-- GI Hub nodes
#
#define number of cores for guest
num_CORE = 1
#
#define memory for each type of node in MBytes
#
#for leaf nodes, the minimum can be 2300, otherwise the pre-check will fail
#on the automatic ulimit values calculated based on RAM
#
#for database nodes, the minimum suggested is 3072 for standard cluster
#for flex cluster, consider 4500 or more
#
memory_APPLICATION = 1500
memory_LEAF_INSTANCES = 2300
memory_DB_INSTANCES = 3072
# 
#size of shared disk in GB
size_shared_disk = 5
#number of shared disks
count_shared_disk = 4   <-- The racattack.group_vars config is configured for 4 disks by default.
#
#############################
##### END CUSTOMIZATION #####
#############################

By default, two diskgroups are configured (DATA & FRA), and the ansible configuration (in stagefiles/racattack.group_vars) is set up for that.
If you want a more advanced configuration, such as more diskgroups or more disks, you also have to modify the stagefiles/racattack.group_vars file, specifically the data structures called:

  • asm_diskgroups – contains the names of the diskgroups
  • asm_storage_layout – contains the mapping of devices to ASM labels for each diskgroup.
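To make the shape of those two structures concrete, here is an illustrative sketch for the default DATA/FRA setup with four shared disks. The key names and device paths below are assumptions on my part; the authoritative format is whatever ships in the repo’s stagefiles/racattack.group_vars file.

```yaml
# Illustrative only: key names and device paths are assumptions,
# check stagefiles/racattack.group_vars for the exact format.
asm_diskgroups:
  - data
  - fra

asm_storage_layout:
  data:
    - { device: /dev/sdb, asmlabel: data01 }
    - { device: /dev/sdc, asmlabel: data02 }
  fra:
    - { device: /dev/sdd, asmlabel: fra01 }
    - { device: /dev/sde, asmlabel: fra02 }
```

Adding a diskgroup would then mean both appending to asm_diskgroups and mapping its devices in asm_storage_layout, plus raising count_shared_disk in the Vagrantfile accordingly.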

Where will all files be stored?

There are a few types of files in use when running this. The VirtualBox part consists of the vmdk’s that make up the base OS; these are placed in the default VirtualBox VM directory.
The shared disks created during the vagrant run are placed inside the cloned repo.

So in my case, my VBOX default directory is:
/home/miksan/.apps/VBOX/ <– here go the actual VMs, each in its own directory structure (collab/<machine>)

And the vagrant directory is:
/home/miksan/.apps/vagrant <– here go the shared disks plus all repo info (racattack-ansible-oracle/…..)

Vagrant up!

So after the Vagrantfile has been modified to suit your needs, it is time to kick this off. You basically run a couple of commands and then just sit back and wait.

  • vagrant up – builds the machine(s)
  • setup=standard vagrant provision – this is where Ansible takes over and builds the default configuration, which is a 12.1.0.2 GI & DB
  • setup=<standard|flex> giver=<12.1.0.2|12.1.0.1|11.2.0.4> dbver=<12.1.0.2|12.1.0.1|11.2.0.4> vagrant provision – builds your combination of GI & DB.
    If the GI version is 11.2.0.4 the cluster type will be forced to standard.
[miksan@blergh git]$ git clone --recursive https://github.com/racattack/racattack-ansible-oracle
Cloning into 'racattack-ansible-oracle'...
...
SNIP
...
[miksan@blergh git]$ cd racattack-ansible-oracle
[miksan@blergh racattack-ansible-oracle]$ vagrant up

collabn2 eth1 lanip :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master

Bringing machine 'collabn2' up with 'virtualbox' provider...
Bringing machine 'collabn1' up with 'virtualbox' provider...
==> collabn2: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn2: Setting the name of the VM: collabn2.1412231414
==> collabn2: Clearing any previously set forwarded ports...
==> collabn2: Clearing any previously set network interfaces...
==> collabn2: Preparing network interfaces based on configuration...
 collabn2: Adapter 1: nat
 collabn2: Adapter 2: hostonly
 collabn2: Adapter 3: hostonly
==> collabn2: Forwarding ports...
 collabn2: 22 => 2222 (adapter 1)
==> collabn2: Running 'pre-boot' VM customizations...
==> collabn2: Booting VM...
==> collabn2: Waiting for machine to boot. This may take a few minutes...
 collabn2: SSH address: 127.0.0.1:2222
 collabn2: SSH username: vagrant
 collabn2: SSH auth method: private key
 collabn2: Warning: Connection timeout. Retrying...
 collabn2: Warning: Remote connection disconnect. Retrying...
 collabn2: Warning: Remote connection disconnect. Retrying...
==> collabn2: Machine booted and ready!
==> collabn2: Checking for guest additions in VM...
==> collabn2: Setting hostname...
==> collabn2: Configuring and enabling network interfaces...
==> collabn2: Mounting shared folders...
 collabn2: /vagrant => /home/miksan/.apps/vagrant/racattack-ansible-oracle
 collabn2: /media/sf_12cR1 => /home/miksan/.apps/vagrant/racattack-ansible-oracle/12cR1
 collabn2: /media/stagefiles => /home/miksan/.apps/vagrant/racattack-ansible-oracle/stagefiles
==> collabn2: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> collabn2: to force provisioning. Provisioners marked to run always will still run.
==> collabn1: Checking if box 'kikitux/oracle6-racattack' is up to date...
==> collabn1: Setting the name of the VM: collabn1.1412231414
==> collabn1: Clearing any previously set forwarded ports...
==> collabn1: Fixed port collision for 22 => 2222. Now on port 2200.
==> collabn1: Clearing any previously set network interfaces...
==> collabn1: Preparing network interfaces based on configuration...
 collabn1: Adapter 1: nat
 collabn1: Adapter 2: hostonly
 collabn1: Adapter 3: hostonly
==> collabn1: Forwarding ports...
 collabn1: 22 => 2200 (adapter 1)
==> collabn1: Running 'pre-boot' VM customizations...
==> collabn1: Booting VM...
==> collabn1: Waiting for machine to boot. This may take a few minutes...
 collabn1: SSH address: 127.0.0.1:2200
 collabn1: SSH username: vagrant
 collabn1: SSH auth method: private key
 collabn1: Warning: Connection timeout. Retrying...
 collabn1: Warning: Remote connection disconnect. Retrying...
==> collabn1: Machine booted and ready!
==> collabn1: Checking for guest additions in VM...
==> collabn1: Setting hostname...
==> collabn1: Configuring and enabling network interfaces...
==> collabn1: Mounting shared folders...
 collabn1: /vagrant => /home/miksan/.apps/vagrant/racattack-ansible-oracle
 collabn1: /media/sf_12cR1 => /home/miksan/.apps/vagrant/racattack-ansible-oracle/12cR1
 collabn1: /media/stagefiles => /home/miksan/.apps/vagrant/racattack-ansible-oracle/stagefiles
==> collabn1: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> collabn1: to force provisioning. Provisioners marked to run always will still run.
[miksan@blergh racattack-ansible-oracle]$ setup=standard vagrant provision

collabn2 eth1 lanip :192.168.78.52
collabn2 eth2 privip :172.16.100.52
collabn2 dns server role is slave

collabn1 eth1 lanip :192.168.78.51
collabn1 eth2 privip :172.16.100.51
collabn1 dns server role is master

==> collabn2: Running provisioner: shell...
 collabn2: Running: inline script
==> collabn2: PEERDNS=no
==> collabn2: overwriting /etc/resolv.conf
==> collabn2: Running provisioner: shell...
 collabn2: Running: inline script
==> collabn2: named already configured in collabn2.racattack
==> collabn1: Running provisioner: shell...
 collabn1: Running: inline script
==> collabn1: PEERDNS=no
==> collabn1: overwriting /etc/resolv.conf
==> collabn1: Running provisioner: shell...
 collabn1: Running: inline script
==> collabn1: named already configured in collabn1.racattack
==> collabn1: Running provisioner: shell...
 collabn1: Running: inline script
==> collabn1: GIVER VALID
==> collabn1: DBVER VALID
==> collabn1: Default install: GI version: 12.1.0.2 & DB version: 12.1.0.2, cluster type: standard
==> collabn1: [WARNING]: The version of gmp you have installed has a known issue regarding
==> collabn1: timing vulnerabilities when used with pycrypto. If possible, you should update
==> collabn1: it (ie. yum update gmp).
==> collabn1: 
==> collabn1: PLAY [Host configuration] ***************************************************** 
==> collabn1: 
==> collabn1: GATHERING FACTS *************************************************************** 
==> collabn1: ok: [collabn1]
==> collabn1: ok: [collabn2]
==> collabn1: 
==> collabn1: TASK: [orahost | Install packages required by Oracle] ************************* 
==> collabn1: skipping: [collabn1]
==> collabn1: skipping: [collabn2]
==> collabn1: 
==> collabn1: TASK: [orahost | Disable selinux (permanently)] ******************************* 
==> collabn1: skipping: [collabn1]
==> collabn1: skipping: [collabn2]
==> collabn1: 
==> collabn1: TASK: [orahost | Disable selinux (runtime)] *********************************** 
==> collabn1: skipping: [collabn1]
==> collabn1: skipping: [collabn2]
==> collabn1: 
==> collabn1: TASK: [orahost | User | Add group(s)] ***************************************** 
==> collabn1: changed: [collabn2] => (item={'gid': 54318, 'group': 'asmdba'})
==> collabn1: changed: [collabn2] => (item={'gid': 54319, 'group': 'asmoper'})
==> collabn1: changed: [collabn2] => (item={'gid': 54320, 'group': 'asmadmin'})
==> collabn1: ok: [collabn2] => (item={'gid': 54321, 'group': 'oinstall'})
.
.
.

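The “GIVER VALID” / “DBVER VALID” lines in the output come from a version check in the provisioning wrapper. A minimal sketch of that kind of check follows; the real logic lives in the repo’s scripts, so the function names, defaults and messages here are assumptions of mine, not the actual code.

```shell
#!/bin/sh
# Sketch of the version/cluster-type validation behind the
# "GIVER VALID" / "DBVER VALID" lines. Names and defaults are assumptions.

valid_ver() {
  # Releases currently supported by ansible-oracle
  case "$1" in
    12.1.0.2|12.1.0.1|11.2.0.4) return 0 ;;
    *) return 1 ;;
  esac
}

giver="${giver:-12.1.0.2}"
dbver="${dbver:-12.1.0.2}"
setup="${setup:-standard}"

if valid_ver "$giver"; then echo "GIVER VALID"; else echo "invalid giver: $giver" >&2; exit 1; fi
if valid_ver "$dbver"; then echo "DBVER VALID"; else echo "invalid dbver: $dbver" >&2; exit 1; fi

# 11.2.0.4 GI has no Flex Cluster support, so force a standard cluster
if [ "$giver" = "11.2.0.4" ]; then setup=standard; fi

echo "GI version: $giver & DB version: $dbver, cluster type: $setup"
```

With no variables set this prints the default 12.1.0.2/12.1.0.2 standard combination, matching the “Default install” line in the log above.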
Gotchas

The time it takes to complete an installation mostly depends on the hardware you’re using (and naturally the number of nodes, etc.), but a normal 2-node installation (12.1.0.2 GI & DB) takes about 60 minutes on my 8 GB laptop with an SSD.

During a 12.1.0.2 installation it may seem like the installation is hanging after the GI configuration is finished, but it is most likely the ‘configToolAllCommands’ run, which takes time (~15 min sometimes). Among other things it creates the MGMTDB database and just seems to be ‘slow’. I’ve never bothered to debug it though.

I’ve also had installations where the output appears to hang, but that seems to be the Vagrant output not being able to keep up with the pace of the playbook.

So that is it. Pretty sweet if I may say so myself.


3 thoughts on “racattack, meet ansible-oracle!”

  1. Are you in the ‘racattack-ansible-oracle’ directory when running ‘vagrant up’?
    If you’re in the same directory as ‘Vagrantfile’ when you run ‘vagrant up’, it will create the machine as specified in Vagrantfile.

    If you don’t have a Vagrantfile, you need to run ‘vagrant init’ to create a default Vagrantfile, but that is not needed in this case as it already exists.
    I probably could have made that clearer in the post. I’ll fix that.
