A while back I was approached by Jeremy Schneider, one of the original contributors to the RAC Attack project. He wanted to know if I was interested in integrating ansible-oracle with the RAC Attack automation project, and of course I was!
The idea was to provide a completely hands off installation of an Oracle RAC cluster, from VM creation to a fully configured cluster, and that is what we’re happy to be able to provide.
So Alvaro Miranda and I have been working on getting racattack-ansible-oracle going. Alvaro wrote the original Packer/Vagrant code for the Vagrant version of RAC Attack, and we have integrated ansible-oracle with that.
The actual integration worked straight away, so we’ve been weeding out edge cases and making it as easy and flexible to use as possible.
As of now it is possible to install all of the Oracle releases currently supported by ansible-oracle, and as more releases are added to ansible-oracle they will also work with racattack-ansible-oracle.
As of version 1.3 it is possible to create a RAC Flex Cluster using ansible-oracle. From an ansible-oracle configuration perspective there is not a huge difference from a normal ‘standard’ cluster; basically just a few new parameters. There are other differences though, specifically in how you have to run the playbook and deal with the inventory configuration.
In a ‘standard’ cluster, you have your database cluster nodes and that’s it (basically). In a Flex Cluster configuration there are 2 types of nodes:
- Hub nodes. These have access to the shared storage and will house your databases
- Leaf nodes. These nodes are connected to the interconnect network, but not to the shared storage. So for instance, you could run your application on these nodes.
Given that, a Flex Cluster presents a few challenges from an ansible-oracle perspective. With a normal cluster, you can just run the playbook against all your cluster nodes, as they are all the same.
Now, when building a Flex Cluster, there are a few things that should only be done on the hub nodes (configuring shared storage, installing the database server and creating the database(s)).
And how do we do that? With a little Ansible inventory ninja-ism.
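As a sketch of the idea, you could split the inventory into one group per node type plus a parent group for the whole cluster. The group names below (hub_nodes, leaf_nodes, flexcluster) are my own labels for illustration, not necessarily what racattack-ansible-oracle uses:

```ini
# inventory — hypothetical layout for a Flex Cluster
[hub_nodes]
orarac01
orarac02

[leaf_nodes]
orarac05
orarac06

# the whole cluster, for plays that run everywhere
[flexcluster:children]
hub_nodes
leaf_nodes
```

Plays that configure shared storage, install the database server or create databases would then target `hosts: hub_nodes`, while everything that applies to all nodes targets `hosts: flexcluster`.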
First off, I’ll be setting up a page where I intend to have a complete list of the parameters used in all roles and a description of what they actually do. Up until now I’ve kept them in a file in the GitHub repo, but I think it will be easier to maintain this way. The page is empty at the moment, but I’ll try to add to it as soon as possible.
Now, this is the 3rd post on how to get started with ansible-oracle. You can find the other 2 posts here and here.
This time we will perform a RAC installation, and we will also introduce a few other options we haven’t explored before (mostly because they got added recently).
We will be introducing a concept known as GI role separation, which means that the user grid will own the Grid Infrastructure installation including ASM, and the user oracle will own the database installation.
The default is false, meaning the user oracle will own/run everything (well, except for the parts that run as root, of course).
We will be creating a container database including 2 pluggable databases.
The installation will be performed over nfs this time, using a central repository where all the media is located, instead of copying the files to the installation hosts.
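Assuming the media sits on an NFS export (the server and paths below are made up for illustration), the mount could be expressed as an Ansible task along these lines:

```yaml
# hypothetical task: mount the central media repository read-only over NFS
- name: Mount Oracle installation media
  mount:
    src: nfsserver:/export/orasw   # hypothetical NFS server and export
    name: /u01/stage               # hypothetical mount point on the install hosts
    fstype: nfs
    opts: ro
    state: mounted
```

The installers can then read straight from the mount point, so nothing needs to be staged locally on each host.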
So in summary, this is what we will try to accomplish:
- Use group_vars & host_vars to alter the default values.
- Do the installation over nfs
- Setup RAC using role separation
- Create a container database with 2 pluggable databases
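In Ansible terms, altering the defaults means placing overrides in group_vars/host_vars files that match the inventory names. A rough sketch of what that could contain (the variable names here are illustrative, not necessarily the real ansible-oracle parameter names):

```yaml
# group_vars/<groupname>.yml — hypothetical variable names
role_separation: true      # grid owns GI/ASM, oracle owns the RDBMS
install_from_nfs: true     # read media from the central NFS repository
create_cdb: true           # create a container database
num_pdbs: 2                # with 2 pluggable databases
```

Host-specific values (node names, IP addresses and so on) would go in host_vars/<hostname> instead, and take precedence over the group values.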
In this post we’ll be setting up a 2-node 12c RAC on Oracle Linux 6.5, and the end goal is to have a fully configured cluster with 2 databases available (1 RAC, 1 RAC One Node).
The machines are called orarac03/04 and they have just been kickstarted. As part of that process a ‘deploy’ user (ansible) has been added, with ssh keys to make passwordless ssh possible from the control machine. The user also has passwordless sudo privileges.
Note: You don’t necessarily need to have everything passwordless (login, sudo etc), as Ansible can ask for all of that at the start of a play, but it naturally makes things easier if you can.
The ansible user will be used in the playbooks and will sudo to the other users as needed.
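The connection side of that can be captured once in ansible.cfg, so every play defaults to connecting as the ansible user and escalating with sudo (shown here with the current become-style syntax):

```ini
# ansible.cfg — connect as the deploy user and escalate via sudo
[defaults]
remote_user = ansible

[privilege_escalation]
become = true
become_method = sudo
```

With this in place the playbooks don’t need per-play user settings, and individual tasks can still switch to grid or oracle with become_user as needed.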
The hosts have been equipped with:
- One 16GB device (/dev/sda)
- One 50GB device
- 6 devices for shared storage.
- 2 NICs, one for the public network and one for the interconnect. Only the public one (eth0) is configured
- 2 cores/8 GB RAM
This turned out to be a rather lengthy post, so consider yourself warned.
This is a description of the parameters available to all the roles. I will not go into a lot of detail, as they may change. For the most up-to-date description, look at the GitHub page.
I made a few assumptions upfront when creating the roles, and I fully intend to make all of them configurable, but for now:
- The Oracle user only belongs to one group (dba). I’ll add more groups later
- Using ASMlib for the ASM disks. You might want to be able to use udev or something else
- Not bonding your network interfaces (using ansible facts to pick out information about eth0 & eth1)
- Using Flex ASM
- External redundancy for all diskgroups
- Use of AL32UTF8/AL16UTF16 (this might not change)
- Multipathing is not configured by the roles, so that is something you’d have to do yourself.
- Only Admin managed databases
In this post I thought we’d quickly go through the existing roles that are used in the RAC install and see what they do. There are a few other roles as well, which are not being used at the moment, so we won’t go through them here.
The following roles are also used when setting up a Grid Infrastructure in a Stand-alone configuration (Oracle Restart)
Below is pretty much a copy of the README.md from the GitHub page.
The different roles are:
common: This will configure stuff common to all machines
- Install some generic packages
- Configure ntp
- Possibly add a default/deploy user.
orahost: This will configure the host specific Oracle stuff:
- Add a user & group (at the moment only a dba group)
- Create directory structures
- Generate ssh-keys and set up passwordless ssh between clusternodes in case of RAC/RAC One node
- Handle filesystem storage (partition devices, create vg/lv and an ext4 filesystem etc).
- Install required packages
- Change kernel parameters
- Set up pam.d/limits config
- Configure Hugepages (as a percentage of total RAM)
- Disable transparent hugepages
- Disable NUMA (if needed)
- Configure the interconnect network (if needed)
- Configure Oracle ASMLib
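The hugepages sizing, for example, can be derived from gathered facts. A rough sketch of the idea (the percent_hugepages variable name is hypothetical, and 2 MB hugepages are assumed):

```yaml
# hypothetical sketch: size vm.nr_hugepages as a percentage of total RAM
# ansible_memtotal_mb is a gathered Ansible fact; assumes 2 MB hugepages
- name: Configure hugepages
  sysctl:
    name: vm.nr_hugepages
    value: "{{ ((ansible_memtotal_mb | int) * (percent_hugepages | int) / 100 / 2) | round | int }}"
```

With 8 GB of RAM and percent_hugepages set to 50, that would work out to roughly 2048 hugepages.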
I’m a big fan of automation and configuration management. It makes life a lot easier when it comes to installing and configuring (whatever it is you’re installing and configuring), and it is also less error prone. That is where configuration management tools like Ansible, Puppet, Chef & Saltstack come into play. They help you enforce state on your hosts, which makes keeping things in line easy. You just update your manifest (in the Puppet case) and, soon after the agents on the managed hosts check in with the master, all your hosts have applied the change. It doesn’t matter if you’re managing 1, 50, 500 or 5000 hosts; you make the change once and the tool does the rest. I mean, what’s not to like about that?