First off, I’ll be setting up a page where I intend to keep a complete list of the parameters used in all roles, along with a description of what they actually do. Up until now I’ve kept it in a file in the GitHub repo, but I think it will be easier to maintain this way. The page is empty at the moment, but I’ll try to add to it as soon as possible.
Now, this is the 3rd post on how to get started with ansible-oracle. You can find the other 2 posts here and here.
This time we will perform a RAC installation, and we will also introduce a few other options we haven’t explored before (mostly because they got added recently).
We will be introducing a concept known as GI role separation, which means that the user grid will own the Grid Infrastructure installation (including ASM), and the user oracle will own the database installation.
The default is false, meaning the user oracle will own/run everything (well, except for the parts that run as root).
We will be creating a container database with 2 pluggable databases.
The installation will be performed over NFS this time, using a central repository where all the media is located, instead of copying the files to each installation host.
So in summary, this is what we will try to accomplish:
- Use group_vars & host_vars to alter the default values
- Do the installation over NFS
- Set up RAC using role separation
- Create a container database with 2 pluggable databases
So, I’ve been adding support for GI role separation to the roles, which is why it has taken so long to get this post out.
In the last post we created a single instance database with database storage on a filesystem. This time we’re going to take it a step further and create a single instance database, but now we’re going to use ASM for storage. This means we also have to install the Grid Infrastructure in a standalone configuration, so we’re adding a few other roles to the playbook.
We’re also going to deploy this configuration on 2 machines in parallel (oradb01, oradb02).
We’re also going to deviate from the default ‘role configuration’, i.e. not rely entirely on the variable values in defaults/main.yml. You could of course change the defaults so they better suit your needs and just rely on those, but that limits your options (unless you only have one system to deal with).
The easiest way to override the defaults is to ‘move’ the parameters to a higher priority location, i.e. group_vars or host_vars. In this example we’re going to put our ‘host-group’ specifics in group_vars.
So what do I mean by specifics?
- Storage config (storage devices for filesystems. This time, we’re going to put /u01 on its own device instead of the ‘root’-device)
- Storage config (storage devices for ASM)
- We’re going to call the database something else.
- We may want to install a different version of GI (or DB). So, this time we’re going to install 12.1.0.2 GI and a 12.1.0.2 database
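A group_vars file covering those specifics might look roughly like the sketch below. The structure and several names (host_fs_layout, asm_diskgroups, oracle_databases) mirror the repo’s defaults, but treat all of it as illustrative; the authoritative variable names live in each role’s defaults/main.yml.

```yaml
---
# group_vars/<host-group> — hypothetical overrides for this host group.
# Variable names are illustrative; check each role's defaults/main.yml.

# Put /u01 on its own device instead of the 'root' device
host_fs_layout:
  - vgname: vgora
    mntp: /u01
    device: /dev/sdb
    fstype: ext4

# Devices for ASM
asm_diskgroups:
  - diskgroup: data
    state: present
    disk:
      - device: /dev/sdc
        asmlabel: data01

# Versions to install (overriding the role defaults; illustrative values)
oracle_install_version_gi: 12.1.0.2
oracle_install_version_db: 12.1.0.2

# Call the database something else
oracle_databases:
  - oracle_db_name: myhomeasm
```

Because group_vars has higher precedence than a role’s defaults, anything defined here wins without touching the roles themselves.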
This is where everything will be installed:
- GI – /u01/app/oracle/12.1.0.2/grid
- DB – /u01/app/oracle/12.1.0.2/myhomeasm
I thought I’d write a quick post on how to get started with ansible-oracle.
The reason I decided to use roles when putting this together was to make it easily reusable, and to be able to pick and choose which roles you want to use. If you want to do everything from the ground up you can, and also if you already have a properly configured server and just want to install Oracle and create a single instance database on a filesystem you can absolutely do just that, by using just the oraswdb-install and oradb-create roles.
So, we’re going to do both. And we’re just going to go with the defaults and create a single instance database with datafiles on a filesystem.
Note: The installation will be without a configured listener. I have not gotten around to fixing the listener issue with installations without GI.
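As an illustration of that pick-and-choose approach, a minimal playbook for the ‘already configured server’ case might look something like this. The host group name is made up; the role names are from the repo, and the user/sudo keywords are the Ansible 1.x syntax in use at the time.

```yaml
---
# Hypothetical minimal playbook: install the Oracle software and create a
# single instance database on an already configured server.
- name: Install Oracle and create a database
  hosts: oradb-hosts   # made-up group name
  user: ansible        # deploy user
  sudo: true           # Ansible 1.x syntax
  roles:
    - oraswdb-install
    - oradb-create
```

Run it with something like ansible-playbook -i hosts playbook.yml and let the role defaults take care of the rest.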
1. Configuring the host + installing the database server & creating a database
First off, we’re going to take a newly installed machine and configure it from ground up.
- Oracle version 12.1.0.2
- ORACLE_HOME will be /u01/app/oracle/12.1.0.2/orcl
- One database will be created, ‘orcl’
- Datafiles/fra will reside in /u01/oradata & /u01/fra respectively. The /u01 directory is created by Ansible, and oradata + fra will be created by dbca
- The Oracle software (linuxamd64_12102_database_1of2.zip, linuxamd64_12102_database_2of2.zip) has been downloaded and the files are placed in /tmp on the control-machine.
And now, on to the good stuff.
This is no longer true, and it is perfectly fine to use Ansible 1.7 to run these roles.
The reason for yesterday’s post was that a change in behaviour in the shell module caused the GI/DB server installations to fail. This was because jobs that were put in the background were no longer being waited on in Ansible 1.7 (which is the correct behaviour; the behaviour in 1.6 was erroneous).
That meant that when runInstaller kicked off the installation and started the Java program that performs the actual installation, Ansible only waited for the foreground process (the shell script runInstaller) to finish and then moved on to the next task, which was running root.sh. But since the installation never finished, the script wasn’t there and the task failed -> the entire play failed.
This has been fixed and there should be no problems to run the roles.
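For reference, one way to sidestep this class of problem is to ask runInstaller itself to block until the real installation finishes, instead of relying on how the shell module treats background jobs. A sketch of such a task follows; the staging paths and response file are made up.

```yaml
# Hypothetical task: runInstaller normally backgrounds the Java installer and
# returns immediately; -waitforcompletion keeps it in the foreground, so the
# task does not finish until the installation does. Paths are illustrative.
- name: Install the database server software
  shell: chdir=/u01/stage/database ./runInstaller -silent -waitforcompletion -responseFile /u01/stage/db.rsp
```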
This will be a really short post..
So, it turns out that the part of the roles that uses the shell module to kick off the Oracle installations via runInstaller stopped working in Ansible >= 1.7. Not sure why, but for now, stay on 1.6.10 if you want this to work.
The easiest way to get to that version is to run this (you need to install any version of Ansible first, though):
ansible localhost -m pip -a "name=ansible version=1.6.10 state=present" -s
In this post we’ll be setting up a 2-node 12.1.0.2 RAC on Oracle Linux 6.5, and the end goal is to have a fully configured cluster with 2 databases available (1 RAC, 1 RAC One Node).
The machines are called orarac03/04 and they have just been kickstarted. As part of that process a ‘deploy’ user (ansible) has been added, with ssh-keys to make sure passwordless ssh is possible from the control machine. The user also has passwordless sudo privileges.
Note: You don’t necessarily need to have everything passwordless (login, sudo etc), as Ansible can ask for all that at the start of a play but it naturally makes things easier if you can.
The ansible user will be used in the playbooks and will sudo to the other users as needed.
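In Ansible 1.x terms, that setup can be captured with a connection variable in group_vars; the file layout here is illustrative.

```yaml
---
# group_vars/all — hypothetical connection settings for the deploy user.
# ansible_ssh_user is the Ansible 1.x variable name (renamed ansible_user
# in later releases).
ansible_ssh_user: ansible
```

With passwordless sudo in place, plays that need root just set sudo: true; otherwise you can add --ask-sudo-pass when running the playbook.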
The hosts have been equipped with:
- One 16GB device (/dev/sda)
- One 50GB device
- 6 devices for shared storage.
- 2 NICs, one for the public network and one for the interconnect. Only the public one (eth0) is configured
- 2 cores/8 GB RAM
This turned out to be a rather lengthy post, so consider yourself warned.
This is a description of the parameters available to all the roles. I will not go into a lot of detail, as they may change. For the most up to date description look at the github page.
I made a few assumptions upfront when creating the roles, and I fully intend to make all of them configurable, but for now:
- The Oracle user only belongs to one group (dba). I’ll add more groups later
- Using ASMLib for the ASM disks. You might want to be able to use udev or something else
- Not bonding your network interfaces (using ansible facts to pick out information about eth0 & eth1)
- Using Flex ASM
- External redundancy for all diskgroups
- Use of AL32UTF8/AL16UTF16 (this might not change)
- Multipathing is not configured by the roles so that is something you’d have to do yourself.
- Only Admin managed databases
In this post I thought we’d quickly go through the existing roles that are used in doing the RAC-install and see what they do. There are a few other roles as well, which are not being used at the moment, so we’ll not go through them.
The following roles are also used when setting up a Grid Infrastructure in a Stand-alone configuration (Oracle Restart)
Below is pretty much a copy of the README.md from the Github page.
The different roles are:
common: This will configure stuff common to all machines
- Install some generic packages
- Configure ntp
- Possibly add a default/deploy user.
orahost: This will configure the host-specific Oracle stuff:
- Add a user & group (at the moment only a dba group)
- Create directory structures
- Generate ssh-keys and set up passwordless ssh between clusternodes in case of RAC/RAC One node
- Handle filesystem storage (partition devices, create vg/lv and an ext4 filesystem, etc.)
- Install required packages
- Change kernel parameters
- Set up pam.d/limits config
- Configure Hugepages (as a percentage of total RAM)
- Disable transparent hugepages
- Disable NUMA (if needed)
- Configure the interconnect network (if needed)
- Configure Oracle ASMLib
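Tied together, the two roles above could be applied with a play along these lines. The group name and the hugepages variable name are illustrative; the real variable names live in each role’s defaults/main.yml.

```yaml
---
# Hypothetical play applying the host-preparation roles described above.
- name: Prepare hosts for Oracle
  hosts: oracle-hosts          # made-up group name
  user: ansible                # deploy user
  sudo: true                   # Ansible 1.x syntax
  roles:
    - common
    - orahost
  vars:
    percent_hugepages: 50      # illustrative name: Hugepages as % of total RAM
```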