Automated Oracle RAC installation using Ansible (part 3)

In this post we’ll be setting up a 2-node 12.1.0.2 RAC on Oracle Linux 6.5, and the end goal is to have a fully configured cluster with two databases available (one RAC, one RAC One Node).

The machines are called orarac03/04 and they have just been kickstarted. As part of that process a ‘deploy’ user (ansible) has been added, with ssh keys in place so that passwordless ssh from the control machine is possible. The user also has passwordless sudo privileges.
Note: You don’t necessarily need to have everything passwordless (login, sudo etc.), as Ansible can ask for all of that at the start of a play, but it naturally makes things easier if you can.

The ansible user will be used in the playbooks and will sudo to the other users as needed.
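
If you’re not using a kickstart that sets this up for you, roughly the following would accomplish the same thing (run as root on each node; the ssh public key below is a placeholder):

# Create the deploy user and authorize the control machine's ssh key (placeholder key)
useradd ansible
mkdir -p ~ansible/.ssh && chmod 700 ~ansible/.ssh
echo "ssh-rsa AAAA...control-machine-public-key..." >> ~ansible/.ssh/authorized_keys
chmod 600 ~ansible/.ssh/authorized_keys
chown -R ansible:ansible ~ansible/.ssh
# Allow passwordless sudo for the deploy user
echo "ansible ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/ansible
chmod 0440 /etc/sudoers.d/ansible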

The hosts have been equipped with:

  • One 16GB device (/dev/sda)
  • One 50GB device
  • 6 devices for shared storage.
  • 2 NICs, one for the public network and one for the interconnect. Only the public one (eth0) is configured
  • 2 cores/8 GB RAM

Disk devices

First of all, we make sure we’ve got the disk devices we expect, and that they’re present on both nodes.

  • 1x8GB – goes into CRS
  • 2x10GB – goes into DATA
  • 3x12GB – goes into FRA

The actual mapping of devices to filesystem and diskgroups is done through host_fs_layout & asm_storage_layout (see the group_vars section below).

[miksan@ponderstibbons ansible-oracle]$ ansible orarac-dc2 -m raw -a "fdisk -l |grep sd" -s
orarac03 | success | rc=0 >>
SUDO-SUCCESS-qwgrayprmpaivoybjpbbzjbpvawuhvgd

Disk /dev/sdg: 12.9 GB, 12884901888 bytes
Disk /dev/sdf: 12.9 GB, 12884901888 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 8589 MB, 8589934592 bytes
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
Disk /dev/sda: 17.2 GB, 17179869184 bytes
/dev/sda1 * 2 501 512000 83 Linux
/dev/sda2 502 16384 16264192 8e Linux LVM
Disk /dev/sdh: 12.9 GB, 12884901888 bytes
orarac04 | success | rc=0 >>
SUDO-SUCCESS-tmttmldrgahdvotssvqrynujjektvvck

Disk /dev/sdg: 12.9 GB, 12884901888 bytes
Disk /dev/sde: 10.7 GB, 10737418240 bytes
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
Disk /dev/sdc: 8589 MB, 8589934592 bytes
Disk /dev/sdb: 53.7 GB, 53687091200 bytes
Disk /dev/sda: 17.2 GB, 17179869184 bytes
/dev/sda1 * 2 501 512000 83 Linux
/dev/sda2 502 16384 16264192 8e Linux LVM
Disk /dev/sdh: 12.9 GB, 12884901888 bytes
Disk /dev/sdf: 12.9 GB, 12884901888 bytes

Setting up the inventory

The inventory is where you specify the hostgroup and the hosts that should be part of that group. The inventory location defaults to /etc/ansible/hosts but an inventory file can be placed anywhere and called via the ‘-i’ flag.
I’ve created an inventory directory inside the repo, which contains a file called lab (as I’m doing these tests in my lab).

The inventory file (inventory/lab) should contain

[orarac-dc2]  <-- Hostgroup name
orarac03          
orarac04
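
A quick way to verify that the inventory and the passwordless setup work as expected is an ad-hoc ping against the hostgroup (add -s to also verify that sudo works):

# Should return "pong" from both orarac03 and orarac04
ansible orarac-dc2 -i inventory/lab -m ping
ansible orarac-dc2 -i inventory/lab -m ping -s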

Cloning the repository

Then we clone the Github repo containing the code.

[miksan@ponderstibbons tmp]$ git clone http://github.com/oravirt/ansible-oracle.git
Cloning into 'ansible-oracle'...
remote: Counting objects: 507, done.
remote: Compressing objects: 100% (150/150), done.
remote: Total 507 (delta 75), reused 0 (delta 0)
Receiving objects: 100% (507/507), 165.26 KiB | 0 bytes/s, done.
Resolving deltas: 100% (178/178), done.
Checking connectivity... done
[miksan@ponderstibbons tmp]$
[miksan@ponderstibbons git]$ ls -ltr ansible-oracle/
total 96
-rw-rw-r--. 1 miksan miksan 4478 Sep 19 08:45 README.md
-rw-rw-r--. 1 miksan miksan 415 Sep 19 08:45 full-rac-install.yml
drwxrwxr-x. 5 miksan miksan 4096 Sep 19 08:45 common
-rwxrwxr-x. 1 miksan miksan 1092 Sep 19 08:45 clean.sh
drwxrwxr-x. 2 miksan miksan 4096 Sep 19 08:45 group_vars
drwxrwxr-x. 2 miksan miksan 4096 Sep 19 08:45 inventory
drwxrwxr-x. 2 miksan miksan 4096 Sep 19 08:45 host_vars
drwxrwxr-x. 5 miksan miksan 4096 Sep 19 08:45 oradb-create
drwxrwxr-x. 4 miksan miksan 4096 Sep 19 08:45 oraasm-createdg
drwxrwxr-x. 4 miksan miksan 4096 Sep 19 08:45 oraasm-configureasm
drwxrwxr-x. 3 miksan miksan 4096 Sep 19 08:45 oralsnr
drwxrwxr-x. 6 miksan miksan 4096 Sep 19 08:45 orahost-storage
drwxrwxr-x. 7 miksan miksan 4096 Sep 19 08:45 orahost
drwxrwxr-x. 5 miksan miksan 4096 Sep 19 08:45 oraswdb-install
drwxrwxr-x. 6 miksan miksan 4096 Sep 19 08:45 oraswgi-clone
drwxrwxr-x. 3 miksan miksan 4096 Sep 19 08:45 oraswgi-opatch
drwxrwxr-x. 5 miksan miksan 4096 Sep 19 08:45 oraswgi-install
-rw-rw-r--. 1 miksan miksan 382 Sep 19 08:45 single-instance-ha-install.yml
-rw-rw-r--. 1 miksan miksan 310 Sep 19 08:45 single-instance-fs-install.yml
-rw-rw-r--. 1 miksan miksan 9716 Sep 19 08:45 parameters
drwxrwxr-x. 5 miksan miksan 4096 Sep 19 08:45 oraswracdb-clone
[miksan@ponderstibbons git]$ 

host_vars

After the repo has been cloned, we need to configure host_vars & group_vars.

Whatever you put in host_vars is specific to a certain host.
Use your favourite editor and make the entries in host_vars look like this:

[miksan@ponderstibbons ansible-oracle]$ cat host_vars/orarac03
---
 master_node: true
[miksan@ponderstibbons ansible-oracle]$ cat host_vars/orarac04
---
 master_node: false

The settings above mean that the parts which only have to run on one node will be run on orarac03 and skipped on the other node(s).
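
Inside the roles this is implemented as a ‘when’ condition on the tasks that should only run once per cluster. A minimal sketch of such a task (the task name and script path are made up, just to show the mechanics):

# Runs on orarac03 only; skipped on hosts where master_node is false
- name: Run a one-time, cluster-wide step
  shell: /u01/stage/run_once_example.sh
  when: master_node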

group_vars

Parameters that you put in group_vars should be things that are the same for all nodes in the cluster, so all GI configuration, storage layout etc. Most of these parameters are also defined as defaults in each role, and putting them in group_vars overrides those defaults (a small example of that interaction follows after the listing below).

[miksan@ponderstibbons group_vars]$ cat orarac-dc2 
---
# Generic 
 hostgroup: orarac-dc2 <-- should match the name in the inventory/lab file 
 oracle_user: oracle 
 oracle_group: dba 
 oracle_user_home: "/home/{{ oracle_user }}"
 oracle_passwd: "$6$0xHoAXXF$K75HKb64Hcb/CEcr3YEj2LGERi/U2moJgsCK.ztGxLsKoaXc4UBiNZPL0hlxB5ng6GL.gyipfQOOXplzcdgvD0" <-- Oracle123
 oracle_sw_source_www: http://mywebserver/orasw 
 oracle_sw_source_local: /path/to/files/locally 
 is_sw_source_local: false <-- Meaning that all installation media is downloaded from 'oracle_sw_source_www'
 disable_numa_boot: true 
 percent_hugepages: 60 
 configure_interconnect: true 
 configure_ssh: true 
 configure_host_disks: true 
 configure_cluster: true
 
# Directory Structures 
 oracle_stage: /u01/stage 
 oracle_psu_stage: "{{ oracle_stage }}/psu" 
 oracle_rsp_stage: "{{ oracle_stage }}/rsp" 
 oracle_base: /u01/app/oracle 
 oracle_inventory_loc: /u01/app/oraInventory
# Grid Infrastructure install option
 oracle_install_option_gi: CRS_CONFIG <-- To install a cluster
 oracle_install_version: 12.1.0.2
# Software 
 oracle_sw_image_gi: 
   - { filename: linuxamd64_12102_grid_1of2.zip, version: 12.1.0.2 }
   - { filename: linuxamd64_12102_grid_2of2.zip, version: 12.1.0.2 }
  # - { filename: linuxamd64_12c_grid_1of2.zip, version: 12.1.0.1 } 
  # - { filename: linuxamd64_12c_grid_2of2.zip, version: 12.1.0.1 } 
 
 oracle_sw_image_db: # Installation media for the database installations 
   - { filename: linuxamd64_12102_database_1of2.zip, version: 12.1.0.2 } 
   - { filename: linuxamd64_12102_database_2of2.zip, version: 12.1.0.2 } 
  # - { filename: linuxamd64_12c_database_1of2.zip, version: 12.1.0.1 } 
  # - { filename: linuxamd64_12c_database_2of2.zip, version: 12.1.0.1 }
# Input for Grid Infrastructure responsefile
 oracle_password: Oracle123 
 oracle_scan: orarac-scan-dc2.discworld.lab 
 oracle_vip: -vip 
 oracle_scan_port: 1521 
 oracle_ic_net: 3.3.3.{{ ansible_all_ipv4_addresses[0].split(".")[-1] }}        
 oracle_asm_crs_diskgroup: crs
# ORACLE_HOMES & Databases to be installed
 oracle_databases: 
  - home: home1 
    oracle_edition: EE 
    oracle_db_name: racdb 
    oracle_db_passwd: Oracle123 
    oracle_db_type: RAC 
    is_container: "false" 
    is_racone: "false" 
    storage_type: ASM 
    service_name: racdb_serv 
    oracle_init_params: "open_cursors=300,processes=700" 
    oracle_db_mem_percent: 20 
    oracle_database_type: MULTIPURPOSE 
    oracle_version_db: 12.1.0.2 
  - home: home2
    oracle_edition: EE
    oracle_db_name: racone
    oracle_db_passwd: Oracle123
    oracle_db_type: RACONENODE
    is_container: "false"
    is_racone: "true"
    storage_type: ASM
    service_name: racone_serv
    oracle_init_params: "open_cursors=1000,processes=400"
    oracle_db_mem_percent: 10
    oracle_database_type: MULTIPURPOSE 
    oracle_version_db: 12.1.0.2
# Datafile & recovery file locations
 oracle_dbf_dir_asm: "DATA"        
 oracle_reco_dir_asm: "FRA"
# ASM Storage / FS Storage Layout
 host_fs_layout: 
   u01:
     {mntp: /u01, device: /dev/sdb, vgname: vgora, pvname: /dev/sdb1, lvname: lvora, fstype: ext4}
 asm_diskgroups: 
   - crs
   - data
   - fra
 asm_storage_layout:     
  crs:
    - {device: /dev/sdc, asmlabel: CRS01}
  data:
    - {device: /dev/sdd, asmlabel: DATA01}
    - {device: /dev/sde, asmlabel: DATA02}
  fra:
    - {device: /dev/sdf, asmlabel: FRA01}
    - {device: /dev/sdg, asmlabel: FRA02}
    - {device: /dev/sdh, asmlabel: FRA03}
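
As mentioned earlier, most of these parameters also exist as defaults inside each role, and the group_vars values take precedence. A hypothetical illustration (the default value below is made up; check each role’s defaults/main.yml for the real ones):

# orahost/defaults/main.yml (hypothetical excerpt)
configure_ssh: false   # role default
# group_vars/orarac-dc2 sets configure_ssh: true, which wins for this hostgroup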

The playbook

The playbook is where you decide in which order the different roles should be executed.

[miksan@ponderstibbons ansible-oracle]$ cat full-rac-install.yml
---

- name: Host configuration
  hosts: orarac-dc2
  user: ansible
  sudo: yes
  roles:
    - common
    - orahost
    - orahost-storage
- name: Oracle Grid Infrastructure installation, ASM Configuration & Database Creation
  hosts: orarac-dc2
  user: ansible
  sudo: yes
  sudo_user: oracle
  roles:
   - oraswgi-install
   - oraasm-createdg
   - oraswdb-install
   - oradb-create
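
Before actually running the playbook you can preview which tasks from which roles would be executed, and in what order, without touching the hosts:

# Lists the tasks per play/role in execution order; nothing is run on the nodes
ansible-playbook full-rac-install.yml -i inventory/lab --list-tasks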

So after all that it’s finally time to kick this thing off. I’ll be adding a few comments next to the tasks below.

I will not post the entire log as it’s pretty verbose (overly so in certain places, I’ll see if I can do anything about that). I also always throw in the ‘time’ command when running things, to be able to see how long the playbook takes to run. There is no way (at least yet) to have Ansible spit out timing information during the run, but if you configure logging to a file (configurable in /etc/ansible/ansible.cfg) you get the time information in the logfile.
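
The logging setup itself is just one line in ansible.cfg (the path is of course up to you):

# /etc/ansible/ansible.cfg
[defaults]
log_path = /var/log/ansible.log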

[miksan@ponderstibbons ansible-oracle]$ time ansible-playbook full-rac-install.yml -i inventory/lab

PLAY [Host configuration] ****************************************************

GATHERING FACTS **************************************************************
ok: [orarac04]
ok: [orarac03]

TASK: [common | Install EPEL Repo] ******************************************** <-- This means that the role common is running the task 'Install EPEL Repo'
ok: [orarac03]
ok: [orarac04]

TASK: [common | Get newest repo-file for OL6 (public-yum)] ******************** 
ok: [orarac03]
ok: [orarac04]

TASK: [common | Install common packages] ************************************** 
changed: [orarac04] => (item=screen,facter,procps,module-init-tools,ethtool,bc,bind-utils,nfs-utils,make,sysstat,openssh-clients,compat-libcap1,twm,collectl,rlwrap,tigervnc-server,ntp,expect,git)
changed: [orarac03] => (item=screen,facter,procps,module-init-tools,ethtool,bc,bind-utils,nfs-utils,make,sysstat,openssh-clients,compat-libcap1,twm,collectl,rlwrap,tigervnc-server,ntp,expect,git)
....... SKIP A BUNCH OF LINES

TASK: [orahost-storage | ASMlib | Run script to create asm-labels] ************ 
skipping: [orarac04] => (item=crs) <-- Skipping [orarac04] here means the task is only being performed on the other host [orarac03]. This is because master_node: true on orarac03 and master_node: false on orarac04, and the task has a conditional ("when: master_node") which is only true for orarac03
skipping: [orarac04] => (item=data)
skipping: [orarac04] => (item=fra)
changed: [orarac03] => (item=crs)
changed: [orarac03] => (item=data)
changed: [orarac03] => (item=fra)

...... SKIP A BUNCH OF LINES

TASK: [oradb-create | Check if database is registered] ************************ 
changed: [orarac03]
changed: [orarac04]

TASK: [oradb-create | debug var=srvctlconfig.stdout_lines] ******************** 
ok: [orarac03] => {
 "srvctlconfig.stdout_lines": [
 "racdb",     <-- And here we have our 2 databases, which was the goal.
 "racone"
 ]
}
ok: [orarac04] => {
 "srvctlconfig.stdout_lines": [
 "racdb", 
 "racone"
 ]
}

PLAY RECAP ******************************************************************** 
orarac03 : ok=102 changed=63 unreachable=0 failed=0 
orarac04 : ok=96 changed=45 unreachable=0 failed=0 


real 59m55.445s
user 0m8.348s
sys 0m5.683s
[miksan@ponderstibbons ansible-oracle]$

[miksan@ponderstibbons ansible-oracle]$ ansible orarac-dc2 -a "/u01/app/12.1.0.2/grid/bin/crsctl stat res -t" -s --limit orarac03
orarac03 | success | rc=0 >>
--------------------------------------------------------------------------------
Name      Target     State     Server        State details 
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.CRS.dg
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.DATA.dg
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.FRA.dg
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.LISTENER.lsnr
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.net1.network
          ONLINE    ONLINE     orarac03      STABLE
          ONLINE    ONLINE     orarac04      STABLE
ora.ons
         ONLINE     ONLINE     orarac03      STABLE
         ONLINE     ONLINE     orarac04      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
       1 ONLINE     ONLINE     orarac04      STABLE
ora.LISTENER_SCAN2.lsnr
       1 ONLINE     ONLINE     orarac03      STABLE
ora.MGMTLSNR
      1 ONLINE      ONLINE     orarac03      169.254.122.139 3.3.3.60,STABLE
ora.asm
      1 ONLINE      ONLINE     orarac03      STABLE
      2 ONLINE      ONLINE     orarac04      Started,STABLE
      3 OFFLINE     OFFLINE                  STABLE
ora.cvu
      1 ONLINE      ONLINE     orarac03      STABLE
ora.mgmtdb
      1 ONLINE      ONLINE     orarac03      Open,STABLE
ora.oc4j
      1 ONLINE      ONLINE     orarac03      STABLE
ora.orarac03.vip
      1 ONLINE      ONLINE     orarac03      STABLE
ora.orarac04.vip
      1 ONLINE      ONLINE     orarac04      STABLE
ora.racdb.db
      1 ONLINE      ONLINE     orarac03      Open,STABLE
      2 ONLINE      ONLINE     orarac04      Open,STABLE
ora.racone.db
      1 ONLINE      ONLINE     orarac03      Open,STABLE
ora.racone.racone_serv.svc
      1 ONLINE      ONLINE     orarac03      STABLE
ora.scan1.vip
      1 ONLINE      ONLINE     orarac04      STABLE
ora.scan2.vip
      1 ONLINE      ONLINE     orarac03      STABLE
--------------------------------------------------------------------------------

[miksan@ponderstibbons ansible-oracle]$ 

So, there it is. One fully configured RAC cluster with two databases, and I didn’t even have to log in to the cluster nodes to get there. Well, not quite, but close enough: I had to make sure that the correct disk devices were available before the mapping in host_fs_layout & asm_storage_layout could be done. After that, the actual GI/RAC installation was completely hands-off.

And the best part? It’s repeatable, and you can be sure it will look exactly the same every time.

So if anyone tries this, let me know how it goes. I’m sure there are places where I could have done a better job of explaining.

(In a future post I’ll also add the cluster to an Enterprise Manager config)
