As of version 1.3 it is possible to create a RAC Flex Cluster using ansible-oracle. From an ansible-oracle configuration perspective there is not a huge difference from a normal ‘standard’ cluster, basically just a few new parameters. There are other differences though, specifically in how you have to run the playbook and deal with the inventory configuration.
In a ‘standard’ cluster, you have your database cluster nodes and that’s (basically) it. In a Flex Cluster configuration there are two types of nodes:
- Hub nodes. These have access to the shared storage and will house your databases.
- Leaf nodes. These are connected to the interconnect network but not to the shared storage, so you could, for instance, run your application on these nodes.
Given that, a Flex Cluster presents a few challenges from an ansible-oracle perspective. With a normal cluster, you can just run the playbook against all your cluster nodes, as they are all the same. When building a Flex Cluster, however, a few things should only be done on the hub nodes (configuring shared storage, installing the database server and creating the database(s)).
And how do we do that? With a little Ansible inventory ninja-ism.
New Parameters
The parameters that have been added are these:
- oracle_gi_cluster_type: The type of cluster (standard/flex)
- oracle_gi_gns_subdomain: The subdomain that has been delegated to the GNS
- oracle_gi_gns_vip: The VIP address of the GNS
- hostgroup_hub: The inventory group containing the hub nodes. The default is “{{ hostgroup }}-hub”
- hostgroup_leaf: The inventory group containing the leaf nodes. The default is “{{ hostgroup }}-leaf”
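Put into the host group configuration (shown in full further down), the Flex-specific additions used in this post amount to just these three lines; hostgroup_hub and hostgroup_leaf are left at their defaults, which is why they don’t appear:

oracle_gi_cluster_type: flex
oracle_gi_gns_subdomain: gns.discworld.lab
oracle_gi_gns_vip: orarac-dc2-gns.gns.discworld.lab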
I’m not going to go into any sort of detail regarding the Grid Naming Service (GNS), as others have written about it already, and the documentation is here, but GNS is basically a way to have the Grid Infrastructure cluster provide names, IP addresses and name resolution for the cluster nodes, instead of relying solely on DNS. You can choose how much you want the GNS to handle, but at a minimum you need a static IP address for the GNS VIP, and the subdomain has to be delegated to it in your DNS.
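For reference, delegating the subdomain used in this post might look something like this in a BIND-style zone file. This is just a sketch; the IP address is made up, so use whatever static address you have reserved for the GNS VIP:

; in the discworld.lab zone: hand the GNS subdomain over to the cluster
gns.discworld.lab.                 IN NS   orarac-dc2-gns.gns.discworld.lab.
; glue record with the static GNS VIP (hypothetical address)
orarac-dc2-gns.gns.discworld.lab.  IN A    192.168.100.50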
Setting up the inventory
This time we’ll call the hostgroup orarac-flex. We also need to define which hosts should be hub nodes and which should be leaf nodes:
[orarac-flex-hub]        # <-- These are the hub nodes
orarac03
orarac04

[orarac-flex-leaf]       # <-- These are the leaf nodes
orarac06

[orarac-flex:children]   # <-- This is the hostgroup that should be specified in the configuration (i.e. hostgroup=orarac-flex)
orarac-flex-hub
orarac-flex-leaf
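A quick way to sanity-check the inventory is to ask Ansible which hosts each pattern resolves to:

[miksan@ponderstibbons ansible-oracle]$ ansible orarac-flex -i inventory/flex --list-hosts
[miksan@ponderstibbons ansible-oracle]$ ansible orarac-flex-hub -i inventory/flex --list-hosts

The first command should list all three nodes, the second only orarac03 and orarac04.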
When setting up a standard cluster, the nodes making up the GI cluster and the database cluster are usually the same, and during the installation the hostnames are picked up automatically by looping through the members of the hostgroup that you define with the parameter ‘hostgroup’. With a Flex Cluster, the members of the GI cluster are the hub nodes + the leaf nodes, but the database cluster consists only of the hub nodes, and that is why the inventory definition (and subsequently the playbook) has to look a bit different.
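Conceptually, the roles can derive the different node lists with Jinja2 expressions along these lines (a simplified sketch, not the actual ansible-oracle template code):

{# all GI cluster members: hub nodes + leaf nodes #}
{% for host in groups[hostgroup] %}{{ host }}{% if not loop.last %},{% endif %}{% endfor %}

{# database cluster members: hub nodes only #}
{% for host in groups[hostgroup_hub] %}{{ host }}{% if not loop.last %},{% endif %}{% endfor %}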
---
- name: Host configuration
  hosts: orarac-flex          # <-- Runs on all nodes (maps to orarac-flex:children)
  user: ansible
  sudo: yes
  roles:
    - common
    - orahost

- name: Storage Configuration
  hosts: orarac-flex-hub      # <-- Only runs on the HUB nodes (maps to orarac-flex-hub)
  user: ansible
  sudo: yes
  roles:
    - orahost-storage

- name: Oracle Grid Infrastructure installation
  hosts: orarac-flex          # <-- Runs on all nodes (maps to orarac-flex:children)
  user: ansible
  sudo: yes
  roles:
    - oraswgi-install

- name: ASM Configuration, Database Server Installation & Database Creation
  hosts: orarac-flex-hub      # <-- Only runs on the HUB nodes (maps to orarac-flex-hub)
  user: ansible
  sudo: yes
  roles:
    - oraasm-createdg
    - oraswdb-install
    - oradb-create
Note how we don’t have any plays that run only against the leaf nodes, but that could easily be added if you wanted to deploy an application to the leaf nodes after the cluster is configured.
But on the other hand, you’d probably do that in a different playbook.
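If you did want it in the same playbook, such a play would simply target the leaf group; something along these lines, where myapp is a hypothetical role:

- name: Application deployment
  hosts: orarac-flex-leaf     # <-- Only runs on the leaf nodes
  user: ansible
  sudo: yes
  roles:
    - myapp                   # hypothetical application role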
Host_group configuration
This is what the configuration should look like. The variables that differentiate a Flex Cluster from a Standard Cluster are oracle_gi_cluster_type, oracle_gi_gns_subdomain and oracle_gi_gns_vip; the rest is the same.
---
hostgroup: orarac-flex
role_separation: true
device_persistence: udev
configure_interconnect: true
configure_ssh: true
configure_host_disks: true
configure_cluster: true

################ Grid Infrastructure specifics ################
oracle_install_option_gi: CRS_CONFIG
oracle_install_version_gi: 12.1.0.2
oracle_gi_cluster_type: flex
oracle_gi_gns_subdomain: gns.discworld.lab
oracle_gi_gns_vip: orarac-dc2-gns.gns.discworld.lab
oracle_password: Oracle123
oracle_scan: orarac-scan-dc2.discworld.lab
oracle_vip: -vip
oracle_scan_port: 1521
oracle_asm_init_dg: crs

oracle_databases:
  - home: rachome1
    oracle_version_db: 12.1.0.2
    oracle_edition: EE
    oracle_db_name: racdba
    oracle_db_passwd: Oracle123
    oracle_db_type: RAC
    is_container: "false"
    pdb_prefix: pdb
    num_pdbs: 1
    is_racone: "false"
    storage_type: ASM
    service_name: racdb_serv
    oracle_init_params: "open_cursors=300,processes=500"
    oracle_db_mem_percent: 25
    oracle_database_type: MULTIPURPOSE
    redolog_size_in_mb: 100
    delete_db: false
    oracle_dbf_dir_asm: "DATA"
    oracle_reco_dir_asm: "FRA"

host_fs_layout:
  u01: {mntp: /u01, device: /dev/sdb, vgname: vgora, pvname: /dev/sdb1, lvname: lvora, fstype: ext4}

asm_diskgroups:
  - crs
  - data
  - fra

asm_storage_layout:
  crs:
    - {device: /dev/sdc, asmlabel: crs01}
  data:
    - {device: /dev/sdd, asmlabel: data01}
    - {device: /dev/sde, asmlabel: data02}
  fra:
    - {device: /dev/sdf, asmlabel: fra01}
    - {device: /dev/sdg, asmlabel: fra02}
    - {device: /dev/sdh, asmlabel: fra03}
Running the playbook
The thing to pay attention to below is that the different plays run against the different parts of the inventory that we configured previously.
[miksan@ponderstibbons ansible-oracle]$ time ansible-playbook full-rac-flex-install.yml -i inventory/flex

PLAY [Host configuration] *****************************************************

GATHERING FACTS ***************************************************************
ok: [orarac03]
ok: [orarac04]
ok: [orarac06]

TASK: [common | Install EPEL Repo] ********************************************
ok: [orarac06]
ok: [orarac03]
ok: [orarac04]

TASK: [common | Get newest repo-file for OL6 (public-yum)] ********************
ok: [orarac04]
ok: [orarac03]
ok: [orarac06]

TASK: [common | Install common packages] **************************************
ok: [orarac04] => (item=screen,facter,procps,module-init-tools,ethtool,bc,bind-utils,nfs-utils,make,sysstat,openssh-clients,compat-libcap1,twm,collectl,rlwrap,tigervnc-server,ntp,expect,git,lvm2,xfsprogs,btrfs-progs)
ok: [orarac03] => (item=screen,facter,procps,module-init-tools,ethtool,bc,bind-utils,nfs-utils,make,sysstat,openssh-clients,compat-libcap1,twm,collectl,rlwrap,tigervnc-server,ntp,expect,git,lvm2,xfsprogs,btrfs-progs)
ok: [orarac06] => (item=screen,facter,procps,module-init-tools,ethtool,bc,bind-utils,nfs-utils,make,sysstat,openssh-clients,compat-libcap1,twm,collectl,rlwrap,tigervnc-server,ntp,expect,git,lvm2,xfsprogs,btrfs-progs)

................. SKIP ............

PLAY [Storage Configuration] **************************************************

GATHERING FACTS ***************************************************************
ok: [orarac04]
ok: [orarac03]

TASK: [orahost-storage | ASMlib | Create device to label mappings for asm] ****
skipping: [orarac04] => (item=crs)
skipping: [orarac04] => (item=data)
skipping: [orarac04] => (item=fra)
changed: [orarac03] => (item=crs)
changed: [orarac03] => (item=data)
changed: [orarac03] => (item=fra)

............ SKIP ......

PLAY [Oracle Grid Infrastructure installation] ********************************

....... SKIP ....

ok: [orarac06] => {
    "opatchls.stdout_lines": [
        "Oracle Interim Patch Installer version 12.1.0.1.3",
        "Copyright (c) 2014, Oracle Corporation. All rights reserved.",
        "",
        "",
        "Oracle Home : /u01/app/12.1.0.2/grid",
        "Central Inventory : /u01/app/oraInventory",
        " from : /u01/app/12.1.0.2/grid/oraInst.loc",
        "OPatch version : 12.1.0.1.3",
        "OUI version : 12.1.0.2.0",
        "Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2014-11-26_13-05-48PM_1.log",
        "",
        "Lsinventory Output file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/lsinv/lsinventory2014-11-26_13-05-48PM.txt",
        "",
        "--------------------------------------------------------------------------------",
        "Installed Top-level Products (1): ",
        "",
        "Oracle Grid Infrastructure 12c 12.1.0.2.0",
        "There are 1 products installed in this Oracle Home.",
        "",
        "",
        "There are no Interim patches installed in this Oracle Home.",
        "",
        "",
        "Patch level status of Cluster nodes :",
        "",
        " Patching Level \t\t Nodes",
        " -------------- \t\t -----",
        " 0 \t\t orarac03,orarac04,orarac06",
        "",
        "--------------------------------------------------------------------------------",
        "",
        "OPatch succeeded."
    ]
}

TASK: [oraswgi-install | Check olsnodes (GI)] *********************************
changed: [orarac04]
changed: [orarac03]
changed: [orarac06]

TASK: [oraswgi-install | Check olsnodes (GI)] *********************************
ok: [orarac03] => {
    "olsnodes.stdout_lines": [
        "orarac03\t1\tActive\tHub\tUnpinned",
        "orarac04\t2\tActive\tHub\tUnpinned",
        "orarac06\t100\tActive\tLeaf\tUnpinned"
    ]
}
ok: [orarac06] => {
    "olsnodes.stdout_lines": [
        "orarac03\t1\tActive\tHub\tUnpinned",
        "orarac04\t2\tActive\tHub\tUnpinned",
        "orarac06\t100\tActive\tLeaf\tUnpinned"
    ]
}
ok: [orarac04] => {
    "olsnodes.stdout_lines": [
        "orarac03\t1\tActive\tHub\tUnpinned",
        "orarac04\t2\tActive\tHub\tUnpinned",
        "orarac06\t100\tActive\tLeaf\tUnpinned"
    ]
}

......... SKIP .....

TASK: [oradb-create | debug var=psout.stdout_lines] ***************************
ok: [orarac03] => {
    "psout.stdout_lines": [
        "oracle 24346 1 0 13:23 ? 00:00:00 ora_pmon_racdba1",
        "grid 41140 1 0 12:47 ? 00:00:00 asm_pmon_+ASM1",
        "grid 54745 1 0 12:59 ? 00:00:00 mdb_pmon_-MGMTDB"
    ]
}
ok: [orarac04] => {
    "psout.stdout_lines": [
        "grid 33032 1 0 12:53 ? 00:00:00 asm_pmon_+ASM2",
        "oracle 59452 1 0 13:23 ? 00:00:00 ora_pmon_racdba2"
    ]
}

PLAY RECAP ********************************************************************
orarac03                   : ok=122  changed=77   unreachable=0    failed=0
orarac04                   : ok=113  changed=55   unreachable=0    failed=0
orarac06                   : ok=61   changed=36   unreachable=0    failed=0

real    60m43.143s
user    0m13.592s
sys     0m8.279s
Ok, the playbook is finished and we got our database, but what kind of cluster did we end up with?
[miksan@ponderstibbons ansible-oracle]$ ansible orarac-flex -a "/u01/app/12.1.0.2/grid/bin/olsnodes -a -s -n -t" -i inventory/flex --limit orarac03 -s
orarac03 | success | rc=0 >>
orarac03        1       Active  Hub     Unpinned
orarac04        2       Active  Hub     Unpinned
orarac06        100     Active  Leaf    Unpinned

[miksan@ponderstibbons ansible-oracle]$ ansible orarac-flex -a "/u01/app/12.1.0.2/grid/bin/crsctl get node role config -all" -i inventory/flex --limit orarac03 -s
orarac03 | success | rc=0 >>
Node 'orarac03' configured role is 'hub'
Node 'orarac04' configured role is 'hub'
Node 'orarac06' configured role is 'leaf'

[miksan@ponderstibbons ansible-oracle]$ ansible orarac-flex-hub -m shell -a "source /home/grid/.profile_grid; export ORACLE_SID=+ASM1; asmcmd showclustermode" -i inventory/flex -s
orarac03 | success | rc=0 >>
ASM cluster : Flex mode enabled

orarac04 | success | rc=0 >>
ASM cluster : Flex mode enabled
And that’s it. One Flex Cluster configured and ready to use.