Automated Oracle RAC installation using Ansible (part 2)

This turned out to be a rather lengthy post, so consider yourself warned.

This is a description of the parameters available to all the roles. I will not go into a lot of detail, as they may change. For the most up-to-date description, look at the GitHub page.

Assumptions/limitations

I made a few assumptions upfront when creating the roles, and I fully intend to make all of them configurable, but for now:

  • The Oracle user only belongs to one group (dba). I'll add support for more groups later
  • ASMlib is used for the ASM disks. You might want to use udev or something else
  • Network interfaces are not bonded (Ansible facts are used to pick out information about eth0 & eth1)
  • Flex ASM is used
  • External redundancy for all diskgroups
  • Use of AL32UTF8/AL16UTF16 (this might not change)
  • Multipathing is not configured by the roles, so that is something you'd have to do yourself
  • Only admin-managed databases

Default paths

If you're performing a 12.1.0.2 Grid Infrastructure installation, the default path will be (both clustered and standalone):

/u01/app/12.1.0.2/grid (the path template is '/u01/app/{{ oracle_install_version }}/grid')

A 12.1.0.2 database installation will be in:

/u01/app/oracle/12.1.0.2/myhome1 (the path template is '{{ oracle_base }}/{{ oracle_databases.key.oracle_version_db }}/{{ oracle_databases.key }}')

oracle_databases is the dictionary describing the database installations. At the moment there can only be one database per home/key, so the key could just as well be the same as the oracle_db_name.
NOTE: The example above is not how you actually reference the dictionary; it is more of a meta-example.
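As a rough illustration (plain Python, not actual role code -- the role builds this path with Jinja templating), the home path for the myhome1 example would be derived like this:

```python
# Illustrative only: values match the example above.
oracle_base = "/u01/app/oracle"
oracle_databases = {"myhome1": {"oracle_version_db": "12.1.0.2"}}

key = "myhome1"
oracle_home = f"{oracle_base}/{oracle_databases[key]['oracle_version_db']}/{key}"
print(oracle_home)  # /u01/app/oracle/12.1.0.2/myhome1
```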

Various other stage-areas

/u01/stage/      - All installation media ends up here. Media is unpacked into <version>/database|grid|em etc
/u01/stage/rsp   - All responsefiles (for dbca, runInstaller, scripts etc) end up here
/u01/stage/psu   - All psu's will be staged here (psu-management is not implemented yet though)
/u01/app/oracle  - $ORACLE_BASE


Variables

Variables in Ansible can be placed in a variety of places: in the inventory (/etc/ansible/hosts by default), in <role>/defaults/main.yml, in <role>/vars/main.yml, and also in group_vars/<hostgroup> and host_vars/<hostname>.

Variables in <role>/defaults are used for any host using that specific role. They also have the lowest priority, so if you want a specific hostgroup or host to have a different value for any variable, you can set that variable in group_vars/<hostgroup> or host_vars/<host> and it will override the default value.
You can read all about variable precedence here.
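For example (hypothetical values), a role default can be overridden per hostgroup like this:

```yaml
# <role>/defaults/main.yml -- lowest priority
percent_hugepages: 60
---
# group_vars/orarac-dc1 -- overrides the role default for that hostgroup
percent_hugepages: 40
```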

The master_node variable should always be set to true, unless you're doing a RAC install. In the RAC case there are certain tasks that only have to be performed on one node in the cluster, e.g. dealing with shared storage and doing the actual installations (both GI and DB). So in the case of RAC you pick one node to be the 'master_node' (master_node: true) during the installations. All other nodes should be set to master_node: false.
NOTE: This variable has nothing to do with the concept of a resource master in the RAC cluster; it is just used when installing the cluster (or the databases).

It is also the default setting.

master_node: true
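For a three-node RAC cluster that could look like this (hypothetical host names):

```yaml
# host_vars/orarac01
master_node: true

# host_vars/orarac02 and host_vars/orarac03
master_node: false
```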

The oracle_ic_net parameter is the address that should be used as the interconnect address for that node. The default setting picks the last octet from the public IP to use for the cluster-interconnect IP (e.g. public: 192.168.0.51 -> ic: 3.3.3.51). This works well in a lab environment where it (maybe) doesn't really matter which address you use. In that case you can just put example A (from the description below) in group_vars/<hostgroup> and it will set up an interconnect address on eth1.

If you have specific ip’s that you want to use for your nodes you can just put the variable in host_vars/<node> (for each node) and specify the ip that way.

Note: At the moment eth1 is hardcoded as the interconnect interface in the template and in the task. I intend to make this more dynamic in the future.

A: oracle_ic_net: 3.3.3.{{ ansible_all_ipv4_addresses[0].split(".")[-1] }} # Good for labs, goes into group_vars
B: oracle_ic_net: 172.43.12.123  # Maybe more production like, goes into host_vars with a specific IP per host
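The Jinja expression in example A just grabs the last octet of the first IPv4 address. In plain Python (illustrative; the actual value comes from the ansible_all_ipv4_addresses fact), the same logic is:

```python
# What oracle_ic_net does in example A, expressed as plain Python.
# The public IP below is a made-up example value.
public_ip = "192.168.0.51"
last_octet = public_ip.split(".")[-1]
oracle_ic_net = f"3.3.3.{last_octet}"
print(oracle_ic_net)  # 3.3.3.51
```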

The following variables are (I think) pretty self-explanatory. There are a few parameters that are specific to a RAC install:

  • configure_cluster – If this is set to false, it'll short-circuit a few things that are only really necessary in a cluster config, like pulling facts about the interconnect interface (eth1) and setting up ssh between hosts
  • configure_interconnect – If you want to configure the interconnect yourself, set this to false.
  • configure_ssh – Sets up passwordless ssh between the hosts. This can be configured by the role, but if you want to use your own keys etc, set this to false and configure it yourself. If not, the role generates unique keys per hostgroup and sets up authorized_keys & known_hosts
  • hostgroup – This should be set to the hostgroup you're running the play against. Ansible uses this to dynamically pick up the hosts that make up the cluster.
hostgroup: orarac-dc1                       # Inventory group-alias. This is used to populate cluster nodes for GI & RAC/RAC ONE DB installation
oracle_user: oracle                         # User that will own the Oracle Installations. No support for separation of duties yet
oracle_passwd: "yourencryptedpasswd"        # Encrypted password for oracle_user
oracle_user_home: "/home/{{ oracle_user }}" # Home directory for oracle_user. Needed for passing in ssh-keys, profiles etc
oracle_group: dba                           # Primary group for oracle_user.
oracle_user_id: 600                         # uid for oracle_user
oracle_group_id: 600                        # gid for oracle_users's primary group
oracle_sw_source_www: http://webserver/orasw  # address to all software if using the get_url module when putting software on the host(s)
oracle_sw_source_local: /path/to/files/locally # Path to all software if using the copy module when putting software on the host(s). This is the default.
is_sw_source_local: false                   # (true/false). true -> all software is available on the controlmachine. false -> Available on web-server
disable_numa_boot: true                     # (true/false) Sets numa=off in grub.conf.
percent_hugepages: 60                       # Percentage (0-100) of total RAM to dedicate to hugepages, Configures vm.nr_hugepages & memlock settings in /etc/security/limits.conf.
configure_interconnect: true                # (true/false) Should the interconnect network be configured by Ansible (IP-address
                                              configurable via the oracle_ic_net variable). Only applicable to RAC-installs
configure_ssh: true                         # (true/false). Should passwordless ssh be configured between clusternodes. Only applicable to RAC-installs
configure_host_disks: true                  # (true/false). Should the specified directories be on their own devices -> (true), or do they live in the
                                              root-filesystem (/) -> (false). Mountpoints are described in host_fs_layout
configure_cluster: true

These are just basic directory structures that are used during the installations

oracle_stage: /u01/stage                    # All software is staged here
oracle_psu_stage: "{{ oracle_stage }}/psu"  # All psu's are staged here
oracle_rsp_stage: "{{ oracle_stage }}/rsp"  # All responsefiles are staged here (gi,db,dbca,asmca etc).
oracle_base: /u01/app/oracle                # Oracle Base for installations
oracle_inventory_loc: /u01/app/oraInventory # Oracle Inventory location

The following parameters are valid when installing Grid Infrastructure.

oracle_install_option_gi: CRS_CONFIG  # Valid (tested) values are CRS_CONFIG, HA_CONFIG. Valid (untested) values are UPGRADE, CRS_SWONLY
oracle_install_version: 12.1.0.2      # The version of GI you want to install

Installation media

Installation media and which versions to use for the GI installation. The version is important since it is used when unpacking the software, so that the files end up in the correct location: {{ oracle_stage }}/{{ version }}/grid

oracle_sw_image_gi:          # Files containing the installation media for Grid Infrastructure
     - { filename: linuxamd64_12102_grid_1of2.zip, version: 12.1.0.2 }
     - { filename: linuxamd64_12102_grid_2of2.zip, version: 12.1.0.2 }
     #- { filename: linuxamd64_12c_grid_1of2.zip, version: 12.1.0.1 }
     #- { filename: linuxamd64_12c_grid_2of2.zip, version: 12.1.0.1 }

Installation files and which versions for the database installations. The version is important since it is used when unpacking the software, so that the files end up in the correct location. When unpacking the database installation files they end up in a 'database' directory by default, so they could overwrite each other. That's why I'm using the version as well -> the files end up in <version>/database

oracle_sw_image_db: 
      - { filename: linuxamd64_12102_database_1of2.zip, version: 12.1.0.2 }
      - { filename: linuxamd64_12102_database_2of2.zip, version: 12.1.0.2 }
      #- { filename: linuxamd64_12c_database_1of2.zip, version: 12.1.0.1 }
      #- { filename: linuxamd64_12c_database_2of2.zip, version: 12.1.0.1 }

Grid Infrastructure specifics

Cluster nodes are automatically picked up via the hostgroup variable (at the top). So let's say you have an inventory file that looks like this:

[orarac-dc1]
orarac01
orarac02
orarac03

The template for discovering cluster members is (where host & ansible_domainname are Ansible facts):

{{ host }}.{{ ansible_domainname }}:{{ host }}{{ oracle_vip }}.{{ ansible_domainname }}

So with the help of some Jinja template magic, the result would be:

orarac01.example.com:orarac01-vip.example.com, orarac02.example.com:orarac02-vip.example.com, orarac03.example.com:orarac03-vip.example.com
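In plain Python terms (illustrative, not the actual Jinja loop), the expansion is equivalent to:

```python
# Equivalent of the Jinja template above, using the example inventory.
hosts = ["orarac01", "orarac02", "orarac03"]
domain = "example.com"   # the ansible_domainname fact
oracle_vip = "-vip"      # VIP suffix, as set further down

cluster_nodes = ", ".join(
    f"{h}.{domain}:{h}{oracle_vip}.{domain}" for h in hosts
)
print(cluster_nodes)
```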

These are variables that should be set for the GI installation.

oracle_password: Oracle123                    # Password used for all Grid Infrastructure related resources (e.g ASM)
oracle_scan: your.scan.address                # Scan address for Clustered Grid Infrastructure
oracle_vip: -vip                              # Suffix for the host VIP-addresses (e.g orarac01-vip)
oracle_scan_port: 1521                        # Listener port for scan-listener
oracle_ic_net: 3.3.3.{{ ansible_all_ipv4_addresses[0].split(".")[-1] }} # Picks the last octet from the public ip to use for cluster-interconnect ip (e.g 3.3.3.51)
oracle_cluster_mgmdb: true                    # Should the Grid Infrastructure Management database be created
oracle_asm_crs_diskgroup: crs                 # The diskgroup that gets created initially when creating a cluster (contains voting/ocr/mgmnt db etc) Also valid when setting up Stand Alone Grid Infrastructure. The template references the dictionary asm_storage_layout and picks up all disks that are part of the diskgroup.

The following dictionary (oracle_databases) describes the actual database installations. The way it is built at the moment, each database gets its own ORACLE_HOME. So if you want to add another database, you add another key (e.g. myhome3) and add the variables that make up the actual database. The home will then go into /u01/app/oracle/<version>/<key>

oracle_databases:                    
    myhome1:                           # 'Last' directory in ORACLE_HOME path (e.g /u01/app/oracle/12.1.0.2/myhome1). This is also the 'key' into the dictionary
         oracle_edition: EE            # The edition of database-server (EE,SE,SEONE)
         oracle_db_name: racdb         # Database name
         oracle_db_passwd: Oracle123   # Passwords (sys/system/dbsnmp etc)
         oracle_db_type: RAC           # Type of database (RAC,RACONENODE,SI)
         is_container: "false"         # (true/false) Is the database a container database (not yet implemented)
         is_racone: "false"            # (true/false) Is the database a RAC ONE NODE Database
         storage_type: ASM             # Database storage to be used. ASM or FS. 
         servicename: racdb_serv       # Initial service to be created (not in use yet)
         oracle_init_params: "open_cursors=300,processes=700" # Specific parameters to be set during installation. Comma-separated list
         oracle_db_mem_percent: 30     # Amount of RAM to be used for SGA
         oracle_database_type:         # MULTIPURPOSE|DATA_WAREHOUSING|OLTP
         oracle_version_db: 12.1.0.2   # Oracle version to be installed
    myhome2:
         oracle_db_name: racone
         oracle_db_passwd: Oracle123
         oracle_db_type: RACONENODE
         is_container: "false"
         is_racone: "true"
         storage_type: ASM
         servicename: racone_serv
         oracle_init_params: "open_cursors=1000,processes=400"
         oracle_db_mem_percent: 20
         oracle_database_type: MULTIPURPOSE 
         oracle_version_db: 12.1.0.2
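So adding a third database would just mean adding another key. A hypothetical example (all values made up):

```yaml
    myhome3:                           # Home becomes /u01/app/oracle/12.1.0.2/myhome3
         oracle_edition: EE
         oracle_db_name: testdb
         oracle_db_passwd: Oracle123
         oracle_db_type: SI
         is_container: "false"
         is_racone: "false"
         storage_type: FS
         servicename: testdb_serv
         oracle_init_params: "open_cursors=300,processes=300"
         oracle_db_mem_percent: 10
         oracle_database_type: OLTP
         oracle_version_db: 12.1.0.2
```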

The following are just defaults for where the database files & recovery files are placed.

oracle_dbf_dir_fs: "/u01/oradata/"                            # If storage_type=FS this is where the database is placed.
oracle_reco_dir_fs: "/u01/fra/"                               # If storage_type=FS this is where the fast recovery area is placed
oracle_dbf_dir_asm: "DATA"                                   # If storage_type=ASM this is where the database is placed.
oracle_reco_dir_asm: "FRA"                                   # If storage_type=ASM this is where the fast recovery area is placed

Filesystem layout

Sets up filesystem(s) on your host. This is where the mapping between device and filesystem is described. At the moment there can only be one device per vg/lv. I might change this in the future.

This is also where you set up the ‘root’ directory for your Oracle installations, so even if you’re not going to put each directory on its own device, this is where you specify the path (mntp: /blergh)

host_fs_layout:
   u01:
     {mntp: /u01, device: /dev/sdb, vgname: vgora, pvname: /dev/sdb1, lvname: lvora}
   u02:
     {mntp: /u02, device: /dev/sdx, vgname: vgoradb, pvname: /dev/sdx1, lvname: lvoradb}

ASM Storage

This is where you specify the mapping between physical device and asm-disk.

asm_diskgroups is used for DB storage and should map to the dict asm_storage_layout. If you want to add another diskgroup (or change the names) you need to change both asm_diskgroups and asm_storage_layout.

asm_diskgroups:
   - crs
   - data
   - fra

Mapping between device & ASMlib label. This dictionary may be expanded later on to add support for failgroups, but I haven't decided yet.

asm_storage_layout: 
  crs:
    - {device: /dev/sdc, asmlabel: CRS01}
  data:
    - {device: /dev/sdd, asmlabel: DATA01}
    - {device: /dev/sde, asmlabel: DATA02}
  fra:
    - {device: /dev/sdf, asmlabel: FRA01}
    - {device: /dev/sdg, asmlabel: FRA02}
    - {device: /dev/sdh, asmlabel: FRA03}

Packages & Kernel Parameters

These are the packages that get installed and the kernel parameters that get set. I intend to make sure shmall/shmmax get calculated from what ansible_facts returns, to make it more dynamic, but I haven't had time yet.

oracle_packages:
 - libselinux-python
 - procps
 - module-init-tools
 - ethtool
 - bc
 - bind-utils
 - nfs-utils
 - util-linux-ng
 - xorg-x11-utils
 - xorg-x11-xauth
 - binutils
 - compat-libstdc++-33
 - gcc
 - gcc-c++
 - glibc
 - glibc-devel
 - libaio
 - libaio-devel
 - libgcc
 - libstdc++
 - libstdc++-devel
 - make
 - sysstat
 - openssh-clients
 - compat-libcap1
 - twm
 - collectl
 - rlwrap
 - kmod-oracleasm
 - "{{ asmlib_rpm }}"
 - oracleasm-support
 - xorg-x11-xinit
 - libXtst
 - xdpyinfo
 - xterm
 - xsetroot
 - libXp
 - libXt
 - ntp
 
oracle_sysconfig:
 - { name: kernel.shmall, value: 4294967296 }
 - { name: kernel.shmmax, value: 68719476736 }
 - { name: kernel.shmmni, value: 4096 }
 - { name: kernel.sem, value: "250 32000 100 128" }
 - { name: fs.file-max, value: 6815744 }
 - { name: fs.aio-max-nr, value: 1048576 }
 - { name: net.ipv4.ip_local_port_range, value: "9000 65500" }
 - { name: net.core.rmem_default, value: 262144 }
 - { name: net.core.rmem_max, value: 4194304 }
 - { name: net.core.wmem_default, value: 262144 }
 - { name: net.core.wmem_max, value: 1048576 }
 - { name: vm.nr_hugepages, value: "{{ (((percent_hugepages/100) * ansible_memtotal_mb)/2)|round|int }}" }
oracle_seclimits:
 - "{{ oracle_user }} soft nproc 2047"
 - "{{ oracle_user }} hard nproc 16384"
 - "{{ oracle_user }} soft nofile 1024"
 - "{{ oracle_user }} hard nofile 65536"
 - "{{ oracle_user }} soft memlock {{ ((0.9 * ansible_memtotal_mb)*1024)|round|int }}"
 - "{{ oracle_user }} hard memlock {{ ((0.9 * ansible_memtotal_mb)*1024)|round|int }}"
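The two derived values (vm.nr_hugepages and the memlock limit) come from the same facts. As plain Python (illustrative; the 16384 MB value is made up, and the division by 2 assumes the standard 2 MB hugepage size):

```python
# Illustrative recomputation of the templated values above.
# ansible_memtotal_mb is an Ansible fact; 16384 MB is an example value.
percent_hugepages = 60
ansible_memtotal_mb = 16384

# vm.nr_hugepages: RAM dedicated to hugepages, divided by the 2 MB page size
nr_hugepages = round(((percent_hugepages / 100) * ansible_memtotal_mb) / 2)

# memlock limit (in KB): 90% of total RAM
memlock_kb = round(0.9 * ansible_memtotal_mb * 1024)

print(nr_hugepages, memlock_kb)
```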

So, there you have it, lots of variables. If you have any questions, just ask and I’ll do my best to answer them.

In the next post we’ll walk through an actual installation.
