Using packer to build a Vagrant box

Background

So, Oracle Linux 7.4 was just released. Previously, when a new version was released, I downloaded it and installed it manually to see what the fuss was about (and to make sure you could actually install Oracle on it).

These days I tend to use Vagrant (as written about previously). But how do you build a Vagrant box?
You could of course do it the manual way: click through the installation of the OS, customize it to your heart’s desire and finally export the VM as a Vagrant box. That is fine, but as with all manual tasks, it gets tedious after a while.
I also tend to make the box look sort of the same every time (with the same extra packages installed and whatnot), so how do you easily automate this process?

Enter Packer, a tool to ‘Build Automated Machine Images’, as the slogan goes. Packer is built by Hashicorp (who also built Vagrant, Terraform and a bunch of other awesome tools), and it helps you create identical images for a variety of different platforms using the same source configuration.

The source configuration in this case is a JSON file, which describes the builder (VirtualBox, VMware, Amazon EC2 etc.) and also supports different types of provisioners that can be called (shell, Ansible, Chef, Puppet, PowerShell) to further customize the image.
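As a sketch of such a template (the ISO URL, checksum and script path below are placeholders, not the actual build configuration), a minimal setup for building a VirtualBox-based Vagrant box could look like this:

```json
{
  "builders": [
    {
      "type": "virtualbox-iso",
      "iso_url": "http://example.com/OracleLinux-R7-U4-Server-x86_64-dvd.iso",
      "iso_checksum_type": "sha256",
      "iso_checksum": "<checksum of the iso>",
      "guest_os_type": "Oracle_64",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "shutdown_command": "sudo shutdown -P now"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "scripts/extra-packages.sh"
    }
  ],
  "post-processors": [
    {
      "type": "vagrant",
      "output": "ol74.box"
    }
  ]
}
```

The builder installs the OS from the ISO, the shell provisioner customizes the resulting image, and the vagrant post-processor packages it all up as a box.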


Vagrant? Again? …Really?

(Yes. And Ansible. And Oracle…)

TL;DR. This is the repo I’ll be talking about. It can use Ansible to provision Oracle (SI/RAC), or it can just provision your infrastructure.

Like I’ve said before, I use Vagrant quite a lot, and I basically have two configs that I use every time. One uses an external ‘hosts.yml’ to define the hosts (IP, RAM etc.) and works really well for single instance type VMs (be it for Oracle or something else).
Then I also had the config that was prepared for running RAC (based on the same configuration as we used for this).
It has a statically defined DNS setup (using bind), creates shared disks and whatnot, and it’s been working fine for a long time.
What bugged me a little about the RAC config was:
1. All IPs and the DNS setup were already defined, so if I wanted to set up two clusters I had to create another Vagrant config and hack the DNS config to use different IPs and hostnames. Not a huge problem, but it bugged me a little.
2. All shared disks were created with the same size. If you wanted different sized disks there were ways around it: you could set the size, run ‘vagrant up’, shut down the VM, change the config to alter the size of the disks, run ‘vagrant up’ again and repeat until you had the config you wanted.
Again, not a huge problem, but I felt like it could be done differently.


Vagrant + Ansible + Oracle

So, I finally got my act together and created the repositories I’ve been meaning to create for ages, to automatically spin up a VM running Oracle.

They are:

and use Vagrant to provision the machine, and then Ansible to automatically provision Oracle.

The readmes for each repository should (hopefully) be enough to get going, but in short these are the steps required:

  • Clone the repositories
  • Download the 12.2 binaries and place them in the <reponame>/swrepo directory
  • vagrant up

This will (by default) give you a VM with:

  • Oracle Linux 7.3
  • Single instance 12.2.0.1 (cdb + 1 pdb)
  • Storage on either FS or ASM

If you want to test a different combination of OS version and Oracle version, just follow the instructions in the readme.

The Vagrant boxes are the same ones I talked about in this post.

If you decide to try this and have problems or just have questions, just ask here or open an issue for the corresponding repository.

Oracle Linux Vagrant boxes

I use Vagrant a lot. It is an awesome tool for quickly spinning up a local VM for some testing.

All my boxes are stored on Hashicorp’s Vagrant Cloud.

I try to create one box per Oracle Linux release (starting with 6.5) and I create the box as soon as a new version is released. I use Packer to create the boxes, which makes it a really painless exercise. I’ll describe that process in a later blog post.

Vagrant supports VirtualBox by default, but also a lot of other providers (VMware, AWS EC2 etc.), and I build all my boxes for VirtualBox.

The boxes all come with the Oracle pre-req packages installed and a couple of other nice-to-have packages, Ansible among them. I then usually use this to do the Oracle installations.

As for naming, I use the following standard: ol<releasenumber>
e.g. ol68, ol72 etc.
In Vagrant Cloud you can have different versions of a specific box, and I set the version number to the date the box was created (e.g. 20170326). That makes it easy to see how up-to-date a specific box is. I usually don’t update a box unless there is something I missed adding.

So how can you use these boxes?

Vagrant uses a ‘Vagrantfile’ to describe the VM you want to create (in terms of IPs, number of cores, amount of RAM etc.), and in its simplest form you can let Vagrant create that file for you by running the ‘vagrant init <boxname>’ command. So if you wanted to create a VM with my latest ol73 box, you’d do this:

miksan-macbook-pro:vagrant miksan$ vagrant init oravirt/ol73
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
miksan-macbook-pro:vagrant miksan$ grep oravirt Vagrantfile

  config.vm.box = "oravirt/ol73" <-- This tells Vagrant which box to use

The above will not have any customizations in terms of CPUs etc.
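If you do want to customize it, a few extra lines in the generated Vagrantfile are enough. A sketch (the hostname, IP and resource values here are just examples):

```ruby
# Sketch of a customized Vagrantfile (example values)
Vagrant.configure("2") do |config|
  config.vm.box = "oravirt/ol73"
  config.vm.hostname = "oradb01"
  # Give the VM a host-only IP
  config.vm.network "private_network", ip: "192.168.56.10"
  # VirtualBox-specific settings: CPUs and RAM
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = 4096
  end
end
```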
For more customized configurations I have two GitHub repos containing Vagrant configurations, one for a RAC config and one for a single instance config. I’ll go through them in more detail in a later blog post.
They use two different ways of customizing the configuration: vbox-si uses an external file called hosts.yml, where you specify the IP, name etc. of the VM(s) you want to create.
Vbox-rac is based on the work Alvaro Miranda and I did for this, and in that case the customization is done in the actual Vagrantfile.
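To give an idea of the hosts.yml approach (the keys and values here are hypothetical — check the repo’s readme for the actual format), it is essentially a map of the VM(s) you want:

```yaml
# Hypothetical hosts.yml sketch: one entry per VM
oradb01:
  ip: 192.168.56.10
  ram: 4096
  cpus: 2
oradb02:
  ip: 192.168.56.11
  ram: 4096
  cpus: 2
```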

https://github.com/oravirt/vagrant-vbox-si
https://github.com/oravirt/vagrant-vbox-rac

Note:
https://github.com/oravirt/vagrant-vbox-si is deprecated and has been replaced by these 2 repositories:
https://github.com/oravirt/vagrant-vbox-si-fs
https://github.com/oravirt/vagrant-vbox-si-asm


These are the boxes I currently have available. They also include some CentOS boxes.

Vagrant up!

racattack, meet ansible-oracle!

A while back I was approached by Jeremy Schneider, one of the original contributors to the racattack project, who wanted to know if I was interested in integrating ansible-oracle with the RAC Attack automation project. Of course I was!

The idea was to provide a completely hands off installation of an Oracle RAC cluster, from VM creation to a fully configured cluster, and that is what we’re happy to be able to provide.

So Alvaro Miranda and I have been working on getting racattack-ansible-oracle going. Alvaro wrote the original packer/vagrant code for the Vagrant version of Racattack and then we’ve integrated ansible-oracle with that.

The actual integration worked straight away, so we’ve been weeding out edge cases and making it as easy and flexible to use as possible.
As of now it is possible to install 11.2.0.4, 12.1.0.1 & 12.1.0.2, which are the releases currently supported by ansible-oracle. As more releases are supported by ansible-oracle, they will also work with racattack-ansible-oracle.


Creating a RAC Flex Cluster using ansible-oracle

As of version 1.3 it is possible to create a RAC Flex Cluster using ansible-oracle. From an ansible-oracle configuration perspective there is not a huge difference from a normal ‘standard’ cluster, basically just a few new parameters. There are other differences though, specifically in how you have to run the playbook and deal with the inventory configuration.
In a ‘standard’ cluster, you (basically) just have your database cluster nodes. In a Flex Cluster configuration there are two types of nodes:

  • Hub nodes. These have access to the shared storage and will house your databases
  • Leaf nodes. These nodes are connected to the interconnect network, but not to the shared storage. So for instance, you could run your application on these nodes.

Given that, from an ansible-oracle perspective, a Flex Cluster presents a few challenges. With a normal cluster, you can just run the playbook against all your cluster nodes, as they are all the same.
When building a Flex Cluster, however, a few things should only be done on the hub nodes (configuring shared storage, installing the database server and creating the database(s)).
And how do we do that? With a little Ansible inventory ninja-ism.
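As a sketch of that inventory ninja-ism (the group and host names here are made up for illustration), the inventory can split the node types into groups, so that the storage and database plays only target the hub nodes:

```ini
# Hypothetical Ansible inventory sketch for a Flex Cluster
[hub-nodes]
racnode1
racnode2

[leaf-nodes]
racnode3
racnode4

# All cluster nodes, for the plays that run everywhere
[rac-cluster:children]
hub-nodes
leaf-nodes
```

Cluster-wide roles can then run against rac-cluster, while the shared storage and database roles are limited to hub-nodes (e.g. via hosts: or --limit).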


Changes in ansible-oracle v1.2

This is just a quick heads-up of a change that is coming, which unfortunately will not be backwards compatible.

As of version 1.2 of ansible-oracle, it is possible to have more than one database running out of an ORACLE_HOME. To make this possible, a change had to be made to the structure and code that deal with installation of the db software and db creation.
The dictionary structure oracle_databases had to be changed to a list structure, which means that any config involving the ‘oracle_databases’ structure made before v1.2 will stop working.
The affected roles are:

  • oraswdb-install
  • oradb-create

The defaults will be changed as of the release, but any custom config you have done will have to be changed.

So what you need to do is this:

In your config the following needs to be changed from this structure:

 oracle_databases: 
    racdb: 
       oracle_version_db: 12.1.0.2
       oracle_edition: EE
       oracle_db_name: racdb
       oracle_db_passwd: Oracle123
       oracle_db_type: RAC
       is_container: "false"
       pdb_prefix: racpdb
       num_pdbs: 2
       is_racone: "false"
       storage_type: ASM
       service_name: racdb_serv
       oracle_init_params: "open_cursors=300,processes=700"
       oracle_db_mem_percent: 30
       oracle_database_type: MULTIPURPOSE
       redolog_size_in_mb: 100
       delete_db: false

to this structure:

oracle_databases: 
   - home: racdb 
     oracle_version_db: 12.1.0.2
     oracle_edition: EE
     oracle_db_name: racdb
     oracle_db_passwd: Oracle123
     oracle_db_type: RAC
     is_container: "false"
     pdb_prefix: racpdb
     num_pdbs: 2
     is_racone: "false"
     storage_type: ASM
     service_name: racdb_serv
     oracle_init_params: "open_cursors=300,processes=700"
     oracle_db_mem_percent: 30
     oracle_database_type: MULTIPURPOSE
     redolog_size_in_mb: 100
     delete_db: false

There is a new parameter, ‘home‘, which marks the ‘ending’ directory of the ORACLE_HOME, e.g. /u01/app/oracle/12.1.0.2/racdb.

Then you have to line up the rest of the parameters with the ‘home’ parameter. YAML is indentation-sensitive, so correct indentation is important.

And if you want to add more databases from the same home, and perhaps add a new home with a different (or the same) version, this is what it should look like:

oracle_databases: 
   - home: racdb <-- original home
     oracle_version_db: 12.1.0.2
     oracle_edition: EE
     oracle_db_name: racdba <-- First database
     oracle_db_passwd: Oracle123
     oracle_db_type: RAC
     is_container: "false"
     pdb_prefix: racpdb
     num_pdbs: 2
     is_racone: "false"
     storage_type: ASM
     service_name: racdba_serv
     oracle_init_params: "open_cursors=300,processes=700"
     oracle_db_mem_percent: 30
     oracle_database_type: MULTIPURPOSE
     redolog_size_in_mb: 100
     delete_db: false

   - home: racdb <-- original home
     oracle_version_db: 12.1.0.2 
     oracle_edition: EE
     oracle_db_name: racdbb <-- second database using the original home
     oracle_db_passwd: Oracle123
     oracle_db_type: RAC
     is_container: "false"
     pdb_prefix: racpdb
     num_pdbs: 2
     is_racone: "false"
     storage_type: ASM
     service_name: racdbb_serv
     oracle_init_params: "open_cursors=300,processes=700"
     oracle_db_mem_percent: 30
     oracle_database_type: MULTIPURPOSE
     redolog_size_in_mb: 100
     delete_db: false

   - home:  blehome <-- new (second) home
     oracle_version_db: 11.2.0.4 <-- new version
     oracle_edition: EE
     oracle_db_name: bledb <-- new db (from the new home)
     oracle_db_passwd: Oracle123
     oracle_db_type: RAC
     is_container: "false"
     pdb_prefix: racpdb
     num_pdbs: 2
     is_racone: "false"
     storage_type: ASM
     service_name: ble_serv
     oracle_init_params: "open_cursors=300,processes=700"
     oracle_db_mem_percent: 30
     oracle_database_type: MULTIPURPOSE
     redolog_size_in_mb: 100
     delete_db: false

And that’s pretty much it.

Let me know if anything else is broken.

ansible-oracle, the RAC edition

First off, I’ll be setting up a page where I intend to have a complete list of the parameters used in all roles and a description of what they actually do. Up until now I’ve kept this in a file in the GitHub repo, but I think it will be easier to maintain this way. The page is empty at the moment, but I’ll try to add to it as soon as possible.

Now, this is the third post on how to get started with ansible-oracle. You can find the other two posts here and here.

This time we will perform a RAC installation, and we will also introduce a few other options we haven’t explored before (mostly because they were added recently).
We will be introducing a concept known as GI role separation, which means that the user grid will own the Grid Infrastructure installation (including ASM), and the user oracle will own the database installation.
The default is false, meaning the user oracle owns/runs everything (well, except for the parts that run as root).

We will be creating a container database including 2 pluggable databases.

The installation will be performed over NFS this time, using a central repository where all the media is located, instead of copying the files to the installation hosts.

So in summary, this is what we will try to accomplish:

  • Use group_vars & host_vars to alter the default values
  • Do the installation over NFS
  • Set up RAC using role separation
  • Create a container database with 2 pluggable databases


Ansible-oracle, the next step

So, I’ve been adding support for GI role separation to the roles, which is why it has taken so long to get this post out.

In the last post we created a single instance database with database storage on a filesystem. This time we’re going to take it a step further and create a single instance database, but now we’re going to use ASM for storage. This means we also have to install the Grid Infrastructure in a standalone configuration, so we’re adding a few more roles to the playbook.
We’re also going to deploy this configuration on two machines in parallel (oradb01, oradb02).

We’re also going to deviate from the default ‘role configuration’, i.e. not rely entirely on the variable values in defaults/main.yml. You could of course change the defaults so they better suit your needs and just rely on them, but that limits your options (unless you only have one system to deal with).
The easiest way to override the defaults is to ‘move’ the parameters to a higher priority location, i.e. group_vars or host_vars. In this example we’re going to put our ‘host-group’ specifics in group_vars.
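As a sketch of the idea (the group name is made up, and the variables shown are only a subset — the authoritative names and defaults live in each role’s defaults/main.yml), a group_vars file overriding a few defaults could look like this:

```yaml
# group_vars/asm-hosts (hypothetical host group name)
# Override only what differs from the role defaults
oracle_databases:
  - home: myhomeasm
    oracle_version_db: 11.2.0.4
    oracle_edition: EE
    oracle_db_name: asmdb
    storage_type: ASM
```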

So what do I mean by specifics?

  • Storage config (storage devices for filesystems). This time, we’re going to put /u01 on its own device instead of the ‘root’ device
  • Storage config (storage devices for ASM)
  • We’re going to call the database something else
  • We may want to install different versions of GI (or DB). So this time we’re going to install 12.1.0.2 GI and an 11.2.0.4 database

This is where everything will be installed:

  • GI – /u01/app/oracle/12.1.0.2/grid
  • DB – /u01/app/oracle/11.2.0.4/myhomeasm


Getting started with ansible-oracle

I thought I’d write a quick post on how to get started with ansible-oracle.

The reason I decided to use roles when putting this together was to make it easily reusable, and to be able to pick and choose which roles you want to use. If you want to do everything from the ground up, you can. And if you already have a properly configured server and just want to install Oracle and create a single instance database on a filesystem, you can do just that by using only the oraswdb-install and oradb-create roles.
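For that last case, the playbook can be as small as this sketch (the host group name is made up; the role names are the ones from the repo):

```yaml
# Hypothetical minimal playbook: install the db software, then create a database
- hosts: oradb-servers
  become: true
  roles:
    - oraswdb-install
    - oradb-create
```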

So, we’re going to do both. And we’re just going to go with the defaults and create a single instance database with datafiles on a filesystem.

Note: The installation will be without a configured listener. I have not gotten around to fixing the listener issue with installations without GI.

1. Configuring the host + installing the database server & creating a database

First off, we’re going to take a newly installed machine and configure it from ground up.

  • Oracle version 12.1.0.2
  • ORACLE_HOME will be /u01/app/oracle/12.1.0.2/orcl
  • One database will be created, ‘orcl’
  • Datafiles/FRA will reside in /u01/oradata & /u01/fra respectively. The /u01 directory is created by Ansible, and oradata + fra will be created by dbca
  • The Oracle software (linuxamd64_12102_database_1of2.zip, linuxamd64_12102_database_2of2.zip) has been downloaded and the files are placed in /tmp on the control-machine.

And now, on to the good stuff.
