101 Series of Oracle in Google Cloud – Part II : Explaining how I Built the GCP VM with Ansible

Thank you for coming back to this series. A couple of weeks ago I wrote the first part, and my plan for Part II was to automate that first part with Ansible. But since there are very few blog posts on how Ansible works with GCP Compute, and since this series is aimed at beginners, in this post I will focus just on the GCP VM creation with Ansible.

This post is broken down into two parts:

Setting Up the Gcloud Account JSON File and the SSH Keys

We begin by creating a project service account that the gcloud CLI tool and Ansible will use to access and provision compute resources within the project.

So you first need to log in to https://console.cloud.google.com/ and switch to the project that you will be using, which in my case is oracle-migration.

Here we add a new service account to the project under the IAM & admin ⇒ Service accounts tab.

Now click on “Create service account”.

Enter the service account name (in my case I used oraclegcp) and click “Create”. Once it is created, give it the Compute Admin role.

Now create a service account private key. This private key is not the SSH key; it contains the credentials for the service account. Create it as a JSON file and save it to a secure location on your machine. Remember this location, as you will use it within the Ansible project.
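
If you want to sanity-check the key file you just downloaded, it is plain JSON with a handful of well-known fields. A minimal sketch in Python (the path in the usage comment is the one used later in this post; adjust it to wherever you saved your key):

```python
import json

# Fields present in every GCP service account key file
REQUIRED_FIELDS = {"type", "project_id", "private_key_id", "private_key", "client_email"}

def check_key_file(path):
    """Parse a service account key file and verify it has the expected shape."""
    with open(path) as f:
        key = json.load(f)
    missing = REQUIRED_FIELDS - key.keys()
    if missing:
        raise ValueError(f"key file is missing fields: {sorted(missing)}")
    if key["type"] != "service_account":
        raise ValueError(f"unexpected credential type: {key['type']}")
    return key

# Example usage (change the path to your own location):
# key = check_key_file("/Users/rene/Documents/GitHub/gcp_json_file/oracle-migration.json")
# print(key["client_email"])
```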

Now we will create an RSA key for the oracle user on your machine, so that the user and its credentials are in place on the VM when it is created.

rene@Renes-iMac OracleOnGCP % ssh-keygen -t rsa -b 4096 -f ~/.ssh/oracle -C "oracle"
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /Users/rene/.ssh/oracle.
Your public key has been saved in /Users/rene/.ssh/oracle.pub.
The key fingerprint is:
SHA256:***************** oracle
The key's randomart image is:
+---[RSA 4096]----+
|       . .++     |
|      E ..o .    |
|         ...     |
|      .    ...   |
|   .  +S.o.o. . |
|  . B *o++.o  o .|
|   o .** o  o . |
|    =Xoo.... .  |
|    =o+ ooo     |
+----[SHA256]-----+
rene@Renes-iMac OracleOnGCP % cat ~/.ssh/oracle.pub | pbcopy

Once you have created this key, copy the public key (~/.ssh/oracle.pub) to your clipboard and add it to the metadata of the Compute Engine section.

For this go to Compute Engine ⇒ Metadata ⇒ SSH Keys ⇒ Edit ⇒ Add Item

Before you install the ansible, requests, and google-auth packages, I recommend that you have Python 3 installed along with the latest version of pip.

Follow the post below if you are using OS X. 

Now that you have Python 3 and pip installed, we need to install the ansible, requests, and google-auth packages.

  • pip install ansible
  • pip install requests google-auth

At the end I have the following versions installed:

  • Ansible: 2.9.6
  • pip: 20.0.2
  • Python: 3.7.7

Explaining the Ansible GCP Compute Modules

I am not going to explain here what Ansible is and how it is used; what I am going to do is explain the Ansible GCP modules, since when I was learning this, I had issues with how the module examples and other blog posts presented them.

You can find this project at:

git clone https://github.com/rene-ace/OracleOnGCP

So make sure you clone it before following this post.

The first thing to keep in mind is that this playbook doesn’t run against any hosts, so it has to be called on the localhost. The ansible.cfg file is as below:

[defaults]
host_key_checking = False
roles_path = roles
inventory = inventories/hosts
remote_user = oracle
private_key_file = ~/.ssh/oracle

[inventory]
enable_plugins = host_list, script, yaml, ini, auto, gcp_compute

And the hosts file in inventories/hosts looks like below

[defaults]
inventory   = localhost,

I have also created this Ansible playbook with a role whose tasks are specified in roles/gcp_instance/tasks/main.yml.

This will call create.yml so that you can create the GCP Compute instance, or delete.yml to delete the GCP Compute instance and the environment that you are creating here.

You call the ansible playbook as below:

  • To Create
    ansible-playbook -t create create_oracle_on_gcp.yml
  • To Delete
    ansible-playbook -t delete create_oracle_on_gcp.yml

The tasks/main.yml of the role simply routes between create.yml and delete.yml based on these tags:

---
- import_tasks: create.yml
  tags:
    - create

- import_tasks: delete.yml
  tags:
    - delete
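
For reference, the top-level create_oracle_on_gcp.yml is a one-play playbook that runs against localhost and pulls in the role. A minimal sketch of what it looks like (check the repository for the exact file; connection and gather_facts settings here are assumptions):

```yaml
---
- name: Playbook to create Oracle on Google Cloud
  hosts: localhost
  connection: local
  gather_facts: false
  roles:
    - gcp_instance
```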

The variables that I am using for the GCP modules can be found in roles/gcp_instance/vars/main.yml, so be sure to change them according to your environment.

The variable gcp_cred_file is where we set the location and name of the JSON file that you created above.

---
# Common vars for Role gcp_instance_creation
# Set accordingly to the region and zone you desire
gcp_project_name:               "oracle-migration"
gcp_region:                     "us-central1"
gcp_zone:                       "us-central1-c"
gcp_cred_kind:                  "serviceaccount"
gcp_cred_file:                  "/Users/rene/Documents/GitHub/gcp_json_file/oracle-migration.json"

# Vars for the task to create the ASM disk
gcp_asm_disk_name:              "rene-ace-disk-asm1"
gcp_asm_disk_type:              "pd-ssd"
gcp_asm_disk_size:              "150"
gcp_asm_disk_labels:            "item=rene-ace"

# Vars for the task to create the Instance Boot disk
# We are creating a CentOS 7 instance
# Should you require a different image, change gcp_boot_disk_image accordingly
gcp_boot_disk_name:             "rene-ace-inst1-boot-disk"
gcp_boot_disk_type:             "pd-standard"
gcp_boot_disk_size:             "100"
gcp_boot_disk_labels:           "item=rene-ace"
gcp_boot_disk_image:            "projects/centos-cloud/global/images/centos-7-v20200309"

# Vars for the task to create the Oracle VM Instance
# Change according to the machine type that you desire
gcp_instance_name:              "rene-ace-test-inst1"
gcp_machine_type:               "n1-standard-8"

# Vars for the network and firewall creation tasks
gcp_network_name:               "network-oracle-instances"
gcp_subnet_name:                "network-oracle-instances-subnet"
gcp_firewall_name:              "oracle-firewall"
gcp_ip_cidr_range:              "172.16.0.0/16"

Now we create the boot disk with a CentOS 7 image, and the ASM disk for the OHAS installation that we will be doing. Nothing outstanding here; the only thing to keep in mind is how you register these disks, as we will use those registered names when we create the GCP VM instance.

# Creation of the Boot disk
- name: Task to create the Instance Boot disk
  gcp_compute_disk:
         name: "{{ gcp_boot_disk_name }}"
         size_gb: "{{ gcp_boot_disk_size }}"
         type: "{{ gcp_boot_disk_type }}"
         source_image: "{{ gcp_boot_disk_image }}"
         zone: "{{ gcp_zone }}"
         project: "{{ gcp_project_name }}"
         auth_kind: "{{ gcp_cred_kind }}"
         service_account_file: "{{ gcp_cred_file }}"
         scopes:
           - https://www.googleapis.com/auth/compute
         state: present
  register: disk_boot

# Creation of the ASM disk
- name: Task to create the ASM disk
  gcp_compute_disk:
    name: "{{ gcp_asm_disk_name }}"
    type: "{{ gcp_asm_disk_type }}"
    size_gb: "{{ gcp_asm_disk_size }}"
    zone: "{{ gcp_zone }}"
    project: "{{ gcp_project_name }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: disk_asm_1
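
Each register saves the module’s full return value, a dictionary describing the created resource (including fields such as selfLink) that the later modules consume. If you want to inspect what a registered result actually contains, a quick debug task (not part of the repository, just for inspection) can be dropped in after the disk creation:

```yaml
- name: Show what the registered boot disk result looks like
  debug:
    var: disk_boot
```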

The next thing that I am going to create is the VPC network, its subnet, and an external IP. Again, the only thing to keep in mind is how you register them; for example, when the gcp_compute_subnetwork module references the network, it uses the name you registered in gcp_compute_network.

- name: Task to create a network
  gcp_compute_network:
         name: 'network-oracle-instances'
         auto_create_subnetworks: 'true'
         project: "{{ gcp_project_name }}"
         auth_kind: "{{ gcp_cred_kind }}"
         service_account_file: "{{ gcp_cred_file }}"
         scopes:
           - https://www.googleapis.com/auth/compute
         state: present
  register: network

# Creation of a Sub Network 
- name: Task to create a subnetwork
  gcp_compute_subnetwork:
    name: network-oracle-instances-subnet
    region: "{{ gcp_region }}"
    network: "{{ network }}"
    ip_cidr_range: "{{ gcp_ip_cidr_range }}"
    project: "{{ gcp_project_name }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    state: present
  register: subnet

# Creation of the Network address
- name: Task to create a address
  gcp_compute_address:
         name: "{{ gcp_instance_name }}"
         region: "{{ gcp_region }}"
         project: "{{ gcp_project_name }}"
         auth_kind: "{{ gcp_cred_kind }}"
         service_account_file: "{{ gcp_cred_file }}"
         scopes:
           - https://www.googleapis.com/auth/compute
         state: present
  register: address

Next we create a firewall rule to open port 22 for the VM that we are going to create. As for the source ranges, since this is a test environment, I am going to allow ingress from all IP addresses. One thing you need to keep in mind is how you set your network tags (target_tags); this is very important, because if they are not set correctly you will not be able to SSH to your VM. Since I am assuming you are a beginner to both GCP and Ansible, note that none of the Ansible examples you will find out there mention that this is critical for connecting to your VM.

# Creation of the Firewall Rule
- name: Task to create a firewall
  gcp_compute_firewall:
    name: oracle-firewall
    network: "{{ network }}"
    allowed:
    - ip_protocol: tcp
      ports: ['22']
    source_ranges: ['0.0.0.0/0']
    target_tags: 
    - oracle-ssh
    project: "{{ gcp_project_name }}"
    auth_kind: "{{ gcp_cred_kind }}"
    service_account_file: "{{ gcp_cred_file }}"
    scopes:
      - https://www.googleapis.com/auth/compute
    state: present
  register: firewall

Once we have created the disks, network, and firewall, it is time to create the GCP VM instance.

Here I assign both disks created above: the registered disk_boot gets boot set to true, and disk_asm_1 gets boot set to false.

As you can see below, in network_interfaces I assign the registered results created above (network, subnet, and address).

The access config name and type shown here are the only values the Ansible GCP module accepts at the moment, which is why they are hard-coded.

In the tags section, I assign the same network tag created in the firewall module above, so that we can connect to port 22 of the VM instance we are creating.

# Creation of the Oracle Instance
- name: Task to create the Oracle Instance
  gcp_compute_instance:
         state: present
         name: "{{ gcp_instance_name }}"
         machine_type: "{{ gcp_machine_type }}"
         disks:
           - auto_delete: true
             boot: true
             source: "{{ disk_boot }}"
           - auto_delete: true
             boot: false
             source: "{{ disk_asm_1 }}"
         network_interfaces:
             - network: "{{ network }}"
               subnetwork: "{{ subnet }}"
               access_configs:
                 - name: External NAT
                   nat_ip: "{{ address }}"
                   type: ONE_TO_ONE_NAT
         tags:
           items:
             - oracle-ssh
         zone: "{{ gcp_zone }}"
         project: "{{ gcp_project_name }}"
         auth_kind: "{{ gcp_cred_kind }}"
         service_account_file: "{{ gcp_cred_file }}"
         scopes:
           - https://www.googleapis.com/auth/compute
  register: instance

Once you create your GCP VM, the only thing left to do is wait for SSH to come up and add the host to the ansible-playbook in-memory inventory.

- name: Wait for SSH to come up
  wait_for: host={{ address.address }} port=22 delay=10 timeout=60
- name: Add host to groupname
  add_host: hostname={{ address.address }} groupname=oracle_instances
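
The point of add_host is that a later play in the same run can target the freshly created VM through the in-memory oracle_instances group. A hypothetical follow-up play (not part of the repository) would look like:

```yaml
# A later play in the same playbook file can now use the in-memory group
- name: Configure the newly created VM
  hosts: oracle_instances
  remote_user: oracle
  tasks:
    - name: Verify we can reach the new instance
      ping:
```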

If everything was done correctly, running the playbook with the create tag will give you an output like the below.

rene@Renes-iMac OracleOnGCP % ansible-playbook -t create create_oracle_on_gcp.yml                                         
[WARNING]: Unable to parse /Users/rene/Documents/GitHub/OracleOnGCP/inventories/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [Playbook to create Oracle on Google Cloud] *********************************************************************************************************************************************************************************************************

TASK [gcp_instance : Task to create the Instance Boot disk] **********************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create the ASM disk] ********************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create a network] ***********************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create a subnetwork] ********************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create a address] ***********************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create a firewall] **********************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Task to create the Oracle Instance] *************************************************************************************************************************************************************************************************
changed: [localhost]

TASK [gcp_instance : Wait for SSH to come up] ************************************************************************************************************************************************************************************************************
ok: [localhost]

TASK [gcp_instance : Add host to groupname] **************************************************************************************************************************************************************************************************************
changed: [localhost]

PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************
localhost                  : ok=9    changed=7    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

And the last thing to do is to test connectivity via SSH; you should now have your GCP VM created.

rene@Renes-iMac OracleOnGCP % ssh -i ~/.ssh/oracle oracle@34.***.***.**8 
The authenticity of host '34.***.***.**8 (34.***.***.**8)' can't be established.
ECDSA key fingerprint is *******************.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '34.***.***.**8' (ECDSA) to the list of known hosts.
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
[oracle@rene-ace-test-inst1 ~]$ id
uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),4(adm),39(video),1000(google-sudoers) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

As you can see, it is not that complicated, but not knowing certain things, like the network tags or how registered names work in Ansible, can give you a big headache and hours of research and troubleshooting. In the next part of this series we are going to automate the OHAS and DB creation, so don’t forget to come back.

Rene Antunez