Step-by-Step Approach for Installing Oracle 19c RAC on Linux

This post covers a step-by-step approach for installing Oracle 19c Real Application Clusters (RAC) on a Linux operating system (OS). It describes the server hardware checklist, the required packages, and the operating system parameters for a 19c RAC installation, and finishes with a guide to creating an Oracle 19c database on the cluster. 


Steps to Install Oracle 19c RAC

1. Oracle Grid Infrastructure Installation Server Hardware Checklist

2. Configuring Servers for Oracle Grid Infrastructure and Oracle RAC

3. Packages for Oracle Linux 7 and Red Hat Enterprise Linux 7

4. Prepare the cluster nodes for Oracle RAC

5. Partition the Shared Disks

6. Installing and Configuring ASMLib

7. Set up SSH on both nodes and verify the preinstall check

8. Oracle 19c Grid Infrastructure installation steps

9. Create Oracle ASM Disk Groups Using ASMCA

10. Oracle 19c RDBMS Software Only Installation

11. Create the Oracle 19c Database Using DBCA


Step 1: Oracle Grid Infrastructure Installation Server Hardware Checklist

(a) Network Switches:

Public network switch, at least 1 GbE, connected to a public gateway. 

Private network switch, at least 1 GbE, with 10 GbE recommended, dedicated for use only with other cluster member nodes. The interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP. Alternatively, use InfiniBand for the interconnect.


(b) Runlevel: 

Servers should be either in runlevel 3 or runlevel 5.

(c) Random Access Memory (RAM): 

At least 8 GB of RAM for Oracle Grid Infrastructure for a Cluster installation, including installations where you plan to install Oracle RAC.

(d) Temporary disk space allocation: 

At least 1 GB allocated to /tmp. 

(e) Storage hardware: 

Either Storage Area Network (SAN) or Network-Attached Storage (NAS).

(f) Local Storage Space for Oracle Software

At least 8 GB of space for the Oracle Grid Infrastructure for a cluster home (Grid home). Oracle recommends that you allocate 100 GB to allow additional space for patches. 

At least 12 GB of space for the Oracle base of the Oracle Grid Infrastructure installation owner (Grid user). The Oracle base includes Oracle Clusterware and Oracle ASM log files. 

For Linux x86-64 platforms, if you intend to install Oracle Database, then allocate 6.4 GB of disk space for the Oracle home (the location for the Oracle Database software binaries).


Step 2: Configuring Servers for Oracle Grid Infrastructure and Oracle RAC

(a) Checking Server Hardware and Memory Configuration


1. To determine the physical RAM size, enter the following command: 

# grep MemTotal /proc/meminfo

If the size of the physical RAM installed in the system is less than the required size, then you must install more memory before continuing.

2. To determine the size of the configured swap space, enter the following command: 

# grep SwapTotal /proc/meminfo

If necessary, see your operating system documentation for information about how to configure additional swap space.

3. To determine the amount of space available in the /tmp directory, enter the following command: 

# df -h /tmp

4. To determine the amount of free RAM and disk swap space on the system, enter the following command: 

# free 

5. To determine if the system architecture can run the software, enter the following command: 

# uname -m

Verify that the processor architecture matches the Oracle software release that you want to install. For example, you should see the following output for an x86-64 system: 

x86_64 

If you do not see the expected output, then you cannot install the software on this system.

6. Verify that shared memory (/dev/shm) is mounted properly with sufficient size using the following command: 

# df -h /dev/shm


(b) Supported Oracle Linux 7 and Red Hat Enterprise Linux 7 Distributions for x86-64

Use the following information to check the supported Oracle Linux 7 and Red Hat Enterprise Linux 7 distributions: 

1. Oracle Linux 7 supported distributions: 

Oracle Linux 7 with the Unbreakable Enterprise Kernel: 3.8.13-33.el7uek.x86_64 or later 

Oracle Linux 7 with the Red Hat Compatible Kernel: 3.10.0-54.0.1.el7.x86_64 or later 

2. Red Hat Enterprise Linux 7 supported distribution: 

Red Hat Enterprise Linux 7: 3.10.0-54.0.1.el7.x86_64 or later 


Step 3: Packages for Oracle Linux 7 and Red Hat Enterprise Linux 7

binutils-2.23.52.0.1-12.el7.x86_64  

compat-libcap1-1.10-3.el7.x86_64  

gcc-4.8.2-3.el7.x86_64  

gcc-c++-4.8.2-3.el7.x86_64  

glibc-2.17-36.el7.i686  

glibc-2.17-36.el7.x86_64  

glibc-devel-2.17-36.el7.i686  

glibc-devel-2.17-36.el7.x86_64  

ksh  

libaio-0.3.109-9.el7.i686  

libaio-0.3.109-9.el7.x86_64  

libaio-devel-0.3.109-9.el7.i686  

libaio-devel-0.3.109-9.el7.x86_64  

libgcc-4.8.2-3.el7.i686  

libgcc-4.8.2-3.el7.x86_64  

libstdc++-4.8.2-3.el7.i686  

libstdc++-4.8.2-3.el7.x86_64  

libstdc++-devel-4.8.2-3.el7.i686  

libstdc++-devel-4.8.2-3.el7.x86_64  

libXi-1.7.2-1.el7.i686  

libXi-1.7.2-1.el7.x86_64  

libXtst-1.2.2-1.el7.i686  

libXtst-1.2.2-1.el7.x86_64  

make-3.82-19.el7.x86_64  

sysstat-10.1.5-1.el7.x86_64  


The following command can be run on the system to list the currently installed packages: 

rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \ 

compat-libstdc++-33 \ 

elfutils-libelf \ 

elfutils-libelf-devel \ 

gcc \ 

gcc-c++ \ 

glibc \ 

glibc-common \ 

glibc-devel \ 

glibc-headers \ 

ksh \ 

libaio \ 

libaio-devel \ 

libgcc \ 

libstdc++ \ 

libstdc++-devel \ 

make \ 

sysstat \ 

unixODBC \ 

unixODBC-devel


Any missing RPM from the list above should be installed using the "--aid" option of "/bin/rpm" to ensure all dependent packages are resolved and installed as well. 
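On Oracle Linux 7 and Red Hat Enterprise Linux 7, yum resolves dependencies automatically, so the missing packages can also be installed along these lines (a sketch; package names are taken from the list above): 

# yum install -y binutils compat-libcap1 gcc gcc-c++ glibc glibc-devel ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libXi libXtst make sysstat 

On Oracle Linux 7 only, the oracle-database-preinstall-19c RPM installs the required packages and applies many of the kernel settings in one step: 

# yum install -y oracle-database-preinstall-19c 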

 

NOTE: Be sure to check on all nodes that the Linux firewall and SELinux are disabled.
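On Oracle Linux 7 and Red Hat Enterprise Linux 7, for example, both can be turned off as the root user with commands like the following (adapt to your site's security policy): 

# systemctl stop firewalld 

# systemctl disable firewalld 

# setenforce 0 

Then set SELINUX=disabled in /etc/selinux/config so the change survives a reboot.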


Step 4: Prepare the cluster nodes for Oracle RAC

(a) User Accounts

NOTE: Oracle recommends different users for installing the Grid Infrastructure (GI) and the Oracle RDBMS home. The GI will be installed in a separate Oracle base, owned by the user 'grid'. After the grid installation, the GI home will be owned by root and inaccessible to unauthorized users. 

 

1. Create the OS groups by entering the following commands as the root user: 

# /usr/sbin/groupadd oinstall 

# /usr/sbin/groupadd dba 

# /usr/sbin/groupadd asmadmin 

# /usr/sbin/groupadd asmdba 

# /usr/sbin/groupadd asmoper 

 

2. Create the users that will own the Oracle software using the commands:

# /usr/sbin/useradd -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid -m grid 

# /usr/sbin/useradd -g oinstall -G dba,asmdba -d /home/oracle -m oracle 
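You can confirm the group memberships with: 

# id grid 

# id oracle 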

 

3. Set the passwords for the oracle and grid accounts using the following commands. Replace password with your own password. 


# passwd oracle 

Changing password for user oracle. 

New UNIX password: password 

Retype new UNIX password: password 

passwd: all authentication tokens updated successfully. 


 # passwd grid 

Changing password for user grid. 

New UNIX password: password 

Retype new UNIX password: password 

passwd: all authentication tokens updated successfully.

Repeat Step 1 through Step 3 on each node in your cluster.


(b) Configuring Kernel Parameters

As the root user, add the following kernel parameter settings to /etc/sysctl.conf. If any of these parameters are already set in /etc/sysctl.conf, use the higher of the two values. (Oracle's 19c installation guide lists additional parameters, such as fs.aio-max-nr, kernel.shmall, and kernel.shmmax; check the guide for the complete set.) 

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128 

fs.file-max = 6553600 

net.ipv4.ip_local_port_range = 9000 65500 

net.core.rmem_default = 262144 

net.core.rmem_max = 4194304 

net.core.wmem_default = 262144 

net.core.wmem_max = 1048576 
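Apply the new settings in the running kernel without a reboot: 

# /sbin/sysctl -p 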


1. Add the following lines to the /etc/security/limits.conf file:

grid soft nproc 2047 

grid hard nproc 16384 

grid soft nofile 1024 

grid hard nofile 65536 

oracle soft nproc 2047 

oracle hard nproc 16384 

oracle soft nofile 1024 

oracle hard nofile 65536

2. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:

session required pam_limits.so


(c) Create the Oracle Inventory Directory 

To create the Oracle Inventory directory, enter the following commands as the root user: 

# mkdir -p /u01/app/oraInventory 

# chown -R grid:oinstall /u01/app/oraInventory 

# chmod -R 775 /u01/app/oraInventory 

(d) Creating the Oracle Grid Infrastructure Home Directory 

 

To create the Grid Infrastructure home directory, enter the following commands as the root user: 

# mkdir -p /u01/app/19c/grid 

# chown -R grid:oinstall /u01/app/19c/grid 

# chmod -R 775 /u01/app/19c/grid 


(e) Creating the Oracle Base Directory 

 

To create the Oracle Base directory, enter the following commands as the root user: 

# mkdir -p /u01/app/oracle 

# chown -R oracle:oinstall /u01/app/oracle 

# chmod -R 775 /u01/app/oracle 


(f) Creating the Oracle RDBMS Home Directory 

 

To create the Oracle RDBMS Home directory, enter the following commands as the root user: 

# mkdir -p /u01/app/oracle/product/19c/db_1 

# chown -R oracle:oinstall /u01/app/oracle/product/19c/db_1 

# chmod -R 775 /u01/app/oracle/product/19c/db_1


(g) Environment Profile Setup for the Grid and Oracle Users


1. Grid user environment setup (use ORACLE_SID=+ASM2 on the second node):

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/19c

export ORACLE_HOME=/u01/app/19c/grid

export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/local/bin

export TEMP=/tmp

export TMP=/tmp

export TMPDIR=/tmp

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


2. Oracle user environment setup (for a RAC database the instance SID is typically DBTEST1 on node 1 and DBTEST2 on node 2):

export ORACLE_SID=DBTEST

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/product/19c/db_1

export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin:/usr/local/bin

export TEMP=/tmp

export TMP=/tmp

export TMPDIR=/tmp

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
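These variables are typically appended to each user's ~/.bash_profile so they are set at every login, for example: 

$ vi ~/.bash_profile      (append the export lines shown above) 

$ . ~/.bash_profile       (reload the profile in the current session) 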


Step 5: Partition the Shared Disks

Once the LUNs have been presented from the SAN to ALL servers in the cluster, partition the LUNs from one node only.

# fdisk /dev/sdb 

Command (m for help): u 

Changing display/entry units to sectors 

Command (m for help): n 

Command action 

e extended 

p primary partition (1-4) 

p 

Partition number (1-4): 1 

First sector (61-1048575, default 61): 2048 

Last sector or +size or +sizeM or +sizeK (2048-1048575, default 1048575): 

Using default value 1048575 

Command (m for help): w 

The partition table has been altered!

Calling ioctl() to re-read partition table. 

Syncing disks. 

Following the above procedure, partition the remaining shared disks (/dev/sdc, /dev/sdd, and /dev/sde in this guide).


Load the updated block device partition tables by running the following on ALL servers participating in the cluster: 

# /sbin/partprobe
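Verify on every node that the new partitions are visible, for example: 

# ls -l /dev/sd[b-e]1 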


Step 6: Installing and Configuring ASMLib

ASMLib is highly recommended for systems that will use ASM for shared storage within the cluster, due to the performance and manageability benefits it provides. Perform the following steps to install and configure ASMLib on the cluster nodes:

(a) Install the ASMLib Packages

Be sure to download the set of RPMs that pertains to your platform architecture; in our case this is x86_64. (The versions shown below are examples; the oracleasm kernel driver package must match your running kernel and OS release.)

oracleasm-support-2.1.3-1.el5.x86_64.rpm 

oracleasmlib-2.0.4-1.el5.x86_64.rpm 

oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm 

 

Install the RPMs by running the following as the root user: 

# rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \ 

oracleasmlib-2.0.4-1.el5.x86_64.rpm \ 

oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm 


(b) Configure ASMLib by running the following as the root user:

NOTE: If using user and group separation for the installation (as documented here), the ASMLib driver interface owner is 'grid' and the group to own the driver interface is 'asmadmin'. In a simpler installation using only the oracle user, the owner will be 'oracle' and the group owner will be 'dba'.


# /etc/init.d/oracleasm configure -i

Configuring the Oracle ASM library driver. 

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort. 

Default user to own the driver interface []: grid 

Default group to own the driver interface []: asmadmin 

Start Oracle ASM library driver on boot (y/n) [n]: y 

Scan for Oracle ASM disks on boot (y/n) [y]: y 

Writing Oracle ASM library driver configuration: done 

Initializing the Oracle ASMLib driver: [ OK ] 

Scanning the system for Oracle ASMLib disks: [ OK ]
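You can confirm at any time that the driver is loaded and /dev/oracleasm is mounted by running: 

# /etc/init.d/oracleasm status 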


(c) Using ASMLib to Mark the Shared Disks as Candidate Disks 

 

To create ASM disks using ASMLib: 

1. As the root user, use oracleasm to create the ASM disks using the following syntax: 

# oracleasm createdisk OCR_VOTE01 /dev/sdb1

Writing disk header: done 

Instantiating disk: done

# oracleasm createdisk FLASH_VOTE01 /dev/sdc1

Writing disk header: done 

Instantiating disk: done

# oracleasm createdisk DATA_VOTE01 /dev/sdd1

Writing disk header: done 

Instantiating disk: done

# oracleasm createdisk DATA_VOTE02 /dev/sde1

Writing disk header: done 

Instantiating disk: done


If you need to unmark a disk that was used in a createdisk command, you can use the following syntax as the root user: 

# /usr/sbin/oracleasm deletedisk disk_name 

 

2. Repeat step 1 for each disk that will be used by Oracle ASM. 

3. After you have created all the ASM disks for your cluster, use the listdisks command to verify their availability:

 # /usr/sbin/oracleasm scandisks 

Reloading disk partitions: done 

Cleaning any stale ASM disks... 

Scanning system for ASM disks... 

# /usr/sbin/oracleasm listdisks 

OCR_VOTE01 

FLASH_VOTE01 

DATA_VOTE01 

DATA_VOTE02 


4. On all other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks. You do not need to create the ASM disks on each node, only on one node in the cluster.


# /usr/sbin/oracleasm scandisks 

Reloading disk partitions: done 

Cleaning any stale ASM disks... 

Scanning system for ASM disks... 

Instantiating disk "DATA_VOTE01" 

Instantiating disk "DATA_VOTE02" 

Instantiating disk "OCR_VOTE01" 

Instantiating disk "FLASH_VOTE01" 

5. After scanning for ASM disks, display the available ASM disks on each node to verify their availability: 

 # /usr/sbin/oracleasm listdisks 

OCR_VOTE01 

FLASH_VOTE01 

DATA_VOTE01 

DATA_VOTE02 


Step 7: Set Up SSH on Both Nodes and Verify the Preinstall Check


# cd /orasoft/ultimus/19c/grid/deinstall

# ./sshUserSetup.sh -user grid -hosts "dc-dg1 dc-dg2" -noPromptPassphrase -confirm -advanced

Verify the preinstall checks with the Cluster Verification Utility (cluvfy): 

# ./runcluvfy.sh stage -pre crsinst -n node1_hostname,node2_hostname -verbose -fixup

# ./runcluvfy.sh stage -pre crsinst -n node1_hostname,node2_hostname -method root
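For example, with the node names used earlier in this guide: 

# ./runcluvfy.sh stage -pre crsinst -n dc-dg1,dc-dg2 -verbose -fixup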


Step 8: Oracle 19c Grid Infrastructure Installation

Basic Grid Infrastructure Install (without GNS and IPMI) 

As the grid user (the Grid Infrastructure software owner), start the installer by running "gridSetup.sh" from the staged installation media. 

NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall.

Change into the folder where you staged the Grid Infrastructure software, unzip it into the Grid home, and start the installer from there: 

$ unzip LINUX.X64_193000_grid_home.zip -d $GRID_HOME

$ cd $GRID_HOME

$ ./gridSetup.sh

 


Action: 

Select the radio button 'Configure Oracle Grid Infrastructure for a New Cluster' and click 'Next'.


Action: 

Select the radio button 'Configure an Oracle Standalone Cluster' and click 'Next'.

Action: 

Select the radio button 'Create Local SCAN', enter the Cluster Name and SCAN Name, and click 'Next'.

Action: 

Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured previously in your /etc/hosts file. Use the 'SSH connectivity' button to configure and test the passwordless SSH connectivity between your nodes.

Action: 

Type in the OS password for the user 'grid' and click 'Setup'. When the SSH setup completes, click 'OK'.

Action: 

Click on 'Interface Type' next to the interfaces you want to use for your cluster and select the correct values for 'Public' and 'Private'. When finished, click 'Next'.

Action: 

Select the radio button 'Use Oracle Flex ASM for storage' and click 'Next'.

 


Action: 

Specify the 'Disk group name', select the 'Redundancy', and tick the disks you want to use. When done, click 'Next'. 

NOTE: The number of voting disks created depends on the redundancy level you specify: EXTERNAL creates 1 voting disk, NORMAL creates 3, and HIGH creates 5. 

NOTE: If you see an empty screen for your candidate disks, it is likely that ASMLib has not been properly configured. If you are sure that ASMLib has been properly configured, click 'Change Discovery Path' and provide the correct path.

 

Action: 

Select 'Do not use Intelligent Platform Management Interface (IPMI)' and click 'Next'.

 


Action: 

Select whether you wish to register with EM Cloud Control and click 'Next'.

 


Action: 

Assign the correct OS groups for OS authentication and click 'Next'.

 


Action: 

Specify the locations for your ORACLE_BASE and for the software location and click 'Next'.

 


Action: 

Specify the required credentials if you wish to automatically run the configuration scripts and click 'Next'.

 


Action: 

Check that the status of all checks is 'Succeeded' and click 'Next'. 

Note: 

If you have failed checks marked as 'Fixable', click 'Fix & Check Again'. This will bring up a window with fixup instructions. 

Action: 

Execute the runfixup.sh script as the root user, as described on the screen.

 



Action:

Click 'Yes'.

Action: 

You should see the confirmation that the installation of the Grid Infrastructure was successful. Click 'Close' to finish the install. 

Check the cluster resource status by executing the following commands:

# cd $GRID_HOME/bin

# ./crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

               ONLINE  ONLINE       dc-vt-testdb1            STABLE

               ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.chad

               ONLINE  ONLINE       dc-vt-testdb1            STABLE

               ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.net1.network

               ONLINE  ONLINE       dc-vt-testdb1            STABLE

               ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.ons

               ONLINE  ONLINE       dc-vt-testdb1            STABLE

               ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.proxy_advm

               OFFLINE OFFLINE      dc-vt-testdb1            STABLE

               OFFLINE OFFLINE      dc-vt-testdb2            STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

      2        ONLINE  ONLINE       dc-vt-testdb2            STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.OCR.dg(ora.asmgroup)

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

      2        ONLINE  ONLINE       dc-vt-testdb2            STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asm(ora.asmgroup)

      1        ONLINE  ONLINE       dc-vt-testdb1            Started,STABLE

      2        ONLINE  ONLINE       dc-vt-testdb2            Started,STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.asmnet1.asmnetwork(ora.asmgroup)

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

      2        ONLINE  ONLINE       dc-vt-testdb2            STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.dc-vt-testdb1.vip

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.dc-vt-testdb2.vip

      1        ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.qosmserver

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       dc-vt-testdb2            STABLE

ora.scan2.vip

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

ora.scan3.vip

      1        ONLINE  ONLINE       dc-vt-testdb1            STABLE

-----------------------------------------------------------------------------
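A few additional checks that are commonly run from $GRID_HOME/bin to confirm cluster health: 

# ./crsctl check cluster -all 

# ./srvctl status nodeapps 

# ./srvctl config scan 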


Step 9: Create Disk Group Using the ASMCA Utility 

As the grid user, start the ASM Configuration Assistant (ASMCA): 

$ cd /u01/app/19c/grid/bin 

$ ./asmca 

 


Action: 

Click 'Create' to create a new disk group.

 


Type in a name for the disk group, select the redundancy you want to provide, and mark the tick box for each disk you want to assign to the new disk group.

 

It is an Oracle best practice to store an OCR mirror in a second disk group. To follow this recommendation, add an OCR mirror. Note that you can have only one OCR per disk group.
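As an alternative to the ASMCA GUI, disk groups can also be created from SQL*Plus connected to the ASM instance as SYSASM. A minimal sketch using the ASMLib disks created in Step 6 (the disk group names are illustrative): 

$ sqlplus / as sysasm 

SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY DISK 'ORCL:DATA_VOTE01', 'ORCL:DATA_VOTE02'; 

SQL> CREATE DISKGROUP FLASH EXTERNAL REDUNDANCY DISK 'ORCL:FLASH_VOTE01'; 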


Step 10: RDBMS Software Only Installation Steps

As the oracle user (the RDBMS software owner), start the installer by running "runInstaller" from the staged installation media. 

Before the database installation, set up SSH equivalence for the oracle user by running the following as the grid user: 

# su - grid

$ cd $ORACLE_HOME/deinstall

$ ./sshUserSetup.sh -user oracle -hosts "dc-dg1 dc-dg2" -noPromptPassphrase -confirm -advanced

NOTE: Be sure the installer is run as the intended software owner; the only supported method to change the software owner is to reinstall. 

Go to the software staging location, unzip the software into the Oracle home, and start the installer: 

$ unzip Oracle_19.3_Linux_x86-64_DB_V982063-01.zip -d $ORACLE_HOME

$ cd $ORACLE_HOME

$ ./runInstaller

 


Action: 

Select the option 'Set Up Software Only' and click 'Next'.

 


Action: 

Select the option 'Oracle Real Application Clusters database installation' and click 'Next'.

 


Action: 

Select all nodes, then use the 'SSH connectivity' button to configure and test the passwordless SSH connectivity between your nodes: type in the OS password for the oracle user and click 'Setup'.

 


Action: 

Specify the path to your Oracle base and, below it, the location where you want to store the software (the Oracle home), then click 'Next'.

 


Action: 

Use the drop-down menus to select the names of the Database Administrator and Database Operator groups and click 'Next'.

 

 

Action: 

Check that the status of all checks is 'Succeeded' and click 'Next'. 

Note: 

If you are sure the unsuccessful checks can be ignored, tick the box 'Ignore All' before you click 'Next'.


Action: 

Click 'Close' to finish the installation of the RDBMS software.


Step 11: Create the Database Using the DBCA Utility

As the oracle user, start the Database Configuration Assistant (DBCA): 

$ cd $ORACLE_HOME/bin

$ ./dbca 

Action: 

Choose the option 'Create a database' and click 'Next'.

 

Action: 

Select 'Oracle Real Application Clusters (RAC) database' and 'Admin Managed', then click 'Next'.

 

Action: 

Type in the name you want to use for your database. Select 'Create as Container Database' if you want a container database; otherwise, clear the option. Click 'Next'.

 

Action: 

The database is now created. You can either change or unlock your passwords, or just click 'Exit' to finish the database creation.
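For repeatable builds, DBCA can also be run in silent mode. A minimal sketch (the database name, node list, and disk group are assumptions to adapt to your environment): 

$ dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName DBTEST \
  -sid DBTEST \
  -nodelist dc-dg1,dc-dg2 \
  -databaseConfigType RAC \
  -storageType ASM \
  -datafileDestination +DATA \
  -sysPassword <password> -systemPassword <password>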





