My previous post covered Zero-Downtime Oracle Grid Infrastructure Patching (ZDOGIP) for 23ai using the gold image. There, I used the GUI to do the installation and patch, which, as you know, is not ideal for automation. So, in this post I describe how to do the same operation using silent mode for the installation: which parameters you need to set in the response file, and all the other steps.
Important details
The focus of this post is to show how to perform the same process as in my previous post, but in silent mode. I will not "prove" (as I did last time) that the databases continue to receive inserts, nor go into details about the AFD/ACFS drivers not being updated. I really recommend reading my previous post to understand all of those details. Here I will simply show how to do in silent mode what I did there.
Current Environment
The running system is:
- OEL 8.9, kernel 5.4.17-2136.324.5.3.el8uek.x86_64.
- Oracle GI 23ai, version 23.5.0.24.07, with no one-off or additional patches installed.
- Oracle Database 23ai (23.5.0.24.07) and 19c (19.23.0.0.0).
- Nodes are not using Transparent HugePages (a quick check is shown after this list).
- It is a RAC installation, with two nodes.
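If you want to confirm these details on your own nodes before starting, a couple of standard commands are enough. This is just a quick sketch, not part of the patching procedure itself; the GI home path is the 23.5 one used in this post, so adjust it to your environment:

# Transparent HugePages status (these nodes show [never]):
cat /sys/kernel/mm/transparent_hugepage/enabled
# Current GI active version, queried from the existing 23.5 home:
/u01/app/23.5.0.0/grid/bin/crsctl query crs activeversion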
Preparing for the patch
Needed file
Only one file is needed: the Grid Infrastructure Gold Image, patch number 37037934.
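Before copying it to the nodes, a quick integrity test of the downloaded zip can save time. The staging path below is the one used later in this post; adjust it to wherever you placed the file:

# Test the zip archive without extracting it; prints "zip OK" only if it is intact.
unzip -t /u01/install/p37037934_230000_Linux-x86-64.zip > /dev/null && echo "zip OK"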
Creating the folders
The next step is to create the folders on both (all) nodes of the cluster. We execute this as the root user:
######################################
#
#Node 01
#
######################################
[root@o23c1n1s1 ~]# mkdir -p /u01/app/23.6.0.0/grid
[root@o23c1n1s1 ~]# chown grid:oinstall /u01/app/23.6.0.0/grid
[root@o23c1n1s1 ~]#
######################################
#
#Node 02
#
######################################
[root@o23c1n2s1 ~]# mkdir -p /u01/app/23.6.0.0/grid
[root@o23c1n2s1 ~]# chown grid:oinstall /u01/app/23.6.0.0/grid
[root@o23c1n2s1 ~]#
Unzip the patch
The next step is executed as the grid user on the first node: unzip the patch.
[root@o23c1n1s1 ~]# su - grid
[grid@o23c1n1s1 ~]$ cd /u01/app/23.6.0.0/grid/
[grid@o23c1n1s1 grid]$ unzip -q /u01/install/p37037934_230000_Linux-x86-64.zip
[grid@o23c1n1s1 grid]$ cd
[grid@o23c1n1s1 ~]$
Running processes
For reference, these are the running processes in my environment; note the startup time of the databases, ASM, and listeners:
######################################
#
#Node 01
#
######################################
[grid@o23c1n1s1 ~]$ date
Wed Nov 6 16:54:42 CET 2024
[grid@o23c1n1s1 ~]$ ps -ef |grep smon
root 4417 1 0 14:47 ? 00:00:57 /u01/app/23.5.0.0/grid/bin/osysmond.bin
grid 5791 1 0 14:47 ? 00:00:00 asm_smon_+ASM1
oracle 65855 1 0 16:03 ? 00:00:00 ora_smon_o19c1
grid 96499 93625 0 16:54 pts/0 00:00:00 grep --color=auto smon
[grid@o23c1n1s1 ~]$ ps -ef |grep lsnr
root 4518 4469 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/crfelsnr -n o23c1n1s1
grid 5355 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 5398 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
grid 5442 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 96684 93625 0 16:54 pts/0 00:00:00 grep --color=auto lsnr
[grid@o23c1n1s1 ~]$
######################################
#
#Node 02
#
######################################
[root@o23c1n2s1 ~]# date
Wed Nov 6 16:55:16 CET 2024
[root@o23c1n2s1 ~]# ps -ef |grep smon
grid 4178 1 0 14:46 ? 00:00:00 asm_smon_+ASM2
root 4368 1 0 14:47 ? 00:00:58 /u01/app/23.5.0.0/grid/bin/osysmond.bin
oracle 59640 1 0 16:03 ? 00:00:00 ora_smon_o19c2
root 90480 58993 0 16:55 pts/0 00:00:00 grep --color=auto smon
[root@o23c1n2s1 ~]# ps -ef |grep lsnr
root 4541 4442 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/crfelsnr -n o23c1n2s1
grid 5003 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 5158 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 5221 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 5255 1 0 14:47 ? 00:00:00 /u01/app/23.5.0.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
root 90489 58993 0 16:55 pts/0 00:00:00 grep --color=auto lsnr
[root@o23c1n2s1 ~]#
ZDOGIP with Gold Image in Silent mode
As mentioned in my previous post, the Gold Image is installed in two steps. The first is the installation itself, and the second is the switch between the old and new GI home. And here, since we are using silent mode, we will use a dedicated response file for each step.
Step 01 – Installing the software
The first step is the installation of the software: it is installed on both nodes and the oraInventory is updated accordingly. To do that we use gridSetup.sh with the response file below. Pay attention to these important options:
- installOption: Needs to be CRS_SWONLY
- executeRootScript: Set to false here because we call root.sh manually after the installation (you can have the installer run it via sudo if you prefer, but check the related parameters)
- OSDBA, OSOPER, OSASM: Define the O.S. groups used by the current GI installation. You can check the current values in CURRENT_GI_HOME/rdbms/lib/config.c (/u01/app/23.5.0.0/grid/rdbms/lib/config.c in my case); see the quick sketch after this list
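A quick way to pull those groups out of config.c is shown below. This is only a minimal sketch, assuming the usual SS_DBA_GRP/SS_OPER_GRP/SS_ASM_GRP defines; verify the names against your own file:

# List the OS group defines from the current (23.5) GI home used in this post.
grep -E 'SS_(DBA|OPER|ASM)_GRP' /u01/app/23.5.0.0/grid/rdbms/lib/config.c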
If you want to change parameters or check possible values, look at the file CURRENT_GI_HOME/install/response/gridsetup.rsp, which documents all the values that can be set (/u01/app/23.6.0.0/grid/install/response/gridsetup.rsp in my case).
So, my response file is below (please adapt the paths for your Oracle folders and the hostnames). Some of these parameters are not strictly needed (like the disk group and SCAN information); I set them just to avoid any modification of my current installation (and please adapt them for your case – like GNS and IPMI):
[grid@o23c1n1s1 ~]$ cat /u01/install/grid-crs.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v23.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
installOption=CRS_SWONLY
ORACLE_BASE=/u01/app/grid
clusterUsage=RAC
OSDBA=asmdba
OSOPER=asmoper
OSASM=asmadmin
scanType=LOCAL_SCAN
configureGNS=false
configureDHCPAssignedVIPs=false
clusterNodes=o23c1n1s1.oralocal,o23c1n2s1.oralocal
storageOption=FLEX_ASM_STORAGE
useIPMI=false
diskGroupName=DATA
redundancy=NORMAL
auSize=1
configureAFD=false
ignoreDownNodes=false
configureBackupDG=false
backupDGName=RECO
backupDGRedundancy=NORMAL
backupDGAUSize=1
managementOption=NONE
executeRootScript=false
enableAutoFixup=false
[grid@o23c1n1s1 ~]$
After that, I can call gridSetup.sh (on the first node only), passing the response file as a parameter:
[grid@o23c1n1s1 ~]$ cd /u01/app/23.6.0.0/grid/
[grid@o23c1n1s1 grid]$ ./gridSetup.sh -waitforcompletion -silent -responseFile /u01/install/grid-crs.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

*********************************************
Swap Size: This is a prerequisite condition to test whether sufficient total swap space is available on the system.
Severity: IGNORABLE
Overall status: VERIFICATION_FAILED
Error message: PRVF-7573 : Sufficient swap size is not available on node "o23c1n2s1" [Required = 11.6753GB (1.224248E7KB) ; Found = 3.9648GB (4157436.0KB)]
Cause: The swap size found does not meet the minimum requirement.
Action: Increase swap size to at least meet the minimum swap space requirement.
-----------------------------------------------
Error message: PRVF-7573 : Sufficient swap size is not available on node "o23c1n1s1" [Required = 11.6753GB (1.224248E7KB) ; Found = 3.9648GB (4157436.0KB)]
Cause: The swap size found does not meet the minimum requirement.
Action: Increase swap size to at least meet the minimum swap space requirement.
-----------------------------------------------

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2024-11-06_04-55-41PM/gridSetupActions2024-11-06_04-55-41PM.log.
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2024-11-06_04-55-41PM/gridSetupActions2024-11-06_04-55-41PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.

The response file for this session can be found at:
 /u01/app/23.6.0.0/grid/install/response/grid_2024-11-06_04-55-41PM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2024-11-06_04-55-41PM/gridSetupActions2024-11-06_04-55-41PM.log

As a root user, run the following script(s):
        1. /u01/app/23.6.0.0/grid/root.sh

Run /u01/app/23.6.0.0/grid/root.sh on the following nodes:
[o23c1n1s1, o23c1n2s1]

Successfully Setup Software with warning(s).
[grid@o23c1n1s1 grid]$
[grid@o23c1n1s1 grid]$ cd
[grid@o23c1n1s1 ~]$
As you can see, the parameters "-silent -responseFile" were used and the installation completed. The warnings in my case were related to swap space (and I ignored them).
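If you want to check the swap situation yourself before deciding to ignore (or fix) this warning, standard Linux commands are enough; nothing here is Oracle-specific:

# Run on each node: configured swap devices and overall memory/swap usage.
swapon --show
free -h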
At the end, I was asked to call root.sh on both nodes:
######################################
#
#Node 01
#
######################################
[root@o23c1n1s1 ~]# /u01/app/23.6.0.0/grid/root.sh
Check /u01/app/23.6.0.0/grid/install/root_o23c1n1s1.oralocal_2024-11-06_17-01-09-556780504.log for the output of root script
[root@o23c1n1s1 ~]# cat /u01/app/23.6.0.0/grid/install/root_o23c1n1s1.oralocal_2024-11-06_17-01-09-556780504.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/23.6.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/23.6.0.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@o23c1n1s1 ~]#
######################################
#
#Node 02
#
######################################
[root@o23c1n2s1 ~]# /u01/app/23.6.0.0/grid/root.sh
Check /u01/app/23.6.0.0/grid/install/root_o23c1n2s1.oralocal_2024-11-06_17-01-39-106163860.log for the output of root script
[root@o23c1n2s1 ~]# cat /u01/app/23.6.0.0/grid/install/root_o23c1n2s1.oralocal_2024-11-06_17-01-39-106163860.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/23.6.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/23.6.0.0/grid/gridSetup.sh
This command launches the Grid Infrastructure Setup Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.

[root@o23c1n2s1 ~]#
Step 02 – ZDOGIP and Home Switch
The next step is the ZDOGIP itself: the switch of the GI home. Since it is an additional step, I used a dedicated response file with some special/required parameters compared with the previous one:
- installOption: Needs to be defined as PATCH
- zeroDowntimeGIPatching: Needs to be defined as true
- skipDriverUpdate: Needs to be defined as true to have zero downtime; otherwise, the driver update will stop your databases. Doing that means your ACFS/AFD drivers will not be updated (as I already explained in my previous post)
So, my response file for this step was:
[grid@o23c1n1s1 ~]$ cat /u01/install/grid-patch.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v23.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
installOption=PATCH
ORACLE_BASE=/u01/app/grid
clusterUsage=RAC
zeroDowntimeGIPatching=true
skipDriverUpdate=true
OSDBA=asmdba
OSOPER=asmoper
OSASM=asmadmin
scanType=LOCAL_SCAN
configureAsExtendedCluster=false
configureGNS=false
configureDHCPAssignedVIPs=false
clusterNodes=o23c1n1s1.oralocal,o23c1n2s1.oralocal
storageOption=FLEX_ASM_STORAGE
useIPMI=false
diskGroupName=DATA
redundancy=NORMAL
auSize=1
configureAFD=false
ignoreDownNodes=false
configureBackupDG=false
backupDGName=RECO
backupDGRedundancy=NORMAL
backupDGAUSize=1
managementOption=NONE
executeRootScript=false
enableAutoFixup=false
[grid@o23c1n1s1 ~]$
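If you build this second response file by copying the first one, a quick diff (using the file names from this post) makes it easy to review that only the intended parameters changed, namely installOption, zeroDowntimeGIPatching, skipDriverUpdate, and configureAsExtendedCluster:

# Compare the install and patch response files; only the patch-specific lines should differ.
diff /u01/install/grid-crs.rsp /u01/install/grid-patch.rsp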
And now we can call gridSetup.sh again, specifying the new response file:
[grid@o23c1n1s1 ~]$ date
Wed Nov 6 17:03:03 CET 2024
[grid@o23c1n1s1 ~]$ /u01/app/23.6.0.0/grid/gridSetup.sh -waitforcompletion -silent -responseFile /u01/install/grid-patch.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

As a root user, run the following script(s):
        1. /u01/app/23.6.0.0/grid/root.sh

Run /u01/app/23.6.0.0/grid/root.sh on the following nodes:
[o23c1n1s1, o23c1n2s1]

Run the scripts on the local node first. After successful completion, run the scripts in sequence on all other nodes.

Successfully Setup Software.
[grid@o23c1n1s1 ~]$ date
Wed Nov 6 17:05:01 CET 2024
[grid@o23c1n1s1 ~]$
Above you can see that I just specified the response file; there was no need to add any extra command-line parameters, because everything was already in the response file.
At the end we need to call root.sh on each node. This is where ZDOGIP happens and the GI home is switched:
######################################
#
#Node 01
#
######################################
[root@o23c1n1s1 ~]# date
Wed Nov 6 17:05:45 CET 2024
[root@o23c1n1s1 ~]# /u01/app/23.6.0.0/grid/root.sh
Check /u01/app/23.6.0.0/grid/install/root_o23c1n1s1.oralocal_2024-11-06_17-05-50-153163899.log for the output of root script
[root@o23c1n1s1 ~]#
[root@o23c1n1s1 ~]# date
Wed Nov 6 17:09:52 CET 2024
[root@o23c1n1s1 ~]#
[root@o23c1n1s1 ~]# cat /u01/app/23.6.0.0/grid/install/root_o23c1n1s1.oralocal_2024-11-06_17-05-50-153163899.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/23.6.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
RAC option enabled on: Linux
Executing command '/u01/app/23.6.0.0/grid/perl/bin/perl -I/u01/app/23.6.0.0/grid/perl/lib -I/u01/app/23.6.0.0/grid/crs/install /u01/app/23.6.0.0/grid/crs/install/rootcrs.pl -dstcrshome /u01/app/23.6.0.0/grid -transparent -nodriverupdate -prepatch'
Using configuration parameter file: /u01/app/23.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/o23c1n1s1/crsconfig/crs_prepatch_apply_oop_o23c1n1s1_2024-11-06_05-05-50PM.log

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: o23c1n1s1:/ ...PASSED
  Software home: /u01/app/23.5.0.0/grid ...PASSED

Pre-check for Patch Application was successful.

CVU operation performed: stage -pre patch
Date: Nov 6, 2024, 5:06:12 PM
CVU version: 23.5.0.24.7 (070324x8664)
Clusterware version: 23.0.0.0.0
CVU home: /u01/app/23.5.0.0/grid
Grid home: /u01/app/23.5.0.0/grid
User: grid
Operating system: Linux5.4.17-2136.324.5.3.el8uek.x86_64
2024/11/06 17:06:36 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Executing command '/u01/app/23.6.0.0/grid/perl/bin/perl -I/u01/app/23.6.0.0/grid/perl/lib -I/u01/app/23.6.0.0/grid/crs/install /u01/app/23.6.0.0/grid/crs/install/rootcrs.pl -dstcrshome /u01/app/23.6.0.0/grid -transparent -nodriverupdate -postpatch'
Using configuration parameter file: /u01/app/23.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/o23c1n1s1/crsconfig/crs_postpatch_apply_oop_o23c1n1s1_2024-11-06_05-06-36PM.log
2024/11/06 17:07:16 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2024/11/06 17:07:43 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/11/06 17:08:29 CLSRSC-4015: Performing install or upgrade action for Oracle Autonomous Health Framework (AHF).
2024/11/06 17:08:29 CLSRSC-4012: Shutting down Oracle Autonomous Health Framework (AHF).
2024/11/06 17:09:41 CLSRSC-4013: Successfully shut down Oracle Autonomous Health Framework (AHF).
2024/11/06 17:09:46 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@o23c1n1s1 ~]#
######################################
#
#Node 02
#
######################################
[root@o23c1n2s1 ~]# date
Wed Nov 6 17:11:17 CET 2024
[root@o23c1n2s1 ~]# /u01/app/23.6.0.0/grid/root.sh
Check /u01/app/23.6.0.0/grid/install/root_o23c1n2s1.oralocal_2024-11-06_17-11-18-970667622.log for the output of root script
[root@o23c1n2s1 ~]# date
Wed Nov 6 17:16:48 CET 2024
[root@o23c1n2s1 ~]#
[root@o23c1n2s1 ~]# cat /u01/app/23.6.0.0/grid/install/root_o23c1n2s1.oralocal_2024-11-06_17-11-18-970667622.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/23.6.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
RAC option enabled on: Linux
Executing command '/u01/app/23.6.0.0/grid/perl/bin/perl -I/u01/app/23.6.0.0/grid/perl/lib -I/u01/app/23.6.0.0/grid/crs/install /u01/app/23.6.0.0/grid/crs/install/rootcrs.pl -dstcrshome /u01/app/23.6.0.0/grid -transparent -nodriverupdate -prepatch'
Using configuration parameter file: /u01/app/23.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/o23c1n2s1/crsconfig/crs_prepatch_apply_oop_o23c1n2s1_2024-11-06_05-11-20PM.log

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: o23c1n2s1:/ ...PASSED
  Software home: /u01/app/23.5.0.0/grid ...PASSED

Pre-check for Patch Application was successful.

CVU operation performed: stage -pre patch
Date: Nov 6, 2024, 5:11:43 PM
CVU version: 23.5.0.24.7 (070324x8664)
Clusterware version: 23.0.0.0.0
CVU home: /u01/app/23.5.0.0/grid
Grid home: /u01/app/23.5.0.0/grid
User: grid
Operating system: Linux5.4.17-2136.324.5.3.el8uek.x86_64
2024/11/06 17:12:01 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Executing command '/u01/app/23.6.0.0/grid/perl/bin/perl -I/u01/app/23.6.0.0/grid/perl/lib -I/u01/app/23.6.0.0/grid/crs/install /u01/app/23.6.0.0/grid/crs/install/rootcrs.pl -dstcrshome /u01/app/23.6.0.0/grid -transparent -nodriverupdate -postpatch'
Using configuration parameter file: /u01/app/23.6.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/o23c1n2s1/crsconfig/crs_postpatch_apply_oop_o23c1n2s1_2024-11-06_05-12-02PM.log
2024/11/06 17:12:53 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2024/11/06 17:13:23 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/11/06 17:14:17 CLSRSC-4015: Performing install or upgrade action for Oracle Autonomous Health Framework (AHF).
2024/11/06 17:14:17 CLSRSC-4012: Shutting down Oracle Autonomous Health Framework (AHF).
2024/11/06 17:15:35 CLSRSC-4013: Successfully shut down Oracle Autonomous Health Framework (AHF).
Initializing ...
Performing following verification checks ...

  cluster upgrade state ...PASSED

Post-check for Patch Application was successful.

CVU operation performed: stage -post patch
Date: Nov 6, 2024, 5:15:59 PM
CVU version: 23.6.0.24.10 (100824x8664)
Clusterware version: 23.0.0.0.0
CVU home: /u01/app/23.6.0.0/grid
Grid home: /u01/app/23.6.0.0/grid
User: grid
Operating system: Linux5.4.17-2136.324.5.3.el8uek.x86_64
2024/11/06 17:16:42 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@o23c1n2s1 ~]#
To double-check, we can see that ASM and the listeners were restarted, but the databases continued to run without downtime. Compare the start time of each process with the output of the same commands executed earlier:
######################################
#
#Node 01
#
######################################
[root@o23c1n1s1 ~]# date
Wed Nov 6 17:10:00 CET 2024
[root@o23c1n1s1 ~]# ps -ef |grep smon
oracle 65855 1 0 16:03 ? 00:00:00 ora_smon_o19c1
root 120680 1 1 17:07 ? 00:00:01 /u01/app/23.6.0.0/grid/bin/osysmond.bin
grid 122679 1 0 17:08 ? 00:00:00 asm_smon_+ASM1
root 129081 114991 0 17:10 pts/1 00:00:00 grep --color=auto smon
[root@o23c1n1s1 ~]# ps -ef |grep lsnr
root 120740 120723 0 17:07 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/crfelsnr -n o23c1n1s1
grid 122476 1 0 17:08 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 122482 1 0 17:08 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
grid 122515 1 0 17:08 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
root 129145 114991 0 17:10 pts/1 00:00:00 grep --color=auto lsnr
[root@o23c1n1s1 ~]# date
Wed Nov 6 17:10:08 CET 2024
[root@o23c1n1s1 ~]#
######################################
#
#Node 02
#
######################################
[root@o23c1n2s1 ~]# date
Wed Nov 6 17:16:54 CET 2024
[root@o23c1n2s1 ~]# ps -ef |grep smon
oracle 59640 1 0 16:03 ? 00:00:00 ora_smon_o19c2
root 122846 1 1 17:13 ? 00:00:02 /u01/app/23.6.0.0/grid/bin/osysmond.bin
grid 128047 1 0 17:14 ? 00:00:00 asm_smon_+ASM2
root 145174 114895 0 17:16 pts/1 00:00:00 grep --color=auto smon
[root@o23c1n2s1 ~]# ps -ef |grep lsnr
root 123060 122936 0 17:13 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/crfelsnr -n o23c1n2s1
grid 126254 1 0 17:13 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 126291 1 0 17:13 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
grid 126396 1 0 17:13 ? 00:00:00 /u01/app/23.6.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
root 145367 114895 0 17:16 pts/1 00:00:00 grep --color=auto lsnr
[root@o23c1n2s1 ~]# date
Wed Nov 6 17:16:59 CET 2024
[root@o23c1n2s1 ~]#
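These extra checks are not from the GUI post, but a few standard Clusterware commands can also confirm that the new home is now the active one. A minimal sketch, run as the grid user on either node (adjust the path if yours differs):

# Active Clusterware version and per-node patch level after the switch.
/u01/app/23.6.0.0/grid/bin/crsctl query crs activeversion -f
/u01/app/23.6.0.0/grid/bin/crsctl query crs softwarepatch
# The OLR configuration records which home is currently registered as the CRS home.
cat /etc/oracle/olr.loc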
ACFS and AFD kernel drivers
As defined during the installation, these were not updated. The response file specified skipDriverUpdate as true, so the new binaries were installed but the kernel drivers were not touched. In an additional post, I will show how to update them. Again, I recommend reading my previous post for more details about the installed and active versions.
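If you want to see the installed versus loaded driver versions yourself, the driver state tools shipped in the GI home can be used. This is only a sketch, assuming ACFS and AFD are present in your configuration (as they are in mine); run it from the new home as root or grid:

# Report the ACFS and AFD driver versions known to the new 23.6 home.
/u01/app/23.6.0.0/grid/bin/acfsdriverstate version
/u01/app/23.6.0.0/grid/bin/afddriverstate version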
Conclusion
It is simple: we can use silent mode to do Zero-Downtime Oracle Grid Infrastructure Patching even with the Gold Image. And by the way, the same approach works for the RU (Release Update) as well; the process is more or less the same.
Using silent mode is an easy way to run this remotely or with automation scripts. It is a simple process: install the software first, and later call the switch. If you look above, I added output from the Linux date command (before and after the script calls) so you can see how long each step takes, which makes it easy to plan the needed maintenance time.
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies, or opinions. The information here was edited to be useful for general purposes, and specific data and identifications were removed to allow reach the generic audience and to be useful for the community. Post protected by copyright.”