Oracle 21c delivered a lot of new features, and for Grid Infrastructure one of the most interesting is the zero-downtime patch (zeroDowntimeGIPatching). It basically allows your databases to keep running while you patch/upgrade your GI. The official doc can be seen here. You can think of it as an evolution of the Out of Place (OOP) patch for GI.
In this post I will show how to do that, but first some details:
- This post shows how to do the zero-downtime patch using GUI mode.
- I will do another post showing how to do the same procedure in silent mode, so it can be automated.
- In a third post, I will detail how the zero-downtime patch works behind the scenes and discuss some logs.
Current Environment
My current environment is:
- OEL 8.4 Kernel 5.4.17-2102.201.3.el8uek.x86_64.
- Oracle GI 21c, version 21.3, with no one-off patches or RUs installed.
- Oracle Database 21c, RU 21.5 (with OCW 21.5).
- TFA version 21.4 (the latest available as of March 2022).
- Nodes are not using Transparent HugePages.
- It is a two-node RAC installation.
You can see the output for the info above in this txt file.
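If you want to reproduce these checks, a minimal sketch is below (the THP path is the standard one on OEL 8; adjust if your distribution differs):

##################################################################################
#
#Quick environment checks (sketch)
#
##################################################################################
uname -r
# Expected here: 5.4.17-2102.201.3.el8uek.x86_64
cat /sys/kernel/mm/transparent_hugepage/enabled
# The value in brackets is the active one; "[never]" means THP is disabled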
And I will apply the 21.5 RU (21.5.0.0.220118) for GI, which is patch 33531909.
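If you want to double-check that the running 21.3 home really has no extra patches on top (as stated above), a sketch like this can be used (the home path is the one from my environment):

/u01/app/21.0.0.0/grid/OPatch/opatch lspatches -oh /u01/app/21.0.0.0/grid
# On a plain 21.3 base install you should see only the base 21.3 components,
# with no one-offs or RUs on top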
ACFS and AFD kernel drivers (pre-patch)
— Please read my post dedicated to ACFS and AFD Kernel drivers here —
One important detail of the patch process is to be aware that the RU will probably include new kernel drivers for ACFS, AFD, and even asmlib. If we patch directly (and do not take care of this), the new drivers will be installed on the system and CRS will not start without a complete reboot. And since we want zero database downtime here, this will not work. So, I will show you how to handle this correctly too.
My system is using ASM Filter Driver (AFD), so the kernel modules for the 21.3 version are installed at both nodes:
##################################################################################
#
#Checking the current AFD and ACFS drivers at node 01
#
##################################################################################
[grid@oel8n1-21c ~]$ acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 210701.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.3.0.0.0).
ACFS-9547: Driver available build number = 210701.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.3.0.0.0).
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$ /u01/app/21.0.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 210701.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 210701.
AFD-9548: Driver available build version = 21.0.0.0.0.
[grid@oel8n1-21c ~]$

##################################################################################
#
#Checking the current AFD and ACFS drivers at node 02
#
##################################################################################
[grid@oel8n2-21c ~]$ /u01/app/21.0.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 210701.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.3.0.0.0).
ACFS-9547: Driver available build number = 210701.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.3.0.0.0).
[grid@oel8n2-21c ~]$
[grid@oel8n2-21c ~]$ /u01/app/21.0.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 210701.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 210701.
AFD-9548: Driver available build version = 21.0.0.0.0.
[grid@oel8n2-21c ~]$
As you can see above, my drivers for both nodes are 21.3. And we can check this using the CRS as well:
##################################################################################
#
#Check the current ACFS and AFD drivers version for all nodes
#
##################################################################################
[grid@oel8n1-21c ~]$ crsctl query driver activeversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$ crsctl query driver softwareversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
[grid@oel8n1-21c ~]$
Patch Process
Unzip files and OPatch
The files that you will need are:
- The base version of GI 21.3.
- GI RU 21.5.
- The latest version of OPatch for 21c.
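Before unzipping anything, it is worth confirming that the three files are staged and intact. A minimal sketch (the file names are the ones used later in this post):

cd /u01/install/21.5
ls -lh V1011504-01.zip p33531909_210000_Linux-x86-64.zip p6880880_210000_Linux-x86-64.zip
# Optionally, compare the checksums against the values published on MOS/edelivery:
sha256sum *.zip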
The process starts by creating (at all nodes) the folder that will store the new GI home (be careful with the ownership):
##################################################################################
#
#Creating the new directories for GI at node01
#
##################################################################################
[root@oel8n1-21c ~]# mkdir -p /u01/app/21.5.0.0/grid
[root@oel8n1-21c ~]# chown grid /u01/app/21.5.0.0/grid
[root@oel8n1-21c ~]# chgrp -R oinstall /u01/app/21.5.0.0/grid
[root@oel8n1-21c ~]#

##################################################################################
#
#Creating the new directories for GI at node02
#
##################################################################################
[root@oel8n2-21c ~]# mkdir -p /u01/app/21.5.0.0/grid
[root@oel8n2-21c ~]# chown grid /u01/app/21.5.0.0/grid
[root@oel8n2-21c ~]# chgrp -R oinstall /u01/app/21.5.0.0/grid
[root@oel8n2-21c ~]#
And after that, with the GI home user, we can unzip version 21.3 into the new folder (only at the first node):
##################################################################################
#
#Unzip the binaries as GRID user at node01
#
##################################################################################
[root@oel8n1-21c ~]# su - grid
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$
[grid@oel8n1-21c ~]$ cd /u01/install/21.5
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$ unzip -q V1011504-01.zip -d /u01/app/21.5.0.0/grid
[grid@oel8n1-21c 21.5]$
After that, we can update OPatch in the newly unzipped GI home:
##################################################################################
#
#Updating opatch with the last version for 21c
#
##################################################################################
[grid@oel8n1-21c 21.5]$ cp -R /u01/app/21.5.0.0/grid/OPatch ./OPatch-ORG
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$ unzip -q p6880880_210000_Linux-x86-64.zip -d /u01/app/21.5.0.0/grid
replace /u01/app/21.5.0.0/grid/OPatch/README.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$ /u01/app/21.5.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.28

OPatch succeeded.
[grid@oel8n1-21c 21.5]$
Now we can unzip the RU (as the GI user) into its own dedicated folder at node 01 (not inside the GI home):
##################################################################################
#
#Continuing to unzip the files (now the patch 21.5)
#
##################################################################################
[grid@oel8n1-21c 21.5]$ pwd
/u01/install/21.5
[grid@oel8n1-21c 21.5]$
[grid@oel8n1-21c 21.5]$ unzip -q p33531909_210000_Linux-x86-64.zip
[grid@oel8n1-21c 21.5]$
At this moment we have:
- 21.3 GI installed and running at /u01/app/21.0.0.0
- 21.3 GI unzipped at /u01/app/21.5.0.0
- OPatch 12.2.0.1.28 unzipped at /u01/app/21.5.0.0
- 21.5 RU unzipped at /u01/install/21.5/33531909
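A quick sanity check of this layout before calling the installer (a sketch using the paths from this post):

ls -d /u01/app/21.0.0.0/grid /u01/app/21.5.0.0/grid /u01/install/21.5/33531909
/u01/app/21.5.0.0/grid/OPatch/opatch version
# Should report 12.2.0.1.28
/u01/app/21.0.0.0/grid/bin/crsctl check crs
# The old home still owns the running stack at this point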
Running systems
Before starting the patch, I would like to show the currently running processes. We have:
##################################################################################
#
#This shows the current SMON and the Listeners running at node 01.
#PLEASE look at the times that they started to run
#
##################################################################################
[root@oel8n1-21c 21.5]# date
Sat Mar 12 21:10:42 CET 2022
[root@oel8n1-21c 21.5]#
[root@oel8n1-21c 21.5]# ps -ef |grep smon
root 3292 1 1 17:50 ? 00:02:05 /u01/app/21.0.0.0/grid/bin/osysmond.bin
grid 4171 1 0 17:51 ? 00:00:00 asm_smon_+ASM1
oracle 173111 1 0 21:06 ? 00:00:00 ora_smon_orcl21c1
root 176337 10902 0 21:10 pts/0 00:00:00 grep --color=auto smon
[root@oel8n1-21c 21.5]#
[root@oel8n1-21c 21.5]# ps -ef |grep lsnr
grid 5411 1 0 17:51 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 5516 1 0 17:51 ? 00:00:05 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 5611 1 0 17:51 ? 00:00:04 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
grid 5629 1 0 17:51 ? 00:00:05 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
root 176390 10902 0 21:10 pts/0 00:00:00 grep --color=auto lsnr
[root@oel8n1-21c 21.5]#
[root@oel8n1-21c 21.5]# date
Sat Mar 12 21:10:56 CET 2022
[root@oel8n1-21c 21.5]#

##################################################################################
#
#This shows the current SMON and the Listeners running at node 02.
#PLEASE look at the times that they started to run
#
##################################################################################
[root@oel8n2-21c ~]# date
Sat Mar 12 21:11:18 CET 2022
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# ps -ef |grep smon
root 3045 1 0 17:50 ? 00:01:42 /u01/app/21.0.0.0/grid/bin/osysmond.bin
grid 20878 1 0 17:53 ? 00:00:00 asm_smon_+ASM2
oracle 218419 1 0 21:06 ? 00:00:00 ora_smon_orcl21c2
root 221493 221424 0 21:11 pts/1 00:00:00 grep --color=auto smon
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# ps -ef |grep lsnr
grid 5843 1 0 17:52 ? 00:00:13 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 18182 1 0 17:53 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 18706 1 0 17:53 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
root 221659 221424 0 21:11 pts/1 00:00:00 grep --color=auto lsnr
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# date
Sat Mar 12 21:11:34 CET 2022
[root@oel8n2-21c ~]#
As you can see above, the database started around 21:06 at both nodes, and the listeners have been running since around 17:50. Please remember these timeframes for the next steps.
As an example, I created one table in the database and left two scripts running:
- The first is a loop that connects to the database using the SCAN, inserts into the table, and records the instance it connected to. This simulates connections coming from the application side that are load-balanced by the listener, sometimes going to node01 and sometimes to node02. During the patch you will see that the node being patched does not receive connections while its listeners restart.
- The second is an open connection at instance 01 that runs a PL/SQL block. This simulates an established connection to the database, and you will see that it continues to run uninterrupted even during the GI patch.
The scripts:
[oracle@orcloel7 ~]$ for i in {1..100000}
> do
> echo "Insert Data $i "`date +%d-%m-%Y-%H%M%S`
> sqlplus -s sys/oracle@oel8-21c-scan.oralocal/PDB21C as sysdba<<EOF
> set heading on feedback on;
> insert into t1(c1, c2, c3) values (SYS_CONTEXT ('USERENV', 'INSTANCE'), 'Loop - EZconnect', sysdate);
> commit;
> EOF
> done
Insert Data 1 12-03-2022-214357

1 row created.

Commit complete.

Insert Data 2 12-03-2022-214358
…
…

[oracle@oel8n1-21c ~]$ sqlplus / as sysdba

SQL*Plus: Release 21.0.0.0.0 - Production on Sat Mar 12 21:49:04 2022
Version 21.5.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.5.0.0.0

SQL> alter session set container = PDB21C;

Session altered.

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2    lDatMax DATE := (sysdate + 40/1440);
  3  BEGIN
  4    WHILE (sysdate <= (lDatMax)) LOOP
  5      insert into t1(c1, c2, c3) values (SYS_CONTEXT ('USERENV', 'INSTANCE'), 'Loop - Sqlplus', sysdate);
  6      commit;
  7      dbms_session.sleep(0.5);
  8    END LOOP;
  9  END;
 10  /
After some time running them, we have:
SQL> select count(*), c1, c2, to_char(max(c3), 'DD/MM/RRRR HH24:MI:SS') as last_ins, to_char(min(c3), 'DD/MM/RRRR HH24:MI:SS') as first_ins from t1 group by c1, c2;

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
       903          2 Loop - EZconnect               12/03/2022 21:50:47 12/03/2022 21:43:58
      1239          1 Loop - EZconnect               12/03/2022 21:50:47 12/03/2022 21:44:07
        27          1 Loop - Sqlplus                 12/03/2022 21:50:46 12/03/2022 21:50:33

SQL>
SQL> /

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1395          2 Loop - EZconnect               12/03/2022 21:52:25 12/03/2022 21:43:58
      1349          1 Loop - EZconnect               12/03/2022 21:52:20 12/03/2022 21:44:07
       223          1 Loop - Sqlplus                 12/03/2022 21:52:25 12/03/2022 21:50:33

SQL>
So, you can see that there are inserts from EZConnect at both instances, and from SQL*Plus only at instance 01.
Patching
To call the patch, we just use gridSetup.sh and pass the parameters:
- applyRU: This applies the RU patch to the new (still unconfigured) home before the installation of the GI itself starts.
- switchGridHome: This informs the install process that the GI will move from the old home to the new one. It is basically the OOP patch.
- zeroDowntimeGIPatching: This is the new feature; it informs the patch process that the databases will continue to run.
- skipDriverUpdate: This tells the installer not to install the new kernel modules directly. They will be inside the GI home but not loaded. My hint: always assume that the GI patch will update the kernel drivers, so always call with this option to avoid unexpected problems.
To call the patch we do (as GI owner):
[grid@oel8n1-21c 21.5]$ cd /u01/app/21.5.0.0/grid
[grid@oel8n1-21c grid]$
[grid@oel8n1-21c grid]$ unset ORACLE_BASE
[grid@oel8n1-21c grid]$ unset ORACLE_HOME
[grid@oel8n1-21c grid]$ unset ORACLE_SID
[grid@oel8n1-21c grid]$
[grid@oel8n1-21c grid]$
[grid@oel8n1-21c grid]$ ./gridSetup.sh -applyRU /u01/install/21.5/33531909 -switchGridHome -zeroDowntimeGIPatching -skipDriverUpdate
ERROR: Unable to verify the graphical display setup. This application requires X display. Make sure that xdpyinfo exist under PATH variable.
Preparing the home to patch...
Applying the patch /u01/install/21.5/33531909...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2022-03-12_09-30-52PM/installerPatchActions_2022-03-12_09-30-52PM.log
Launching Oracle Grid Infrastructure Setup Wizard...
Calling this, the GUI will appear and the steps are basically Next/Next/Finish. Look at the gallery below (you can open each image in a new window to see all the details):
The installation will request that you run root.sh on each node. So, for node01 I called root.sh (please connect through the administrative network interface, not through a virtual interface managed by CRS), and this was the output (it started at 21:52 and finished around 21:57):
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/21.5.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/21.0.0.0/grid/lib:/u01/app/21.5.0.0/grid/lib:'
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n1-21c/crsconfig/rootcrs_oel8n1-21c_2022-03-12_09-52-40PM.log
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n1-21c/crsconfig/crs_prepatch_apply_oop_oel8n1-21c_2022-03-12_09-52-41PM.log
This software is "247" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 2731675.1 for more details.

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: oel8n1-21c:/ ...PASSED
  Free Space: oel8n2-21c:/ ...PASSED

Pre-check for Patch Application was successful.

CVU operation performed: stage -pre patch
Date: Mar 12, 2022 9:52:43 PM
Clusterware version: 21.0.0.0.0
CVU home: /u01/app/21.0.0.0/grid
Grid home: /u01/app/21.0.0.0/grid
User: grid
Operating system: Linux5.4.17-2102.201.3.el8uek.x86_64
2022/03/12 21:53:17 CLSRSC-347: Successfully unlock /u01/app/21.5.0.0/grid
2022/03/12 21:53:17 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n1-21c/crsconfig/crs_postpatch_apply_oop_oel8n1-21c_2022-03-12_09-53-18PM.log
Oracle Clusterware active version on the cluster is [21.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
CRS-1151: The cluster was successfully set to rolling patch mode.
2022/03/12 21:53:39 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2022/03/12 21:54:42 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2022/03/12 21:56:04 CLSRSC-4015: Performing install or upgrade action for Oracle Autonomous Health Framework (AHF).
2022/03/12 21:56:04 CLSRSC-4012: Shutting down Oracle Autonomous Health Framework (AHF).
2022/03/12 21:57:12 CLSRSC-4013: Successfully shut down Oracle Autonomous Health Framework (AHF).
Oracle Clusterware active version on the cluster is [21.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [0].
2022/03/12 21:57:17 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@oel8n1-21c ~]# 2022/03/12 21:57:43 CLSRSC-4003: Successfully patched Oracle Autonomous Health Framework (AHF).
After that, you can see that the database continues to run while CRS and the listeners got restarted. Look below: the database startup time remained the same (21:06) while the others are new:
[root@oel8n1-21c ~]# date
Sat Mar 12 22:05:19 CET 2022
[root@oel8n1-21c ~]# ps -ef |grep smon
oracle 173111 1 0 21:06 ? 00:00:00 ora_smon_orcl21c1
root 242362 1 1 21:55 ? 00:00:08 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 247444 1 0 21:55 ? 00:00:00 asm_smon_+ASM1
root 277152 220872 0 22:05 pts/3 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]# ps -ef |grep lsnr
grid 243605 1 0 21:55 ? 00:00:02 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 243935 1 0 21:55 ? 00:00:00 /u01/app/21.5.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 243959 1 0 21:55 ? 00:00:01 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
root 277222 220872 0 22:05 pts/3 00:00:00 grep --color=auto lsnr
[root@oel8n1-21c ~]# date
Sat Mar 12 22:05:24 CET 2022
[root@oel8n1-21c ~]#
And if we look at node02, the database continues the same, but we see that the SCAN listeners (LISTENER_SCAN1 and LISTENER_SCAN2) started at this node:
[root@oel8n2-21c ~]# date
Sat Mar 12 22:08:00 CET 2022
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# ps -ef |grep smon
root 3045 1 0 17:50 ? 00:02:16 /u01/app/21.0.0.0/grid/bin/osysmond.bin
grid 20878 1 0 17:53 ? 00:00:00 asm_smon_+ASM2
oracle 218419 1 0 21:06 ? 00:00:00 ora_smon_orcl21c2
root 293156 292560 0 22:08 pts/3 00:00:00 grep --color=auto smon
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# ps -ef |grep lsnr
grid 5843 1 0 17:52 ? 00:00:17 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 18182 1 0 17:53 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 264386 1 0 21:53 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 264403 1 0 21:53 ? 00:00:00 /u01/app/21.0.0.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
root 293390 292560 0 22:08 pts/3 00:00:00 grep --color=auto lsnr
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# date
Sat Mar 12 22:08:19 CET 2022
[root@oel8n2-21c ~]#
And now we can call root.sh at node02 (it started around 22:09 and finished at 22:15):
[root@oel8n2-21c ~]# date
Sat Mar 12 22:09:21 CET 2022
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME= /u01/app/21.5.0.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
LD_LIBRARY_PATH='/u01/app/21.0.0.0/grid/lib:/u01/app/21.5.0.0/grid/lib:'
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n2-21c/crsconfig/rootcrs_oel8n2-21c_2022-03-12_10-10-52PM.log
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n2-21c/crsconfig/crs_prepatch_apply_oop_oel8n2-21c_2022-03-12_10-10-53PM.log
This software is "247" days old. It is a best practice to update the CRS home by downloading and applying the latest release update. Refer to MOS note 2731675.1 for more details.

Performing following verification checks ...

  cluster upgrade state ...PASSED
  OLR Integrity ...PASSED
  Hosts File ...PASSED
  Free Space: oel8n1-21c:/ ...PASSED
  Free Space: oel8n2-21c:/ ...PASSED

Pre-check for Patch Application was successful.

CVU operation performed: stage -pre patch
Date: Mar 12, 2022 10:10:56 PM
Clusterware version: 21.0.0.0.0
CVU home: /u01/app/21.0.0.0/grid
Grid home: /u01/app/21.0.0.0/grid
User: grid
Operating system: Linux5.4.17-2102.201.3.el8uek.x86_64
2022/03/12 22:11:17 CLSRSC-347: Successfully unlock /u01/app/21.5.0.0/grid
2022/03/12 22:11:18 CLSRSC-671: Pre-patch steps for patching GI home successfully completed.
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n2-21c/crsconfig/crs_postpatch_apply_oop_oel8n2-21c_2022-03-12_10-11-19PM.log
Oracle Clusterware active version on the cluster is [21.0.0.0.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [0].
CRS-1152: The cluster is in rolling patch mode.
CRS-4000: Command Start failed, or completed with errors.
2022/03/12 22:11:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd_dummy.service'
2022/03/12 22:12:35 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2022/03/12 22:14:03 CLSRSC-4015: Performing install or upgrade action for Oracle Autonomous Health Framework (AHF).
2022/03/12 22:14:03 CLSRSC-4012: Shutting down Oracle Autonomous Health Framework (AHF).
2022/03/12 22:15:09 CLSRSC-4013: Successfully shut down Oracle Autonomous Health Framework (AHF).
Oracle Clusterware active version on the cluster is [21.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1452993786].

Performing following verification checks ...

  cluster upgrade state ...PASSED

Post-check for Patch Application was successful.

CVU operation performed: stage -post patch
Date: Mar 12, 2022 10:15:25 PM
Clusterware version: 21.0.0.0.0
CVU home: /u01/app/21.5.0.0/grid
Grid home: /u01/app/21.5.0.0/grid
User: grid
Operating system: Linux5.4.17-2102.201.3.el8uek.x86_64
2022/03/12 22:15:56 CLSRSC-672: Post-patch steps for patching GI home successfully completed.
[root@oel8n2-21c ~]# 2022/03/12 22:15:59 CLSRSC-4003: Successfully patched Oracle Autonomous Health Framework (AHF).
And we can see that the database was not restarted at either node01 or node02:
[root@oel8n1-21c ~]# date
Sat Mar 12 22:45:59 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# ps -ef |grep smon
oracle 173111 1 0 21:06 ? 00:00:00 ora_smon_orcl21c1
root 242362 1 1 21:55 ? 00:00:33 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 247444 1 0 21:55 ? 00:00:00 asm_smon_+ASM1
root 348370 220872 0 22:46 pts/3 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]# ps -ef |grep lsnr
grid 243605 1 0 21:55 ? 00:00:07 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 243935 1 0 21:55 ? 00:00:00 /u01/app/21.5.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 289735 1 0 22:11 ? 00:00:00 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 289757 1 0 22:11 ? 00:00:00 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
root 348397 220872 0 22:46 pts/3 00:00:00 grep --color=auto lsnr
[root@oel8n1-21c ~]# date
Sat Mar 12 22:46:10 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#

[root@oel8n2-21c ~]# date
Sat Mar 12 22:46:21 CET 2022
[root@oel8n2-21c ~]# ps -ef |grep smon
oracle 218419 1 0 21:06 ? 00:00:00 ora_smon_orcl21c2
root 316071 1 0 22:13 ? 00:00:19 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 317880 1 0 22:13 ? 00:00:00 asm_smon_+ASM2
root 366229 292560 0 22:46 pts/3 00:00:00 grep --color=auto smon
[root@oel8n2-21c ~]# ps -ef |grep lsnr
grid 317058 1 0 22:13 ? 00:00:03 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 317269 1 0 22:13 ? 00:00:00 /u01/app/21.5.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
grid 317509 1 0 22:13 ? 00:00:03 /u01/app/21.5.0.0/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
root 366234 292560 0 22:46 pts/3 00:00:00 grep --color=auto lsnr
[root@oel8n2-21c ~]# date
Sat Mar 12 22:46:28 CET 2022
[root@oel8n2-21c ~]#
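At this point it is also worth confirming that the cluster reports the new patch level and that the RU really lives in the new home. A minimal sketch (run from the new 21.5 home; the patch level value is the one reported by root.sh above):

/u01/app/21.5.0.0/grid/bin/crsctl query crs activeversion -f
# Expected: active version [21.0.0.0.0], upgrade state [NORMAL], patch level [1452993786]
/u01/app/21.5.0.0/grid/OPatch/opatch lspatches -oh /u01/app/21.5.0.0/grid
# The 21.5 RU components from patch 33531909 should be listed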
Remember the two insert loops that I left running? Around 21:54 (while root.sh from node01 was running) you can see that only the database was running at node01, and that the SQL*Plus session connected to node01 continued to insert data, while only instance 02 was receiving new connections through the SCAN listener (EZConnect). This shows that the database continues to run and insert data at node01 even without ASM/CRS (look at the LAST_INS column, a date column taken from the insert statement):
[root@oel8n1-21c ~]# date
Sat Mar 12 21:54:37 CET 2022
[root@oel8n1-21c ~]# ps -ef |grep smon
root 3292 1 1 17:50 ? 00:02:34 /u01/app/21.0.0.0/grid/bin/osysmond.bin
oracle 173111 1 0 21:06 ? 00:00:00 ora_smon_orcl21c1
root 231277 220872 0 21:54 pts/3 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]# ps -ef |grep lsnr
root 231765 220872 0 21:54 pts/3 00:00:00 grep --color=auto lsnr
[root@oel8n1-21c ~]# date
Sat Mar 12 21:54:41 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#

SQL> select count(*), c1, c2, to_char(max(c3), 'DD/MM/RRRR HH24:MI:SS') as last_ins, to_char(min(c3), 'DD/MM/RRRR HH24:MI:SS') as first_ins from t1 group by c1, c2;

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1878          2 Loop - EZconnect               12/03/2022 21:53:50 12/03/2022 21:43:58
      1353          1 Loop - EZconnect               12/03/2022 21:53:15 12/03/2022 21:44:07
       428          1 Loop - Sqlplus                 12/03/2022 21:54:09 12/03/2022 21:50:33

SQL> /

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1878          2 Loop - EZconnect               12/03/2022 21:53:50 12/03/2022 21:43:58
      1353          1 Loop - EZconnect               12/03/2022 21:53:15 12/03/2022 21:44:07
       500          1 Loop - Sqlplus                 12/03/2022 21:54:45 12/03/2022 21:50:33

SQL> /

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1878          2 Loop - EZconnect               12/03/2022 21:53:50 12/03/2022 21:43:58
      1353          1 Loop - EZconnect               12/03/2022 21:53:15 12/03/2022 21:44:07
       503          1 Loop - Sqlplus                 12/03/2022 21:54:46 12/03/2022 21:50:33

SQL> /

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1878          2 Loop - EZconnect               12/03/2022 21:53:50 12/03/2022 21:43:58
      1353          1 Loop - EZconnect               12/03/2022 21:53:15 12/03/2022 21:44:07
       506          1 Loop - Sqlplus                 12/03/2022 21:54:48 12/03/2022 21:50:33

SQL> l
  1* select count(*), c1, c2, to_char(max(c3), 'DD/MM/RRRR HH24:MI:SS') as last_ins, to_char(min(c3), 'DD/MM/RRRR HH24:MI:SS') as first_ins from t1 group by c1, c2
SQL> /

  COUNT(*)         C1 C2                             LAST_INS            FIRST_INS
---------- ---------- ------------------------------ ------------------- -------------------
      1879          2 Loop - EZconnect               12/03/2022 21:54:50 12/03/2022 21:43:58
      1353          1 Loop - EZconnect               12/03/2022 21:53:15 12/03/2022 21:44:07
       516          1 Loop - Sqlplus                 12/03/2022 21:54:53 12/03/2022 21:50:33

SQL>
The full output from the inserts can be seen in this file, and here. You can see that no errors occurred during the root.sh calls (check the LAST_INS column). I recommend that you look at the root.sh execution from both nodes above (check the times in the output) and search inside the files to match the times, verifying that no errors were reported due to failed connections or database unavailability.
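If you capture the output of the two loops to files, a simple way to scan them for connection or availability errors is a sketch like this (the log file names here are hypothetical):

# ezconnect-loop.log and sqlplus-loop.log are hypothetical capture files
grep -E "ORA-|TNS-|SP2-" ezconnect-loop.log sqlplus-loop.log
# No matches means no failed inserts or broken connections during the patch window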
ACFS and AFD kernel drivers (post-patch)
— Please read my post dedicated to ACFS and AFD Kernel drivers here —
Since we called the GI patch with the skipDriverUpdate option, the ACFS and AFD drivers were not updated. In my environment only AFD is in use, so the result is:
##################################################################################
#
#You can see that ACFS and AFD are in different versions
#This is expected because the AFD was not loaded into the kernel as requested - node01
#
##################################################################################
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 211031.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.4.0.0.0).
ACFS-9547: Driver available build number = 211031.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.4.0.0.0).
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 210701.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 211031.
AFD-9548: Driver available build version = 21.0.0.0.0.
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#

##################################################################################
#
#You can see that ACFS and AFD are in different versions
#This is expected because the AFD was not loaded into the kernel as requested - node02
#
##################################################################################
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 211031.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.4.0.0.0).
ACFS-9547: Driver available build number = 211031.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.4.0.0.0).
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 210701.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 211031.
AFD-9548: Driver available build version = 21.0.0.0.0.
[root@oel8n2-21c ~]#
And if we check with CRS, we can see that it knows that the active version is 21.3 for AFD (build 210701) and 21.4 for ACFS (build 211031). But it also knows that the available version for both (inside the GI home for 21.5) is the latest one (21.4, build 211031):
##################################################################################
#
#Check the current ACFS and AFD drivers version for all nodes
#
##################################################################################
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/crsctl query driver activeversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/crsctl query driver softwareversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
[root@oel8n1-21c ~]#
And as an example, even if I restart one node (node01), you can see that ASM/CRS restarts and the old driver continues to be used at the kernel level:
[root@oel8n1-21c ~]# uptime
 22:49:47 up 5:00, 3 users, load average: 1.02, 1.36, 1.60
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# reboot

login as: root
root@10.160.10.70's password:
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Sat Mar 12 22:53:39 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# date
Sat Mar 12 22:54:03 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# ps -ef |grep smon
root 3651 1 2 22:52 ? 00:00:02 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 4814 1 0 22:53 ? 00:00:00 asm_smon_+ASM1
oracle 6437 1 0 22:53 ? 00:00:00 ora_smon_orcl21c1
root 7070 6589 0 22:54 pts/0 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]# date
Sat Mar 12 22:54:22 CET 2022
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 211031.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.4.0.0.0).
ACFS-9547: Driver available build number = 211031.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.4.0.0.0).
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 210701.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 211031.
AFD-9548: Driver available build version = 21.0.0.0.0.
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/crsctl query driver activeversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
[root@oel8n1-21c ~]#
So, the solution in this case is to follow the documentation (here) and call rootcrs.sh with the parameter -updateosfiles. The procedure is simple but needs to be done on each node separately. Be aware that doing this requires downtime, because the database instances and the entire CRS stack will be restarted during the process. Going deeper, CRS will not start until you reboot the system, because the new kernel drivers (in the case of Linux) will not be loaded into memory. Maybe ksplice can be used to avoid the reboot, but I have not tested it. — Please read my post dedicated to ACFS and AFD Kernel drivers here; more updated information was provided.
So, at node 01, I did the following:
- Stopped instance 01 of the database and left just ASM running.
- Called rootcrs.sh -updateosfiles from the GI 21.5 home (after that, you can see that the CRS stack came down).
- Rebooted the server and checked that ASM came back up.
- Validated that the AFD driver was updated from build 210701 to 211031.
- Checked that CRS detected that only node01 had its drivers updated.
All of this output you can see below:
[root@oel8n1-21c ~]# su - oracle
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ export ORACLE_HOME=/u01/app/oracle/product/21.5.0.0/dbhome_1
[oracle@oel8n1-21c ~]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ srvctl status database -d orcl21c
Instance orcl21c1 is running on node oel8n1-21c
Instance orcl21c2 is running on node oel8n2-21c
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ srvctl stop instance -d orcl21c -i orcl21c1 -o immediate
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ exit
logout
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# date
Sat Mar 12 22:56:42 CET 2022
[root@oel8n1-21c ~]# ps -ef |grep smon
root 3651 1 1 22:52 ? 00:00:03 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 4814 1 0 22:53 ? 00:00:00 asm_smon_+ASM1
root 8359 6589 0 22:56 pts/0 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]# date
Sat Mar 12 22:56:48 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/crs/install/rootcrs.sh -updateosfiles
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n1-21c/crsconfig/crsupdate_osfiles_oel8n1-21c_2022-03-12_10-57-12PM.log
2022/03/12 22:57:15 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# ps -ef |grep smon
root 14675 6589 0 23:00 pts/0 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# reboot

login as: root
root@10.160.10.70's password:
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Sat Mar 12 23:08:17 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# ps -ef |grep smon
root 3850 1 2 23:07 ? 00:00:02 /u01/app/21.5.0.0/grid/bin/osysmond.bin
grid 5257 1 0 23:08 ? 00:00:00 asm_smon_+ASM1
root 6831 6568 0 23:08 pts/0 00:00:00 grep --color=auto smon
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# date
Sat Mar 12 23:09:00 CET 2022
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 211031.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.4.0.0.0).
ACFS-9547: Driver available build number = 211031.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.4.0.0.0).
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 211031.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 211031.
AFD-9548: Driver available build version = 21.0.0.0.0.
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/crsctl query driver activeversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 210701
BuildVersion : 21.0.0.0.0 (21.3.0.0.0)
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]# su - oracle
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ srvctl start instance -d orcl21c -i orcl21c1
[oracle@oel8n1-21c ~]$
[oracle@oel8n1-21c ~]$ logout
[root@oel8n1-21c ~]#
[root@oel8n1-21c ~]#
And after node01, we can do the same at node02:
[root@oel8n2-21c ~]# su - oracle
[oracle@oel8n2-21c ~]$
[oracle@oel8n2-21c ~]$ srvctl status database -d orcl21c
Instance orcl21c1 is running on node oel8n1-21c
Instance orcl21c2 is running on node oel8n2-21c
[oracle@oel8n2-21c ~]$
[oracle@oel8n2-21c ~]$ srvctl stop instance -d orcl21c -i orcl21c2 -o immediate
[oracle@oel8n2-21c ~]$
[oracle@oel8n2-21c ~]$ logout
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/crs/install/rootcrs.sh -updateosfiles
Using configuration parameter file: /u01/app/21.5.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oel8n2-21c/crsconfig/crsupdate_osfiles_oel8n2-21c_2022-03-12_11-13-53PM.log
2022/03/12 23:13:55 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# ps -ef |grep smon
root 411611 292560 0 23:17 pts/3 00:00:00 grep --color=auto smon
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# reboot

login as: root
root@10.160.10.75's password:
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Sat Mar 12 23:21:03 2022
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate version
ACFS-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
ACFS-9326: Driver build number = 211031.
ACFS-9212: Driver build version = 21.0.0.0.0 (21.4.0.0.0).
ACFS-9547: Driver available build number = 211031.
ACFS-9548: Driver available build version = 21.0.0.0.0 (21.4.0.0.0).
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# /u01/app/21.5.0.0/grid/bin/afddriverstate version
AFD-9325: Driver OS kernel version = 5.4.17-2011.0.7.el8uek.x86_64.
AFD-9326: Driver build number = 211031.
AFD-9212: Driver build version = 21.0.0.0.0.
AFD-9547: Driver available build number = 211031.
AFD-9548: Driver available build version = 21.0.0.0.0.
[root@oel8n2-21c ~]#
[root@oel8n2-21c ~]# su - oracle
[oracle@oel8n2-21c ~]$
[oracle@oel8n2-21c ~]$ srvctl start instance -d orcl21c -i orcl21c2
[oracle@oel8n2-21c ~]$
[oracle@oel8n2-21c ~]$ exit
logout
[root@oel8n2-21c ~]#
And after that, we can see that both nodes are updated with the AFD and ACFS drivers:
[root@oel8n1-21c ~]# /u01/app/21.5.0.0/grid/bin/crsctl query driver softwareversion -all
Node Name : oel8n1-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n1-21c
Driver Name : AFD
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : ACFS
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
Node Name : oel8n2-21c
Driver Name : AFD
BuildNumber : 211031
BuildVersion : 21.0.0.0.0 (21.4.0.0.0)
[root@oel8n1-21c ~]#
Conclusion
Zero-Downtime Oracle Grid Infrastructure Patching (zeroDowntimeGIPatching) is a really interesting feature of GI 21c from the MAA/HA perspective. The database continues to run and the downtime/outage is zero (as promised). But we need to take care of the details for the ACFS and AFD drivers: if we do not use the skipDriverUpdate option when calling gridSetup.sh, the database (and the CRS) will be stopped.
One point to add is that my environment had a fresh install of GI 21c; it did not come from an upgrade of 19c to 21c. When I tested in an environment where the GI was upgraded from 19c to 21c, I got problems while calling root.sh at the last node. The other nodes worked perfectly and the database continued to run, but at the last node the database restarted on both nodes due to a failure to write to the controlfile (ORA-221). An SR is open and we have been working on it for more than one month.
So, you need to be careful and test in other/similar environments before applying this in production. Oracle 21c is an Innovation Release, but it is always interesting to test new features that will become the basis for the next releases. If you reached this point, thanks for reading the entire post (I know that it was a long journey).
Disclaimer: “The postings on this site are my own and don’t necessarily represent my actual employer positions, strategies or opinions. The information here was edited to be useful for general purpose, specific data and identifications were removed to allow reach the generic audience and to be useful for the community. Post protected by copyright.”