Category Archives: Database

Exadata version 23.1.0.0.0 – Part 01

On 08/March/2023, the Oracle Exadata team released version 23.1.0.0.0, and it includes a significant change: OEL 8. But it is not just that; there are other interesting requirements, and I will discuss them below. I will show you how to patch to the 23.1 version, and some other details as well. In this first part, I will discuss just one interesting point that you need to take care of before you start to patch. And it is probably more important than you imagine.

Before you patch

The new version brings some requirements (on what you need to be running) to allow you to patch. For the Grid Infrastructure, you need to be running 19.15 or newer. You can even run 21c (21.6 or newer) if you want. If you want to know how to do that, I already discussed how to upgrade both in previous posts (19c and 21c).
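
If you are not sure which version you are running today, you can check it from any cluster node as the Grid owner. A minimal sketch, with an illustrative hostname and output (the base version always reports 19.0.0.0.0, so the RU level from OPatch is what tells you if you are at 19.15 or newer):

    [grid@node01 ~]$ crsctl query crs activeversion
    Oracle Clusterware active version on the cluster is [19.0.0.0.0]

    [grid@node01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches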

For databases, the recommendation is the same: 19c or 21c. You can still run older versions (11g, 12c, and 18c), but they are already (or will soon be) under Market Driven Support. You can read the MOS note about that (here), but to be clear, right now only 19c has Premier Support available.

And here things become quite interesting, because 23.1 is the first version running OEL 8. And if you check the supplemental README for the 23.1 version, only 19c is listed as supported for database and GI. So, be aware and check the compatibility.

One important detail for this version is that you can only upgrade to 23.1 if your current Exadata version is 21.2.10 or newer (basically, at most one year old). If not, you need to upgrade to (at least) that version before you patch to 23.1. And the same will apply one year from now: it will only be possible to upgrade to 24.x if you are already running (at least) 23.1.
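
Checking the image version you are currently running is simple. A minimal sketch using imageinfo on a database node (the output shown is illustrative):

    [root@dbnode01 ~]# imageinfo -ver
    21.2.10.0.0.220304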

If you are running an older Exadata with InfiniBand, your dom0 will only ever be updated up to Oracle Linux 7 with UEK5. The domU, however, can be upgraded to OEL 8, and you can upgrade in any order: dom0 first or domU first. If you are running RoCE, your dom0 can run the latest OEL 8 with UEK6. The blog post from Oracle gives an excellent explanation of the upgrade paths, and below you can see the images from their post.

So, as usual, the version includes everything: switches, storage, and database nodes. And while for switches and storage the patches are quite normal, for virtualized environments the upgrade paths become a little more challenging to plan. I will explain: as hinted in the blog post, you can upgrade the hosts and guests independently and in any order. And the hard part is not the patch apply itself, but creating the plan. Remember the requirements for Oracle Database and GI? You can easily spend more time patching other parts than the Exadata version itself.

But let’s put the pieces together, the small lines written in several places. With version 23.1, Oracle is telling you that you need to be running at least Oracle Database 19c to keep a continuous upgrade path for future releases (and future usage) of Exadata. And this holds whatever machine version you use, IB or RoCE network. You can no longer use GI older than 19.15, and the databases are pushed to that version as well. Imagine that you hit some incompatibility between 11g/12c and OEL 8: if you need to open an SR, you need to have (and pay for) that support, and it will not be cheap.

And think about the upcoming 23c (which will be the new LTS version): running on OEL 8 will be a requirement for it. Now imagine one year in the future, when the Exadata 24.x version arrives: do you think Oracle will still support 11g on the new OEL 9? I don’t think so.

And by the way, IMHO you should already be running 19c. 11g is from 2009, 12.1 from July 2013. So they are old and out of support for good reasons. I understand that they still work and that you may have legacy applications. But the point is not just keeping them supported; it is about remaining able to upgrade/update your Exadata. Please do not postpone your database upgrades any longer, for the sake of your Exadata.


Click here to read more…

Exadata, REQUIRED_MIRROR_FREE_MB and GRID 19.16

Starting with Grid Infrastructure/ASM 19.16, Oracle changed how REQUIRED_MIRROR_FREE_MB is calculated, and the impact is bigger than expected. Check below for examples of the changes and how they will impact you. This is valid for all GI/ASM starting with 19.16, and only for Exadata/ExaCC.

Please read my new post about this issue.

REQUIRED_MIRROR_FREE_MB

The REQUIRED_MIRROR_FREE_MB (according to the 19c documentation) is the:

“amount of space that must be available in a disk group to restore full redundancy after the worst failure that can be tolerated by the disk group without adding additional storage. This requirement ensures that there are sufficient failure groups to restore redundancy”.

And (in an Exadata environment, until 19.16) it is calculated based on the disk redundancy that you have. If you choose HIGH, the raw size of two disks (the largest ones in your diskgroup) is reserved; with NORMAL, it is the raw size of one disk. Exadata differs from other environments here because it does not consider the failure of a whole failgroup, due to the way the extents are written/spread (more info below and in another post).

But for now, understand that the required size is what you need to reserve (as raw space) in your diskgroup to ensure protection in case of disk failure. And it is directly related to USABLE_FILE_MB, because the space that you can allocate in your diskgroup (USABLE_FILE_MB) comes from (FREE_MB - REQUIRED_MIRROR_FREE_MB) / redundancy factor (3 for HIGH, 2 for NORMAL). So, when REQUIRED_MIRROR_FREE_MB increases, USABLE_FILE_MB decreases. I will explain more later.
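
You can see all of these values for your own diskgroups directly in V$ASM_DISKGROUP. Here is a minimal sketch that also recomputes the usable space from the formula above, so you can verify the relationship yourself (your diskgroup names and numbers will of course differ):

    SQL> SELECT name, type, total_mb, free_mb,
                required_mirror_free_mb, usable_file_mb,
                -- recompute: (FREE_MB - REQUIRED_MIRROR_FREE_MB) / redundancy factor
                ROUND((free_mb - required_mirror_free_mb) /
                      DECODE(type, 'HIGH', 3, 'NORMAL', 2, 1)) AS calc_usable_mb
           FROM v$asm_diskgroup;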

Click here to read more…

Friends, Conferences, and Community

Last September was pretty special for me because I had the opportunity to meet friends again after the COVID pandemic.

POUG

First, POUG is POUG. There is no way to describe what it is if you were never there. I had the opportunity to be there talking about ExaCC. The whole conference is amazing, not just because of the technical content (which is surreal), but also because of the friends that were/are there. Everyone there was enjoying the conference but, most importantly, enjoying being there with friends.

For POUG I need to say thank you to Kamil Stawiarski (https://twitter.com/ora600pl) and Luiza Nowak (https://www.ora-600.pl/en/tp/luiza-koziel-2/) for organizing the event. You, together with the whole POUG team, made a fantastic conference.

Click here to read more…

21c, DG PDB

Since 21c became publicly available, Data Guard per Pluggable Database (DG PDB) was intended to be there, but Oracle needed more time to make things work and released the feature a few weeks ago with the 21.7 version. Here in this post, I will show how to configure it, how to troubleshoot it, and the pitfalls of using it. As usual, all the steps, logs, and outputs are covered here, and I hope it helps you understand the whole DG PDB process.

My environment

The environment that I am using here is:

  • Two databases running in RAC mode (two nodes in each cluster).
  • ASM: the same DATA and RECO diskgroup names in each cluster.

As for the databases, I have:

  • ORADBDC1, which has the PDB PDBDC1. So, they represent DC1.
  • ORADBDC2, which has the PDB PDBDC2. So, they represent DC2.

Each of these clusters is in a separate environment; this means that both are primary databases inside their own datacenter. So, they have no DG configured between them.

The main target for this post is to have the PDB from DC2 protected by ORADBDC1 at DC1. I used RAC and ASM because this is usually the normal configuration for MAA (following the recommended architectures baseline) when using DG. This increases the protection and reduces the SPOFs in your environment.

DG PDB

The idea of DG PDB differs a little from what we commonly see with Data Guard: here each container has its own life. This means that only the PDB is protected, and not the entire CDB. This puts DG PDB closer to the Cloud than to On-Prem, because it fits perfectly into the OCI structure: you can create your PDB in one region and choose another region to protect it. And it is even closer if you think of Autonomous Database, where your ownership is the PDB only. I will not say whether this is good or bad, but it is linked to how Oracle works with OCI. Personally, I prefer to have normal DG configured to protect my databases and to choose where I want to open my PDB (maybe they will add this feature in the future).

Another detail is that DG PDB (for now) works only in MaxPerformance mode, so there is no SYNC mode for the archive destinations. There are more limitations for DG PDB, and you can check them in the topic DG PDB Configuration Restrictions in the official documentation (I recommend that you read it).
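
To give you an idea of the shape of the configuration before diving into the details, below is a rough DGMGRL sketch using my environment names (ORADBDC1/ORADBDC2 and PDBDC2). Treat it as an outline only; the exact commands, prompts, and outputs are walked through step by step in the post:

    -- each CDB already has its own broker configuration, with itself as primary
    DGMGRL> CONNECT sys@oradbdc1
    -- switch the configuration to DG PDB mode (new with 21.7)
    DGMGRL> EDIT CONFIGURATION PREPARE DGPDB;
    -- protect PDBDC2 (running at ORADBDC2) with a standby PDB inside ORADBDC1
    DGMGRL> ADD PLUGGABLE DATABASE pdbdc2 AT oradbdc1 SOURCE IS pdbdc2 AT oradbdc2;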

Please read my new blog post about the changes to this process. You can see how the process evolved and got better. Read it here.

Click here to read more…

21c, Zero-Downtime Oracle Grid Infrastructure Patching – Silent Mode

Recently I made two posts about the process of patching/upgrading your 21c Grid Infrastructure (GI) while the databases keep running. The first post shows how to do this using the GUI interface, and the second one shows more details about the process for the AFD/ACFS kernel driver update. Here in this post, I will show how to do the Zero-Downtime Patch (zeroDowntimeGIPatching, ZDGIP) in silent mode.

This way of patching is important because it allows you to automate it. You can create your own script and call it (using Ansible, Puppet, Chef, etc.) to upgrade your servers (or farms) remotely.

Current Environment

The current environment is the same as in the first post:

  • OEL 8.4, kernel 5.4.17-2102.201.3.el8uek.x86_64.
  • Oracle GI 21c, version 21.3, with no one-offs or extra patches installed.
  • Oracle Database 21c, RU 21.5 (with OCW 21.5).
  • TFA version 21.4 (the latest available in March 2022).
  • Nodes are not using Transparent HugePages.
  • It is a RAC installation, with two nodes.

You can see the output for the info above in this txt file.

And I will apply the same RU 21.5 (21.5.0.0.220118) for GI, which is patch 33531909.

Patch Process

The patch process is almost the same as in the first post; the main change is the response file and the way to call gridSetup.sh. So, for this reason, I recommend that you read the first (and second) post. Below you will see a quick review of the previous steps and a focus on the new parts.
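
Just to set expectations, the core of the silent call has the shape below. This is only a sketch (the paths, response file, and patch location are illustrative for my environment); the full commands and outputs come later in the post:

    # from the new (already unzipped) Grid home, apply the RU and switch homes in silent mode
    [grid@node01 ~]$ /u01/app/21.5.0.0/grid/gridSetup.sh -silent -switchGridHome \
          -applyRU /u01/patches/33531909 \
          -responseFile /u01/patches/grid_21.5.rsp

    # afterwards, root.sh runs on each node with the zero-downtime options
    [root@node01 ~]# /u01/app/21.5.0.0/grid/root.sh -transparent -nodriverupdate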

Click here to read more…

21c, updateosfiles after Grid Infrastructure Patch

Recently I made a post about how to use the new -zeroDowntimeGIPatching feature when patching the Grid Infrastructure for 21c. It is a new feature/option that allows your database to keep running while the grid is patched. You can see my post here. But in that post I talked about the usage of -updateosfiles when calling rootcrs.sh, and I want to clarify some details and provide better examples.
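
For reference, the call in question looks like the sketch below, run as root from the Grid home on each node (the path here is illustrative); when and why you should run it is exactly what this post clarifies:

    [root@node01 ~]# export ORACLE_HOME=/u01/app/21.5.0.0/grid
    [root@node01 ~]# $ORACLE_HOME/crs/install/rootcrs.sh -updateosfiles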

Current environment

For this post, my environment is:

  • OEL 8.4 Kernel 5.4.17-2102.201.3.el8uek.x86_64.
  • Oracle GI 21c, version 21.5.
  • It is a RAC installation, with two nodes.

The GI was upgraded from 21.3 to 21.5 as demonstrated in my post.

Compatibility Matrix

Before you think about upgrading the ACFS/AFD drivers, you need to check if they are compatible with the OS and kernel versions that you are running. The only place to check this is the MOS note ACFS Support On OS Platforms (Certification Matrix) (Doc ID 1369107.1). In that note, you will see tables for each major version (18c, 19c, 21c) showing the Linux versions and kernel versions that are compatible. Below it is marked for OEL 8:

And you can see that my Linux kernel version is compatible. If your version is not compatible, do not update the ACFS/AFD kernel drivers.
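
Besides the MOS note, you can also double-check at the OS level whether the currently running kernel is supported by the drivers shipped with your Grid home. A quick sketch with the acfsdriverstate utility (the path and output are illustrative):

    [root@node01 ~]# /u01/app/21.5.0.0/grid/bin/acfsdriverstate supported
    ACFS-9200: Supported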

Click here to read more…

Duplicate PDB from active database, ASM, and OMF

Starting with 18c, it is possible to duplicate a PDB from an active database. This is a cool feature that helps a lot in daily activities. But recently I got an error when the destination uses ASM and the files (of course) are managed using OMF. The solution is simple and is related to a bug that affects the 18c, 19c, and 21c versions.

Duplicating pluggable databases has been possible for a long time and has some rules. But duplicating a PDB from an active database into another CDB helps a lot because everything can be done online. We don’t need to create an intermediate CDB to move the PDB via unplug/plug, or clone the source locally as a read-only PDB and create a new one using a dblink, or even use RMAN backups.
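
For context, the basic shape of the operation is the one below; this is only a sketch with illustrative connection strings and names (the full post shows the real run, the error, and the workaround):

    $ rman TARGET sys@cdb1 AUXILIARY sys@cdb2

    RMAN> DUPLICATE PLUGGABLE DATABASE pdb1 TO cdb2 FROM ACTIVE DATABASE;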

Click here to read more…

21c, Zero-Downtime Oracle Grid Infrastructure Patching

Oracle 21c delivered a lot of new features, and for Grid Infrastructure one of the most interesting is the zero-downtime patch (zeroDowntimeGIPatching). This basically allows your database to keep running while you patch/upgrade your GI. The official doc can be seen here. Let’s say it is an evolution of the Out of Place (OOP) patch for GI.

In this post I will show how to do that, but first, some details before starting:

  • This post shows how to do the zero-downtime patch using GUI mode.
  • I will do another post showing how to do the same procedure in silent mode, so it can be automated.
  • In a third post, I will detail how the zero-downtime patch works behind the scenes and discuss some logs.

Click here to read more…

Oracle Engineered Systems since 2010

Recently I made a tweet about a new project with an Oracle Engineered System (X9M) that reminded me of everything I have done with these systems until now. So, this opened the opportunity to tell my background and history working with these systems. It is not a show-off or ego-boost post.


Click here to read more…

AHF and TFA Management

Recently I posted about the upgrade of AHF/TFA from version 19 to 21 on Exadata and also on ODA. But with version 21 of AHF, some collections are made automatically, and this can impact your space usage. Here you can see how to check this and how to disable/modify some of these collections.

The automatic collection for AHF/TFA is a feature that generates diagnostic packages (to send to Oracle) when specific errors appear in the database. The collected errors follow some patterns like ORA-00600, ORA-07445, and several others. The basic idea can be seen in the official doc here and in the image below (retrieved directly from the official doc).
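
As a starting point for what comes next, you can inspect the current settings and the repository usage with tfactl. A minimal sketch (run as root, outputs omitted):

    # show the current configuration, including the automatic collection flag
    [root@node01 ~]# tfactl print config

    # disable the automatic diagnostic collection cluster-wide
    [root@node01 ~]# tfactl set autodiagcollect=OFF -c

    # check where the collections are stored and how much space they use
    [root@node01 ~]# tfactl print repository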

Click here to read more…