Tag Archives: Oracle

INDEX_BACKUP task for ZDLRA, Check percentage done

A quick post about how to check the percentage done for the INDEX_BACKUP task in ZDLRA. Just to contextualize: INDEX_BACKUP is the ZDLRA task that (after you ingest a datafile backup) generates an index of the blocks and creates the virtual backup for you.

Here I start a new series about ZDLRA with some hints based on my usage experience (practically since its release in 2014). Today's post only scratches the surface of ZDLRA internals; I will extend it in future posts, so stay tuned.
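As a minimal sketch (assuming you are connected to the Recovery Appliance catalog as RASYS; the view and column names below may vary between ZDLRA versions), you can list the INDEX_BACKUP tasks currently in flight like this:

    -- Sketch only: list INDEX_BACKUP tasks on the Recovery Appliance.
    -- Column names may differ between ZDLRA versions; adjust as needed.
    SELECT task_id, db_unique_name, state, creation_time
      FROM rasys.ra_task
     WHERE task_type = 'INDEX_BACKUP'
       AND state IN ('RUNNING', 'EXECUTABLE')
     ORDER BY creation_time;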

More…

Reimage ODA

The idea of reimaging an ODA is to refresh the environment without having to jump patch by patch to reach the latest available version, or even to rescue the system after an OS failure/crash. The reimage process is covered in the official documentation, but unfortunately it can be very tricky because the information (the order and the steps) is not 100% clear. The idea here is to show you how to reimage using version 18 (18.3 in this example), which is the latest available.

In summary, the process is executed in this order (a rough command sketch follows the list):

  1. ILOM: Boot the ISO
  2. Prepare to create the appliance
  3. Upload GI and DB base version to the repository
  4. Linux: Create the appliance
  5. Firmware and patch
  6. Create Oracle Homes and the databases
  7. Finish and clean the install
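As a rough sketch of steps 3 and 4 (file names, versions, and paths below are just examples), the repository upload and the appliance creation look like this:

    # Upload the GI and DB clone files to the ODA repository (example file names):
    odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-GI-18.3.0.0.zip
    odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-DB-18.3.0.0.zip

    # Create the appliance from a prepared JSON configuration:
    odacli create-appliance -r /tmp/oda-config.json

    # Follow the job until it finishes:
    odacli describe-job -i <job_id>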

More…

RMAN, Allocate channel, CDB, and CLOSE: bug

Allocating channels in RMAN is used in various scenarios; most of the time it is useful when you use tape as the device type or need some kind of format. The way to allocate a channel has not changed in a long time, but when you run the database in container mode you can hit a bug that makes your channel unusable. I will show you the bug and how to avoid it with a simple trick.

This bug hits every version since 12c. I discovered it last year while testing some scenarios, but I was only able to test and post about it recently. It occurs only for CDB databases, and there is only one one-off patch published, for 12.2. But there is a workaround that is more useful and works for every version.

The most interesting part is that everything we have done until now when allocating channels will not work. Search every doc available for allocate channel, from 9i to 19c: the first thing you do after opening the run{} block is allocate the channel. This is the default and recommended approach in the docs for 19c, 18c, and 9i.
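Just to illustrate that documented pattern (the device type and format below are examples only), the channel allocation comes as the first statement inside the run{} block:

    RMAN> run {
            allocate channel ch1 device type sbt format '%d_%U';
            backup database;
            release channel ch1;
          }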

More…

Shrink ASM Diskgroup and Exadata Grid Disks

Here I will cover shrinking an ASM diskgroup in an Exadata environment running VMs. The process is the opposite of what I wrote in the previous post, but it has a tricky part that demands attention to avoid errors. The same points you checked when extending are still valid: the number of cells, the disks per cell, the ASM mirroring, and the VM you want to change continue to be important, but there is more now. Besides that, the post shows how to verify (and “fix”) whether you have something in the ASM internal extent map that can block the shrink.
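As a minimal pre-check sketch (the diskgroup name below is an example), you can confirm in ASM how much space is actually free before planning the shrink:

    -- Sketch only: check usable space in the diskgroup before shrinking it.
    SELECT name, type, total_mb, free_mb, usable_file_mb
      FROM v$asm_diskgroup
     WHERE name = 'DATAC1';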

More…

Increase size for Exadata Grid Disks

A quick article about a maintenance task for Oracle Exadata when you are using OVM and have divided your storage cell disks among the VMs. Here I will show you how to extend your grid disks to add more space to your ASM diskgroup.

The first thing is to be aware of your environment. Before anything else, you need to know the points below, because they are important to calculate the new space and to avoid doing something wrong:

  • Number of cells in your appliance.
  • Number of disks for each cell.
  • Mirroring for your ASM.
  • The VM to which you want to add the space.

The “normal” Exadata storage cell has 12 disks; the Extreme Flash version uses 8 disks per cell. If you are in doubt about how many disks you have per storage cell, you can connect to each one and check the number of cell disks. And before continuing, be aware of the Exadata disk division.

To make this change we execute three major steps: ASM, Exadata storage, and ASM again.
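A rough sketch of those three steps (diskgroup names, cell names, and sizes below are placeholders, not recommendations):

    -- 1) ASM: check the current size and free space of the diskgroup
    SQL> SELECT name, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'DATAC1';

    -- 2) Exadata storage: grow the grid disks on every cell (one example disk shown)
    CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 SIZE=1200G;

    -- 3) ASM again: resize the ASM disks to match the new grid disk size
    SQL> ALTER DISKGROUP DATAC1 RESIZE ALL;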

More…

Observer, Quorum

This article closes the series on DG and Fast-Start Failover, in which I covered in more detail how an isolation case can lead to the shutdown of your healthy/running primary database: the “ORA-16830: primary isolated from fast-start failover partners” error.

In the first article, I wrote about one simple detail that dramatically impacts the reliability of your MAA environment. Where you put your Observer in a DG environment (when Fast-Start Failover is in use) plays a core role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation on the “pros and cons” to help you define the correct place for the Observer. You can read more in my article here.

In the second article, I wrote about a new feature that can help you be more protected and cover more scenarios for Fast-Start Failover/DG. Using multiple observers you can remove the single point of failure, putting one Observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.

In this last article, I discuss how, even using all these features, there is no perfect solution. Another point discussed here is how (maybe) Oracle could improve this. Below I show in more detail that even with multiple observers a healthy primary database can still be shut down. Unfortunately, it is a lot of technical info and long log output, but you can jump directly to the end to see the discussion about how this could be improved.

More…

ODA, ACFS and ASM Dilemma

As you know, for ODA you have two options for storage: ACFS or ASM. If you choose ACFS, you can create databases of all versions, from 11g to 18c (as of this moment). But if you choose ASM, 11g will not be compatible.

So, ASM or ACFS? If you choose ACFS, the diskgroup where ACFS runs is sliced and you have one mount point for each database. If you have, for example, a system with more than 30 databases, it can be complicated to manage all the ACFS mount points. So, ASM is the simpler and easier solution to sustain, besides the fact that it is more homogeneous with other database environments (Exadata, RACs, etc.).

If you choose ASM you cannot use 11g, and if you choose ACFS you cannot avoid the mount points for every database; but there is a simple approach to run 11g databases and still use ASM for the others. Take as an example an environment where only 3 or 4 databases run on 11g and the other 30 databases are on 12c/18c. To achieve that, the option in this case is a “manual” ACFS mount point, which I will explain.
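A minimal sketch of that “manual” approach (diskgroup, volume name, device path, size, and mount point below are examples only):

    # Create an ADVM volume and an ACFS filesystem just for the 11g databases:
    asmcmd volcreate -G DATA -s 500G VOL11G
    asmcmd volinfo -G DATA VOL11G            # note the "Volume Device" path
    mkfs -t acfs /dev/asm/vol11g-123
    mkdir -p /u01/app/oracle/oradata/acfs11g
    mount -t acfs /dev/asm/vol11g-123 /u01/app/oracle/oradata/acfs11g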

More…

ODA, JSON and FLASH

Recently, in March, I reimaged an X5-2 HA ODA, saw a strange behavior during diskgroup creation, and could not reproduce it (because that would involve reimaging again). Basically, the FLASH diskgroup was not created.

But last May I reimaged another ODA using the same patch/image version (18.3.0.0 – Patch 27604623) and was able to verify it again. In both cases, I created the appliance through the CLI “odacli create-appliance” with a JSON file, because the network uses VLANs (which is impossible to set up using the web interface), and both appliances are identical (X5-2 HA with SSD for RECO and FLASH).
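Just as an illustration (based on the sample JSON files; key names and values below are examples and may change between versions), the diskgroup section of a create-appliance JSON looks roughly like this:

    "grid" : {
      "diskGroup" : [
        { "diskGroupName" : "DATA",  "redundancy" : "HIGH", "diskPercentage" : 80 },
        { "diskGroupName" : "RECO",  "redundancy" : "HIGH", "diskPercentage" : 20 },
        { "diskGroupName" : "FLASH", "redundancy" : "HIGH", "diskPercentage" : 100 }
      ]
    }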

To reimage, I followed the steps in the docs for this version and used the ISO to do the bare metal procedure. If you look in the docs at the options for storage (check here), you can see that there is not a single reference to the FLASH diskgroup (or that you need to create it). Now check the readme/reference JSON files that exist in the folder “/opt/oracle/dcs/sample”, under the file “sample-oda-ha-json-readme.txt”:

More…

Observer, More Than One

Recently I made a post about a little issue I hit with Oracle Data Guard. In that scenario, because of an outage in the standby datacenter, a healthy primary database shut down with the error “ORA-16830: primary isolated…”. Just remember that the database was running with Maximum Availability, Fast-Start Failover enabled, and (the most important detail) the Observer running in the standby datacenter too.

The point of my previous post was to show that no doc exists that provides full details about the “pros” and “cons” of where to put your Observer. Whatever the place, in the primary datacenter or in the standby, there are little details to check. Even the best (ideal) scenario, with a third datacenter, can be tough to sustain.

Here I will try to show one option that can help you improve the reliability of your MAA/DG environment. At least, you will have more options when deciding how to protect your database. Below, I show some details about how to configure and use multiple observers, but if you want to jump ahead and see a little concern, you can go directly to the end of the post.
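As a hedged sketch (12.2+ DGMGRL syntax; connect identifiers and observer names below are examples), starting one observer per site and checking them looks roughly like this:

    -- On a host in the primary datacenter:
    DGMGRL> CONNECT sys@orcl_primary
    DGMGRL> START OBSERVER obs_primary;

    -- On a host in the standby datacenter (and optionally a third site):
    DGMGRL> START OBSERVER obs_standby;

    -- Check the registered observers and which one is the master:
    DGMGRL> SHOW OBSERVERS;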

More…

ZDLRA, since 2014

At Oracle Open World 2014 the Zero Data Loss Recovery Appliance (ZDLRA) was released, and it changed MAA in many ways, but mainly two: protection and backup. I watched the ZDLRA presentation and saw that it matched the needs I had at that time.

After OOW 2014 I started the project (all phases, from conception and requirements to deployment and usage) that became (in 2015) the first ZDLRA installation in Brazil, and also one of the first in the world to use replicated ZDLRAs to protect both sites (primary and standby) and many tiers of databases (PRO, TST, DEV). Oracle MAA at its finest was amazing: ZDLRA + Exadata + DG, everything integrated to protect both sites.

Because of the high-level design of the project, it was chosen as one of the main ZDLRA presentations at Oracle Open World 2015; you can find the link to the presentation that I made together with the ZDLRA dev team here. As mentioned before, this project integrated two ZDLRAs, two Exadatas, and DG to reach zero Recovery Point Objective (RPO) and Recovery Time Objective (RTO) and, besides that, reduce backup time. You can see the presentation to check the scope and other details.

More…