Category Archives: Oracle

How to use ZDLRA and enroll a database

ZDLRA is a dedicated Oracle appliance specialized in managing your backups, but more than that, it provides you zero data loss. I already made an introduction to ZDLRA in my previous post, and here I will show how to use it and how to enroll a protected database at ZDLRA: enroll the database, create policies, and access it with RMAN. Understand how to use ZDLRA, at least as a starting point.

Every project starts with scope definition, steps, and requirements. It is no different for a ZDLRA project: it is a big, expensive appliance, and it integrates a lot of things in just one place: backups, archivelogs, redo, replication, and RPO. The requirements vary from project to project and I will not discuss them here, but I will show you the technical part about the usage.

The post has two parts. The first is the technical part, where you can check how to do it. The second contains the information/documentation part, where I will show more details about the steps.
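To give a rough idea of what the enrollment looks like end to end, here is a minimal sketch. The policy and database names, recovery window, reserved space, wallet location, and library path are hypothetical placeholders, and the exact DBMS_RA parameters depend on your Recovery Appliance version, so treat this as an outline and check the documentation before running anything.

  -- On the ZDLRA, as the catalog owner: create a protection policy and
  -- add the protected database to it (all names/values are examples)
  SQL> BEGIN
         DBMS_RA.CREATE_PROTECTION_POLICY(
           protection_policy_name => 'GOLD_POLICY',
           storage_location_name  => 'DELTA',
           recovery_window_goal   => INTERVAL '7' DAY);
         DBMS_RA.ADD_DB(
           db_unique_name         => 'ORCL19',
           protection_policy_name => 'GOLD_POLICY',
           reserved_space         => '500G');
       END;
       /

  # On the protected database host: connect RMAN to the appliance catalog
  # and send backups through the ZDLRA backup module (paths are placeholders)
  $ rman target / catalog vpc_user@zdlra_scan
  RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
          PARMS 'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libra.so,
                 ENV=(RA_WALLET=location=file:/u01/app/oracle/wallet)';
  RMAN> BACKUP DEVICE TYPE SBT INCREMENTAL LEVEL 1 DATABASE;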

More…

Understanding ZDLRA

Oracle Zero Data Loss Recovery Appliance (ZDLRA) delivers the capacity to improve the reliability of your environment in more than one way. You can improve the RPO (Recovery Point Objective) for your databases until you reach zero: zero data loss. Besides that, it adds a lot of new cool features (virtual backups, real-time redo, tape and cloud, DG/MAA integration) to the way you take your backups (incremental forever) and to your backup strategy. And again, besides that, it improves MAA to the highest level that you can reach.
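To make the incremental forever and real-time redo ideas a bit more concrete, here is a hedged sketch. The service name, destination number, wallet, and FILESPERSET choice are placeholders/assumptions, not the exact values from any specific deployment:

  # Incremental forever: after a single level 0, the protected database only
  # ever sends level 1 incrementals; the appliance builds virtual full backups
  RMAN> BACKUP DEVICE TYPE SBT CUMULATIVE INCREMENTAL LEVEL 1
          FILESPERSET 1 DATABASE PLUS ARCHIVELOG;

  -- Real-time redo: ship redo to the appliance like a remote archive
  -- destination (service name and attributes are examples only)
  SQL> ALTER SYSTEM SET log_archive_dest_3 =
         'SERVICE=zdlra_scan:1521/zdlra ASYNC NOREGISTER
          VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdlra'
         SCOPE=BOTH;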

But this is just marketing, right? No, really, it works pretty well! My history with ZDLRA starts at Oracle Open World 2014, when they released the ZDLRA and I watched the session/presentation. At that moment I figured out how good the solution was; it hit exactly the problem that I was suffering from with my databases: deduplication (bad dedup). One year later, in 2015 at OOW, I presented a big project that I coordinated (from definition to implementation and usage) with 2 sites + 2 ZDLRA + N Exadatas + zero RPO and RTO + DG + replication. And at the end of 2017 I moved to a new continent, but I am still involved with MAA and ZDLRA to this day.

This post is just a little starting point about ZDLRA. I will do a quick review of some key points, but I will write about each one (with examples) in several other dedicated posts. I will not cover the bureaucratic steps to build a project like that: POC, scope definition, key turning points, and budget. I will talk technically about ZDLRA.

More…

INDEX_BACKUP task for ZDLRA, Check percentage done

A quick post on how to check and identify the percentage done for the INDEX_BACKUP task in ZDLRA. In a simple way, just to contextualize, INDEX_BACKUP is the ZDLRA task that (after you input the backup of a datafile) generates an index of the blocks and creates the virtual backup for you.

Here I start a new series about ZDLRA with some hints based on my usage experience (practically since the release in 2014). Today's post is just a little scratch on ZDLRA internals; I will extend it in other (future) posts, stay tuned.
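Just to set the scene before the details, here is a hedged sketch of the kind of query involved. The RASYS.RA_TASK view and its TASK_TYPE/STATE columns are assumptions based on the Recovery Appliance catalog, so adjust the names to your version; the GV$SESSION_LONGOPS part is generic Oracle and gives a percentage for any long-running operation:

  -- On the ZDLRA database: list index tasks currently running
  -- (view/column names are assumptions, check your catalog version)
  SQL> SELECT task_id, task_type, state
         FROM rasys.ra_task
        WHERE task_type = 'INDEX_BACKUP'
          AND state     = 'RUNNING';

  -- Generic percentage done for the sessions doing the work
  SQL> SELECT inst_id, sid, opname,
              ROUND(sofar / NULLIF(totalwork, 0) * 100, 2) AS pct_done
         FROM gv$session_longops
        WHERE totalwork > 0
          AND sofar < totalwork;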

More…

Reimage ODA

The idea of reimaging an ODA is to refresh the environment without the need to jump from one version to the next to reach the last available one, or even to rescue the system from an OS failure/crash. The process to do a reimage is covered in the official documentation, but unfortunately it can be very tricky because the information (the order and steps) is not 100% clear. The idea is to show you how to reimage using version 18 (18.3 in this example), which represents the last available.

In summary, the process is executed in this order (see the command sketch after the list):

  1. ILOM: Boot the ISO
  2. Prepare to create the appliance
  3. Upload GI and DB base version to the repository
  4. Linux: Create the appliance
  5. Firmware and patch
  6. Create Oracle Homes and the databases
  7. Finish and clean the install
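As a rough illustration of steps 2 to 4, this is the shape of the odacli calls involved. The clone file names and the JSON path are placeholders, and the exact files and options depend on the 18.3 release, so follow the deployment guide for the real values:

  # (step 2) prepare the first network interface so you can reach the node
  odacli configure-firstnet

  # (step 3) upload the GI and DB clone files to the repository
  # (file names below are placeholders for the 18.3 clones)
  odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-GI-clone.zip
  odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-DB-clone.zip

  # (step 4) create the appliance from a JSON definition and follow the job
  odacli create-appliance -r /tmp/my-oda.json
  odacli describe-job -i <job_id>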

More…

RMAN, Allocate channel, CDB, and CLOSE: bug

Allocating a channel in RMAN is used in various scenarios; most of the time it is useful when you use tape as the device type or need some kind of format. The way to do the allocation has not changed for a long time, but when you run the database in container mode you can hit a bug that turns your channel unusable. I will show you the bug and how to avoid it with a simple trick.

This bug hits every version since 12 and I discovered it last year when testing some scenarios, but I was only able to test and post about it recently. It occurs only for CDB databases, and there is just one one-off patch published, for 12.2. But there is a workaround that is more useful and works for every version.

The most interesting part is that everything we have done until now when allocating a channel will not work. You can search all the docs available for allocate channel, from 9i to 19c: the first thing that you do after opening the run{} block is allocate the channel. This is the default and recommended approach in the docs: for 19c, 18c, and 9i.
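Just to make that documented pattern concrete, this is the classic shape the docs show (library path, format, and channel name are placeholders); the point of the post is that, for a CDB, this textbook order alone can leave the channel unusable:

  RMAN> run {
          allocate channel ch1 device type 'SBT_TAPE'
            parms 'SBT_LIBRARY=/path/to/media_library.so'
            format '%d_%T_%U';
          backup database plus archivelog;
          release channel ch1;
        }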

More…

Shrink ASM Diskgroup and Exadata Grid Disks

Here I will cover the shrink of an ASM diskgroup in an Exadata environment running VMs. The process here is the opposite of what I wrote in the previous post, but it has a tricky part that demands attention to avoid errors. The same points that you checked for extending are valid now: the number of cells, disks per cell, ASM mirroring, and the VM that you want to change continue to be important, but we have more now. Besides that, the post shows how to verify (and “fix”) whether you have something in the ASM internal extent map that can block the shrink.
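At a very high level, the flow is the reverse of the extend: first shrink at the ASM level, then shrink the grid disks on the cells. A hedged sketch (diskgroup name, grid disk name, sizes, and rebalance power are placeholders; the real procedure also checks the extent map and waits for the rebalance to finish before touching the cells):

  -- 1) ASM first: resize the disks of the diskgroup down and wait for the rebalance
  SQL> ALTER DISKGROUP DATAC1 RESIZE ALL SIZE 400G REBALANCE POWER 64;

  # 2) Then shrink the matching grid disks on every storage cell
  #    (repeat for each celldisk on each cell, or script it with dcli)
  CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 SIZE=400G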

More…

Increase size for Exadata Grid Disks

A quick article about a maintenance task for Oracle Exadata when you are using OVM and have divided your storage cell disks between the VMs. Here I will show you how to extend your grid disks to add more space to your ASM diskgroup.

The first thing is to be aware of your environment; before everything else you need to know the points below, because they are important to calculate the new space and to avoid doing something wrong:

  • Number of cells in your appliance.
  • Number of disks for each cell.
  • Mirroring for your ASM.
  • The VM that you want to add the space.

The “normal” Exadata storage cell has 12 disks; the Extreme Flash version uses 8 flash drives per storage cell. If you have doubts about how many disks you have per storage cell, you can connect to each one and check the number of celldisks. And before continuing, be aware of the Exadata disk division (physical disk, cell disk, grid disk, and ASM disk).

To do this change, we execute three major steps: ASM, Exadata storage, and ASM again. A sketch of the commands follows.
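A hedged sketch of the three steps (diskgroup, grid disk name, cell name, sizes, and rebalance power are placeholders; always confirm there is enough free space on the celldisks before growing anything):

  -- 1) ASM: check the current size and free space of the diskgroup
  SQL> SELECT name, total_mb, free_mb FROM v$asm_diskgroup WHERE name = 'DATAC1';

  # 2) Exadata storage: grow the grid disks on each cell (one cell shown here,
  #    repeat for each celldisk on each cell)
  CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 SIZE=600G

  -- 3) ASM again: resize all disks of the diskgroup up to the new size
  SQL> ALTER DISKGROUP DATAC1 RESIZE ALL SIZE 600G REBALANCE POWER 64;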

More…

Observer, Quorum

This article closes the series on DG and Fast-Start Failover, where I covered in more detail the case in which isolation can lead to the shutdown of your healthy/running primary database: the “ORA-16830: primary isolated from fast-start failover partners”.

In the first article, I wrote about how one simple detail dramatically impacts the reliability of your MAA environment. Where you put your Observer in a DG environment (when Fast-Start Failover is in use) plays a core role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation on the “pros and cons” to help you define the correct place for the Observer. You can read more in my article here.

In the second article, I wrote about one new feature that can help you be more protected and cover more scenarios for Fast-Start Failover/DG. Using Multiple Observers, you can remove the single point of failure and put one Observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.
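For reference, a hedged sketch of what multiple observers look like in DGMGRL from 12.2 onward (the connect string and where each observer runs are placeholders; the syntax for naming observers and choosing the master observer varies by version):

  # Start one observer per site (run on the primary-side, standby-side,
  # and third-site hosts)
  DGMGRL> CONNECT sys@orcl_dg
  DGMGRL> START OBSERVER;

  # Check which observers are registered and which one is the master
  DGMGRL> SHOW OBSERVER;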

In this last article, I discuss how, even using all these features, there is no perfect solution. Another point discussed here is how (maybe) Oracle can improve that. Below I will show in more detail that even multiple observers can still lead to the shutdown of a healthy primary database. Unfortunately, it is a lot of tech info and a long thread of log output, but you can jump directly to the end to see the discussion about how this could be improved.

More…

ODA, ACFS and ASM Dilemma

As you know, for ODA you have two options for storage: ACFS or ASM. If you choose ACFS, you can create databases of every version, from 11g to 18c (at this moment). But if you choose ASM, 11g will not be compatible.

So, ASM or ACFS? If you choose ACFS, the diskgroup where ACFS runs will be sliced and you have one mount point for each database. If you have, as an example, one system with more than 30 databases, it can be complicated to manage all the ACFS mount points. So, ASM is the simpler and easier solution to sustain, besides the fact that it is more homogeneous with other database environments (Exadata, RACs, etc.).

If you choose ASM you can’t use 11g, and if you choose ACFS you can’t avoid the mount points for all databases. But there is a simple approach to run 11g databases and still use ASM for the others. Take one example where just 3 or 4 databases will run on 11g and all the other 30 databases in the environment will be on 12c/18c. To achieve that, the option in this case is a “manual” ACFS mount point; I will explain.
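As a hedged sketch of what a “manual” ACFS mount point can look like (diskgroup, volume name, size, and path are placeholders; on a real ODA you would also register the filesystem with Grid Infrastructure so it mounts on boot, and the srvctl options vary by version):

  # 1) Create an ADVM volume inside an existing ASM diskgroup
  asmcmd volcreate -G DATA -s 200G vol11g
  asmcmd volinfo -G DATA vol11g        # note the volume device, e.g. /dev/asm/vol11g-123

  # 2) Format it as ACFS (as root) and mount it where the 11g datafiles will live
  mkfs -t acfs /dev/asm/vol11g-123
  mkdir -p /u02/app/oracle/oradata/db11g
  mount -t acfs /dev/asm/vol11g-123 /u02/app/oracle/oradata/db11g

  # 3) Optionally register it so the clusterware mounts it automatically
  srvctl add filesystem -device /dev/asm/vol11g-123 -path /u02/app/oracle/oradata/db11g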

More…

ODA, JSON and FLASH

Recently, in March, I did the reimage of an X5-2 HA ODA and saw a strange behavior during the diskgroup creation that I could not reproduce (because it would involve reimaging again). Basically, the FLASH diskgroup was not created.

But last May I reimaged another ODA using the same patch/image version (18.3.0.0 – Patch 27604623) and it was possible to verify it again. In both cases, I created the appliance using the CLI “odacli create-appliance” with a JSON file because the network uses VLANs (which is impossible to create using the web interface), and both appliances are identical (X5-2 HA with SSD for RECO and FLASH).
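For context, this is a hedged sketch of what that deployment looks like from the command line (the JSON file name and the job id are placeholders):

  # Deploy the appliance from a customized JSON (needed here because of the VLANs)
  odacli create-appliance -r /tmp/my-x5-2ha.json
  odacli describe-job -i <job_id>

  # The sample JSON files and readme mentioned below ship with the DCS stack
  ls /opt/oracle/dcs/sample/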

To reimage, I followed the steps in the docs for this version and used the ISO to do the bare metal procedure. If you look in the docs at the options for storage (check here), you can see that there is not a single reference to using the FLASH diskgroup (or to the need to do that). Checking the readme/reference JSON files that exist in the folder “/opt/oracle/dcs/sample”, under the file “sample-oda-ha-json-readme.txt”:

More…