Category Archives: Banco de Dados

ZDLRA, Multi-site protection – ZERO RPO for Primary and Standby

ZDLRA can be used in everything from a small single-database environment up to big environments where you need protection in more than one site at the same time. At every level, you can use different ZDLRA features to provide the desired protection. Here I will show how to reach zero RPO for both primary and standby databases. All the steps, docs, and technical parts are covered.

You can check the examples and the references for every scenario in these two papers from the Oracle MAA team: MAA Overview On-Premises and Oracle MAA Reference Architectures. They provide good information on how to prepare to reduce RPO and improve RTO. In summary, the focus is the same: reduce the downtime and the data loss in case of a catastrophe (zero RPO and zero RTO).

Multi-site protection

If you looked at both papers, you saw that to provide good protection it is desirable to have an additional site to which, at the very least, you send the backups. And if you go higher, to the GOLD and PLATINUM environments, you start to have multiple sites synced with Data Guard. These critical/mission-critical environments need to be protected against every kind of catastrophic failure, from a single disk up to a complete site outage (some also need to follow specific legal requirements, banks as an example).

And the focus of this post is these big environments. I will show you how to use ZDLRA to protect both sites, reaching zero RPO even for standby databases. Doing that, you can survive a catastrophic outage (like an entire datacenter failure) and still have zero RPO. Going further, when using real-time redo for ZDLRA you can have zero RPO even if you completely lose one site, and this is not written in the docs, by the way.
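For reference, real-time redo for ZDLRA is configured like a regular redo transport destination. A minimal sketch, assuming a hypothetical VPC user (vpcrasys) and TNS alias (zdlra_ingest) and omitting the wallet setup; check the Zero Data Loss Recovery Appliance documentation for the exact syntax of your version:

  SQL> ALTER SYSTEM SET REDO_TRANSPORT_USER = vpcrasys SCOPE = BOTH;
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 = 'SERVICE=zdlra_ingest ASYNC DB_UNIQUE_NAME=zdlras1 VALID_FOR=(ALL_LOGFILES, ALL_ROLES)' SCOPE = BOTH;
  SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3 = ENABLE SCOPE = BOTH;

With this in place, the appliance receives the redo stream continuously instead of waiting for the next archivelog backup.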

Click here to read more…

ZDLRA Internals, INDEX_BACKUP task in details

For ZDLRA, the INDEX_BACKUP task type is important (if not the most important) because it is responsible for creating the virtual full backup. This task runs for every backup that you ingest into ZDLRA, and here I will show in more detail what occurs inside ZDLRA: the internal steps, phases, and tables involved.

I recommend that you check my previous posts about ZDLRA: ZDLRA Internals, Tables and Storage, ZDLRA, Virtual Full Backup and Incremental Forever, and Understanding ZDLRA. They provide a good base for understanding some aspects of the ZDLRA architecture and features.

Backup

As you saw in my previous posts, ZDLRA opens every backup that you send and reads every block of it to generate one new virtual full backup. And this backup is validated block by block (physically and logically) against corruption. It differs from a snapshot because it is content-aware (in this case, proprietary Oracle datafile blocks inside another proprietary Oracle RMAN block), and Oracle is the only one that can do this while guaranteeing that the result is valid.

Click here to read more…

ZDLRA, Virtual Full Backup and Incremental Forever

One of the best-known features of ZDLRA is the virtual full backup, basically an incremental forever strategy. But what does this mean in real life? In this post, I will show some details about it and how interesting it is; check out what the virtual full backup and the incremental forever strategy mean for ZDLRA.

This post is based on my previous one, where I showed all the steps to configure the VPC and enroll a database in ZDLRA.

Virtual Full Backup

A virtual full backup appears as an incremental level 0 backup in the recovery catalog. From the user’s perspective, a virtual full backup is indistinguishable from a non-virtual full backup. Using virtual backups, Recovery Appliance provides the protection of frequent level 0 backups with only the cost of frequent level 1 backups.

This definition (and image) come from the Zero Data Loss Recovery Appliance Administrator’s Guide, and I think they represent the essence of the virtual full backup: ZDLRA receives the incremental level 1 backup, indexes it, and generates a level 0 for you that is indistinguishable from a normal level 0 backup.
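In practice, the incremental forever strategy means that after the very first level 0 you only ever send level 1 backups. A minimal sketch of the command pattern (the SBT channel configuration for the ZDLRA library is omitted, and the datafile number is just an example):

  RMAN> BACKUP DEVICE TYPE SBT CUMULATIVE INCREMENTAL LEVEL 1 FILESPERSET 1 DATABASE;
  RMAN> LIST BACKUP OF DATAFILE 1;

In the LIST output you see a level 0 backup appear for each ingested level 1: that is the virtual full backup that ZDLRA created and indexed for you.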

More…

INDEX_BACKUP task for ZDLRA, Check percentage done

A quick post on how to check and identify the percentage done for the INDEX_BACKUP task in ZDLRA. In a simple way, just for context, INDEX_BACKUP is a ZDLRA task that (after you ingest the backup of a datafile) generates an index of the blocks and creates the virtual full backup for you.
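A minimal sketch of the kind of check I mean, assuming the RASYS catalog schema and its RA_TASK view (treat the column names as an illustration, since the task views can change between ZDLRA versions):

  SQL> SELECT task_id, task_type, state
    2  FROM rasys.ra_task
    3  WHERE task_type = 'INDEX_BACKUP'
    4  ORDER BY task_id;

The full post shows how to go from the task to the percentage of work already done.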

Here I start a new series about ZDLRA with some hints based on my usage experience (practically since its release in 2014). Today’s post is just a little scratch on ZDLRA internals; I will extend it in other (future) posts, so stay tuned.

More…

Reimage ODA

The idea of reimaging an ODA is to refresh the environment without the need to jump from one version to the next to reach the latest available one, or even to rescue the system from an OS failure/crash. The reimage process is described in the official documentation, but unfortunately it can be very tricky because the information (the order and the steps) is not 100% clear. The idea here is to show you how to reimage using version 18 (18.3 in this example), which is the latest available.

In summary, the process is executed in this order (a sketch of the key commands for steps 3 and 4 follows the list):

  1. ILOM: Boot the ISO
  2. Prepare to create the appliance
  3. Upload GI and DB base version to the repository
  4. Linux: Create the appliance
  5. Firmware and patch
  6. Create Oracle Homes and the databases
  7. Finish and clean the install
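As a hedged illustration of steps 3 and 4 only (the file names and paths are hypothetical; check the README of your patch bundle for the real ones), the repository upload and the appliance creation are driven by odacli:

  # odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-GI-18.3.0.0.zip
  # odacli update-repository -f /tmp/odacli-dcs-18.3.0.0.0-DB-18.3.0.0.zip
  # odacli create-appliance -r /tmp/my_appliance.json
  # odacli describe-job -i <job_id>

The describe-job command lets you follow the appliance creation until it completes.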

More…

RMAN, Allocate channel, CDB, and CLOSE: bug

Allocating a channel in RMAN is used in various scenarios; most of the time it is useful when you use tape as the device type or when you need to use some kind of format. The way to do the allocation has not changed in a long time, but when you run the database in container mode you can hit a bug that makes your channel unusable. I will show you the bug and how to avoid it with a simple trick.

This bug hits every version since 12, and I discovered it last year when testing some scenarios, but I was only able to test it and post about it recently. It occurs only for CDB databases, and there is just one one-off patch published, for 12.2. But there is a workaround that is more useful and works for every version.

The most interesting part is that everything we have done until now when allocating a channel will not work. You can search all the available docs for allocate channel, from 9i to 19c: the first thing you do after opening run{} is allocate the channel. This is the default and what is recommended in the docs: for 19c, 18c, and 9i.
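For reference, this is the classic pattern from the docs that the post is talking about (the SBT parms are just a placeholder); in a CDB, because of the bug above, the allocated channel can end up unusable, and the workaround itself is in the full post:

  RMAN> run {
  2> allocate channel ch1 device type sbt parms 'SBT_LIBRARY=...';
  3> backup database;
  4> release channel ch1;
  5> }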

More…

Shrink ASM Diskgroup and Exadata Grid Disks

Here I will cover shrinking an ASM diskgroup in an Exadata environment running VMs. The process is the opposite of what I wrote in the previous post, but it has a tricky part that demands attention to avoid errors. The same points that you checked when extending are valid now: the number of cells, the disks per cell, the ASM mirroring, and the VM that you want to change continue to be important, but we have more now. Besides that, the post shows how to verify (and “fix”) whether you have something in the ASM internal extent map that can block the shrink.
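As a rough sketch of the order (the diskgroup, grid disk names, and sizes are hypothetical), the shrink goes in the opposite direction from the grow: first resize the ASM disks down and wait for the rebalance to finish, and only then shrink the matching grid disks on every cell:

  SQL> ALTER DISKGROUP DATAC1 RESIZE ALL SIZE 1024G REBALANCE POWER 16;
  SQL> SELECT * FROM gv$asm_operation;
  CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 SIZE=1024G;

The gv$asm_operation query is there to confirm that no rebalance is still running before you touch the cells.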

More…

Increase size for Exadata Grid Disks

A quick article about a maintenance task for Oracle Exadata when you are using OVM and you have divided your storage cell disks among the VMs. Here I will show you how to extend your grid disks to add more space to your ASM diskgroup.

The first thing is being aware of your environment; before anything else you need to know the points below, because they are important for calculating the new space and for avoiding doing something wrong:

  • Number of cells in your appliance.
  • Number of disks for each cell.
  • Mirroring for your ASM.
  • The VM that you want to add the space.

The “normal” (High Capacity) Exadata storage cell has 12 disks, while the Extreme Flash version uses 8 flash disks per storage cell. If you have doubts about how many disks you have per storage cell, you can connect to each one and check the number of cell disks, as in the sketch below. And before continuing, be aware of the Exadata disk division: each physical disk becomes a cell disk, which is sliced into the grid disks that ASM consumes as its ASM disks.
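A minimal sketch of that check, running CellCLI through dcli against all cells (the group file and the user are hypothetical):

  # dcli -g ~/cell_group -l celladmin cellcli -e "list celldisk attributes name, diskType"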

To do this change we execute three major steps: ASM, Exadata storage, and ASM again, as sketched below.
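With hypothetical names and sizes, that sequence looks like this: check the current disk sizes in ASM, grow the grid disks on every cell, and then resize the ASM disks up to the new size:

  SQL> SELECT name, total_mb, os_mb FROM v$asm_disk WHERE name LIKE 'DATAC1%';
  CellCLI> ALTER GRIDDISK DATAC1_CD_00_cel01 SIZE=1500G;
  SQL> ALTER DISKGROUP DATAC1 RESIZE ALL REBALANCE POWER 16;

Without an explicit SIZE clause, the final RESIZE ALL brings every ASM disk up to the size that the grid disk now reports.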

More…

Observer, Quorum

This article closes the series about DG and Fast-Start Failover, where I covered in more detail the case in which isolation can lead to the shutdown of your healthy/running primary database: the “ORA-16830: primary isolated from fast-start failover partners”.

In the first article, I wrote about how one simple detail dramatically impacts the reliability of your MAA environment. Where you put your observer in a DG environment (when Fast-Start Failover is in use) plays a core role in case of outages, and you can face primary isolation and shutdown. Besides that, there is no clear documentation on which to base the “pros and cons” of choosing the correct place for the observer. You can read more in my article here.

In the second article, I wrote about one new feature that can help you be more protected and cover more scenarios for Fast-Start Failover/DG. Using multiple observers, you can remove the single point of failure and put one observer on each side of your environment (primary, standby, and a third site). You can read more in my article here.
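A minimal sketch of starting one of several observers with the 12.2+ DGMGRL syntax (the host, connect identifier, observer name, and file paths are hypothetical):

  [oracle@obs-site1]$ dgmgrl sys@orcl_prim
  DGMGRL> START OBSERVER obs_site1 IN BACKGROUND FILE IS '/home/oracle/fsfo_site1.dat' LOGFILE IS '/home/oracle/obs_site1.log';

You repeat this on the other sites with different observer names and then use SHOW OBSERVER to check which one is the master observer.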

In this last article I discuss how, even using all these features, there is no perfect solution. Another point discussed here is how (maybe) Oracle could improve this. Below I will show in more detail that even with multiple observers you can still see a healthy primary database shut down. Unfortunately, it involves a lot of technical info and a long thread of log output. But you can jump directly to the end to see the discussion about how this could be improved.

More…

Exadata X8, Second look

Every year Oracle releases a new version of Exadata with plenty of new things. We have the natural evolution of the hardware (more memory, more CPU…) and sometimes new features on the software side. The point of this post today is not the HW and SW things, but something hidden in the fine print of the new X8.

More…