Category Archives: RMAN

COPY_BACKUP at ZDLRA

As you know, ZDLRA is an appliance dedicated to providing zero data loss across several (planned and unplanned) outage scenarios. All backups are stored inside the delta store to be processed, and there they are deconstructed, meaning that the RMAN backup set no longer exists as a formal backup set file.

But sometimes we need to extract backups from ZDLRA and copy them to a filesystem, perhaps because regulatory/compliance rules require long-term/archival storage. Some important caveats need to be explained first.
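
As a rough illustration of the extraction idea, here is a minimal sketch, not the full procedure from the post: the channel parms, paths, and time window are invented placeholders, and it assumes your RMAN version and the ZDLRA channel can read the virtualized backup sets for copy:

run {
  # ZDLRA (SBT) channel to read the virtualized backup sets; parms are placeholders
  allocate channel c_ra device type sbt
    parms 'SBT_LIBRARY=/u01/app/oracle/lib/libra.so, ENV=(RA_WALLET=location=file:/u01/app/oracle/wallet)';
  # disk channel to write the copies to the filesystem
  allocate channel c_disk device type disk;
  # re-create recent backup sets as regular files on disk
  backup device type disk backupset completed after 'sysdate-1'
    format '/backup/extract/%U';
}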

Click here to read more…

ASM, Mount Restricted Force For Recovery

Surviving disk failures is crucial to avoid data corruption, but sometimes, even with redundancy at ASM, multiple failures can happen. Check in this post how to use the undocumented feature “mount restricted force for recovery” to resurrect a diskgroup and lose less data when multiple failures occur.
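
For reference, the command discussed in the post takes this form (a sketch only; the diskgroup name is hypothetical, and since the feature is undocumented it should only be used under Oracle Support guidance):

SQL> ALTER DISKGROUP DATA MOUNT RESTRICTED FORCE FOR RECOVERY;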

Diskgroup redundancy is a key factor for ASM resilience: you can survive disk failures and still continue to run your databases. I will not go deep into ASM disk redundancy here, but in short, you can configure your diskgroup with no redundancy (EXTERNAL), double redundancy (NORMAL), triple redundancy (HIGH), and even EXTENDED redundancy for stretch clusters.

If you want to understand more about redundancy, there are a lot of articles at MOS and on the internet that provide useful information. One good example is this. The idea is simple: spread multiple copies across different disks. It gets even better if you group disks into failgroups, so your data has multiple copies in separate places.

As an example, this is key for Exadata, where every storage cell is an independent failgroup and you can survive the failure of one entire cell (or two, depending on the redundancy of your diskgroup) without data loss. The same idea can be applied in a “normal” environment, where you can create one failgroup for the disks attached to controller A and another for those attached to controller B (so the failure of one storage controller does not affect all failgroups). In ASM, if you do not create failgroups, each disk becomes its own failgroup in diskgroups that have redundancy enabled.
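
A hypothetical sketch of that controller A/B layout (diskgroup, failgroup, and device names are invented for illustration):

CREATE DISKGROUP DATA NORMAL REDUNDANCY
  FAILGROUP CTRL_A DISK '/dev/mapper/ctrla_disk1', '/dev/mapper/ctrla_disk2'
  FAILGROUP CTRL_B DISK '/dev/mapper/ctrlb_disk1', '/dev/mapper/ctrlb_disk2';

With this layout, ASM mirrors each extent across the failgroups, so losing every disk behind one controller still leaves a complete copy available.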

Click here to read more…

ZDLRA Internals, INDEX_BACKUP task in details

For ZDLRA, the INDEX_BACKUP task type is important (if not the most important of all) because it is responsible for creating the virtual full backup. This task runs for every backup that you ingest into ZDLRA, and here I will show in more detail what happens inside ZDLRA: the internal steps, phases, and tables involved.
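
To give an idea of where to look, something like the query below lists the indexing tasks in the rasys schema. This is a sketch: I am assuming the RA_TASK view and these column names, so adjust it to your catalog:

SELECT task_id, task_type, state, db_unique_name
  FROM rasys.ra_task
 WHERE task_type = 'INDEX_BACKUP'
 ORDER BY task_id;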

I recommend that you check my previous posts about ZDLRA: ZDLRA Internals, Tables and Storage, ZDLRA, Virtual Full Backup and Incremental Forever, and Understanding ZDLRA. They provide a good base for understanding some aspects of ZDLRA architecture and features.

Backup

As you saw in my previous post, ZDLRA opens every backup that you send and reads every block of it to generate a new virtual full backup. This backup is validated block by block (physically and logically) against corruption. It differs from a snapshot because it is content-aware (in this case, proprietary Oracle datafile blocks inside another proprietary Oracle RMAN block format), and Oracle is the only one that can do this while guaranteeing that the result is valid.

Click here to read more…

ZDLRA Internals, Tables and Storage

ZDLRA tables are owned by the rasys user and are an extension of the RMAN recovery catalog views. Only a few internal tables were added, but they are important for understanding how tasks work (mainly INDEX_BACKUP). In this post, I will show tables like plans, plans_details, blocks, and chunks, and besides that, give a little glimpse of how the files are stored.

Extra tables

If you check, the official RMAN recovery catalog already includes some ZDLRA columns; the VB_KEY column, for example, identifies the virtual backup key inside ZDLRA and exists in RC_BACKUP_PIECE. The extra tables that were added are linked with SBT tasks (clone to tape or cloud), replication (to another ZDLRA), storage (chunks), and virtual full backups (vbdf and plans).
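
As a quick illustration of that VB_KEY extension, a query like this maps backup pieces to their virtual backup keys (a sketch; the db_key value is hypothetical, and I am assuming the catalog is queried from the rasys schema):

SELECT bs_key, bp_key, vb_key, handle
  FROM rasys.rc_backup_piece
 WHERE db_key = 12345
 ORDER BY bs_key;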

Click here to read more…

ZDLRA, Virtual Full Backup and Incremental Forever

One of the best-known features of ZDLRA is the virtual full backup, basically an incremental forever strategy. But what does this mean in real life? In this post, I will show some details about the Virtual Full Backup and Incremental Forever strategy for ZDLRA and how interesting they are.

This post is based on my previous one, where I showed all the steps to configure the VPC and enroll a database at ZDLRA.

Virtual Full Backup

A virtual full backup appears as an incremental level 0 backup in the recovery catalog. From the user’s perspective, a virtual full backup is indistinguishable from a non-virtual full backup. Using virtual backups, Recovery Appliance provides the protection of frequent level 0 backups with only the cost of frequent level 1 backups.

This definition (and image) comes from the Zero Data Loss Recovery Appliance Administrator’s Guide, and I think it represents the essence of the virtual full backup: ZDLRA receives the incremental level 1 backup, indexes it, and generates a level 0 for you that is indistinguishable from a normal level 0 backup.
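
In practice, the incremental forever strategy boils down to one recurring command, roughly like this (a minimal sketch, assuming the default channel is already configured against the ZDLRA library; options such as FILESPERSET and SECTION SIZE vary per site):

# one-time seed backup
backup device type sbt incremental level 0 database;
# then, forever after, only level 1 is sent; ZDLRA indexes each one
# into a new virtual level 0
backup device type sbt cumulative incremental level 1 filesperset 1
  database plus archivelog;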

More…

How to use ZDLRA and enroll a database

ZDLRA is a dedicated Oracle appliance specialized in managing your backups, but more than that, it provides you zero data loss. I already made an introduction to ZDLRA in my previous post, and here I will show how to use it and enroll a protected database: create policies, enroll the database, and access it with RMAN. Understand how to use ZDLRA, at least as a starting point.

Every project starts with scope definition, steps, and requirements. It is no different for a ZDLRA project; it is a big, expensive appliance that integrates a lot of things in one place: backups, archivelogs, redo, replication, and RPO. The requirements vary from project to project, so I will not discuss them here, but I will show you the technical part of the usage.

The post has two parts. The first is the technical part, where you can check how to do it. The second contains the information/documentation part, where I will show more details about the steps.
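
To anchor the idea, the appliance-side flow uses the DBMS_RA package, roughly like this (a hedged sketch: the policy name, recovery window, reserved space, and user are invented examples; check the Administrator's Guide for the full parameter list):

BEGIN
  -- protection policy with a recovery window goal (values are examples)
  DBMS_RA.CREATE_PROTECTION_POLICY(
    protection_policy_name => 'BRONZE',
    storage_location_name  => 'DELTA',
    recovery_window_goal   => INTERVAL '7' DAY);
  -- enroll the protected database under that policy
  DBMS_RA.ADD_DB(
    db_unique_name         => 'ORCL19',
    protection_policy_name => 'BRONZE',
    reserved_space         => '128G');
  -- let the virtual private catalog user send/restore its backups
  DBMS_RA.GRANT_DB_ACCESS(
    username       => 'VPC_USER',
    db_unique_name => 'ORCL19');
END;
/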

More…

RMAN, Allocate channel, CDB, and CLOSE: bug

Allocate channel for RMAN is used in various scenarios; most of the time it is useful when you use tape as the device type or need some kind of format. The way to do the allocation has not changed in a long time, but when you run the database in container mode you can hit a bug that turns your channel unusable. I will show you the bug and how to avoid it with a simple trick.

This bug hits every version since 12c. I discovered it last year while testing some scenarios, but I was only able to test and post about it recently. It occurs only for CDB databases, and just one one-off patch has been published, for 12.2. But there is a more useful workaround that works for every version.

The most interesting part is that the way we have always allocated channels will not work. If you search every doc available for allocate channel, from 9i to 19c, the first thing you do after opening run{} is allocate the channel. This is the default and recommended approach in the docs: for 19c, 18c, and 9i.
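
For reference, this is the conventional pattern those docs describe, with allocate channel as the first command inside run{} (the parms here use the disk-simulated SBT library purely for illustration):

run {
  # conventional form: allocate the channel first, then back up
  allocate channel ch1 device type sbt
    parms 'SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/backup)';
  backup database format '%d_%U';
  release channel ch1;
}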

More…