Tag Archives: Database

21c, DG PDB, New Steps

When DG PDB was released for 21c (at version 21.7), I wrote a blog post about how to use the feature (you can read it here). That was in August of 2022, and since then we have received small changes and corrections, but with Release Update 21.12 (patch 35740258) we got new commands like “EDIT CONFIGURATION PREPARE DGPDB”.
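As a quick preview (a sketch only; the full session, with prompts and outputs, is covered in the post), the new command runs from DGMGRL and asks for the password of the internal DGPDB_INT account on each CDB before preparing the configuration:

    DGMGRL> EDIT CONFIGURATION PREPARE DGPDB;
    DGMGRL> SHOW CONFIGURATION;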

Not only that, but Ludovico Caldara (Data Guard PM) recently wrote a blog post about the new commands for Data Guard preparation that can be used with the Broker. They are an evolution of the commands I covered in a previous blog post.

So, in this post, I will cover the new commands for DG PDB and the changes/improvements that appeared in the latest version. It is a long post, but everything is covered here. There are no gaps or missing information; all the steps, logs, and outputs are described and documented.

Click here to read more…

Friends, Conferences, and Community

Last September was pretty special for me because I had the opportunity to meet friends again after the COVID pandemic.

POUG

First, POUG is POUG. There is no way to describe what it is if you have never been there. I had the opportunity to be there talking about ExaCC. The whole conference is amazing, not just because of the technical content (which is surreal), but also because of the friends who were/are there. Everyone there was enjoying the conference, but most importantly, enjoying being there with friends.

For POUG I need to say thank you to Kamil Stawiarski (https://twitter.com/ora600pl) and Luiza Nowak (https://www.ora-600.pl/en/tp/luiza-koziel-2/) for organizing the event. You, together with the whole POUG team, made a fantastic conference.

Click here to read more…

21c, DG PDB

Since 21c became publicly available, Data Guard per Pluggable Database – DG PDB – was intended to be there, but Oracle needed more time to make things work, and some weeks ago it released the feature with version 21.7. In this post, I will show how to configure it, how to troubleshoot it, and the pitfalls of using it. As usual, all the steps, logs, and outputs are covered here, and I hope it helps you understand the whole DG PDB process.

My environment

The environment that I am using here is:

  • Two databases running in RAC mode (two nodes in each cluster).
  • ASM: the same DATA and RECO disk group names in each cluster.

As for the databases, I have:

  • ORADBDC1, which has the PDB PDBDC1; together they represent DC1.
  • ORADBDC2, which has the PDB PDBDC2; together they represent DC2.

Each of these clusters is in a separate environment, meaning both are primary databases inside their own datacenter. So, they have no Data Guard configured between them.

The main target for this post is to have the PDB from DC2 protected by ORADBDC1 at DC1. I used RAC and ASM because this is usually the normal configuration for MAA (following the recommended architectures baseline) when using Data Guard. It increases protection and reduces the single points of failure in your environment.
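Just to anticipate where we are heading (a sketch of the Broker commands only, with placeholder disk group paths; every step, with its real output, is shown later in the post), the setup for this target looks roughly like this:

    DGMGRL> CREATE CONFIGURATION dgpdb AS CONNECT IDENTIFIER IS ORADBDC2;
    DGMGRL> ADD DATABASE ORADBDC1 AS CONNECT IDENTIFIER IS ORADBDC1;
    DGMGRL> ENABLE CONFIGURATION;
    DGMGRL> ADD PLUGGABLE DATABASE PDBDC2 AT ORADBDC1 SOURCE IS PDBDC2 AT ORADBDC2
            PDBFILENAMECONVERT IS "'+DATA/ORADBDC2/', '+DATA/ORADBDC1/'";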

DG PDB

The idea of DG PDB differs a little from what we commonly see with Data Guard: here each container has its own life. This means that only the PDB is protected, not the entire CDB. This puts DG PDB closer to the cloud than to on-premises, because it fits perfectly into the OCI structure: you can create your PDB in one region and choose another region to protect it. It is even closer if you think of Autonomous Database, where your ownership is the PDB only. I will not say whether that is good or not, but it is linked to how Oracle works with OCI. Personally, I prefer to have normal Data Guard configured to protect my databases and choose where I want to open my PDB (maybe they will add this feature in the future).

Another detail is that DG PDB (for now) works only in MaxPerformance mode, so there is no SYNC mode for the archive destinations. There are more limitations for DG PDB, and you can check them in the topic DG PDB Configuration Restrictions in the official documentation (I recommend that you read it).

Please read my new blog post about the changes to the process; you can see how it evolved and improved. Read it here.

Click here to read more…

2020-2021

My first post of 2021 is just a thank you. Thanks for reading my posts and following me on social media (Twitter/LinkedIn/Blog). Thanks for the 41,000 site visits during the last year, and to everyone who attended my sessions at online events. I hope I was able to help you with something about Oracle.

Click here to read more…

DB_UNIQUE_NAME, PDB, and Data Guard

When you change database parameters, it is possible to specify the db_unique_name and gain more control over where you want to apply/use them. This is very useful to limit the scope, but you need to be aware of some side effects. Even though it is not present in the official doc, you can use it. Check here some details that you need to take care of.
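As a minimal sketch of the idea (the parameter and the db_unique_name here are just illustrative placeholders, and remember, this clause is not in the official doc), in a primary/standby configuration you can target only one of the databases:

    SQL> ALTER SYSTEM SET standby_file_management='AUTO'
         SCOPE=SPFILE DB_UNIQUE_NAME='MYDB_STBY';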

Click here to read more…

Exadata, Missing Metric

Understanding metrics for the Exadata Storage Server is important to understand how all the software features are being used, and all the details behind that. Here I will discuss one case where the FC_IO_BY_R_SEC metric can show imprecise values, and I will discuss one missing metric that could save a lot.

If you have doubts about metrics, you can check my post about metrics; it was an introduction, but it covers some aspects of how to read and use them. You can also check my other post, where I show how to use the metric DB_FC_IO_BY_SEC to identify database problems that can stay hidden when checking only from the database side.
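If you want to look at these metrics yourself, a minimal check with CellCLI on the storage server looks like this (the database name in the second filter is a placeholder):

    CellCLI> LIST METRICCURRENT WHERE name = 'FC_IO_BY_R_SEC' DETAIL;
    CellCLI> LIST METRICCURRENT WHERE name = 'DB_FC_IO_BY_SEC' AND metricObjectName = 'MYDB' DETAIL;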

Click here to read more…

Exadata, Understanding Metrics

Metrics for Exadata give you a way to deeply see, and understand, what is happening in the Exadata Storage Server and Exadata software. Understanding them is fundamental to identifying and solving problems that can be hidden (or even unseen) from the database side. In this post, I will explain details about these metrics and what you can do with them.

My last article about Exadata Storage Server metrics gave one example of how to use them to identify problems that do not appear on the database side. In that post, I showed how I used the metric DB_FC_IO_BY_SEC to identify bad queries.

The point about Exadata (that I made in that article) is that most of the time Exadata is so powerful that bad statements are handled without a problem because of the features that exist (flash cache, smart IO, and others). But another point is that Exadata is usually a highly consolidated environment, where you “consolidate” a lot of databases, and it is normal that some of them have different workloads and needs. Using metrics can help you fine-tune your environment, but beyond that, it gives you a way to check and control everything that is happening.

In this post, I will not explain each metric one by one, but rather guide you to understand metrics and some interesting and important details about them.
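As a starting point for exploring on your own (standard CellCLI commands; the metric and the date in the example are arbitrary), you can read a metric’s definition and its collected history directly on the cell:

    CellCLI> LIST METRICDEFINITION DB_FC_IO_BY_SEC DETAIL;
    CellCLI> LIST METRICHISTORY WHERE name = 'DB_FC_IO_BY_SEC' AND collectionTime > '2020-08-01T00:00:00+02:00';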

Click here to read more…

ZDLRA, Multi-site protection – ZERO RPO for Primary and Standby

ZDLRA can be used in everything from a small single-database environment to big environments where you need protection at more than one site at the same time. At every level, you can use different ZDLRA features to provide the desired protection. Here I will show how to reach zero RPO for both primary and standby databases. All the steps, docs, and technical parts are covered.

You can check the examples and the reference for every scenario in these two papers from the Oracle MAA team: MAA Overview On-Premises and Oracle MAA Reference Architectures. They provide good information on how to prepare to reduce RPO and improve RTO. In summary, the focus is the same: reduce downtime and data loss in case of a catastrophe (zero RTO and zero RPO).

Multi-site protection

If you looked at both papers, you saw that to provide good protection it is desirable to have an additional site to, at least, send the backups to. And if you go higher, to GOLD and PLATINUM environments, you start to have multiple sites synced with Data Guard. These critical/mission-critical environments need to be protected against every kind of catastrophic failure, from a disk to a complete site outage (some need to follow specific legal requirements, banks as an example).

And the focus of this post is these big environments. I will show you how to use ZDLRA to protect both sites, reaching zero RPO even for standby databases. By doing that, you can survive a catastrophic outage (like an entire datacenter failure) and still have zero RPO. Going further, you can even have zero RPO if you completely lose one site when using real-time redo for ZDLRA, and this is not written in the docs, by the way.
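To give an idea of what this looks like (a simplified sketch; service names and DB_UNIQUE_NAME values are placeholders, and the complete configuration, run on the primary and repeated on the standby against its local ZDLRA, is detailed in the post), each database ships redo to the ZDLRA of its own site through an extra archive destination:

    SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(orcl_dc1,orcl_dc2,zdlras1)' SCOPE=BOTH;
    SQL> ALTER SYSTEM SET log_archive_dest_3='SERVICE="zdlras1-scan:1521/zdlras1" ASYNC NOAFFIRM
         VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=zdlras1' SCOPE=BOTH;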

Click here to read more…

ZDLRA, Real-Time Redo and Zero RPO

The idea of Real-Time Redo is to reach zero RPO for every kind of database, and this includes those with and without Data Guard. As you can see in my last post, where I showed how to configure Real-Time Redo for a database, a few small steps need to be executed, and they are pretty similar to configuring a remote archivelog destination for Data Guard.

But if you noticed, the configuration for the remote destination was defined as ASYNC, and it is hinted that way in the ZDLRA docs (“Protection of Ongoing Transactions” or “How Real-Time Redo Transport Works”). In the same post, I called this “controversial” because ASYNC does not guarantee zero RPO.

You can see more in the Data Guard docs (Oracle Data Guard Protection Modes and Oracle Data Guard Concepts and Administration), but in summary:

  • ASYNC: The primary database does not wait for a response from the remote destination.
  • SYNC/NOAFFIRM: The primary database holds the commit until the remote destination reports that it received the redo data. It does not wait until the remote site reports that it wrote the data to disk.
  • SYNC/AFFIRM: The primary database holds the commit until the remote destination reports that it received the redo data and wrote it to disk.

You can read about the differences in more detail here: Best Practices for Synchronous Redo Transport and Best Practices for Asynchronous Redo Transport.

The idea is simple: if you use ASYNC, there is no guarantee of zero data loss between the primary database and the remote destination.
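In practice, the difference is just attributes of the archive destination (a sketch with placeholder service and database names):

    -- No commit wait; some redo can be lost in a crash:
    SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=remote_db ASYNC NOAFFIRM DB_UNIQUE_NAME=remote_db';
    -- Commit waits until the destination received (and, with AFFIRM, persisted) the redo:
    SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=remote_db SYNC AFFIRM DB_UNIQUE_NAME=remote_db';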

Click here to read more…

ZDLRA Internals, Virtual Full Backup

Virtual full backup is probably the best-known feature of Oracle Zero Data Loss Recovery Appliance (ZDLRA), and you can check here how it works. In this post, I will show how virtual full backup works internally and how the INDEX_BACKUP task integrates with tables like PLANS, PLAN_DETAILS, CHUNKS, and BLOCKS.

For the internal tables, you can check my previous post “ZDLRA Internals, Tables and Storage”, where I explained the details. To understand the INDEX_BACKUP task, check my post “ZDLRA Internals, INDEX_BACKUP task in details”. And if you know nothing about ZDLRA and want to start reading, you can check my post “Understanding ZDLRA” for all the features and details.

The basis for this article is the virtual full backup and incremental forever strategy. I explained both in “ZDLRA, Virtual Full Backup and Incremental Forever”, where I included how they work integrated with RMAN backup. Basically, after an initial level 0 backup, you execute just level 1 backups and ZDLRA generates a virtual level 0 backup, as in the sketch below. But here, in this post, I will show you how it works in some internal detail.
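Just to recap the client side of that strategy (a simplified sketch; real jobs go through the ZDLRA SBT channel rather than disk), the RMAN part is basically:

    RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;            # one time, to seed the appliance
    RMAN> BACKUP CUMULATIVE INCREMENTAL LEVEL 1 DATABASE; # forever after; ZDLRA builds the virtual level 0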

Click here to read more…