With the release of Oracle Database 21c, it is time to study the new features. The 21c version of Grid Infrastructure (and ASM) was released, and an upgrade from older versions can be executed. It is not a complex task, but some details need to be verified. In this post, I will show the steps to upgrade Grid Infrastructure to 21c. If you need to upgrade from 18c to 19c, you can check my previous post. Before starting the upgrade, some points need to be checked:
OS version: Check that it is compatible with 21c; if you are using ASMLib or ASM Filter Driver, check the kernel modules and the certification matrix.
Current GI: You may need to apply some patches first. The best practice is to run the latest Release Update for your version.
Features in use (like AFD, HAIP, custom resources): Check the compatibility of the old features with 21c. You may need to remove HAIP or change your CRS resources.
21c requirements for GI: Check memory, disk space, and supported database versions (see the cluvfy sketch after this list).
Oracle Home patches (for running databases): Check whether you need to apply patches so your databases remain compatible with GI 21c.
Backup of your databases: Just in case you need to roll something back.
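Most of these items can be verified in one shot with cluvfy. A minimal sketch of the pre-upgrade check, assuming the 21c software is already unzipped into /u01/app/21.0.0/grid (paths, node setup, and version are examples; adjust to your environment):

# Run as the grid user from the unzipped 21c home
/u01/app/21.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/19.0.0/grid \
    -dest_crshome /u01/app/21.0.0/grid \
    -dest_version 21.3.0.0.0 \
    -fixup -verbose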
The environment that I am using for this example is:
Oracle Linux 8.4.
GI cluster with two nodes.
ASM Filter for disk access.
19.11 for GI.
19.12 for the Oracle Database home.
I personally recommend upgrading your current GI to 19c before the 21c upgrade, or at least applying one of the latest Release Updates for your running version. This avoids a lot of errors, since most of the known bugs will already be patched.
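As a quick sanity check before starting, you can confirm the active GI version and the patch level of the current home (the home path below is an example):

# Active and installed clusterware versions
crsctl query crs activeversion
crsctl query crs softwareversion

# Patches applied to the current 19c GI home
/u01/app/19.0.0/grid/OPatch/opatch lspatches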
Quick post for today. Recently I needed to upgrade to the latest version of Autonomous Health Framework (AHF) on an Exadata running GI 19.5. In this particular case the GI was not even running AHF, but was still using the standalone TFA that comes with it. So, here I will show how to upgrade to the latest version of AHF, replacing the standalone TFA as well.
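The flow is basically unzip and run the installer as root. A minimal sketch, assuming the AHF bundle was already downloaded from MOS and staged under /tmp (file name and install locations are examples):

# As root: unzip the AHF bundle and run the installer;
# it detects the old standalone TFA and replaces it
unzip AHF-LINUX_v21.2.0.zip -d /tmp/ahf
cd /tmp/ahf
./ahf_setup -ahf_loc /opt/oracle.ahf -data_dir /u01/app/oracle.ahf

# Confirm the new version is running
tfactl status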
Recently, during an Exadata patch, one database node reported an issue during the patchmgr run and stopped the patch apply. The error was related to a missing LVM volume (LVDoNotRemoveOrUse). In this post you can check the error, but please pay attention: the fix changes some LVM config file contents. So, verify each step carefully and (if possible) open a proactive SR to be sure about what you will be doing.
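Just to illustrate the kind of fix involved (do not run this without validating it with Oracle Support, as said above), the placeholder volume is typically recreated as a small LV inside VGExaDb; the names and size below are examples from a typical Exadata layout:

# Check whether the placeholder LV is really missing
lvm lvs -o lv_name,vg_name,lv_size VGExaDb

# Recreate it as a small placeholder volume (size is an example)
lvm lvcreate -L 1G -n LVDoNotRemoveOrUse VGExaDb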
Sometimes even a perfectly running server can hit issues after a simple reboot. That is exactly what occurred recently with one Exadata database node. It was not the first time that the same error appeared (and since there is no well-documented step-by-step to fix it, I documented the steps below). So, check how to fix the issue related to the read-only locking_type of LVM detected by dracut.
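In short, the fix is to set locking_type back to 1 in /etc/lvm/lvm.conf and rebuild the initramfs so dracut picks up the change. A sketch, assuming locking_type is currently set to 4 (read-only):

# Check the current value (4 = read-only, 1 = default local locking)
grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf

# Set it back to 1 and rebuild the initramfs for the running kernel
sed -i 's/locking_type = 4/locking_type = 1/' /etc/lvm/lvm.conf
dracut -f /boot/initramfs-$(uname -r).img $(uname -r)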
My first post of 2021 is just a Thank You. Thanks for reading my posts and following me on social media (Twitter/LinkedIn/Blog). Thanks for the 41,000 site accesses during the last year and to everyone who attended my sessions at the online events. I hope I was able to help you with something about Oracle.
From time to time we need to clone/duplicate databases, and we have several ways to do that; the most common are the DUPLICATE and RESTORE (with a new name) commands. But when using an RMAN catalog we need to take extra care, because we can end up losing the backups of the entire database if we do it the wrong way. And this is even more critical when using ZDLRA.
You need to choose between The Good and The Bad, because if you choose wrong you will have trouble with The Ugly. The key factor here is the RMAN/ZDLRA catalog: choose wrong and you will automatically add bad data to the internal catalog tables, and if you try to clean it up, you can delete database backups.
In this post, I will show how to correctly clone a database when you are using the RMAN/ZDLRA catalog, and the reasons for that. I will also show the problems and the side effects for ZDLRA when you choose the bad way.
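To set the stage, the generic DUPLICATE form looks like the sketch below (connection strings and the clone name are examples); where the catalog connection fits into this operation is exactly the point this post discusses, so take this only as the base syntax:

# Base syntax only; prod and clone19 are example service names
rman target sys@prod auxiliary sys@clone19

RMAN> DUPLICATE TARGET DATABASE TO clone19
      FROM ACTIVE DATABASE
      NOFILENAMECHECK;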
When we enroll a database at ZDLRA, we need to configure the RMAN channel parameters to point to our definitions, like RA_WALLET and CREDENTIAL_ALIAS. The same is done for backup and restore channels. But this can also be done in a different way, using RA_CLIENT_CONFIG_FILE for the ra library.
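For comparison, a sketch of both forms (the library path, wallet location, and scan/service alias below are examples from a typical setup):

# Classic way: wallet and credential alias inline in the channel PARMS
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS
  'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libra.so,
   ENV=(RA_WALLET=location=file:/u01/app/oracle/wallet
   credential_alias=zdlra-scan:1521/zdlra:dedicated)';

# Alternative: keep those definitions in a file referenced by RA_CLIENT_CONFIG_FILE
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS
  'SBT_LIBRARY=/u01/app/oracle/product/19.0.0/dbhome_1/lib/libra.so,
   ENV=(RA_CLIENT_CONFIG_FILE=/u01/app/oracle/ra_config.txt)';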
As I wrote previously, sometimes we need to have long-term/archival backups due to compliance requirements. Usually, these backups are stored outside (like in a vault/bunker), but certainly not in the same datacenter as the database. But how can we do this with ZDLRA?
In my post about COPY_BACKUP, I wrote how to make an external copy of a backup set at ZDLRA. But this is not the best option when we need to archive a backup, because the copy continues to follow the same recovery window as the original backup set. This means that if you need some kind of archive for 5 years, you need to set your recovery window (at the policy level) to that window. And this will certainly put high pressure on space usage, because all backups will be stored until they become obsolete.
So, the best way is to use KEEP backups from RMAN. And as I wrote in my previous post, they do not interact with (or break) the incremental-forever strategy. It is possible to generate the KEEP backup and, using DBMS_RA.MOVE_BACKUP, move it to a filesystem destination (from where you can copy/store it) and archive it outside of ZDLRA.
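A sketch of the flow: the KEEP syntax is standard RMAN, while the DBMS_RA.MOVE_BACKUP parameters below are only illustrative, so confirm the exact signature in the DBMS_RA documentation of your RA version:

# 1) Generate the archival (KEEP) backup, here with a 5-year retention
RMAN> BACKUP DATABASE TAG 'ARCHIVE_5Y'
      KEEP UNTIL TIME 'SYSDATE + 365*5'
      RESTORE POINT ARCHIVE_5Y;

-- 2) On the ZDLRA catalog, move that backup to a filesystem destination
--    (parameter names here are illustrative)
SQL> EXEC DBMS_RA.MOVE_BACKUP(tag => 'ARCHIVE_5Y', format => '/backup/archive/%U');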
KEEP backups from RMAN provide long-term and archival retention. They bypass the default retention policy and are used to satisfy regulations/compliance requirements (like HIPAA, among others) that demand archival retention. But with ZDLRA they are treated in a different way. Here I will show how KEEP backups can impact your backup strategy for ZDLRA.
As you know, ZDLRA is an appliance dedicated to providing zero data loss across several (planned and unplanned) outage scenarios. All backups are stored inside the delta store to be processed, and they are deconstructed, meaning that the RMAN backup set does not exist as a formal backup set file.
But sometimes we need to copy/extract backups from ZDLRA to a filesystem, maybe because some regulations/compliance requirements demand long-term/archival storage. But some important caveats need to be explained.