Tag Archives: 19c

Duplicate PDB from active database, ASM, and OMF

Starting with 18c, it is possible to duplicate a PDB from an active database. This is a cool feature that helps a lot in daily activities. But recently I got an error when the destination uses ASM and the files (of course) are managed with OMF. The solution is simple and is related to a bug that affects the 18c, 19c, and 21c versions.

Duplicating pluggable databases has been possible for a long time and has some rules. But duplicating a PDB from an active database to a new CDB helps a lot because everything can be done online. We don’t need to create an intermediate CDB to export the PDB with an unplug/plug, clone the source locally to a read-only PDB and create a new one using a database link, or even use RMAN backups.
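For context, the whole online clone is a single RMAN command, run while connected to the source CDB as target and to the destination CDB as auxiliary; the destination typically relies on OMF (db_create_file_dest pointing to an ASM disk group) so the new files are named automatically. A minimal sketch, where the connect strings and PDB names are just examples:

$ rman TARGET sys@cdb_src AUXILIARY sys@cdb_dst

RMAN> DUPLICATE PLUGGABLE DATABASE pdb01 TO cdb_dst FROM ACTIVE DATABASE;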

Click here to read more…

21c Grid Infrastructure Upgrade

With the release of 21c of Oracle Database, it is time to study the new features. The 21c version of Grid Infrastructure (and ASM) was released, and an upgrade from older versions can be executed. It is not a complex task, but some details need to be verified. In this post, I will show the steps to upgrade the Grid Infrastructure to 21c. If you need to upgrade from 18c to 19c, you can check my previous post.

Planning

The first step is to plan everything. You need to check the requirements, read the docs, download files, and plan the actions. While I am writing this post, there are no official MOS docs about how to upgrade the GI to 21c. The first place to look for the procedure is the official doc for GI Installation and Upgrade, mainly chapter 11. And another good example is 19c Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running on Oracle Linux (Doc ID 2542082.1).

So, what you need to consider:

  • OS version: Check if it is compatible with 21c and, if you are using ASMLib or ASM Filter Driver, check the kernel modules and the certification matrix.
  • Current GI: Maybe you need to apply some patches. The best practice is to run the latest version (see the quick check sketch after this list).
  • Used features (like AFD, HAIP, Resources): Check compatibility of the old features with 21c. Maybe you need to remove HAIP or change your CRS resources.
  • 21c requirements for GI: Check memory, space, and database versions.
  • Oracle Home patches (for databases running): Check if you need to apply some patches for your database to be compatible with GI 21c.
  • Backup of your Databases: Just in case you need to roll back something.
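As part of that planning, a quick pre-check from the current home and from the unzipped 21c home confirms the starting point and the upgrade readiness. A minimal sketch, assuming the example home paths below (the cluvfy flags follow the 19c syntax):

# current active and installed GI versions (run from the existing home)
$ crsctl query crs activeversion
$ crsctl query crs softwareversion

# pre-upgrade validation from the new 21c home
$ /u01/app/21.0.0.0/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling \
    -src_crshome /u01/app/19.0.0.0/grid \
    -dest_crshome /u01/app/21.0.0.0/grid \
    -dest_version 21.0.0.0.0 -verbose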

My environment

The environment that I am using for this example is:

  • Oracle Linux 8.4.
  • GI cluster with two nodes.
  • ASM Filter for disk access.
  • 19.11 for GI.
  • 19.12 for Oracle Home database.

I personally recommend upgrading your current GI to 19c before the 21c upgrade, or applying one of the latest RUs/PSUs for your running version. This avoids a lot of errors, since most of the known bugs will already be patched. Check my environment below:

Click here to read more…

Fast-Start Failover, Observe-Only Mode and Health Conditions

Oracle Data Guard Broker allows database administrators to automate some tasks and provides an easy way to properly configure many features and details of Data Guard environments. Fast-Start Failover (FSFO) allows the broker to automatically fail over to the standby database in case of a failure of the primary. But until 19c, the only option was to always trigger the failover. This changed in 19c with a nice new feature that allows us to put FSFO in Observe-Only Mode.

In this post, I will focus just on the new FSFO features, Observe-Only Mode and its Health Conditions. Lag and other details will not be covered here.

Observe-Only Mode

Observe-Only Mode is a simple change that puts FSFO into a mode where it just observes/monitors the DG environment, but in case of failure it does not change the roles between primary and standby. As simple as that. As the Broker documentation for Observe-Only Mode says:

The observe-only mode enables you to test the impact of using fast-start failover in your configuration, without making any actual changes to the configuration.
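For reference, switching to this mode is a one-line change in DGMGRL (a minimal sketch, assuming the broker configuration and the observer are already in place):

DGMGRL> ENABLE FAST_START FAILOVER OBSERVE ONLY;
DGMGRL> SHOW FAST_START FAILOVER;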

Click here to read more…

19c Grid Infrastructure Upgrade

Upgrading Grid Infrastructure is an activity that is usually postponed because it involves a sensitive area that, when it does not work, causes a big downtime until it is fixed. But in recent versions it is not a complicated task and, if you follow the basic rules, it works without problems.

Here I will show a little example of how to upgrade the GI from 18.6.0 to 19.5. The steps below were executed on an Exadata running version 19.2.7.0.0.191012 with GI 18.6.0.0, but they can be done in any environment that supports Oracle GI.
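At a high level, the upgrade is driven by gridSetup.sh from the new 19c home, optionally applying the RU during the setup. A minimal sketch with placeholder paths (not the exact commands from this environment):

# as the grid user on the first node, unzip the 19c GI image into the new home (example path)
$ mkdir -p /u01/app/19.0.0.0/grid
$ cd /u01/app/19.0.0.0/grid
$ unzip -q /u01/patches/LINUX.X64_193000_grid_home.zip

# launch the installer in upgrade mode, applying the RU at the same time (patch directory is a placeholder)
$ ./gridSetup.sh -applyRU /u01/patches/<19.5_GI_RU_directory>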

Click here to read more…

TFA error after GI upgrade to 19c

Recently I did an Exadata stack upgrade/update to the latest 19.2 version (19.2.7.0.0.191012) and upgraded the GI from 18c to 19c (latest 19c version – 19.5.0.0.191015), and after that TFA did not work.

Since I did not want to execute a complete TFA clean-up and reinstallation, I tried to find the error and a solution. Here I want to share with you the workaround (since there is no solution yet) that I discovered and used to fix the error.

The environment

The environment is:

  • Old Grid Infrastructure: Version 18.6.0.0.190416
  • New Grid Infrastructure: Version 19.5.0.0.191015
  • Exadata domU: Version 19.2.7.0.0.191012 running kernel 4.1.12-124.30.1.el7uek.x86_64

TFA error

After upgrading the GI from 18c to 19c, TFA does not work. If you try to start it or collect logs using it, you receive errors. In the environment described here, TFA was running fine with the 18c version, and the rootupgrade script from 18c to 19c did not report any error.
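A quick way to see the broken state is to ask TFA for its status and try a small collection (a minimal sketch; the GI home path is an example and the exact errors vary):

# as root, using tfactl from the new 19c GI home
$ /u01/app/19.0.0.0/grid/bin/tfactl print status
$ /u01/app/19.0.0.0/grid/bin/tfactl diagcollect -since 1h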

And to be more precise, the TFA upgrade from 18c to 19c called by rootupgrade was OK (according to the log – I will show it later). But even after that, the error occurs.

The solution provided by MOS support, as usual: download the latest TFA and reinstall over the current one. Unfortunately, I do not like this approach because it can lead to errors during GI upgrades to future releases (like 20c) and updates (19.6, for example).

Click here to read more…

Exadata, workaround for oracka.ko error

Recently I did an Exadata stack upgrade/update to the latest 19.2 version (19.2.7.0.0.191012), released in October 2019, and updated the GI to the latest 19c version (19.5.0.0.191015), and after that I had some issues creating 11G databases.

So, when I tried to create an 11G RAC database, the error “File -oracka.ko- was not found” appeared and the creation failed. Here I want to share with you the workaround (since there is no solution yet) that I discovered and used to bypass the error.

The environment

The environment is:

  • Grid Infrastructure: Version 19.5.0.0.191015
  • Exadata domU: Version 19.2.7.0.0.191012 running kernel 4.1.12-124.30.1.el7uek.x86_64
  • 11G Database: Version 11.2.0.4.180717
  • ACFS: Used to store some files

oracka.ko

So, calling dbca:

[DEV-oracle@exsite1c1-]$ /u01/app/oracle/product/11.2.0.4/dbhome_1/bin/dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbName D11TST19 -adminManaged -sid D11TST19 -sysPassword oracle11 -systemPassword oracle11 -characterSet WE8ISO8859P15 -emConfiguration NONE -storageType ASM -diskGroupName DATAC8 -recoveryGroupName RECOC8 -nodelist exsite1c1,exsite1c2 -sampleSchema false
Copying database files
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/D11TST19/D11TST19.log" for further details.
[DEV-oracle@exsite1c1-]$
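Before applying any workaround, it helps to confirm whether the module exists anywhere in the 19c GI home and what dbca actually logged. A minimal sketch (the GI home path is an example; the dbca log path comes from the output above):

# search the 19c GI home for the module dbca is looking for
$ find /u01/app/19.0.0.0/grid -name "oracka.ko*" 2>/dev/null

# check the dbca log for the exact failing step
$ grep -i oracka /u01/app/oracle/cfgtoollogs/dbca/D11TST19/D11TST19.log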

Click here to read more…

DML over Standby for Active Data Guard in 19c

With the new 19c version, Data Guard received some attention and now we can run DML over the standby, which is redirected to the primary database. It is not hard to implement, but unfortunately there is not much information about it in the docs.

As a training exercise, I tested this new feature and want to share some information about it. First, the environment that I used (and the requirements):

  • Primary and Standby databases running 19c.
  • Data Guard in Maximum Availability mode.
  • Active Data Guard enabled.

Remember that the idea of DML over the standby is to use it in cases where your reporting application needs to update a few tables and records (like audit logins) while processing data on the standby. The volume of DML is (and should remain) low. At this point there is no effort to allow, or create, multiple active-active datacenters/sites for your database. If you start to execute a lot of DML on the standby side, you can impact the primary database, and on top of that you can magnify problems with locks and concurrency.
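For reference, the redirection is enabled either per session or through an initialization parameter (a minimal sketch; the schema and table are made-up examples):

-- on the Active Data Guard standby, for the current session only
SQL> ALTER SESSION ENABLE ADG_REDIRECT_DML;
SQL> UPDATE app.audit_logins SET last_login = SYSDATE WHERE username = 'REPORT01';
SQL> COMMIT;

-- or enable it for every session (set on both primary and standby)
SQL> ALTER SYSTEM SET ADG_REDIRECT_DML = TRUE SCOPE = BOTH;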

More…