Friday, November 11, 2022

Refreshable PDBs, a DR setup made easy

With the introduction of the Oracle CDB-PDB architecture, DBAs had to get used to a whole new way of thinking about databases. Databases became more flexible and, in a sense, more dynamic: you can do many cool things with PDBs, such as easier upgrades, cloning, etc.

With the release of Oracle 12.2, PDB cloning was extended with an option to refresh a remote PDB with data from a local source PDB. The clone you created in a remote location can now be updated automatically with changes from its local partner (the source) by applying redo that is retrieved over a database link.

This is called a Refreshable Clone PDB.

When thinking about these kinds of options, DBAs may start to think of Dataguard, the Standby Database solution for Oracle Enterprise Edition.
The Refreshable Clone PDB is much the same, but free of any extra costs and very easy to set up and maintain. Even better, the Refreshable Clone PDB also works on Standard Edition, so an Enterprise Edition license is not needed here.
Of course the remote database server must be licensed correctly to run this database.

There are a number of advantages of having a Refreshable Clone PDB:

  • When the clone is on another server, you can use it as a Disaster Recovery solution. The RPO then depends on the refresh rate you specify when creating the remote clone.
    The RTO depends on the amount of work needed to change the role of the clone to production status in case of a disaster.
  • When there is no disaster and there is sufficient archivelog space on the source database, one can even open up the clone in a "read only" state for reporting purposes, thus offloading the source database. Keep in mind that the clone does not get updated while it is open and thus gets out of sync with its source. The redo can only be applied when the clone is closed.
    Opening a database for reporting like this hugely increases RTO and RPO.

A Refreshable Clone PDB can be refreshed either automatically or manually. This is specified in the "CREATE PLUGGABLE DATABASE" command when creating the clone. Of course the behavior can also be changed afterwards. Manual refreshes can always be run, even in automatic refresh mode.

Some requirements before you can create a Refreshable Clone PDB:

  • A CDB must already exist on the remote location, having the same patch version and components as the source CDB
  • A database link must be created from the remote CDB, connecting to the local (source) CDB, using the SYSTEM account
  • The SYSTEM account in the source CDB must have the "CREATE PLUGGABLE DATABASE" privilege. In Oracle 12.2 this was granted by default; in 19c this privilege is by default only granted to SYS.
  • The remote CDB must have the PDB_FILE_NAME_CONVERT parameter set in the spfile if the directory structure of the clone does not match that of the source.
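
On a 19c source CDB, the missing privilege can be granted as follows (a minimal sketch; run this as SYS in the root container of the source CDB):

SQL> grant create pluggable database to system container=all;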

Example:

In this example we assume that we have two CDBs: CDB1 as the source and CDB2 as the remote location. The PDB in the source is called HR_PROD. It is up to you whether you want to give the Clone PDB the same name. For DR this may be convenient, as it eases a failover. When used for reporting, the Clone PDB may get another name.

We assume that you have entered the correct connection string (named CDB1) in the tnsnames.ora on the remote server, pointing to the source server. 
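
Such an entry could look like this (a sketch; the host name source-server and the port are assumptions, adjust them to your environment):

CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = CDB1))
  )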

We start with creating the database link:

SQL> create database link TO_CDB1 
     connect to system identified by yourpassword
     using 'CDB1';

Now we are creating the Refreshable Clone PDB:

SQL> alter system set   
  pdb_file_name_convert='/u01/oradata/hrprod','/u01/oradata/hrclone' 
  scope=both;

SQL> create pluggable database HR_CLONE from HR_PROD@TO_CDB1
     refresh mode every 5 minutes;

That's it!

Checking the result shows:

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 HR_CLONE                       MOUNTED
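
The refresh settings of the clone can also be verified from the root of CDB2. This is a sketch; the REFRESH_MODE and REFRESH_INTERVAL columns are available in DBA_PDBS as of 12.2:

SQL> select pdb_name, refresh_mode, refresh_interval
     from dba_pdbs
     where pdb_name = 'HR_CLONE';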


If you now start monitoring the alert log of CDB2, messages like this will appear:

HR_CLONE(3):Serial Media Recovery started
HR_CLONE(3):max_pdb is 3
2022-11-11T09:23:34.574917+01:00
HR_CLONE(3):Media Recovery Log /u01/oradata/hrprod/fr01/CDB1/archivelog/2022_11_11/o1_mf_1_3270_kpw1ft17_.arc
2022-11-11T09:23:34.618074+01:00
HR_CLONE(3):Media Recovery Log /u01/oradata/hrprod/fr01/CDB1/foreign_archivelog/HR_PROD/2022_11_11/o1_mf_1_3271_kpw1n6gy_.arc
2022-11-11T09:23:34.671022+01:00
HR_CLONE(3):Incomplete Recovery applied until change 104159016 time 11/11/2022 09:23:34
2022-11-11T09:23:34.672222+01:00
HR_CLONE(3):Media Recovery Complete (CDB2)
HR_CLONE(3):Completed: alter pluggable database refres

Some other commands to manage the Refreshable Clone PDB are:

Manual refresh:
SQL> alter pluggable database HR_CLONE refresh;

Change the refresh interval:
SQL> alter pluggable database HR_CLONE refresh mode manual;
or
SQL> alter pluggable database HR_CLONE refresh mode every 30 minutes;

Open the database:
SQL> alter pluggable database HR_CLONE open read only;

Thinking about this, it may be convenient to first set the refresh mode to manual before opening the database. This prevents you from getting lots of errors in the alert log.

To resume the replication of the redo, the PDB must be closed again and a refresh interval set.
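
Putting the pieces together, resuming replication after a reporting session could look like this (a sketch, assuming the 5-minute interval from the example above):

SQL> alter pluggable database HR_CLONE close immediate;
SQL> alter pluggable database HR_CLONE refresh mode every 5 minutes;
SQL> alter pluggable database HR_CLONE refresh;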


Switchover

Because a Refreshable Clone PDB is not a Dataguard solution, a switchover is a manual action, to be performed by the DBA. With Dataguard, failovers can be automated with near-zero loss of data.
When a switchover is done correctly, however, the latest transactions are taken into account, so a switchover of a Clone PDB should also be possible with near-zero data loss.

We need to take into account that in the above database the refresh rate is set to 5 minutes. This means that every 5 minutes the source database is probed for changes. Whenever we do a switchover in between two such refreshes, we may lose changes applied to the source database during that interval.

First: a switchover can be done by the database itself using an ALTER PLUGGABLE DATABASE .... SWITCHOVER command, but this feature is restricted to Enterprise Edition on Exadata and all editions on Oracle Cloud Services.
To overcome this for on-premises databases, a hidden parameter can be set:

SQL> alter system set "_exadata_feature_on"=true scope=spfile;

A test using an on-premises Standard Edition runs into -yet- unresolvable errors, even when setting that parameter. It may be due to the Standard Edition, but it may also be that the available documentation on the Internet is incomplete.


A manual switchover, however, is possible. Whenever we shut down or lose the source database, we can disable the refresh mode on the clone and open it in Read/Write mode to start using it.
That looks like this:

SQL> alter pluggable database HR_CLONE refresh;

Pluggable database altered.

SQL> alter pluggable database HR_CLONE refresh mode none;

Pluggable database altered.

SQL> alter pluggable database HR_CLONE open;

Pluggable database altered.

If you do not disable the automatic refresh mode, the pluggable database cannot be opened in Read/Write mode and the command errors out.

After opening the PDB in Read/Write mode, it cannot be reverted to a Refreshable Clone.
It has to be recreated.

Thursday, October 13, 2022

DIAGNOSTIC & TUNING packs automatically used

We have configured an Oracle Enterprise Manager report that shows us the use of EE options for which we don't have a license.

What showed up immediately started to confuse us.
It showed that the DIAGNOSTIC+TUNING option was in use in several databases:

When checking the settings from within Oracle Enterprise Manager => Management Pack Usage, it showed that the packs were not in use; the radio button was not active:


When clicking on the Edit button next to the radio-button, it surprised us to see this:

So it looks to be enabled, although the previous screen did not show that.

When looking at the database itself, we see the following:

SQL> show parameter control_management_pack_access

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------
control_management_pack_access       string      DIAGNOSTIC+TUNING
SQL>

So, it is active.
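
Actual pack usage recorded by the database itself can be checked as well, independent of Enterprise Manager. A sketch using the DBA_FEATURE_USAGE_STATISTICS view:

SQL> select name, detected_usages, last_usage_date
     from dba_feature_usage_statistics
     where name like '%Pack%';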

Checking the spfile, we see the parameter is not defined there. So activation is probably a default.
Then we found this document:

where it states:


So, when installing an Enterprise Edition of Oracle and you don't have any pack licenses, you should explicitly set this parameter to NONE.

I myself find this default value very much open to discussion. A junior DBA who installs a database for the company he starts working for unknowingly creates a huge risk of extra costs when the company is subjected to a License Audit.

The solution is easy, but also easily forgotten when installing a new database release:

SQL> alter system set control_management_pack_access=NONE scope=both;

Wednesday, September 14, 2022

Using Oracle Wallets

An Oracle Wallet is a file that contains encrypted credentials for databases. This prevents the use of hardcoded passwords in scripts and/or on the command line. 
When using an Oracle Wallet the username and password are retrieved from this file on setting up the connection. 

The goal of using a wallet is to increase security. Whereas scripts in small companies usually contain plain-text passwords, the use of an Oracle Wallet can change this, e.g. when a company grows and more employees start using these scripts.

A small disadvantage of using an Oracle Wallet is that the wallet can contain only one set of credentials per database name.
Since it uses TNS entries to connect to a database, however, one can store one set of credentials per TNS name.
So in the case that you have a database called ORADB, a TNS entry can be created in tnsnames.ora for each user group, and thus credentials for each group.
Using e.g. one set of credentials for batch jobs and another for an application, two entries in tnsnames.ora would be sufficient to support this, e.g. ORADB_BATCH & ORADB_APP.
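
Those two entries could then point to the same database service (a sketch; the host name db-server and the service name are assumptions):

ORADB_APP =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORADB))
  )

ORADB_BATCH =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = ORADB))
  )

Each alias gets its own credential in the wallet, while both connect to the same ORADB database.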

Setting up the use of a Wallet is easy. First of course a wallet has to be created. Use the following command to do so:

mkstore -wrl /u01/oradata/wallet -create

The entered location is a directory where the wallet is created. Make sure the directory exists.
The command will ask you to enter a password. This password is used to encrypt the wallet. Store this wallet password in a safe place. It is needed every time you add, remove or change entries in the wallet.

Adding, deleting or changing entries in a wallet uses a similar syntax. Every action you perform on a wallet uses the "mkstore" command.
Adding an entry is done like this:

mkstore -wrl /u01/oradata/wallet -createCredential TNS_NAME USER PASSWORD

Example:
mkstore -wrl /u01/oradata/wallet -createCredential ORADB_APP scott tiger


But now how do scripts use the wallet?

First we need to set up sqlnet.ora. This file is usually found in the location defined by $TNS_ADMIN. If it is not present, create a new file and add the following contents to it:

WALLET_LOCATION =
   (SOURCE =
     (METHOD = FILE)
     (METHOD_DATA =
       (DIRECTORY = /u01/oradata/wallet)
     )
    )

SQLNET.WALLET_OVERRIDE = TRUE


The WALLET_LOCATION part is self-explanatory.
The SQLNET.WALLET_OVERRIDE parameter allows this method to override any existing OS authentication configuration.
When adding an sqlnet.ora file or making changes to it, a listener restart is required to activate it.
This means that switching to the use of a wallet requires a brief unavailability of the database for the business.

Using the wallet is now easy.
Any connection to the database that does not supply a credential will use the stored credential in the wallet, e.g.:

sqlplus /@ORADB_APP

will successfully log in to the database.

To manage the wallet over time, the following commands can be needed:

1. List all wallet entries
mkstore -wrl /u01/oradata/wallet -listCredential

2. Show the password for a specific entry
First list the credentials to find the index number of the entry:
mkstore -wrl /u01/oradata/wallet -listCredential

Then view that entry using its index number, e.g.:
mkstore -wrl /u01/oradata/wallet -viewEntry oracle.security.client.password13

3. Manual entry creation within the wallet (the TNS connection name must be unique)
mkstore -wrl /u01/oradata/wallet -createCredential TNS_NAME USER PASSWORD

Example:
mkstore -wrl /u01/oradata/wallet -createCredential oradb_app scott tiger

4. Manual entry deletion
mkstore -wrl /u01/oradata/wallet -deleteCredential TNS_NAME

This will delete "all" entries for this TNS_NAME, but since a wallet can store only one credential per TNS_NAME, it will delete just that credential.

5. Wallet deletion
mkstore -wrl /u01/oradata/wallet -delete

Note that this will delete the complete wallet. All entries are lost!

6. Modifying an Entry:
mkstore -wrl /u01/oradata/wallet -modifyCredential TNS_NAME USER PASSWORD


The wallet can also be used for e.g. RMAN in combination with a catalog database, like:

$ORACLE_HOME/bin/rman  target  /   catalog  /@RMANCAT
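
For this to work, a credential for the catalog schema owner must be stored under the RMANCAT alias as well (a sketch; rco and its password are hypothetical names):

mkstore -wrl /u01/oradata/wallet -createCredential RMANCAT rco rco_password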


Friday, April 29, 2022

DBSAT and the risks for EBS databases

A couple of months ago we started investigating the capabilities of Oracle's DBSAT, the Database Security Assessment Tool.

The tool itself is powerful; it generates detailed information about vulnerabilities in your databases. The information is presented in a readable HTML format that enables quick handling to reduce security risks.


However....

One special point of attention is needed here, and we already made a mistake ourselves.
DBSAT recognizes the installation of the so-called "sample schemas" in the scanned database and reports on this in detail, looking like this:

The special point of attention regards EBS databases, as they contain HR and OE schemas, but for real usage!


The DBSAT report stating that Sample Schemas are installed in EBS databases and need to be dropped is a so-called False Positive.

For EBS databases, DO NOT DROP these schemas, as doing so will render the EBS application unusable.