
Monday, January 27, 2025

Remove a lot of data from a table (4 ways)

Deleting old historical data or test data from a database is one of those DBA jobs that may be boring, but that also needs sufficient attention and care.
There are four ways to delete massive amounts of data, some requiring more care than others:

  1. Truncate
    The easiest and most destructive way of deleting data is truncating a table. A truncate command is like cutting down a tree: it simply cuts away all data from a table and leaves it empty. It is irreversible; no undo is saved.

    Truncate has two options: either release all space reserved by the table, or keep it for reallocation to new data.

    SQL> truncate table massive_table drop storage;

    This will empty the table and release all occupied storage in the datafiles, to be used by other objects. When new data is inserted into the table, it may end up less fragmented, depending on the overall fragmentation of the datafiles.

    The other option is:

    SQL> truncate table massive_table reuse storage;

    This will empty the table, but keep the allocated storage. This is quite useful for tables that hold huge amounts of data during a batch or other repeating process. When storage is reused, the database does not need to reallocate it when data is inserted into the table again.
    The data in the table will be as fragmented as it was before.

    There is also an extra option "cascade". It can be added to the statement when other tables reference this one through foreign keys defined with ON DELETE CASCADE; those child tables are then truncated as well:

    SQL> truncate table massive_table drop storage cascade;
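
    To see how much space a truncate actually released, one can for example compare the segment size before and after (a simple sketch using user_segments):

    SQL> select segment_name, bytes/1024/1024 as mb
         from user_segments
         where segment_name = 'MASSIVE_TABLE';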


  2. CTAS
    Create Table As Select is probably the safest way when deleting huge amounts of data. It can be used to save the data that is about to be deleted into a backup table before a truncate is executed. It is also possible to create a new table with only the "data to keep" and use that table instead of the old one, e.g. by renaming it:

    SQL> create table data_to_keep as
               select * from massive_table
               where year >= to_date('01-01-2010','dd-mm-yyyy');

    SQL> drop table massive_table purge;

    SQL> alter table data_to_keep rename to massive_table;


    Remember to recreate indexes afterwards if they existed and to regather statistics, as sketched below.
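
    A minimal sketch, assuming the table had an index on the year column (the index name here is hypothetical) and using DBMS_STATS to regather the statistics:

    SQL> create index massive_table_year_idx on massive_table (year);

    SQL> exec dbms_stats.gather_table_stats(user, 'MASSIVE_TABLE');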

    It is also possible to truncate the massive_table and reinsert the saved data back into it:

    SQL> create table data_to_keep as
            select * from massive_table
            where year >= to_date('01-01-2010','dd-mm-yyyy');

    SQL> truncate table massive_table drop storage;

    SQL> insert into massive_table (select * from data_to_keep);

    SQL> commit;
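
    When the saved data set is large, a direct-path insert may speed up the reinsert; a sketch using the APPEND hint:

    SQL> insert /*+ append */ into massive_table select * from data_to_keep;

    SQL> commit;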

  3. Partitioning
    Another way to delete massive amounts of data on a more regular basis is the use of partitioning. Partitioning is only available in an Enterprise Edition database and it needs a separate license.
    It requires thinking over the partitioning clause: if data will later be deleted based on a year, it is not smart to partition on e.g. company name.
    If interval-partitioning is not used (see this article), the statements to drop the partitions can be generated using:

    SQL> select 'alter table massive_table drop partition '||partition_name||';'
         from user_tab_partitions where table_name='MASSIVE_TABLE'
         and partition_name like '%2010';

    This generates a list of statements to drop the partitions from the table.
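
    An individual generated statement would look like this (the partition name P2010 is hypothetical):

    SQL> alter table massive_table drop partition P2010;

    If global indexes exist on the table, the clause "update indexes" can be appended to keep them usable after the drop.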


  4. Filtered Move
    Probably the easiest and quickest way to delete massive amounts of data is a filtered move. In this approach the table is rebuilt with only the data that needs to be kept.

    The statement to do this also has options to do it "online" (Enterprise Edition only) and to update the indexes automatically.
    The statement:

    SQL> alter table massive_table
               move online 
               update indexes 
               including rows
               where year >= to_date('01-01-2010','dd-mm-yyyy');

    This statement rebuilds massive_table and deletes all rows with a year older than Jan. 1st, 2010.
    Since an "alter table move" is a DDL action, an implicit commit is executed.
    So there is no way back. It is as effective as the CTAS option, but much easier and faster.
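
    Afterwards the result can be verified with a simple count, which should return 0 for the deleted range:

    SQL> select count(*) from massive_table
         where year < to_date('01-01-2010','dd-mm-yyyy');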


This concludes this blog entry. Hopefully it helps.

Generate Reset Password statements

A lot of companies we as DBAs work for have test and development databases that are based on a copy of a production database. Every time a lower-environment database is refreshed with a copy of production, post-clone steps are needed to convert the production data into test or development data. A nice article to make this action easy can be read here.

One of these post-clone steps is a reset of the passwords of all user accounts. Passwords on the lower environments should not be the same as on the production database.
Before a test or development database is refreshed, it is smart to retrieve and keep all passwords of the user accounts of the old copy of the database, and restore them after the refresh.

Below, a script is listed that generates "ALTER USER..." statements for all users in the database, except the common users. Resetting a password for a common user can only be done on the CDB level and not on the PDB level, so they are excluded.

The script makes use of a database directory called TMPDIR, that points to '/tmp'. First create this directory:

SQL> CREATE DIRECTORY TMPDIR AS '/tmp';

SQL> GRANT READ,WRITE ON DIRECTORY TMPDIR TO CLONE_USER;

It is assumed that one knows that grants have to be in place before one can create a directory, or that one asks someone else (the DBA) to create the directory for him/her. The user CLONE_USER is a fictitious user, e.g. the user that performs the post-clone actions.
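
For completeness: the system privilege that allows a user to create a directory object themselves would, for example, be granted like this:

SQL> GRANT CREATE ANY DIRECTORY TO CLONE_USER;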

Before the old copy of the Test-/Development database is destroyed/replaced, run this script to save the old passwords.

The script:

declare
    cursor userlist is
        select username
        from   dba_users
        where      common = 'NO';
    -- 4000 instead of 500: user DDL with hashed password values can exceed 500 characters
    l_usercmd varchar2(4000);
    l_query   varchar2(4000);
    fd        utl_file.file_type;
begin
    -- Open SQL output file
    fd := utl_file.fopen('TMPDIR', 'ResetUserPasswords.sql', 'W');
    for r1 in userlist
    loop
        -- Generate CREATE USER command for every user
        l_usercmd := dbms_metadata.get_ddl('USER', r1.username);
        -- Reformat query so it can be used to reset the password
        l_query   :=
            trim(
                replace(replace(replace(substr(l_usercmd, 1, instr(l_usercmd, 'DEFAULT') - 1), 'CREATE', 'ALTER') || ';', chr(10), '')
                      , chr(13)
                      , ''
                       )
            );
        -- Write the query into the SQL file
        utl_file.put_line(fd, l_query);
    end loop;
    utl_file.put_line(fd, 'quit;');
    utl_file.fclose(fd);
end;
/


When it has finished successfully, a script "ResetUserPasswords.sql" can be found in /tmp.
Execute this script on the new copy of the Test-/Development database.
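
A minimal way to do that, assuming SQL*Plus access as a privileged user on the refreshed database:

$ sqlplus / as sysdba @/tmp/ResetUserPasswords.sql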



Wednesday, January 22, 2025

Password rollover time

A new security feature announced for Oracle 23ai has been backported to 19c as of RU 19.12.
The feature offers the possibility to keep using an old password for a certain period after a new password for an account has been set. This feature is called "Password Rollover Time".

And it is great!!

Now whenever an application account gets its password changed, one can postpone the update of the application itself to a moment where downtime can be planned. The same goes for regular users: if a user changes his/her password and the next day forgets that they did, they can still log in with the old password. Sounds pretty cool, but the first question would be:
How about security ?
Allowing users to logon with an old password lowers the level of security, doesn't it?

It does not!!  

When a password life time is used, forcing users to change their password after a given amount of time, there is also a grace time, within which the password must be changed before it expires.

When the user then changes his/her password, a kind of extra grace period is added, in which the user or the application owner can get the new password configured in all their applications before the new password definitively becomes the only valid one.

Password Rollover Time is controlled by the profile a user has. The value entered is numeric and ranges from 0 to 60 days. When set to the default, 0, the functionality is disabled.
Setting/enabling this functionality is done by setting a non-zero value, like:

ALTER PROFILE USER_PROFILE LIMIT PASSWORD_ROLLOVER_TIME 10;

This gives a user 10 days to keep using their old password after the password has been changed.
So the value given is in days. To set the value to e.g. 6 hours, enter it as follows:

ALTER PROFILE USER_PROFILE LIMIT PASSWORD_ROLLOVER_TIME 6/24;
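
The configured limit can be verified in dba_profiles, for example:

SELECT LIMIT FROM DBA_PROFILES
 WHERE PROFILE = 'USER_PROFILE'
   AND RESOURCE_NAME = 'PASSWORD_ROLLOVER_TIME';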

Important:
When PASSWORD_GRACE_TIME and PASSWORD_LIFE_TIME are also used in the same profile, the maximum number of days for PASSWORD_ROLLOVER_TIME cannot be greater than the PASSWORD_GRACE_TIME.
This is expected; otherwise the password life time would effectively be extended.
Also, if you try to set PASSWORD_ROLLOVER_TIME to a value larger than PASSWORD_GRACE_TIME, an error is thrown:

>> ALTER PROFILE KRG_USERS LIMIT
   PASSWORD_GRACE_TIME 10
   PASSWORD_ROLLOVER_TIME 20
Error at line 1
ORA-02377: invalid profile limit PASSWORD_ROLLOVER_TIME

The timer starts at the first login after the password has been reset. The date/time when the password was last changed can be seen in the dba_users view.
Also, when a password rollover has been enabled through the profile, the account status for that user changes from "OPEN" to "OPEN & IN ROLLOVER".


An example:
Create a profile that has grace_time, life_time and rollover_time for a password set:

CREATE PROFILE PWROLLTIME LIMIT
  PASSWORD_GRACE_TIME 7
  PASSWORD_LIFE_TIME 90
  PASSWORD_ROLLOVER_TIME 6;

Create a user that uses this profile:

CREATE USER ORCL
  IDENTIFIED BY P0wer_Rail
  DEFAULT TABLESPACE USERS
  TEMPORARY TABLESPACE TEMP
  PROFILE PWROLLTIME
  ACCOUNT UNLOCK;

GRANT CREATE SESSION TO ORCL;

ALTER USER ORCL QUOTA UNLIMITED ON USERS;

Now when the password is changed, either by the user or by an administrator, the account status in dba_users changes:

SQL> ALTER USER ORCL IDENTIFIED BY ELEC_TR4IN;

SQL> SELECT USERNAME, ACCOUNT_STATUS, PASSWORD_CHANGE_DATE 
     FROM  DBA_USERS WHERE USERNAME='ORCL';

USERNAME   ACCOUNT_STATUS                   PASSWORD_CHANGE_DATE
---------- -------------------------------- --------------------
ORCL       OPEN & IN ROLLOVER               22-JAN-25           
1 row selected.


Following the created profile, this state will remain for the coming 6 days. After 6 days the account status will change back to "OPEN" and the old password will no longer be valid.
If within these 6 days the password of this user is changed again, the password from that first change is no longer valid. Within one PASSWORD_ROLLOVER_TIME period, only the oldest and the newest password are valid.

So if we would issue another password change within the 6 days, on the account above:

SQL> ALTER USER ORCL IDENTIFIED BY ORACLE_4U;

then the user can login using the passwords P0wer_Rail and ORACLE_4U.

In the example above these two passwords will both be valid until Jan 28th.
If in the period between now and Jan 28th the rollover time in the profile is changed, the validity of the passwords is adjusted accordingly.
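
To see which accounts are currently in a rollover period, one can for example query:

SQL> SELECT USERNAME, ACCOUNT_STATUS, PASSWORD_CHANGE_DATE
     FROM DBA_USERS WHERE ACCOUNT_STATUS LIKE '%ROLLOVER%';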


Thursday, January 9, 2025

Restore database from NFS storage

In this blog post the restore of an Oracle database is described. Before it was lost, the database was backed up daily to an NFS-mounted storage device. Next to that, backups of the archived log files were created every 10 minutes.

Starting point:

  • Clean installed VM, no database running
  • File systems created and sized as needed
  • No DB, no spfiles etc. found on the system


Steps:

  • Set ORACLE_SID
  • Mount NetApp storage for this database under /mnt/backup/$ORACLE_SID
  • Find the newest backup file that contains the spfile and restore it:

     ls -lrt /mnt/backup/$ORACLE_SID/c*

     If you do not have a valid spfile or init.ora, RMAN has the ability to nomount an instance without requiring an spfile. This allows you to restore your spfile from a valid backup.

$ rman target / 
(do not connect to the recovery catalog, as it will not find this forcefully started instance)

RMAN> startup nomount force;

     To be able to restore the spfile, we need the DBID of the database to be restored. Use SQL*Plus and query the DBID from the RMAN catalog (the DBID is also part of the name of the controlfile autobackup file, c-<DBID>-...):

SQL> select db_id from db where reg_db_unique_name='DBAPRDC';

  • Return to RMAN and execute the following statements:

RMAN> set DBID=<ID from query>;

RMAN> restore spfile from '<file found in ls -lrt command>';

  • Shut down the instance and restart it in nomount using the restored spfile, and then connect to the RMAN catalog:

RMAN> shutdown immediate;

RMAN> startup nomount;

Quit RMAN and restart it using the recovery catalog:

rman target / rcvcat /@tsmora

  • Restore the controlfiles from the same backup file that held the spfile:

RMAN> restore controlfile from '<file found in ls -lrt command>';

The control files are automatically restored to the correct location.

  • Restore and recover database

Now it is time to restore and recover the database.
Since we assume that we don't have anything but the backup of the database, a full recovery is not possible, as we are missing the latest redo information from the online redo log files. We have to recover with the redo available from the archived log backups:


RMAN> alter database mount;

RMAN> restore database;

RMAN> recover database until available redo;

Depending on the database size, this may take a while.
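
Progress can be monitored from a second session, for example with a simple query on v$session_longops:

SQL> select sid, opname, round(sofar/totalwork*100,1) as pct_done
     from v$session_longops
     where totalwork > 0 and sofar <> totalwork;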

  • Open the database using RESETLOGS, so that new online redo log files are created:

RMAN> alter database open resetlogs;

That completes the restore of the database.
Remember that the last couple of minutes, between the latest archivelog backup and the point in time where the database was lost, are gone. Data inconsistencies may therefore exist.