
Friday, April 6, 2012

Oracle Manual Standby (no Dataguard)

This entry is outdated. A revised solution can be found here:
http://managingoracle.blogspot.nl/2013/06/oracle-manual-standby-revised-no_13.html

A new personal challenge: how to create a manual standby database using Oracle 11gR2 Standard Edition.

As everybody might know, the official standby database solution is Oracle Data Guard, which is however only available as an option with Oracle's Enterprise Edition of the database software. Since we have a lot of Standard Edition databases and we didn't want to pay the license fee for Enterprise Edition and the Data Guard option, I went looking for a solution myself.

With Data Guard a primary and a secondary (standby) database exist. These database servers know each other as FAL_SERVER and FAL_CLIENT. The FAL_SERVER is the primary, serving archivelog information to the standby server, which in turn 'replays' this information onto its mounted database, thus keeping it in sync with the primary.
Because this is an automated Data Guard process, completely covered by the so-called Data Guard broker, there is no need for any crontab entry whatsoever to get this working.

In our case, where I don't want to use Data Guard, there is no broker process responsible for transferring the archivelog information and replaying it on the standby database. I had to find a way to get this done manually.
The actual method is much the same, though. Without Data Guard, the only way to achieve a similar result is a continuous process of database recovery.
So, simply said: put the standby database in mount mode, get the archivelog files from the primary, and recover the database up to the last archivelog available. This process can be repeated as often as necessary.
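In SQL*Plus terms, one pass of this cycle boils down to something like the sketch below. The timestamp is only a placeholder here; the scripts further down derive it automatically from the last archivelog received:

-- on the standby, with the copied archivelogs in place (timestamp is just an example)
STARTUP MOUNT
RECOVER DATABASE UNTIL TIME '2012-04-06 10:00:00' USING BACKUP CONTROLFILE;
-- answer AUTO at the 'Specify log' prompt to apply all available archivelogs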

One remark has to be made upfront:
A manual standby database always lags behind the primary database. Where Data Guard standby databases are usually in sync up to the last transaction made, this manual standby is only in sync up to the beginning of the last archivelog received.

So there is always a gap to overcome in case the standby database needs to become active. Depending on the disaster scenario, we either have to accept losing the database changes contained in the last archivelog and the active redo log, or we have to replay the last archivelogs and possibly the redo logs from the primary to get the database completely in sync. As said, this depends entirely on the nature of the disaster.


What did I use for this project:
  • Two Red Hat 6 Linux virtual machines, both set up with Oracle's VirtualBox. Both VMs have 1 GB of memory and 30 GB of disk space.
  • I named the VMs "dgmaster" for the primary and "dgslave" for the standby host. In this blog I will refer to them as such.
  • Both VMs are installed identically, with Oracle 11.2.0.3 SE and exactly the same file system layout.
  • On dgmaster a database was configured, called ORCL.
  • On dgslave an offline copy of dgmaster's ORCL database was initially restored/copied and brought up into mounted state. This of course also means transferring the init.ora / spfile and creating a password file, as sketched below.
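As a rough illustration, the initial copy could be made as shown below. This is only a sketch, assuming a cold (offline) copy; the file names and locations are assumptions based on the paths used elsewhere in this post:

# on dgmaster: stop the database and copy the files cold
echo "shutdown immediate" | sqlplus -S / as sysdba
scp /opt/mandg/db01/*.dbf /opt/mandg/db01/*.ctl /opt/mandg/db01/*.log dgslave:/opt/mandg/db01/
scp $ORACLE_HOME/dbs/spfileORCL.ora dgslave:$ORACLE_HOME/dbs/
echo "startup" | sqlplus -S / as sysdba

# on dgslave: create a password file and bring the copy into mounted state
orapwd file=$ORACLE_HOME/dbs/orapwORCL password=<sys password> entries=5
echo "startup mount" | sqlplus -S / as sysdba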

The goal was to set up a fully automated process that could run from the crontab of the standby server "dgslave". To get this working we need the following setup:
  • SSH equivalence between the oracle users on both nodes. The oracle user must be able to transfer archive files from dgmaster to dgslave without entering a password. Information about how to set up SSH equivalence can be found here: http://managingoracle.blogspot.com/2010/06/ssh-equivalence-on-rac.html
  • Determine a way of automatically transferring the archivelogs, and also determine the time to recover to, related to the last retrieved archivelog. I chose to use the 'first_time#' of the last archivelog received as the point to recover to. This ensures that the recovery will succeed every time, as all needed information is available.
  • The master (primary) database is up and running in 'open' state; it is, for example, in use as the production database.
  • The standby (secondary) database is in mounted state, so it is closed.
  • The primary database has archive logging enabled. The log_archive_format of the primary is copied to the spfile of the standby database. This ensures that dgslave can find the files it has retrieved when starting the recovery.
  • Make sure the primary database has forced logging enabled. This is necessary to really log all changes in the redo files, as all changes need to be sent to the standby server. This can be configured using the statement below (a fuller sketch of these prerequisite settings follows after this list):
    • ALTER DATABASE FORCE LOGGING;
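For completeness, a sketch of checking and enabling these prerequisites on dgmaster is shown below; the archive destination is the one used throughout this post, and the log_archive_format value is only an example that simply has to be identical on both sides:

-- on dgmaster, as SYSDBA: check the current state
select log_mode, force_logging from v$database;

-- enable archivelog mode if it is not on yet (requires a short outage)
shutdown immediate
startup mount
alter database archivelog;
alter database open;

-- force logging of all changes and define the archive destination and format
alter database force logging;
alter system set log_archive_dest_1='LOCATION=/opt/mandg/ar01' scope=both;
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;  -- effective after the next restart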
I'm sure every DBA can set up two virtual machines running an Oracle installation. I'm also sure that they will succeed in setting up SSH equivalence, so once everything is set up, it is time to discuss the steps that need to be done over and over again:

1) Dgmaster executes a forced log switch.
2) Dgmaster determines the 'first_time#' entry of the archivelog file written by this log switch and sends it to dgslave.
3) Dgslave fetches all missing archivelogs from the master.
4) With the retrieved 'first_time#' entry, the recovery of the mounted standby database can continue another step.

To make this run automatically, dgslave takes the initiative. Dgslave runs the necessary script on dgmaster by means of an SSH connection, which also makes it possible to retrieve the information sent back by dgmaster.
The script used on dgmaster is shown below. In my test system it resides in the home directory of the oracle user. You can put it anywhere you like on dgmaster, but then make sure to adjust that location in the script on dgslave:


#!/bin/bash
#
# Force a logswitch to get the last archivelog to the standby host
#
ORACLE_SID=ORCL
ORAENV_ASK=NO
. oraenv >/dev/null 2>&1


SwitchLogfile()
{
  #
  # Do logswitch  
  #
  RESULT=`echo "Alter system switch logfile;" | sqlplus -S / as sysdba | grep 'System altered'`
  if [ "$RESULT" = "System altered." ]
  then
    RETURN=1
  else
    RETURN=0
  fi
  # RETURN=1 means the log switch succeeded; it is currently informational only
  export RETURN
}


GetArchiveTime()
{
  CURYEAR=`date +%Y`
  echo "set heading off;" > temp.sql
  echo "set termout off;" >> temp.sql
  echo "select to_char(first_time,'YYYY-MM-DD HH24:MI:SS') from v\$archived_log where sequence#=(select sequence# - 1 from v\$log where status='CURRENT');" >> temp.sql
  sqlplus -S / as sysdba <<EOF
spool tempres.txt
@temp.sql
quit
EOF

grep ${CURYEAR} tempres.txt | awk '{print $1" "$2}'

#rm -f temp.sql tempres.txt
}

SwitchLogfile
GetArchiveTime

In this script two functions can be recognized. The first is responsible for executing a log switch, making the database write the current redo log file into an archived logfile.
The second function determines the 'first_time#' timestamp of this archived logfile, which is the timestamp of the first database change recorded in it. This is where the gap is created that causes the standby database to lag behind the primary.
The above script is never executed by itself on dgmaster; it is always executed through a remote call from dgslave.
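Assuming the script is saved as /opt/mandg/ora/home/RecoverDbSwitchLogfile.sh (the location referenced in the dgslave script below), such a call and its output look roughly like this, the timestamp of course being just an example:

[oracle@dgslave ~]$ ssh dgmaster /opt/mandg/ora/home/RecoverDbSwitchLogfile.sh
2012-04-06 10:15:32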
The script used on dgslave is shown below:


#!/bin/bash
#
# Recover a manual standby database to the current moment
#
export ORACLE_SID=ORCL
export ORACLE_USER=oracle
export ARCHPATH=/opt/mandg/ar01
#
# Set the Oracle environment (needed when the script runs from cron)
#
ORAENV_ASK=NO
. oraenv >/dev/null 2>&1
#
# The hosts
#
export MASTER=dgmaster
export SLAVE=dgslave
#
# The current date and time
#
CURDATE=`date +%Y-%m-%d`
CURTIME=`date +%H:%M:%S`

ForceRemoteLogSwitch()
{
  # This function calls a remote script that forces a log switch and returns
  # the 'first_time' of the archive file that has just been written
  #
  ARCHTIME=`ssh ${MASTER} /opt/mandg/ora/home/RecoverDbSwitchLogfile.sh`
  export ARCHTIME
}

GetArchives()
{
 # 
 # copy archives from the remote host archive directory to the local directory
 #
 for remfile in `ssh ${MASTER} "ls ${ARCHPATH}/*arc"`
 {
   locfile=${ARCHPATH}/`echo ${remfile} | cut -d'/' -f5`
   if [ ! -f  ${locfile} ]
   then
      scp ${MASTER}:${remfile} ${ARCHPATH} 
   fi
 }
}

CreateSqlRecoverScript()
{
  RECOVERTIME=$1
  echo "connect / as sysdba" > Recover${ORACLE_SID}.sql
  echo "shutdown abort;" >> Recover${ORACLE_SID}.sql
  echo "startup mount;" >> Recover${ORACLE_SID}.sql
  echo "recover database until time '${RECOVERTIME}' using backup controlfile;"  >> Recover${ORACLE_SID}.sql
  echo "auto" >> Recover${ORACLE_SID}.sql
  echo "quit;"  >> Recover${ORACLE_SID}.sql 
}


#
# Main
#
ARCHTIME=FALSE

ForceRemoteLogSwitch

echo "${ARCHTIME}"
if [ -n "${ARCHTIME}" ] && [ "${ARCHTIME}" != "FALSE" ]
then
#
# Logswitch Succeeded, get the Archives now and recover
#
  GetArchives
  CreateSqlRecoverScript "${ARCHTIME}"
  sqlplus /nolog <<EOF
@Recover${ORACLE_SID}.sql
quit
EOF
fi

The script on dgslave has three main functions. The function ForceRemoteLogSwitch takes care of the remote log switch. Because the output of the remote call is assigned to the variable ARCHTIME, it can be used later on for the recovery process.


The second function, GetArchives, is responsible for transferring the needed archives from dgmaster to dgslave. As both VMs are configured identically, ARCHPATH is the same on both servers. This function only copies archived logfiles from dgmaster to dgslave that have not been copied before.

The third function, CreateSqlRecoverScript, creates a local script containing the recovery statements. As it is in fact unknown how the last recovery ended, we start with a 'shutdown abort' to make sure the recovery starts from a cleanly mounted database. Note the use of the retrieved ARCHTIME, called RECOVERTIME here, as it is passed to this function as argument $1.
Also note the use of the 'auto' recovery option. This makes the recovery non-interactive, but it requires a correctly configured log_archive_dest pointing to the location where the archive files have been put.
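A minimal sketch of that configuration on dgslave, assuming the archive location /opt/mandg/ar01 used elsewhere in this post:

-- on dgslave, while the instance is mounted
alter system set log_archive_dest_1='LOCATION=/opt/mandg/ar01' scope=both;
-- log_archive_format must match the primary (see the prerequisites above)
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile;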

Finally, the main part of the script calls the functions above and then starts an SQL*Plus session with a call to the created recovery script.
That's it. Nothing really special or complex. It works like a charm!!
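To have it run unattended, a crontab entry on dgslave along the following lines is enough. The 30-minute interval and the script name RecoverStandby.sh are just assumptions; choose them to match your own recovery point objective and file locations:

# oracle user's crontab on dgslave: refresh the standby every 30 minutes (example)
*/30 * * * * /opt/mandg/ora/home/RecoverStandby.sh >> /opt/mandg/ora/home/RecoverStandby.log 2>&1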

Some additional remarks:
As we are using a database recovery method, the process cannot automatically handle the creation of new tablespaces and/or new datafiles (added to an existing tablespace).
When we execute, for example, the following statement on the primary dgmaster:

alter tablespace example add datafile '/opt/mandg/db01/example02.dbf' size 200m;

it will result on dgslave, during the next recovery step, in:

ORA-01244: unnamed datafile(s) added to control file by media recovery
ORA-01110: data file 7: '/opt/mandg/db01/example02.dbf'

This is also true when, for example, creating a new tablespace. It can be solved using the following steps:

Look up the name of the datafile that was generated by the recovery process. It is identified by the fact that it resides within $ORACLE_HOME/dbs and is called UNNAMEDnnnnn, where nnnnn is a number.
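A quick way to find it is to query v$datafile on the mounted standby, for example:

select file#, name from v$datafile where name like '%UNNAMED%';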
If the name is found, use the statement below to recreate the correct datafile:

alter database create datafile 
'/opt/mandg/ora/11.2.0.3/dbs/UNNAMED00007' 
as '/opt/mandg/db01/example02.dbf' ;

In this example /opt/mandg/ora/11.2.0.3 is the ORACLE_HOME on the VMs we created.
After that the automated recovery can continue.
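A simple way to check that the standby keeps up, as also discussed in the comments below, is to compare the applied log history on both sides. Note that v$log on the standby will not advance while it is still in mounted/recovery mode (see the comments):

-- run on both dgmaster and dgslave
select max(sequence#) from v$log_history;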

43 comments:

  1. Hello FJ,

    nicely explained how to set up manual standby for Oracle SE without Dataguard.

    Solutions that save money are always very useful :)

    Regards,
    Marko

    Replies
    1. Thanks Marko,

      The cost aspect was in our case also the reason to investigate. And after all it was quite easy to get it up and running.

      Regards,
      Frank-Jan

  2. You may want to consider using: "recover standby database until time '${RECOVERTIME}';" instead of "recover database until time '${RECOVERTIME}' using backup controlfile;"
    This does mean you have to create the standby database with a standby controlfile. Using a standby controlfile gives you some additional benefits , one which I believe is a read only standby database.

    If you do not want to go down the home scripting route, but want a fully fledged cost effective Data Guard alternative for Oracle SE and SE1 then check out Dbvisit Standby.

    Regards, Arjen

    Replies
    1. It was the goal to keep it simple. It was all about setting up a license less zero cost alternative for oracle's dataguard. Thanks

  3. Hello FJ,

    It was very helpful. Thanks a lot and keep up the good work.
    I have a doubt: how often do we need to run the scripts from the dgmaster and dgslave crontabs? Please correct me if I am wrong... the first script needs to be run on dgmaster and the second one on dgslave, right?

    Thanks
    Narendra

  4. You're correct about where to run the scripts.
    The frequency is highly dependent on business requirements. I mean if the business can't "live" with a data loss of e.g. 30 minutes, the scripts should run more often than once every 30 minutes.
    With a real dataguard setup, the replication is almost instantaneous.
    With these scripts that is hardly possible.
    You could however have a look at the average log switch time and plan the scripts accordingly.

    Success

  5. Hello,
    Nice post.
    Have you ever tried the parameter STANDBY_FILE_MANAGEMENT (AUTO) ?
    With this parameter set to AUTO, file additions/deletions on the primary database are replicated on the standby database. You don't have to worry about file creation anymore.
    Regards,

  6. @rmoving: Thanks for the comment. In the above blog, however, I discuss a self-created "Manual Standby" database. In this case the server I denote as primary does not have any clue about an existing standby, as the standby is no more than a restored copy on another host.
    The parameter STANDBY_FILE_MANAGEMENT you mention is in my opinion only valid for real Data Guard configurations, where the primary does know about its standby and is also able to send it actions to be executed. Thanks again.

    Replies
    1. I tried to set the parameter STANDBY_FILE_MANAGEMENT = AUTO on Oracle 10.2.0.4 SE and it works correctly, i.e. without Data Guard.

  7. FJ Franken, great instructions. I have used your instructions to create a manual standby, but I am getting the below when applying the recovery, even with all archive logs in sync.

    ORA-00289: suggestion :
    /opt/app/oracle/oradata/archivelog/1_92_788923790.arc
    ORA-00280: change 6565863264 for thread 1 is in sequence #92

    Then at the end it says:

    Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
    AUTO
    Log applied.
    Media recovery complete.


    But when I check the latest sequence number from Primary and Standby they are not close. The v$log sequence does not get updated to the latest sequence#:

    Standby:
    SQL> select max(sequence#) from v$log;

    MAX(SEQUENCE#)
    --------------
    92

    Primary:
    SQL> select max(sequence#) from v$log;

    MAX(SEQUENCE#)
    --------------
    94

    But when I check v$log_history, I see that the archive logs were applied and they are the same there:

    Standby:
    SQL> select max(sequence#) from v$log_history;

    MAX(SEQUENCE#)
    --------------
    93

    Primary:
    SQL> select max(sequence#) from v$log_history;

    MAX(SEQUENCE#)
    --------------
    93

    Does this mean they are in sync? I'm just worried since in Dataguard when I check the v$log they should both be the same.

    Where am I going wrong here?

  8. You're OK. Since the standby is in recovery mode, no progress in v$log will be seen; v$archived_log shows your progress. The v$log sequence number will be reset as soon as the database is opened with the resetlogs option.

    Thanks for your reply.

    Replies
    1. #1
      Thanks for your quick response. I have read that the "alter database archive log current" statement is safer to use than the "alter database switch logfile" statement, because a log switch implies the redo will be archived, but does not mean it was archived. What is your take on it?

      #2
      Also, I altered your recover statement to a single line.

      From your blog:
      echo "recover database until time '${RECOVERTIME}' using backup controlfile;" >> Recover${ORACLE_SID}.sql
      echo "auto" >> Recover${ORACLE_SID}.sql

      What I'm using:
      echo "alter database recover automatic database until time '${RECOVERTIME}' using backup controlfile;" >> Recover${ORACLE_SID}.sql

      Any difference between the two?

      #3
      You're suggesting to use sequence#-1, but I found that this causes the standby to be two sequence# behind the primary instead of one. I took the -1 away from that query and found myself only one sequence# behind the primary. So I am seeing all the transactions before the log switch applied immediately on the standby after the script executes, where if I left the sequence#-1 I can only see the transactions on the standby applied after the script executes two times. This means if I scheduled this for 30 minutes my standby would actually be 30min x 2 (60 min) behind the primary instead of only 30 minutes. Is this true or did I screw up somewhere?

      From your blog:
      select to_char(first_time, 'YYYY-MM-DD HH24:MI:SS') from v$archived_log where sequence#=(select sequence# - 1 from v$log where status='CURRENT');

      example v$log_history:

      TO_CHAR(FIRST_TIME, SEQUENCE#
      ------------------- ----------
      2012-08-07 13:43:58 97
      2012-08-07 14:21:06 98
      2012-08-07 14:28:56 99
      2012-08-07 15:28:20 100
      2012-08-07 15:33:32 101


      --When I used the recovery time ="2012-08-07 15:28:20" resulting from the above query for recovery it was recovering up to :

      TO_CHAR(FIRST_TIME, SEQUENCE#
      ------------------- ----------
      2012-08-07 14:28:56 99

      This would make it two sequences behind the primary since primary is already on sequence# 101. So I created a table before the log switch to see if it would show up on my standby after running the script.
      Despite using the correct recovery time it would not recover to the time specified as I would think, because otherwise it would show sequence# 100 in my standby, but it shows only up to 99. I check the standby and my new table is not there.



      What I'm using:
      select to_char(first_time, 'YYYY-MM-DD HH24:MI:SS'), sequence# from v$log where status='CURRENT';"

      --When I use the recovery time ="2012-08-07 15:33:32" resulting from the above query for recovery it was recovering up to sequence# 100 (without errors or user interaction), now it is only one sequence# behind the primary and my table shows up on standby immediately after executing the script.

      TO_CHAR(FIRST_TIME, SEQUENCE#
      ------------------- ----------
      2012-08-07 15:28:20 100


      Do you see any drawbacks in the above method?

      Thanks.

    2. Just wanted to add that I'm using 10gR2, 10.2.0.4 SE to replicate your steps.

    3. I can understand that you've removed the "-1" when getting the last sequence number if you're doing an "archive current" instead of a "switch logfile".
      The way I set it up in this blog was my first attempt to get things working, so I appreciate your tuning attempts to get the job done better. I totally agree with your suggestions. Thanks for that.

      The DB version should not be very important. The recovery process used for this has not changed much from 10g to 11g.

      Thanks

  9. Thanks Franken for sharing this.

    We too have been searching for a dataguard like solution for Oracle SE.

    Now, as we all understand, Dataguard can be configured using SQL*Plus or using the DataGuard Broker. And DGMGRL and EM are just 2 interfaces to the Broker.

    The link http://docs.oracle.com/cd/B19306_01/server.102/b14230/install.htm#BABECEJC
    states:
    The following conditions must be true before you can use the broker:

    The primary and standby databases must be running Oracle Database 10g (10.2) and each can be installed in either a single-instance or multi-instance environment. The database must be licensed for Oracle Enterprise Edition or Personal Edition.

    Meaning that Oracle Enterprise Edition is required only if the Broker is used. But what if SQL*Plus is used? (The SQL*Plus based dataguard configuration that I mean is not the same as what you have demonstrated in your article). Or am I missing something?

    Thanks.

    Replies
    1. I'm very interested to see what your sqlplus method would be. My method can also be seen as an sqlplus way of solving it. Can you show me what you mean?

    2. Not sure what was on my mind when I wrote that. May be I meant about tools like Dbvisit. But I understand that those too use the SQL*Plus based method that you have demonstrated at the core.

      But my main query was, why do the docs say that EE license required only in case of (DG) Broker?

      Thanks

  10. You still need a separate Standard Edition license for this kind of setup. You may confirm it with your local Oracle distributor.

  11. Thanks for the reply.

    In a previous reply I mentioned "a license less solution", but I guess you're right. Maybe, if explained clearly that it is a disaster recovery database, which can't be used, Oracle will allow it to run without a license.

    Replies
    1. I consulted Oracle about this and they say that since it is not DG a license is required for both databases

    2. That is right. SE license is required for both sites i.e. Primary as well as DR.
      With DG too, EE license is required for both sites; just that there is a "Special Use" license, but we haven't been able to get much clarity on that. If anybody has clarity, kindly share.

      Thanks

    3. Thanks for the add on.

      There is a kind of "rule" from Oracle that says, that if you're using DG and if you're not failing over more than 10 times a year, the standby database does not have to be licensed. I however don't know if that is a valid statement across the world.

  12. This comment has been removed by the author.

    Replies
    1. But I thought that was true for Cold Failover Clusters only (also, if I am not wrong, it says - should not have been active for more than 10 calendar days.) and hence I had an understanding that the "Special Use" was something different and NOT this 10-day rule.
      I may be wrong!

  13. I guess it is an issue for your Oracle sales representative. I think we can decide what it should be, but since we're both not Oracle employees, in the end we still will be wrong :-)

  14. Hi FJ Franken,

    I'm finding your article very useful as we would also like to do the same.
    However, I'm very new to Oracle and I noticed you mentioned:

    On dgmaster a database was configured, called ORCL.
    On dgslave initially an offline copy of dgmaster's ORCL database was restored/copied and brought up into mounted state. This ofcourse brings the need of transferring the init.ora / spfile, and the creation of a password file.

    What I would like to know is whether both databases will be created with the same SID, or whether I should use different SIDs for each.

    Thanks

    Replies
    1. Hi,

      Thank you for your question.
      With this solution both SIDs should be the same. Otherwise the archivelog apply will not work

  15. Hi FJ Franken,

    First of all thank you for this great article.

    I'm trying to implement your method. What I do is take a cold backup of the production server, restore it on the standby server, and then run your script.

    The script then gets the current archive seq# from the standby, retrieves the missing logs from production, and runs the recover until first_time#.

    Still it doesn't show the updated seq# on the standby server. I'm a bit confused here:

    echo "recover database until time '${RECOVERTIME}' using backup controlfile;" >> Recover${ORACLE_SID}.sql

    Here, do I have to copy the production server's controlfile as well, or do I have to run a catalog on all archive logs?

    Appreciate your help..

    Thanks
    Ak

  16. Hi Ak,

    Thanks for your Post. I appreciate it when people really use my blog entries.
    In the meantime I have developed a more stable solution for this kind of usage. The archives are no longer retrieved remotely by calling a remote script, but are transferred by means of an NFS mount between the standby server and the production server. The production server then uses two log_archive_dest locations, one being its own archivelog location and one being the NFS mount.

    Regarding the log-seq# you're seeing: this is correct. The log sequence# will only change at the moment you open the database with the (NO)RESETLOGS option.
    Keep in mind that as soon as you open the database for read/write access, it becomes an incarnation of its master and is no longer usable for this manual recovery method.

    Regarding the controlfile question. If both the standby and the master are physically identical, you can use the controlfile from the master, to recover with.

  17. Hi FJ,

    Thanks for the reply..

    I have one more question :-)..

    If we run the script for the second time it doesn't show the current seq# on the standby server after the recovery; it's still showing the seq# it had when I restored the cold backup of production on the standby. Is there any other way to retrieve the seq# after we do a recovery using your script?

    Thanks and appreciate your feedback..

    Ak

  18. Hi Ak,

    I already addressed your question about the sequence# in my reply above. The standby sequence# in e.g. v$log will only be updated after you 'alter database open' the standby database. This is however something you don't want to do, unless you also have e.g. flashback configured. Opening a manual standby database with resetlogs or noresetlogs will turn this standby database into a working incarnation of its master. You probably will not be able to continue the recovery afterwards and will need to create a new cold backup of the master.

    Have a look also here: http://managingoracle.blogspot.nl/2013/06/oracle-manual-standby-revised-no_13.html

    This is my latest version of the manual standby, with only a single script solution. This solution is working fine for months now on two of our production databases.

  19. Hi FJ,

    Thanks for the reply..

    Last night I did an 'alter database open resetlogs' on the standby for verification against the production DB. I checked the current SCN and archive log seq# on standby and production. It showed the database had recovered successfully up to the last seq#.

    Thanks again.. Keep up the good work.

    Ak

  20. How do I convert the manual standby database to a primary database?

    Replies
    1. Thanks Thomas for the reply.

      Converting the standby to the primary depends highly on the configuration you're using.

      Refer also to this blog entry:
      http://managingoracle.blogspot.nl/2013/06/oracle-manual-standby-revised-no_13.html


      It describes the new version of the manual standby script.

      Converting the standby to primary is only to be done if the primary has become totally useless, and opening the standby is your last resort.

      Why?

      This is because, as said in the explanation above, the standby lags behind in time. The time difference depends on how often the script has run, or when it last ran before the primary became useless.

      Since the manual standby is depending on an Oracle Recovery Procedure, you can simply apply the last couple of archivelogs onto the standby and then open the database with a RESETLOGS option.

      Remember that if you open the standby while the primary is still active, the standby starts a life of its own and becomes an incarnation of the primary. Archivelog apply from the primary is no longer possible afterwards.

      Afterwards you of course need to change the tnsnames entries for the users to be able to access it.


      Thanks again

      FJ Franken

  21. Hi Franken,

    I have come across your document,
    I have completed everything up to the SSH configuration,
    and it is working fine.
    Now I am stuck at the scripting;
    I don't know how to proceed.
    Please direct me.


    Thanks
    Vel

    Replies
    1. Hi,

      Thanks for your post. I suggest that you have a look at my new blog entry, the manual dataguard revised. It uses NFS instead of SSH.
      Much easier to configure.

      Success

    2. Hi ..

      SSH is working fine.
      I don't have a problem with SSH.
      Shell scripting is the problem;
      I don't know what is happening in that loop.
      Just tell me exactly how I need to execute it.
      Oracle 10.2.0.3 Standard Edition.

    3. Hi,


      There is only one loop in the above script:

      for remfile in `ssh ${MASTER} "ls ${ARCHPATH}/*arc"`
      {
      locfile=${ARCHPATH}/`echo ${remfile} | cut -d'/' -f5`
      if [ ! -f ${locfile} ]
      then
      scp ${MASTER}:${remfile} ${ARCHPATH}
      fi
      }

      This loop checks for remote archivelog files and copies them over to the local (standby) host.

      But again, this solution is in my opinion outdated and a bit buggy, because of the dependency on the remote script.
      The new script, which runs only on the standby host is much easier to configure, and much more reliable. The information can be found here: http://managingoracle.blogspot.nl/2013/06/oracle-manual-standby-revised-no_13.html


      If you still encounter problems, please send me an email at oraclemanaging@gmail.com with sufficient information (screenshots), enabling me to be more helpful.

      Thanks

  22. F J Franken,

    I am at a site where there is no Enterprise Edition and looking for something like this, I will appreciate if you can email me any notes and documents on the process and configuring your scripts.

    thank you,
    Max

    Replies
    1. I recommend you have a look at my newest blog entry about this subject. I revised the setup so that there is now only one script instead of two. Also, we use NFS instead of scp. It is much more stable.
      The Blog entry can be found here: http://managingoracle.blogspot.nl/2013/06/oracle-manual-standby-revised-no_13.html

      There is also a downloadable manual about the setup and configuration.

      Success!!

  23. Hi Frank

    I'm a junior DBA and your post is very helpful.

    I tried to implement a manual standby on 11g with Oracle Linux and VirtualBox,
    but when executing the script to copy archive logs I get an error.

    . ./manualstandby.sh
    Killed by signal 2.
    {}
    bash: ./manualstandby.sh: line 70: syntax error near unexpected token `newline'
    bash: ./manualstandby.sh: line 70: ` sqlplus /nolog <'
    I don't know shell scripting well and I have to finish the lab before implementing it in production.

    Please HELP ME!!

    Replies
    1. Hi David,

      Looking at the Post again, I guess you're right. There is a typo in the script. The last section saying:

      echo {$ARCHTIME}
      if [ ! "$ARCHTIME" = "FALSE" ]
      then
      #
      # Logswitch Succeeded, get the Archives now and recover
      #
      GetArchives
      CreateSqlRecoverScript "${ARCHTIME}"
      sqlplus /nolog <
      @Recover${ORACLE_SID}.sql
      quit
      EOF
      fi

      should in fact look like:

      echo {$ARCHTIME}
      if [ ! "$ARCHTIME" = "FALSE" ]
      then
      #
      # Logswitch Succeeded, get the Archives now and recover
      #
      GetArchives
      CreateSqlRecoverScript "${ARCHTIME}"
      sqlplus /nolog <<EOF
      @Recover${ORACLE_SID}.sql
      quit
      EOF
      fi


      I hope this solves your problem. I will update the Post.
      Thanks
