Omnipotent DBA
Wednesday, May 8, 2013
How Do You Know When to Change Jobs?
There are obvious signs, but the two most important are if you're no longer learning and if you're no longer having fun.
Saturday, March 16, 2013
Learning MongoDB
Create a /etc/yum.repos.d/10gen.repo file to hold information about your repository.
[10gen]
name=10gen repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64
gpgcheck=0
enabled=1
Issue the following command as root to install the latest stable version of MongoDB and the associated tools:
yum install mongo-10gen mongo-10gen-server
These packages configure MongoDB using the /etc/mongod.conf file.
This MongoDB instance will store its data files in /var/lib/mongo and its log files in /var/log/mongo, and run using the mongod user account.
From a system prompt, start the mongo shell by issuing the mongo command.
When you query a collection, MongoDB returns a "cursor" object that contains the results of the query.
The it operation allows you to iterate over the next 20 results in the shell.
You can manipulate a cursor object as if it were an array.
You can constrain the size of the result set to increase performance by limiting the amount of data your application must receive over the network. To specify the maximum number of documents in the result set, call the limit() method on a cursor.
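As a sketch (the collection name and query are hypothetical), combining a query with limit() in the mongo shell might look like:

```
> db.users.find( { status: "active" } ).limit(5)
```

A limit() value of 0 is equivalent to setting no limit.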
Thursday, February 14, 2013
Creating an 11.2.0.2 RAC Logical Standby Database
Yesterday I created an 11.2.0.2 RAC logical standby database for an 11.2.0.2 RAC primary database. Here are the steps.
You create a logical standby database by first creating a physical standby database and then transitioning it to a logical standby database. So first I created a physical standby database "belmont" for primary database "gilroy":
SQL> SELECT db_unique_name,name,open_mode,database_role FROM v$database;
DB_UNIQUE_NAME NAME OPEN_MODE DATABASE_ROLE
------------------------------ --------- -------------------- ----------------
belmont GILROY MOUNTED PHYSICAL STANDBY
Before converting it to a logical standby, you need to stop Redo Apply on the physical standby database:
SQL> alter database recover managed standby database cancel;
A LogMiner dictionary must be built into the redo data so that the LogMiner component of SQL Apply can properly interpret changes it sees in the redo. To build the LogMiner dictionary, issue the following statement on primary database:
SQL> EXECUTE DBMS_LOGSTDBY.BUILD;
If your standby is a RAC (as in my case), all auxiliary instances have to be shut down and the cluster disabled on the target standby. Shut down all instances except the one on which the MRP is running (your actual target instance). Once they are down, disable the cluster and bounce the standby:
SQL> alter system set cluster_database=false scope=spfile;
SQL> shutdown immediate;
SQL> startup mount exclusive;
Now you are ready to tell MRP that it needs to continue applying redo data to the physical standby database until it is ready to convert to a logical standby database:
SQL> alter database recover to logical standby fremont;
In the above statement, you changed the actual database name of the standby to "fremont" so it can become a logical standby database. Data Guard changes the database name (DB_NAME) and sets a new database identifier (DBID) for the logical standby.
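To confirm the name change, you can re-run the v$database query from earlier; the NAME column should now show the new name:

```
SQL> SELECT db_unique_name,name,database_role FROM v$database;
```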
At this point, you can re-enable the cluster database parameter, if you had a RAC, and then restart and open the new logical standby database:
SQL> alter system set cluster_database=true scope=spfile;
SQL> shutdown;
SQL> startup mount;
SQL> alter database open resetlogs;
Issue the following statement to start SQL Apply in real-time apply mode using the IMMEDIATE keyword:
SQL> alter database start logical standby apply immediate;
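To verify that SQL Apply is running in real-time mode, you can query V$LOGSTDBY_STATE on the standby; STATE should eventually show APPLYING (or IDLE once it has caught up):

```
SQL> SELECT realtime_apply, state FROM v$logstdby_state;
```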
Friday, December 21, 2012
How to Get EM 12c Agent Software Using Self Update in Offline Mode?
1. Ensure EM 12c Cloud Control is set to the offline mode
From the "Setup" menu, choose "Provisioning and Patching" -> "Offline Patching", then change the setting for "Connection" to "Offline". Optionally you can upload the catalog to keep EM up-to-date with patch recommendations in this screen too.
2. Acquiring Management Agent software in the offline mode
From the "Setup" menu, choose "Extensibility" -> "Self Update".
Select the entity type "Agent Software" and choose one.
Click "Download"; a message box is displayed with a URL and instructions.
Download the file from a computer with an Internet connection and copy it to the computer where your EM installation resides.
3. Import Management Agent software into OMS using emcli
$OMS_HOME/bin/emcli login -username=sysman
$OMS_HOME/bin/emcli import_update -omslocal -file=/home/oracle/Desktop/p14570372_112000_Generic.zip
Refresh the "Self Update" page; the corresponding Agent software status will change to "Downloaded".
References:
MOS Note ID 1369575.1
Friday, December 7, 2012
Adopting Exadata to Improve BI Solution Performance
Probably the most important thing will be their code: make sure that it employs set-based (as opposed to row-by-row) processing, parallelism, compression, bulk direct-path loading, and transformation techniques rather than plain DML.
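As a sketch of what set-based, parallel, direct-path processing looks like (table and column names are hypothetical):

```
SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ APPEND PARALLEL(sales_fact, 8) */ INTO sales_fact
  2  SELECT /*+ PARALLEL(stage_sales, 8) */ sale_id, cust_id, sale_date, amount
  3  FROM stage_sales;
SQL> COMMIT;
```

The APPEND hint enables direct-path insert, writing blocks above the high-water mark instead of row by row through the buffer cache.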
EHCC greatly assists with processing of data.
InfiniBand also helps with data transfer, not just between the database nodes and Exadata storage cells but also when ingesting data from the sources (i.e. the data warehouse is not a bottleneck).
Backup and recovery is faster, which means that unlike many data warehouses whose DR plan mainly consists of a complete reload from the sources, this provides true DR, and via Data Guard even better DR than a simple rebuild.
Unanticipated queries, where you do not have cubes or views developed, aren't such an issue either. Exadata is able to address the unanticipated ones, and these can be the most valuable ones too: new questions that result in more improvement over previous ways.
Tuesday, October 23, 2012
Estimating the IMDB Cache Size
It is hard to predict the cache size based on raw data size since there are a number of factors affecting it. The best practice is to use the sizing tool, ttSize, that ships as part of TimesTen. You need an installed copy of TimesTen to run the tool, and it should be the same bitness (32- or 64-bit) as the system that you are sizing for in order to give accurate results.
You create an empty table (and associated indexes) in TimesTen with the same structure as the table to be cached. Then run the ttSize tool against this table and specify the expected number of rows for the table. The tool then calculates, based on the way in which TimesTen stores data/indexes and taking into account overheads etc., the amount of memory needed for the table and each index and reports these. Repeat for each table you want to cache and sum the results. This is really the only way to predict how much memory will be needed to cache a table in TimesTen.
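For example, after creating the empty table in TimesTen, a ttSize run might look like this (the DSN, table name, and row count are hypothetical):

```
ttSize -tbl myuser.orders -rows 50000000 myDSN
```

The tool then reports the estimated memory for the table and for each of its indexes.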
Friday, October 19, 2012
Network Requirements for Oracle Exadata Database Machine
Oracle Exadata Database Machine includes database servers, Exadata Storage Servers, as well as equipment to connect the servers to your network. The network connections allow the servers to be administered remotely, and clients to connect to the database servers.
Each Oracle Exadata Database Machine X2-2 database server consists of the following network components and interfaces:
- 4 embedded 1 GbE ports (NET0, NET1, NET2, and NET3)
- 1 dual-port QDR (40 Gb/s) InfiniBand Host Channel Adapter (HCA) (BONDIB0)
- 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)
- 1 dual-port 10 GbE PCIe 2.0 network card with Intel 82599 10 GbE controller (only in Sun Fire X4170 M2 Oracle Database Servers)
Each Exadata Storage Server consists of the following network components and interfaces:
- 1 embedded 1 GbE port (NET0)
- 1 dual-port QDR InfiniBand Host Channel Adapter (HCA) (BONDIB0)
- 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)
Each Oracle Exadata Database Machine X2-2 database server has three 1 GbE ports and two 10 GbE ports available for additional connectivity.
Additional configuration, such as defining multiple VLANs or enabling routing, may be required for the switch to operate properly in your environment and is beyond the scope of the installation service.
To deploy Oracle Exadata Database Machine, ensure that you meet the minimum network requirements. There are up to five networks for Oracle Exadata Database Machine. Each network must be on a distinct and separate subnet from the others. The network descriptions are as follows:
Management network: This required network connects to your existing management network, and is used for all components of Oracle Exadata Database Machine. It connects the servers, ILOM, and switches connected to the Ethernet switch in the rack.
Client access network: This required network connects the database servers to your existing client network and is used for client access to the database servers. Applications access the database through this network using SCAN and Oracle RAC VIP addresses. Database servers support channel bonding to provide higher bandwidth or availability for client connections to the database. Oracle recommends channel bonding for the client access network. Channel bonding on Oracle Exadata Database Machine X2-2 bonds NET1 and NET2.
Additional networks (optional): Database servers may be configured to connect to one or two additional existing networks through the NET2 and NET3 ports on Oracle Exadata Database Machine X2-2 racks. If channel bonding is used for the client access network, then only one additional port (NET3) is available on Oracle Exadata Database Machine X2-2 racks.
InfiniBand private network: This network connects the database servers and Exadata Storage Servers using the InfiniBand switches on the rack and the BONDIB0 interface. Oracle Database uses this network for Oracle RAC cluster interconnect traffic and for accessing data on Exadata Storage Servers. This non-routable network is fully contained in Oracle Exadata Database Machine, and does not connect to your existing network. This network is automatically configured during installation.
Network Diagram for Oracle Exadata Database Machine X2-2 when Using Channel Bonding (NET1 and NET2)
Network Diagram for Oracle Exadata Database Machine X2-2 when Not Using Channel Bonding
References
Oracle Exadata Database Machine Owner's Guide 11g Release 2
Wednesday, October 17, 2012
Creating a 10.2.0.4 Physical Standby Database
First, install the Oracle Database 10.2.0.4 software on both the primary and standby sites.
Creating the primary database
Then create primary database "chicago" on file system using DBCA.
Remember to enable ARCHIVELOG mode during DBCA interview.
Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:
SQL> ALTER DATABASE FORCE LOGGING;
Create a listener using NETCA in GUI mode.
Preparing the standby system
You must configure Oracle Net with the TNS entries for the primary database in the tnsnames.ora file. This can be done using Oracle Net Manager ($ORACLE_HOME/bin/netmgr) in GUI mode.
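A sketch of the tnsnames.ora entry for the primary (the host name and port are hypothetical):

```
chicago =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prmy5)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = chicago)
    )
  )
```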
In addition, create the various directories for the dump parameters and, if you are not using ASM, the directories, where the data files, control files, online log files, and archive log files will be placed. In this case, these directories include (given $ORACLE_BASE is "/oracle/opt/oracle" and standby database DB_UNIQUE_NAME is "boston"):
/oracle/opt/oracle/oradata/boston
/oracle/opt/oracle/oradata/boston/archivelog
/oracle/opt/oracle/admin/boston/adump
/oracle/opt/oracle/admin/boston/bdump
/oracle/opt/oracle/admin/boston/cdump
/oracle/opt/oracle/admin/boston/udump
Getting the necessary files and creating the backups
You need to gather 4 main files on the primary system to transport to the target standby system in order to create the standby database:
- The initialization parameters
- The password file
- A full backup of the database
- The control file backup (as a standby control file)
Create a PFILE for standby database from primary database's SPFILE:
SQL> CREATE PFILE='/home/ora1024/stage/initboston.ora' FROM SPFILE;
Copy the password file from the primary system to your target standby system:
cp $ORACLE_HOME/dbs/orapwchicago /home/ora1024/stage/orapwboston
Create a full backup of the entire primary database "chicago":
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Database%U' DATABASE PLUS ARCHIVELOG;
Create a copy of the control file for the standby database:
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Control%U' CURRENT CONTROLFILE FOR STANDBY;
Now copy all these files to the SAME directory i.e. /home/ora1024/stage on your standby system "pstby5":
scp -p /home/ora1024/stage/* pstby5:/home/ora1024/stage/
Preparing the standby database
At minimum, you need to change the DB_UNIQUE_NAME to the name ("boston" in this case) of the standby in the PFILE i.e. initboston.ora:
*.db_unique_name='boston'
If your directory structure is different, you also need to add file name conversion parameters:
*.db_file_name_convert='/chicago/','/boston/'
*.log_file_name_convert='/chicago/','/boston/'
Restoring the backup
Once the initialization parameters are all set and the various directories have been created, start the standby database up in NOMOUNT mode:
export ORACLE_SID=boston
sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/home/ora1024/stage/initboston.ora';
SQL> CREATE SPFILE FROM PFILE='/home/ora1024/stage/initboston.ora';
SQL> SHUTDOWN ABORT;
SQL> STARTUP NOMOUNT;
SQL> EXIT;
And using RMAN to connect to the primary database as the target and the standby database as the auxiliary, then duplicate database for standby:
rman target sys/oracle@chicago auxiliary /
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK DORECOVER;
Now standby database should be in MOUNT state.
Configuring the standby database
Add necessary SRL files to the standby database for redo transport:
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo04.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo05.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo06.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo07.rdo' size 512M;
You can now finish defining the Data Guard parameters that will be necessary in the standby role as well as the primary role when a switchover (or failover) occurs:
SQL> ALTER SYSTEM SET FAL_SERVER=chicago;
SQL> ALTER SYSTEM SET FAL_CLIENT=boston;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC DB_UNIQUE_NAME=chicago VALID_FOR=(online_logfile,primary_role)';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
And start the apply process on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This will create and clear the ORL files so that they exist when the standby becomes a primary.
Finalizing the primary database
Add SRL files so that they are in place for a future role transition:
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo04.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo05.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo06.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo07.rdo' size 512M;
Set the Data Guard parameters on the primary database that will be used to send redo to the standby. Also set those parameters that will be used when the primary becomes a standby database after a role transition:
SQL> ALTER SYSTEM SET FAL_SERVER=boston;
SQL> ALTER SYSTEM SET FAL_CLIENT=chicago;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC DB_UNIQUE_NAME=boston VALID_FOR=(online_logfile,primary_role)';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
To start sending redo, switch log files on the primary:
SQL> ALTER SYSTEM SWITCH LOGFILE;
Verifying the physical standby database is performing properly
Use the following queries to verify the physical standby database is performing properly:
SQL> SELECT DB_UNIQUE_NAME,NAME,OPEN_MODE,DATABASE_ROLE FROM V$DATABASE;
SQL> SELECT PROCESS,STATUS,THREAD#,SEQUENCE# FROM V$MANAGED_STANDBY;
And you can force a log file switch to archive the current ORL file on the primary database and then verify that the redo was archived on the standby database.
Wednesday, October 10, 2012
Preparing Raw Devices for RAC on OEL 5u8
This demo is done with an OEL 5u8 x86_64 running as a guest in VirtualBox 4.2 on a Windows 7 host.
Add 3 new fixed-size VDI files to this VM:
ASM1 - 16 GB
ASM2 - 16 GB
OracleInstall - 12 GB
ASM1 and ASM2 will be used for raw devices for RAC.
Create partition tables on /dev/sdb (ASM1) and /dev/sdc (ASM2):
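For example, with fdisk you can create a single primary partition spanning each disk (an abbreviated interactive session; repeat for /dev/sdc):

```
# fdisk /dev/sdb
Command (m for help): n          <- add a new partition
   p                             <- primary, partition number 1
                                 <- accept defaults for first and last cylinders
Command (m for help): w          <- write the partition table and exit
# partprobe /dev/sdb
```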
Create raw devices using these partitions.
Add udev raw device binding rules to /etc/udev/rules.d/60-raw.rules:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
Add udev raw device permission rules to /etc/udev/rules.d/65-raw-permissions.rules:
KERNEL=="raw1", OWNER="ttadmin", GROUP="oinstall", MODE="660"
KERNEL=="raw2", OWNER="ttadmin", GROUP="oinstall", MODE="660"
Now restart udev:
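On OEL 5 this can be done with start_udev; afterwards you can verify the bindings and permissions:

```
# start_udev
# raw -qa
# ls -l /dev/raw/
```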
Repeat the above steps on all other nodes belonging to the same RAC.