Tuesday, October 23, 2012

Estimating the IMDB Cache Size

It is hard to predict the cache size from the raw data size since a number of factors affect it. The best practice is to use the sizing tool, ttSize, that ships as part of TimesTen. You need an installed copy of TimesTen to run the tool, and it should be of the same bittedness (32-bit or 64-bit) as the system you are sizing for in order to give accurate results.

You create an empty table (and associated indexes) in TimesTen with the same structure as the table to be cached. Then run the ttSize tool against this table and specify the expected number of rows for the table. The tool then calculates, based on the way in which TimesTen stores data/indexes and taking into account overheads etc., the amount of memory needed for the table and each index and reports these. Repeat for each table you want to cache and sum the results. This is really the only way to predict how much memory will be needed to cache a table in TimesTen.
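As a sketch of that workflow (the DSN, schema, and row count below are made-up examples for illustration, not from the original post):

```shell
# Create an empty copy of the table to be cached, then ask ttSize
# to project memory use for the expected number of rows.
ttIsql -connStr "DSN=sizing_dsn" <<'EOF'
CREATE TABLE app.orders (
  order_id  NUMBER NOT NULL PRIMARY KEY,
  customer  VARCHAR2(64),
  amount    NUMBER(10,2)
);
quit;
EOF

# Project the size of app.orders at 10 million rows; ttSize reports
# the memory needed for the table and for each of its indexes.
ttSize -tbl app.orders -rows 10000000 sizing_dsn
```

Sum the reported figures across every table you intend to cache to arrive at the overall store size.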

Friday, October 19, 2012

Network Requirements for Oracle Exadata Database Machine

Oracle Exadata Database Machine includes database servers, Exadata Storage Servers, as well as equipment to connect the servers to your network. The network connections allow the servers to be administered remotely, and clients to connect to the database servers.

Each Oracle Exadata Database Machine X2-2 database server consists of the following network components and interfaces:

  • 4 embedded 1 GbE ports (NET0, NET1, NET2, and NET3)
  • 1 dual-port QDR (40 Gb/s) InfiniBand Host Channel Adapter (HCA) (BONDIB0)
  • 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)
  • 1 dual-port 10 GbE PCIe 2.0 network card with Intel 82599 10 GbE controller (only in Sun Fire X4170 M2 Oracle Database Servers)

Each Exadata Storage Server consists of the following network components and interfaces:

  • 1 embedded 1 GbE port (NET0)
  • 1 dual-port QDR InfiniBand Host Channel Adapter (HCA) (BONDIB0)
  • 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)

Each Oracle Exadata Database Machine X2-2 database server has three 1 GbE ports and two 10 GbE ports available for additional connectivity.

Additional configuration, such as defining multiple VLANs or enabling routing, may be required for the switch to operate properly in your environment and is beyond the scope of the installation service.

To deploy Oracle Exadata Database Machine, ensure that you meet the minimum network requirements. There are up to five networks for Oracle Exadata Database Machine. Each network must be on a distinct and separate subnet from the others. The network descriptions are as follows:

Management network: This required network connects to your existing management network and is used for all components of Oracle Exadata Database Machine. The servers, ILOMs, and switches are connected through the Ethernet switch in the rack.

Client access network: This required network connects the database servers to your existing client network and is used for client access to the database servers. Applications access the database through this network using SCAN and Oracle RAC VIP addresses. Database servers support channel bonding to provide higher bandwidth or availability for client connections to the database. Oracle recommends channel bonding for the client access network. Channel bonding on Oracle Exadata Database Machine X2-2 bonds NET1 and NET2.
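As an illustrative sketch only (the bond name, addresses, and bonding options below are assumptions; on a real Exadata deployment these files are generated by the Oracle configuration tools), an active-backup bond over NET1 and NET2 on Oracle Linux might look like:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bondeth0 (hypothetical name)
DEVICE=bondeth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.10.10.21
NETMASK=255.255.255.0
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1 (NET1)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
MASTER=bondeth0
SLAVE=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2 (NET2)
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
MASTER=bondeth0
SLAVE=yes
```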

Additional networks (optional): Database servers may be configured to connect to one or two additional existing networks through the NET2 and NET3 ports on Oracle Exadata Database Machine X2-2 racks. If channel bonding is used for the client access network, then only one additional port (NET3) is available on Oracle Exadata Database Machine X2-2 racks.

InfiniBand private network: This network connects the database servers and Exadata Storage Servers using the InfiniBand switches on the rack and the BONDIB0 interface. Oracle Database uses this network for Oracle RAC cluster interconnect traffic and for accessing data on Exadata Storage Servers. This non-routable network is fully contained in Oracle Exadata Database Machine and does not connect to your existing network. This network is automatically configured during installation.


Network Diagram for Oracle Exadata Database Machine X2-2 when Using Channel Bonding (NET1 and NET2)


Network Diagram for Oracle Exadata Database Machine X2-2 when Not Using Channel Bonding

References
Oracle Exadata Database Machine Owner's Guide 11g Release 2

Wednesday, October 17, 2012

Creating a 10.2.0.4 Physical Standby Database

First, install the Oracle Database 10.2.0.4 software on both the primary and standby sites.

Creating the primary database
Then create the primary database "chicago" on a file system using DBCA.

Remember to enable ARCHIVELOG mode during the DBCA interview.

Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:

SQL> ALTER DATABASE FORCE LOGGING;

Create a listener using NETCA in GUI mode.

Preparing the standby system
You must configure Oracle Net on the standby system with a TNS entry for the primary database in the tnsnames.ora file. This can be done using the Oracle Net Manager ($ORACLE_HOME/bin/netmgr) in GUI mode.

In addition, create the various directories for the dump parameters and, if you are not using ASM, the directories where the data files, control files, online log files, and archive log files will be placed. In this case (given that $ORACLE_BASE is "/oracle/opt/oracle" and the standby database DB_UNIQUE_NAME is "boston"), these directories include:

/oracle/opt/oracle/oradata/boston

/oracle/opt/oracle/oradata/boston/archivelog

/oracle/opt/oracle/admin/boston/adump

/oracle/opt/oracle/admin/boston/bdump

/oracle/opt/oracle/admin/boston/cdump

/oracle/opt/oracle/admin/boston/udump
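All of these can be created in one pass with a short script (paths follow the example above; $ORACLE_BASE can be overridden from the environment):

```shell
#!/bin/sh
# Create the standby dump and data file directories in one pass.
# ORACLE_BASE defaults to the path used in this example; the
# DB_UNIQUE_NAME of the standby is "boston".
ORACLE_BASE=${ORACLE_BASE:-/oracle/opt/oracle}
DB_UNIQUE_NAME=boston

mkdir -p "$ORACLE_BASE/oradata/$DB_UNIQUE_NAME/archivelog"
for d in adump bdump cdump udump; do
  mkdir -p "$ORACLE_BASE/admin/$DB_UNIQUE_NAME/$d"
done
```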

Getting the necessary files and creating the backups
You need to gather four main files on the primary system and transport them to the target standby system to be able to create the standby database:

  • The initialization parameters
  • The password file
  • A full backup of the database
  • The control file backup (as a standby control file)

Create a PFILE for standby database from primary database's SPFILE:

SQL> CREATE PFILE='/home/ora1024/stage/initboston.ora' FROM SPFILE;

Copy the password file from the primary system to your target standby system:

cp $ORACLE_HOME/dbs/orapwchicago /home/ora1024/stage/orapwboston

Create a full backup of the entire primary database "chicago":

RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Database%U' DATABASE PLUS ARCHIVELOG;

Create a copy of the control file for the standby database:

RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Control%U' CURRENT CONTROLFILE FOR STANDBY;

Now copy all these files to the same directory (i.e. /home/ora1024/stage) on your standby system "pstby5":

scp -p /home/ora1024/stage/* pstby5:/home/ora1024/stage/

Preparing the standby database
At a minimum, you need to change DB_UNIQUE_NAME in the PFILE (i.e. initboston.ora) to the name of the standby ("boston" in this case):

*.db_unique_name='boston'

If your directory structure is different, you also need to add file name conversion parameters:

*.db_file_name_convert='/chicago/','/boston/'

*.log_file_name_convert='/chicago/','/boston/'

Restoring the backup
Once the initialization parameters are all set and the various directories have been created, start the standby database up in NOMOUNT mode:

export ORACLE_SID=boston

sqlplus / as sysdba

SQL> STARTUP NOMOUNT PFILE='/home/ora1024/stage/initboston.ora';

SQL> CREATE SPFILE FROM PFILE='/home/ora1024/stage/initboston.ora';

SQL> SHUTDOWN ABORT;

SQL> STARTUP NOMOUNT;

SQL> EXIT;

Then use RMAN to connect to the primary database as the target and to the standby instance as the auxiliary, and duplicate the database for standby:

rman target sys/oracle@chicago auxiliary /

RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK DORECOVER;

The standby database should now be in MOUNT state.

Configuring the standby database
Add the necessary standby redo log (SRL) files to the standby database for redo transport:

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo04.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo05.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo06.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo07.rdo' size 512M;

You can now finish defining the Data Guard parameters that will be necessary in the standby role as well as the primary role when a switchover (or failover) occurs:

SQL> ALTER SYSTEM SET FAL_SERVER=chicago;

SQL> ALTER SYSTEM SET FAL_CLIENT=boston;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC DB_UNIQUE_NAME=chicago VALID_FOR=(online_logfile,primary_role)';

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

And start the apply process on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

This will create and clear the ORL files so that they exist when the standby becomes a primary.

Finalizing the primary database
Add SRL files so that they are in place for a future role transition:


SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo04.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo05.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo06.rdo' size 512M;

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo07.rdo' size 512M;

Set the Data Guard parameters on the primary database that will be used to send redo to the standby. Also set those parameters that will be used when the primary becomes a standby database after a role transition:

SQL> ALTER SYSTEM SET FAL_SERVER=boston;

SQL> ALTER SYSTEM SET FAL_CLIENT=chicago;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC DB_UNIQUE_NAME=boston VALID_FOR=(online_logfile,primary_role)';

SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

To start sending redo, switch log files on the primary:

SQL> ALTER SYSTEM SWITCH LOGFILE;

Verifying the physical standby database is performing properly
Use the following queries to verify that the physical standby database is performing properly:

SQL> SELECT DB_UNIQUE_NAME,NAME,OPEN_MODE,DATABASE_ROLE FROM V$DATABASE;

SQL> SELECT PROCESS,STATUS,THREAD#,SEQUENCE# FROM V$MANAGED_STANDBY;

You can also force a log file switch to archive the current ORL file on the primary database and then verify that the redo data was archived on the standby database.
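For example (run the switch on the primary, then the query on the standby; the exact sequence numbers will vary):

```sql
-- On the primary: force a log switch so the current ORL is archived.
ALTER SYSTEM SWITCH LOGFILE;

-- On the standby: list received archive logs and whether each
-- has been applied by managed recovery.
SELECT SEQUENCE#, FIRST_TIME, APPLIED
FROM V$ARCHIVED_LOG
ORDER BY SEQUENCE#;
```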

Wednesday, October 10, 2012

Preparing Raw Devices for RAC on OEL 5u8

This demo is done with an OEL 5u8 x86_64 running as a guest in VirtualBox 4.2 on a Windows 7 host.

Add 3 new fixed-size VDI files to this VM:

ASM1 - 16 GB
ASM2 - 16 GB
OracleInstall - 12 GB

ASM1 and ASM2 will be used for raw devices for RAC.

Create partition tables on /dev/sdb (ASM1) and /dev/sdc (ASM2):
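A non-interactive way to create one primary partition spanning each disk is to pipe the keystrokes to fdisk (destructive; assumes the new disks appear as /dev/sdb and /dev/sdc, as above):

```shell
# DESTRUCTIVE: writes new partition tables on the ASM disks.
# The piped keystrokes drive fdisk through: new, primary,
# partition 1, default first/last cylinder, write.
for disk in /dev/sdb /dev/sdc; do
  printf 'n\np\n1\n\n\nw\n' | fdisk "$disk"
done
```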


Create raw devices using these partitions.

Add udev raw device binding rules to /etc/udev/rules.d/60-raw.rules:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"

Add udev raw device permission rules to /etc/udev/rules.d/65-raw-permissions.rules:

KERNEL=="raw1", OWNER="ttadmin", GROUP="oinstall", MODE="660"
KERNEL=="raw2", OWNER="ttadmin", GROUP="oinstall", MODE="660"

Now restart udev:
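On OEL/RHEL 5 this can be done with the start_udev helper as root; afterwards the raw tool can confirm that the rules took effect:

```shell
# Re-run udev so the new rules are evaluated (OEL/RHEL 5).
/sbin/start_udev

# Verify the raw device bindings and their ownership/permissions.
raw -qa
ls -l /dev/raw/
```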


Repeat the above steps on all other nodes belonging to the same RAC cluster.
