1. Ensure EM 12c Cloud Control is set to the offline mode
From the "Setup" menu, choose "Provisioning and Patching" -> "Offline Patching", then change the setting for "Connection" to "Offline". Optionally you can upload the catalog to keep EM up-to-date with patch recommendations in this screen too.
2. Acquire the Management Agent software in offline mode
From the "Setup" menu, choose "Extensibility" -> "Self Update".
Select the entity type "Agent Software" and choose the agent software you need.
Click "Download", a message box is displayed with a URL and instructions.
Download the file from a computer with an Internet connection and copy it to the computer where your EM installation resides.
3. Import Management Agent software into OMS using emcli
$OMS_HOME/bin/emcli login -username=sysman
$OMS_HOME/bin/emcli import_update -omslocal -file=/home/oracle/Desktop/p14570372_112000_Generic.zip
Refresh "Self Update" page, corresponding Agent software status will be changed to "Downloaded".
References:
MOS Note ID 1369575.1
Friday, December 21, 2012
Friday, December 7, 2012
Adopting Exadata to Improve BI Solution Performance
Probably the most important thing will be their code: make sure that it employs set-based (as opposed to row-by-row) processing, parallelism, compression, bulk direct-path loading, and transformation techniques rather than row-at-a-time DML.
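As an illustration of the set-based point (the table and column names here are hypothetical, not from any particular system), a single parallel direct-path statement replaces a row-by-row loop:

```sql
-- Row-by-row anti-pattern (slow on any platform):
--   FOR r IN (SELECT * FROM stage_sales) LOOP
--     INSERT INTO fact_sales VALUES (r.sale_id, r.amount, r.sale_date);
--   END LOOP;

-- Set-based equivalent: one parallel, direct-path load
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(fact_sales, 8) */ INTO fact_sales
SELECT /*+ PARALLEL(stage_sales, 8) */ sale_id, amount, sale_date
FROM stage_sales;

COMMIT;
```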
EHCC greatly assists with processing of data.
InfiniBand also helps with the transfer of data, not just between the database nodes and Exadata storage cells but also when ingesting data from the source systems (i.e. the data warehouse is not a bottleneck).
Backup and recovery is faster, which means that unlike many data warehouses whose DR plan mainly consists of a complete reload from the sources, this provides true DR, and even better DR via Data Guard than a simple "rebuild".
Unanticipated queries, where you do not have cubes or views developed, aren't such an issue either. Exadata is able to address the unanticipated ones, and these can be the most valuable ones too: new questions that result in more improvement over previous ways.
Tuesday, October 23, 2012
Estimating the IMDB Cache Size
It is hard to predict the cache size from the raw data size since a number of factors affect it. The best practice is to use the sizing tool, ttSize, that ships as part of TimesTen. You need an installed copy of TimesTen to run the tool, and it should be the same bit level (32-bit or 64-bit) as the system you are sizing for in order to give accurate results.
You create an empty table (and associated indexes) in TimesTen with the same structure as the table to be cached. Then run the ttSize tool against this table and specify the expected number of rows for the table. The tool then calculates, based on the way in which TimesTen stores data/indexes and taking into account overheads etc., the amount of memory needed for the table and each index and reports these. Repeat for each table you want to cache and sum the results. This is really the only way to predict how much memory will be needed to cache a table in TimesTen.
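For example, assuming a DSN named sampledb and a table app.orders expected to hold ten million rows (both names are illustrative), a sizing run might look like the sketch below. This requires an installed TimesTen instance; the exact output format varies by release.

```shell
# Create the empty table (and its indexes) in TimesTen first, e.g. via ttIsql,
# then estimate the memory needed for the expected row count:
ttSize -tbl app.orders -rows 10000000 sampledb
```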
Friday, October 19, 2012
Network Requirements for Oracle Exadata Database Machine
Oracle Exadata Database Machine includes database servers, Exadata Storage Servers, as well as equipment to connect the servers to your network. The network connections allow the servers to be administered remotely, and clients to connect to the database servers.
Each Oracle Exadata Database Machine X2-2 database server consists of the following network components and interfaces:
- 4 embedded 1 GbE ports (NET0, NET1, NET2, and NET3)
- 1 dual-port QDR (40 Gb/s) InfiniBand Host Channel Adapter (HCA) (BONDIB0)
- 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)
- 1 dual-port 10 GbE PCIe 2.0 network card with Intel 82599 10 GbE controller (only in Sun Fire X4170 M2 Oracle Database Servers)
Each Exadata Storage Server consists of the following network components and interfaces:
- 1 embedded 1 GbE port (NET0)
- 1 dual-port QDR InfiniBand Host Channel Adapter (HCA) (BONDIB0)
- 1 Ethernet port for Sun Integrated Lights Out Manager (ILOM)
Each Oracle Exadata Database Machine X2-2 database server has three 1 GbE ports and two 10 GbE ports available for additional connectivity.
Additional configuration, such as defining multiple VLANs or enabling routing, may be required for the switch to operate properly in your environment and is beyond the scope of the installation service.
To deploy Oracle Exadata Database Machine, ensure that you meet the minimum network requirements. There are up to five networks for Oracle Exadata Database Machine, and each network must be on a distinct subnet, separate from the others. The network descriptions are as follows:
Management network: This required network connects to your existing management network and is used for all components of Oracle Exadata Database Machine. It connects the servers, ILOMs, and switches through the Ethernet switch in the rack.
Client access network: This required network connects the database servers to your existing client network and is used for client access to the database servers. Applications access the database through this network using SCAN and Oracle RAC VIP addresses. Database servers support channel bonding to provide higher bandwidth or availability for client connections to the database. Oracle recommends channel bonding for the client access network. Channel bonding on Oracle Exadata Database Machine X2-2 bonds NET1 and NET2.
Additional networks (optional): Database servers may be configured to connect to one or two additional existing networks through the NET2 and NET3 ports on Oracle Exadata Database Machine X2-2 racks. If channel bonding is used for the client access network, then only one additional port (NET3) is available on Oracle Exadata Database Machine X2-2 racks.
InfiniBand private network: This network connects the database servers and Exadata Storage Servers using the InfiniBand switches on the rack and the BONDIB0 interface. Oracle Database uses this network for Oracle RAC cluster interconnect traffic and for accessing data on Exadata Storage Servers. This non-routable network is fully contained in Oracle Exadata Database Machine, and does not connect to your existing network. This network is automatically configured during installation.
Network Diagram for Oracle Exadata Database Machine X2-2 when Using Channel Bonding (NET1 and NET2)
Network Diagram for Oracle Exadata Database Machine X2-2 when Not Using Channel Bonding
References
Oracle Exadata Database Machine Owner's Guide 11g Release 2
Wednesday, October 17, 2012
Creating a 10.2.0.4 Physical Standby Database
Firstly, install Oracle Database 10.2.0.4 software for both primary and standby sites.
Creating the primary database
Then create primary database "chicago" on file system using DBCA.
Remember to enable ARCHIVELOG mode during DBCA interview.
Place the primary database in FORCE LOGGING mode after database creation using the following SQL statement:
SQL> ALTER DATABASE FORCE LOGGING;
Create a listener using NETCA in GUI mode.
Preparing the standby system
You must configure the TNS names for the primary database in the TNSNAMES file on the standby system. This can be done using Oracle Net Manager ($ORACLE_HOME/bin/netmgr) in GUI mode.
In addition, create the various directories for the dump parameters and, if you are not using ASM, the directories, where the data files, control files, online log files, and archive log files will be placed. In this case, these directories include (given $ORACLE_BASE is "/oracle/opt/oracle" and standby database DB_UNIQUE_NAME is "boston"):
/oracle/opt/oracle/oradata/boston
/oracle/opt/oracle/oradata/boston/archivelog
/oracle/opt/oracle/admin/boston/adump
/oracle/opt/oracle/admin/boston/bdump
/oracle/opt/oracle/admin/boston/cdump
/oracle/opt/oracle/admin/boston/udump
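The directory tree above can be created in one step. The sketch below uses a BASE variable so the same command works for any base; the fallback default is only for illustration, and on the host in this post BASE would be /oracle/opt/oracle:

```shell
# Base directory for the "boston" standby; set ORACLE_BASE on the real host
BASE="${ORACLE_BASE:-$HOME/orabase}"

# Data file / archive log directories and the four dump directories
mkdir -p "$BASE/oradata/boston/archivelog" \
         "$BASE/admin/boston/adump" \
         "$BASE/admin/boston/bdump" \
         "$BASE/admin/boston/cdump" \
         "$BASE/admin/boston/udump"
```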
Getting the necessary files and creating the backups
You need to gather four main files on the primary system and transport them to the target standby system to be able to create a standby database:
- The initialization parameters
- The password file
- A full backup of the database
- The control file backup (as a standby control file)
Create a PFILE for standby database from primary database's SPFILE:
SQL> CREATE PFILE='/home/ora1024/stage/initboston.ora' FROM SPFILE;
Copy the password file from the primary system to your target standby system:
cp $ORACLE_HOME/dbs/orapwchicago /home/ora1024/stage/orapwboston
Create a full backup of the entire primary database "chicago":
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Database%U' DATABASE PLUS ARCHIVELOG;
Create a copy of the control file for the standby database:
RMAN> BACKUP DEVICE TYPE DISK FORMAT '/home/ora1024/stage/Control%U' CURRENT CONTROLFILE FOR STANDBY;
Now copy all these files to the same directory (i.e. /home/ora1024/stage) on your standby system "pstby5":
scp -p /home/ora1024/stage/* pstby5:/home/ora1024/stage/
Preparing the standby database
At minimum, you need to change DB_UNIQUE_NAME to the name of the standby ("boston" in this case) in the PFILE, i.e. initboston.ora:
*.db_unique_name='boston'
If your directory structure is different, you also need to add file name conversion parameters:
*.db_file_name_convert='/chicago/','/boston/'
*.log_file_name_convert='/chicago/','/boston/'
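The conversion is a simple substring replacement: each occurrence of the first string in a primary file name is replaced by the second. A quick way to sanity-check a pair before restoring (an illustration only, not part of the setup):

```shell
# Simulate what DB_FILE_NAME_CONVERT='/chicago/','/boston/' does to a path
primary_file='/oracle/opt/oracle/oradata/chicago/system01.dbf'
standby_file=$(echo "$primary_file" | sed 's|/chicago/|/boston/|')
echo "$standby_file"   # /oracle/opt/oracle/oradata/boston/system01.dbf
```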
Restoring the backup
Once the initialization parameters are all set and the various directories have been created, start the standby database up in NOMOUNT mode:
export ORACLE_SID=boston
sqlplus / as sysdba
SQL> STARTUP NOMOUNT PFILE='/home/ora1024/stage/initboston.ora';
SQL> CREATE SPFILE FROM PFILE='/home/ora1024/stage/initboston.ora';
SQL> SHUTDOWN ABORT;
SQL> STARTUP NOMOUNT;
SQL> EXIT;
Then use RMAN to connect to the primary database as the target and to the standby database as the auxiliary, and duplicate the database for standby:
rman target sys/oracle@chicago auxiliary /
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK DORECOVER;
Now standby database should be in MOUNT state.
Configuring the standby database
Add necessary SRL files to the standby database for redo transport:
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo04.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo05.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo06.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/boston/redo07.rdo' size 512M;
You can now finish defining the Data Guard parameters that will be necessary in the standby role as well as the primary role when a switchover (or failover) occurs:
SQL> ALTER SYSTEM SET FAL_SERVER=chicago;
SQL> ALTER SYSTEM SET FAL_CLIENT=boston;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=chicago ASYNC DB_UNIQUE_NAME=chicago VALID_FOR=(online_logfile,primary_role)';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
And start the apply process on the standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
This will create and clear the ORL files so that they exist when the standby becomes a primary.
Finalizing the primary database
Add SRL files so that they are in place for a future role transition:
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo04.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo05.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo06.rdo' size 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE '/oracle/opt/oracle/oradata/chicago/redo07.rdo' size 512M;
Set the Data Guard parameters on the primary database that will be used to send redo to the standby. Also set those parameters that will be used when the primary becomes a standby database after a role transition:
SQL> ALTER SYSTEM SET FAL_SERVER=boston;
SQL> ALTER SYSTEM SET FAL_CLIENT=chicago;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=8;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston)';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=boston ASYNC DB_UNIQUE_NAME=boston VALID_FOR=(online_logfile,primary_role)';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
To start sending redo, switch log files on the primary:
SQL> ALTER SYSTEM SWITCH LOGFILE;
Verifying the physical standby database is performing properly
Use the following queries to verify that the physical standby database is performing properly:
SQL> SELECT DB_UNIQUE_NAME,NAME,OPEN_MODE,DATABASE_ROLE FROM V$DATABASE;
SQL> SELECT PROCESS,STATUS,THREAD#,SEQUENCE# FROM V$MANAGED_STANDBY;
You can also force a log file switch to archive the current ORL file on the primary database and then verify that the redo data was archived on the standby database.
Wednesday, October 10, 2012
Preparing Raw Devices for RAC on OEL 5u8
This demo is done with an OEL 5u8 x86_64 running as a guest in VirtualBox 4.2 on a Windows 7 host.
Add 3 new fixed-size VDI files to this VM:
ASM1 - 16 GB
ASM2 - 16 GB
OracleInstall - 12 GB
ASM1 and ASM2 will be used for raw devices for RAC.
Create partition tables on /dev/sdb (ASM1) and /dev/sdc (ASM2):
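The partitions can be created non-interactively with parted, as in the sketch below (fdisk works interactively as well; double-check the device names before running, since this is destructive and requires root):

```shell
# Label each disk and create one primary partition spanning the whole disk
for dev in /dev/sdb /dev/sdc; do
    parted -s "$dev" mklabel msdos
    parted -s "$dev" mkpart primary 0% 100%
done
```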
Create raw devices using these partitions.
Add udev raw device binding rules to /etc/udev/rules.d/60-raw.rules:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
Add udev raw device permission rules to /etc/udev/rules.d/65-raw-permissions.rules:
KERNEL=="raw1", OWNER="ttadmin", GROUP="oinstall", MODE="660"
KERNEL=="raw2", OWNER="ttadmin", GROUP="oinstall", MODE="660"
Now restart udev:
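On OEL 5 the usual sequence is the following (a sketch; requires root, and commands can vary by update level):

```shell
# Re-read the rules and trigger them for the existing partitions
start_udev

# Verify the raw device bindings and permissions
raw -qa
ls -l /dev/raw/
```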
Repeat the above steps on all other nodes belonging to the same RAC cluster.
Friday, September 28, 2012
Database Machine and Exadata Storage Server 11g Release 2 Supported Versions
Recommended bundle patches, named Quarterly Database Patch for Exadata (QDPE), will be released quarterly, aligned with and including the database Critical Patch Update (CPU) and Patch Set Update (PSU) releases. QDPE releases are targeted for planned grid infrastructure and database patching activities and are the only patch bundles that will be applied by most customers.
All bundle patches are cumulative, as in previous releases.
There are different types of patches for Exadata Database Machine, as listed below:
Exadata Storage Server
Database Server
Oracle Database Server
OS and firmware
InfiniBand switch
Additional components
In 11.2.3.1.0 and above, we use the yum procedure to update the DB server.
References:
MOS Note ID 1244344.1 - Exadata V2 Starter Kit
MOS Note ID 1187674.1 - Information Center: Oracle Exadata Database Machine
MOS Note ID 888828.1 - Database Machine and Exadata Storage Server 11g Release 2 Supported Versions
MOS Note ID 1262380.1 - Exadata Patching Overview and Patch Testing Guidelines
Tuesday, September 25, 2012
Oracle Exalytics Hardware
Memory
1 TB RAM, 1033 MHz
Compute
4 Intel Xeon E7-4870, 40 cores total
Networking
40 Gbps InfiniBand - 2 ports
10 Gbps Ethernet - 2 ports
1 Gbps Ethernet - 4 ports
Storage
3.6 TB HDD (6 * 600 GB SAS) Capacity
Sunday, September 23, 2012
RAC Upgrade from 11.2.0.2 to 11.2.0.3
First you need to patch the vanilla 11.2.0.2 RAC to 11.2.0.2.5:
1. Apply patch 6880880 to upgrade OPatch
2. Apply patch 13653086 to upgrade to 11.2.0.2.5
Apply one-off patch 12539000
1. Apply patch 12539000
Patch environment to 11.2.0.3
1. Apply patch 10404530
Saturday, September 22, 2012
Disabling SELinux and iptables in RHEL 6
There is no way to disable SELinux or iptables (the firewall) from the Setup Agent in RHEL 6, so you need to do it manually as root.
Disabling SELinux
Modify the /etc/selinux/config file to disable SELinux:
SELINUX=disabled
Disabling iptables
Invoke the GUI tool "system-config-firewall" to disable firewall.
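If no GUI is available, the same result can be achieved from the command line (run as root; a sketch, to be adapted to your environment):

```shell
# Turn SELinux permissive immediately; the /etc/selinux/config change
# takes effect on the next reboot
setenforce 0

# Stop iptables now and keep it off across reboots
service iptables stop
chkconfig iptables off
```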
Running Hadoop on Ubuntu in Pseudo-Distributed Mode
I tried to run Hadoop on Ubuntu in pseudo-distributed mode today, following are the detailed steps:
Install Ubuntu 11.10 i386 in VirtualBox. In this release, the JDK is located in /usr/lib/jvm/java-6-openjdk by default.
Add a dedicated Hadoop user account for running Hadoop
<pre lang="bash">sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop</pre>
Configure SSH for Hadoop user
<pre lang="bash">sudo apt-get install ssh
su - hadoop
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys</pre>
Download the latest stable release of Hadoop from Hadoop's homepage. I downloaded release 1.0.2 as a gzipped tar file (hadoop-1.0.2.tar.gz). Then uncompress hadoop-1.0.2.tar.gz:
<pre lang="bash">tar zxvf hadoop-1.0.2.tar.gz
mv hadoop-1.0.2 hadoop</pre>
Configure Hadoop
The $HADOOP_INSTALL/hadoop/conf directory contains some configuration files for Hadoop.
hadoop-env.sh
<pre lang="bash">export JAVA_HOME=/usr/lib/jvm/java-6-openjdk</pre>
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Format the HDFS filesystem
<pre lang="bash">
bin/hadoop namenode -format
</pre>
Start your single-node cluster
<pre lang="bash">
bin/start-all.sh
</pre>
Run the WordCount example job
<pre lang="bash">
bin/hadoop fs -copyFromLocal /home/hadoop/test_wc.txt test_wc.txt
bin/hadoop fs -ls
bin/hadoop jar hadoop-examples-1.0.2.jar wordcount test_wc.txt test_wc-output
bin/hadoop fs -cat test_wc-output/part-r-00000
bin/hadoop fs -copyToLocal test_wc-output /home/hadoop/test_wc-output
</pre>
Stop your single-node cluster
<pre lang="bash">
bin/stop-all.sh
</pre>
References:
Hadoop: The Definitive Guide
Running Hadoop On Ubuntu Linux (Single-Node Cluster)
TimesTen Replication
Replication is the process of maintaining copies of data in multiple databases. The purpose of replication is to make data highly available to applications with minimal performance impact.
Replication is the process of copying data from a master database to a subscriber database. Replication is controlled by replication agents for each database. The replication agent on the master database reads the records from the transaction log for the master database. It forwards changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database.
Requirements for replication compatibility
TimesTen replication is supported only between identical platforms and bit-levels. The databases must have DSNs with identical DatabaseCharacterSet and TypeMode database attributes.
Wednesday, September 19, 2012
Oracle In-Memory Database Cache Concepts
Oracle In-Memory Database (IMDB) Cache is an Oracle Database product option that includes the Oracle TimesTen In-Memory Database. It is used as a database cache at the application tier to cache Oracle data and reduce the workload on the Oracle database.
You can cache Oracle data in a TimesTen database by defining a cache grid and then creating cache groups. A cache group in a TimesTen database can cache a single Oracle table or a group of related Oracle tables.
Overview of a cache grid
A cache grid is a set of distributed TimesTen in-memory databases that work together to cache data from an Oracle database and guarantee cache coherence among the TimesTen databases.
Overview of cache groups
Cache groups define the Oracle data to be cached in a TimesTen database. A cache group can be defined to cache all or part of a single Oracle table, or a set of related Oracle tables.
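As an illustration of the cache group DDL (the schema, table, column names, and refresh interval below are hypothetical, not from any particular system):

```sql
-- Cache the Oracle table hr.employees read-only in TimesTen,
-- refreshing changed rows from Oracle every 5 seconds
CREATE READONLY CACHE GROUP ro_employees
  AUTOREFRESH INTERVAL 5 SECONDS
  FROM hr.employees (
    employee_id NUMBER(6) NOT NULL,
    first_name  VARCHAR2(20),
    last_name   VARCHAR2(25) NOT NULL,
    salary      NUMBER(8,2),
    PRIMARY KEY (employee_id)
  );
```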