Recommended bundle patches, named Quarterly Database Patch for Exadata (QDPE), are released quarterly, aligned with and including the database Critical Patch Update (CPU) and Patch Set Update (PSU) releases. QDPE releases are targeted at planned grid infrastructure and database patching activities and are the only patch bundles most customers will need to apply.
All bundle patches are cumulative, as in previous releases.
There are different types of patches for Exadata Database Machine, as listed below:
Exadata Storage Server
Database Server
Oracle Database Server
OS and firmware
InfiniBand switch
Additional components
In Exadata 11.2.3.1.0 and later, the yum-based procedure is used to update the database servers.
References:
MOS Note ID 1244344.1 - Exadata V2 Starter Kit
MOS Note ID 1187674.1 - Information Center: Oracle Exadata Database Machine
MOS Note ID 888828.1 - Database Machine and Exadata Storage Server 11g Release 2 Supported Versions
MOS Note ID 1262380.1 - Exadata Patching Overview and Patch Testing Guidelines
Friday, September 28, 2012
Tuesday, September 25, 2012
Oracle Exalytics Hardware
Memory: 1 TB RAM, 1033 MHz
Compute: 4 Intel Xeon E7-4870 processors, 40 cores total
Networking: 2 x 40 Gbps InfiniBand ports, 2 x 10 Gbps Ethernet ports, 4 x 1 Gbps Ethernet ports
Storage: 3.6 TB HDD capacity (6 x 600 GB SAS)
Sunday, September 23, 2012
RAC Upgrade from 11.2.0.2 to 11.2.0.3
First, patch the vanilla 11.2.0.2 RAC to 11.2.0.2.5:
1. Apply patch 6880880 to upgrade OPatch.
2. Apply patch 13653086 to upgrade to 11.2.0.2.5.
Next, apply the one-off patch:
1. Apply patch 12539000.
Finally, patch the environment to 11.2.0.3:
1. Apply patch 10404530 (the 11.2.0.3 patch set).
Saturday, September 22, 2012
Disabling SELinux and iptables in RHEL 6
There is no way to disable SELinux and iptables (the firewall) during the Setup Agent in RHEL 6, so you need to do it manually as root.
Disabling SELinux
Modify the /etc/selinux/config file to disable SELinux:
SELINUX=disabled
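The same edit can be scripted with sed. The snippet below demonstrates it on a scratch copy of the file; on a real system you would point it at /etc/selinux/config as root. Note that the config change takes effect at the next reboot; running "setenforce 0" switches to permissive mode immediately.

```shell
# Demonstrated on a scratch copy; use /etc/selinux/config on a real system
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Replace the SELINUX= line, whatever its current value
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"      # prints: SELINUX=disabled
```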
Disabling iptables
Invoke the GUI tool "system-config-firewall" to disable the firewall (or run "service iptables stop" and "chkconfig iptables off" from the command line).
Running Hadoop on Ubuntu in Pseudo-Distributed Mode
I tried running Hadoop on Ubuntu in pseudo-distributed mode today; the detailed steps follow:
Install Ubuntu 11.10 i386 in VirtualBox. In this release, the JDK is located in <strong>/usr/lib/jvm/java-6-openjdk</strong> by default.
Add a dedicated Hadoop user account for running Hadoop
<pre lang="bash">sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop</pre>
Configure SSH for Hadoop user
<pre lang="bash">sudo apt-get install ssh
su - hadoop
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys</pre>
Download the latest stable release of Hadoop from Hadoop's homepage. I downloaded release 1.0.2 as a gzipped tar file (hadoop-1.0.2.tar.gz). Then uncompress hadoop-1.0.2.tar.gz:
<pre lang="bash">tar zxvf hadoop-1.0.2.tar.gz
mv hadoop-1.0.2 hadoop</pre>
Configure Hadoop
The $HADOOP_INSTALL/hadoop/conf directory contains some configuration files for Hadoop.
hadoop-env.sh
<pre lang="bash">export JAVA_HOME=/usr/lib/jvm/java-6-openjdk</pre>
core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
Format the HDFS filesystem
<pre lang="bash">
bin/hadoop namenode -format
</pre>
Start your single-node cluster
<pre lang="bash">
bin/start-all.sh
</pre>
Run the WordCount example job
<pre lang="bash">
bin/hadoop fs -copyFromLocal /home/hadoop/test_wc.txt test_wc.txt
bin/hadoop fs -ls
bin/hadoop jar hadoop-examples-1.0.2.jar wordcount test_wc.txt test_wc-output
bin/hadoop fs -cat test_wc-output/part-r-00000
bin/hadoop fs -copyToLocal test_wc-output /home/hadoop/test_wc-output
</pre>
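For reference, the result that WordCount writes to part-r-00000 is a per-word count. Conceptually it is the same as this local shell pipeline (a plain illustration with sample input, not part of the Hadoop job):

```shell
# Count words locally, the way the WordCount job does on HDFS
printf 'hello world\nhello hadoop\n' \
  | tr -s ' ' '\n' \
  | sort \
  | uniq -c \
  | awk '{print $2"\t"$1}'
# Output (word, TAB, count), like part-r-00000:
# hadoop  1
# hello   2
# world   1
```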
Stop your single-node cluster
<pre lang="bash">
bin/stop-all.sh
</pre>
References:
Hadoop: The Definitive Guide
Running Hadoop On Ubuntu Linux (Single-Node Cluster)
TimesTen Replication
Replication is the process of maintaining copies of data in multiple databases. The purpose of replication is to make data highly available to applications with minimal performance impact.
Replication is the process of copying data from a master database to a subscriber database. Replication is controlled by replication agents for each database. The replication agent on the master database reads the records from the transaction log for the master database. It forwards changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber database then applies the updates to its database.
Requirements for replication compatibility
TimesTen replication is supported only between identical platforms and bit-levels. The databases must have DSNs with identical DatabaseCharacterSet and TypeMode database attributes.
Wednesday, September 19, 2012
Oracle In-Memory Database Cache Concepts
Oracle In-Memory Database (IMDB) Cache is an Oracle Database product option that includes the Oracle TimesTen In-Memory Database. It is used as a database cache at the application tier to cache Oracle data and reduce the workload on the Oracle database.
You can cache Oracle data in a TimesTen database by defining a cache grid and then creating cache groups. A cache group in a TimesTen database can cache a single Oracle table or a group of related Oracle tables.
Overview of a cache grid
A cache grid is a set of distributed TimesTen in-memory databases that work together to cache data from an Oracle database and guarantee cache coherence among the TimesTen databases.
Overview of cache groups
Cache groups define the Oracle data to be cached in a TimesTen database. A cache group can be defined to cache all or part of a single Oracle table, or a set of related Oracle tables.
Friday, December 16, 2011
Tips for Solaris 10
Enabling Root Login via SSH
After fresh installation of Solaris 10, root login via SSH is disabled by default. It can be enabled as follows:
1. Modify /etc/ssh/sshd_config, set "PermitRootLogin" to "yes".
2. Restart the SSH service:
# svcadm restart svc:/network/ssh:default
Changing Root's Default Shell to Bash
Modify /etc/passwd, change root user's default shell to /usr/bin/bash.
Installing Sudo
There is no sudo in Solaris 10 by default, so you need to build and install it from source yourself. First, install the following dependency packages (download them from http://sunfreeware.com):
gcc-3.4.6-sol10-x86-local.gz
libiconv-1.14-sol10-x86-local.gz
They can be installed using the "pkgadd" command.
Second, download sudo's source distribution from http://www.sudo.ws, then run "configure", "make", and "make install".
Friday, December 2, 2011
Git Tagging
There are two basic tag types, usually called lightweight and annotated. An annotated tag is more substantial and creates an object.
Listing Tags
git tag
Creating Tags
You create an annotated, unsigned tag with a message on a commit using the "git tag" command:
git tag -a r20111202 -m"revision 20111202"
Sharing Tags
By default, the "git push" command will not transfer tags to remote servers. To do so, you have to explicitly add a "--tags" to the "git push" command:
git push --tags
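As a quick end-to-end illustration, the tag commands can be tried in a throwaway local repository (the path, identity, and tag name below are just examples):

```shell
set -e
repo=$(mktemp -d)                 # throwaway repository
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > file.txt
git add file.txt
git commit -qm "initial commit"
git tag -a r20111202 -m "revision 20111202"   # annotated tag on HEAD
git tag                           # lists the tag: r20111202
```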
References:
http://learn.github.com/p/tagging.html
Monday, November 28, 2011
A Patch to Keep the Resizing of tmpfs Surviving Through Reboot
After a fresh installation of RHEL 6.1, I needed to adjust the size of tmpfs. However, I found it is a little more complicated than just adding a "size" option for tmpfs in /etc/fstab: that setting does not survive a system reboot. I then found the following patch against /etc/rc.d/rc.sysinit which makes it persistent:
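For reference, the /etc/fstab entry being discussed looks like this (the 2G size is an example value):

```
tmpfs   /dev/shm   tmpfs   defaults,size=2G   0 0
```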
[root@bbrh6u1src ~]# diff -u /etc/rc.d/rc.sysinit.orig /etc/rc.d/rc.sysinit
--- /etc/rc.d/rc.sysinit.orig 2011-04-19 10:04:51.000000000 -0400
+++ /etc/rc.d/rc.sysinit 2011-11-28 08:33:53.166819280 -0500
@@ -487,7 +487,7 @@
mount -f /proc >/dev/null 2>&1
mount -f /sys >/dev/null 2>&1
mount -f /dev/pts >/dev/null 2>&1
- mount -f /dev/shm >/dev/null 2>&1
+ #mount -f /dev/shm >/dev/null 2>&1
mount -f /proc/bus/usb >/dev/null 2>&1
fi
@@ -495,7 +495,7 @@
# mounted). Contrary to standard usage,
# filesystems are NOT unmounted in single user mode.
if [ "$READONLY" != "yes" ] ; then
- action $"Mounting local filesystems: " mount -a -t nonfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
+ action $"Mounting local filesystems: " mount -a -t tmpfs,nonfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
else
action $"Mounting local filesystems: " mount -a -n -t nonfs,nfs4,smbfs,ncpfs,cifs,gfs,gfs2 -O no_netdev
fi
References:
https://www.redhat.com/archives/rhelv6-list/2011-February/msg00081.html