All posts by mrculp

VMware Disk Issue

The issue we are trying to fix is that the VM was not sharing its disk devices; a few configuration parameters need to be set.


The fix:

– Make sure the VM is down
– Go to Edit Settings -> Options -> Advanced / General -> Press the “Configuration Parameters” button.
– Search the list for the name: “ctkEnabled” and change the value to “false”.
– Search the list for the names: “scsi0:0.ctkEnabled”, “scsi0:1.ctkEnabled”, “scsi1:0.ctkEnabled”, etc., and set these values to “false”. Do this for every disk attached to this VM!
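The same parameter changes can also be made by editing the VM's .vmx file directly from the ESXi shell (with the VM powered off). This is a minimal sketch; the function name and the example path are illustrative assumptions:

```shell
# Flip every ctkEnabled parameter (global and per-disk) in a .vmx to FALSE.
disable_cbt() {
  vmx="$1"
  # Matches both the global ctkEnabled and the scsiX:Y.ctkEnabled lines
  sed -i 's/^\(.*ctkEnabled[^=]*\)= *"[Tt][Rr][Uu][Ee]"/\1= "FALSE"/' "$vmx"
}

# Example (hypothetical path):
# disable_cbt /vmfs/volumes/datastore1/VMName/VMName.vmx
```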

The last step is to remove the reference to the change tracking file from the VMDK descriptors. This should be done for each and every disk.

– Enable SSH on your ESXi host and log in through SSH
– Go to the directory of the VM that holds the VMDK file. If your VM has multiple VMDKs, maybe spread over multiple datastores, you’ll have to repeat this for each VMDK.
– List all VMDK files:  ls -l *.vmdk

– Check which VMDK file still has a reference to the change tracking files (CBT): grep changeTrackPath VMName.vmdk

– You should see something like this: changeTrackPath=”SPVSQ001-ctk.vmdk”

– If the reference is still present, edit the VMDK file using the vi editor and place a # at the start of the changeTrackPath line. (Go to the line, press i for insert, type #, then press <ESC>:wq to save the VMDK.)

– Check the other VMDKs as well but leave the “-flat” and “-ctk” vmdk files alone.

– Now try to Power On the VM.
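The descriptor cleanup above can be sketched as a loop that comments out the changeTrackPath line in every descriptor VMDK in a directory while leaving the -flat and -ctk files alone. The function name and the example path are assumptions:

```shell
# Comment out changeTrackPath in each descriptor VMDK under a directory.
clean_ctk_refs() {
  dir="$1"
  for vmdk in "$dir"/*.vmdk; do
    case "$vmdk" in
      *-flat.vmdk|*-ctk.vmdk) continue ;;  # extent/CBT files: leave alone
    esac
    if grep -q '^changeTrackPath' "$vmdk"; then
      sed -i 's/^changeTrackPath/#changeTrackPath/' "$vmdk"
      echo "patched: $vmdk"
    fi
  done
}

# Example (hypothetical path):
# clean_ctk_refs /vmfs/volumes/datastore1/VMName
```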


.kshrc and .bashrc for persistent settings

To make set -o vi persist across sessions, it should be set in your .kshrc or .bashrc.

# .kshrc

# Source global definitions
if [ -f /etc/kshrc ]; then
    . /etc/kshrc
fi

# use emacs editing mode by default
# set -o emacs
set -o vi

# User specific aliases and functions

For the bash folks, here is the .bashrc file:

# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

set -o vi
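To confirm the setting took effect in a new shell (bash shown; ksh behaves the same way with set -o):

```shell
# Expect a line like "vi    on" in the option listing
bash -c 'set -o vi; set -o | grep -w vi'
```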

ASM Setup

BEFORE running the installer, you need to use oracleasm utilities to create the disks.

You may need to run this as root but I’m not 100% sure.

Probably, root will need to create the directory /dev/oracleasm and give ownership to grid in group oraasm or something like that. It is VERY important that the disks are owned by the Grid user and group you designate to run ASM. You CANNOT do this as a side-step while OUI is running. You must close it, create the disks, and then restart OUI.

When you get to the ASM setup, make sure the disk search string is a pattern that matches where the disks are.  ASM will only be able to use disks whose device nodes are owned by the correct user.  ASM disks MUST be raw (character, not block devices) and the file permissions must be 0600 (meaning read/write by owner and NOBODY else can read or write, with no set or sticky bits).
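A sketch of the disk creation with the oracleasm utilities, run as root before launching OUI; the device name and the DATA1 label are assumptions:

```shell
# One-time setup: sets the owner (grid) and group (oraasm) that
# /dev/oracleasm devices will be created with
oracleasm configure -i
oracleasm init

# Mark each LUN/partition for ASM
oracleasm createdisk DATA1 /dev/sdb1

# Verify: disks should be owned by grid:oraasm
oracleasm listdisks
ls -l /dev/oracleasm/disks/
```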

RAC Networking Considerations

Networking Considerations

For the private network, 10 Gigabit Ethernet is highly recommended; the minimum requirement is 1 Gigabit Ethernet.

Underscores must not be used in a host or domain name, per RFC 952 (DoD Internet host table specification). The same applies to Net, Host, Gateway, and Domain names.

The VIPs and SCAN VIPs must be on the same subnet as the public interface. For additional information see the Understanding SCAN VIP white paper.

The default gateway must be on the same subnet as the VIPs (including SCAN VIPs) to prevent VIP start/stop/failover issues. With 11gR2 this is detected and reported by the OUI; if the check is ignored, the VIPs will fail to start and the installation itself will fail.

It is recommended that the SCAN name (11gR2 and above) resolve via DNS to a minimum of 3 IP addresses round-robin regardless of the size of the cluster. For additional information see the Understanding SCAN VIP white paper.
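A quick way to check SCAN resolution from any node; the SCAN name here is a hypothetical example:

```shell
# Query the SCAN name a couple of times: you should see (at least) 3
# A records, and the order should rotate between queries if DNS
# round-robin is working.
nslookup cluster-scan.example.com
nslookup cluster-scan.example.com
```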

To avoid name resolution issues, ensure that the HOSTS files and DNS both contain the VIP and Public host names. The SCAN must NOT be in the HOSTS file, because the HOSTS file can only represent a 1:1 host-to-IP mapping.
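For illustration, a HOSTS file carrying both the public and VIP names (but not the SCAN) might look like this; all names and addresses are hypothetical:

```shell
# /etc/hosts -- public and VIP names only; the SCAN resolves via DNS  node1.example.com       node1  node2.example.com       node2  node1-vip.example.com   node1-vip  node2-vip.example.com   node2-vip
```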

The network interfaces must have the same name on all nodes (e.g., eth1 -> eth1 in support of the VIP and eth2 -> eth2 in support of the private interconnect).
Network Interface Card (NIC) names must not contain a dot (".").

Jumbo Frames for the private interconnect is a recommended best practice for enhanced performance of cache fusion operations. Reference: Document 341788.1
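As a sketch (the interface name and peer address are assumptions), jumbo frames can be enabled and verified end to end on Linux like this:

```shell
# Raise the MTU on the private interconnect NIC
ip link set dev eth2 mtu 9000
ip link show eth2 | grep -o 'mtu [0-9]*'

# End-to-end check: 8972 = 9000 minus 28 bytes of IP+ICMP headers;
# -M do forbids fragmentation, so success proves the whole path passes 9000
ping -M do -s 8972 -c 3
```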

Use non-routable network addresses for the private interconnect; Class A: to, Class B: to, Class C: to Refer to RFC 1918 and Document 338924.1 for additional information.
Make sure network interfaces are configured correctly in terms of speed, duplex, etc. Various tools exist to monitor and test network: ethtool, iperf, netperf, spray and tcp. See Document 563566.1.
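For example, the negotiated speed/duplex can be checked with ethtool and raw throughput measured with iperf; the interface and host names are assumptions:

```shell
# Negotiated link settings on the interconnect NIC
ethtool eth2 | grep -E 'Speed|Duplex'

# Throughput between the two private interfaces:
# on node2:  iperf3 -s
# on node1:
iperf3 -c node2-priv -t 10
```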

To avoid the public network or the private interconnect network from being a single point of failure, Oracle highly recommends configuring a redundant set of public network interface cards (NICs) and private interconnect NICs on each cluster node. See Document 787420.1. Recent releases of Oracle Grid Infrastructure can provide redundancy and load balancing for the private interconnect (NOT the public network) natively; this is the preferred method of NIC redundancy for full stacks (the Database release must support it as well). More information can be found in Document 1210883.1.

NOTE: If using the Redundant Interconnect/HAIP feature – At present it is REQUIRED that all interconnect interfaces be placed on separate subnets. If the interfaces are all on the same subnet and the cable is pulled from the first NIC in the routing table a rebootless-restart or node reboot will occur. See Document 1481481.1 for a technical description of this requirement.

For more predictable hardware discovery, place HBA and NIC cards in the same corresponding slot on each server in the Grid.

The use of a switch (or redundant switches) is required for the private network (crossover cables are NOT supported).

Dedicated redundant switches are highly recommended for the private interconnect, because deploying the private interconnect on a shared switch (even when using a VLAN) may expose the interconnect links to congestion and instability in the larger IP network topology. If deploying the interconnect on a VLAN, there should be a 1:1 mapping of VLAN to non-routable subnet, and the interconnect should not span multiple VLANs (tagged) or multiple switches. Deployment concerns in this environment include spanning-tree loops when the larger IP network topology changes, asymmetric routing that may cause packet flooding, and a lack of fine-grained monitoring of the VLAN/port. Reference Bug 9761210.

If deploying the cluster interconnect on a VLAN, review the considerations in the Oracle RAC and Clusterware Interconnect Virtual Local Area Networks (VLANs) white paper.

Consider using InfiniBand on the interconnect for workloads that have high volume requirements. InfiniBand can also improve performance by lowering latency. When InfiniBand is in place, the RDS protocol can be used to further reduce latency. See Document 751343.1 for additional details.

Initially, IPv6 was supported for the Public Network only, and IPv4 had to be used for the Private Network; later releases fully support IPv6 for both the public and private interfaces. Please see the Oracle Database IPv6 State of Direction white paper for details.

For affected Grid Infrastructure versions, multicast traffic must be allowed on the private network. Patch 9974223 (included in later GI PSUs) enables multicasting on an additional multicast address on the private network; multicast must be allowed on the private network for at least one of these addresses (assuming the patch has been applied). Additional information, as well as a program to test multicast functionality, is provided in Document 1212703.1.