(Doc ID 1367153.1)
Issue #1: The node rebooted, but the log files do not show any error or cause.
Issue #2: The node rebooted because it was evicted due to missing network heartbeats.
Issue #3: The node rebooted after a problem with storage.
The ocssd.log file shows that the node rebooted because it could not access a majority of the voting disks.
Solution: Fix the problem with the voting disks. Make sure that all voting disks are online and accessible by the oracle or grid user (or whichever user owns the CRS/GI home). If the voting disks are not in ASM, use "dd if=<voting_disk_path> of=/dev/null bs=1024 count=10240" to test accessibility.
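As a minimal sketch of the dd accessibility test above: the script below reads the first 10 MB of a given voting disk path and reports whether the read succeeded. The path is a placeholder (on a real cluster you would list the actual paths with `crsctl query css votedisk` and run the script as the GI home owner); the default of /dev/null is only so the sketch runs standalone.

```shell
#!/bin/sh
# Voting-disk read test, per the dd command in the note above.
# Pass the voting disk path as the first argument (placeholder default
# /dev/null is an assumption so the sketch is runnable standalone).
VOTEDISK="${1:-/dev/null}"

# Read ~10 MB from the device; any I/O or permission error makes dd fail.
if dd if="$VOTEDISK" of=/dev/null bs=1024 count=10240 2>/dev/null; then
    echo "readable: $VOTEDISK"
else
    echo "NOT readable: $VOTEDISK"
fi
```

Run it once per voting disk; a "NOT readable" result points to the storage path or permissions (ownership by the oracle/grid user) as the cause of the eviction.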
Issue #4: The node rebooted after an ASM or database instance hang or eviction.
The ocssd.log on the surviving node shows a member kill request that was escalated to a node kill request.
Cause: Starting with 11.1, if a database or ASM instance cannot be evicted at the database level, CRS gets involved and tries to kill the problem instance; this is a member kill request. If CRS cannot kill the problem instance, the member kill request is escalated to a node kill request and CRS reboots the node.
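To confirm this scenario on the surviving node, you can search its ocssd.log for member kill activity. The log path and the exact message wording vary by version and Grid Infrastructure home, so both are assumptions here; the sketch takes the log file as an argument and does a case-insensitive search.

```shell
#!/bin/sh
# Sketch: look for member kill / escalation messages in a surviving
# node's ocssd.log. The default path is an assumption; adjust it to
# your Grid Infrastructure home and hostname.
CSSD_LOG="${1:-/u01/app/grid/log/$(hostname)/cssd/ocssd.log}"

# Exact message text differs across versions, so match broadly.
grep -i "member kill" "$CSSD_LOG"
```

Matches around the reboot timestamp indicate that the node reboot was the escalation of a failed instance-level eviction, not an independent clusterware failure.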