Detailed steps for installing Oracle 12c RAC GI + UDEV + ASM on CentOS 6

2024-01-23 18:10

This article walks through the detailed steps for installing Oracle 12c RAC GI + UDEV + ASM on CentOS 6. Hopefully it is a useful reference for anyone tackling the same setup.

Configuring a network yum repository on CentOS:

http://blog.csdn.net/kadwf123/article/details/78231694

Installing the VNC client and server on CentOS:

http://blog.csdn.net/kadwf123/article/details/78232672

Configuring a DNS server on CentOS:

http://blog.csdn.net/kadwf123/article/details/78232853

Bonding two public NICs into bond0 on CentOS:

http://blog.csdn.net/kadwf123/article/details/78234727

Detailed preparation steps before installing GI on CentOS:

http://blog.csdn.net/kadwf123/article/details/78235488

Detailed installation of the dependency packages required by Oracle 12c on CentOS:

http://blog.csdn.net/kadwf123/article/details/78238022

Checks to run on node 1 after its preparation for Oracle 12c on CentOS is complete:

http://blog.csdn.net/kadwf123/article/details/78241445

Partitioning the shared disks after all four CentOS nodes are connected to the shared storage:

http://blog.csdn.net/kadwf123/article/details/78244863

CentOS 6.4: /etc/resolv.conf is reverted automatically after restarting the network:

http://blog.csdn.net/kadwf123/article/details/78786947

 

Operating system: CentOS 6.4

Network planning before installation:

 

1. Log in to the host over the public bond0 IP configured earlier, using SecureCRT.

2. Edit the eth2 configuration file.

 

[root@rac1 network-scripts]# vi ifcfg-eth2

DEVICE=eth2
HWADDR=08:00:27:18:29:48
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
IPADDR=10.0.10.1
NETMASK=255.255.255.0
GATEWAY=192.168.0.1

 

3. Edit the eth3 configuration file.

[root@rac1 network-scripts]# vi ifcfg-eth3

DEVICE=eth3
HWADDR=08:00:27:59:1e:79
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.10.2
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
IPV6INIT=no
USERCTL=no


4. BOOTPROTO=none means the IP address is set manually; ONBOOT=yes means the NIC is brought up automatically at boot.

Make sure the IPADDR, GATEWAY and NETMASK options are set correctly.

 

5. Once the configuration is done, restart the network service.

 
[root@rac1 network-scripts]# service network restart
Shutting down interface bond0:  [  OK  ]
Shutting down interface eth2:  [  OK  ]
Shutting down interface eth3:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface bond0:  [  OK  ]
Bringing up interface eth2:  [  OK  ]
Bringing up interface eth3:  [  OK  ]


6. Run ifconfig and check that the IP addresses of eth2 and eth3 are correct.

 
[root@rac1 network-scripts]# ifconfig
bond0 Link encap:Ethernet HWaddr 08:00:27:FC:7E:5B
inet addr:192.168.0.51 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fefc:7e5b/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:5185 errors:0 dropped:0 overruns:0 frame:0
TX packets:2571 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:463220 (452.3 KiB) TX bytes:319388 (311.9 KiB)

eth0 Link encap:Ethernet HWaddr 08:00:27:FC:7E:5B
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:4262 errors:0 dropped:0 overruns:0 frame:0
TX packets:1912 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:377145 (368.3 KiB) TX bytes:247430 (241.6 KiB)

eth1 Link encap:Ethernet HWaddr 08:00:27:FC:7E:5B
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:923 errors:0 dropped:0 overruns:0 frame:0
TX packets:659 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:86075 (84.0 KiB) TX bytes:71958 (70.2 KiB)

eth2 Link encap:Ethernet HWaddr 08:00:27:18:29:48
inet addr:10.0.10.1 Bcast:10.0.10.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe18:2948/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1036 errors:0 dropped:0 overruns:0 frame:0
TX packets:123 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:92021 (89.8 KiB) TX bytes:13614 (13.2 KiB)

eth3 Link encap:Ethernet HWaddr 08:00:27:59:1E:79
inet addr:10.0.10.2 Bcast:10.0.10.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe59:1e79/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:937 errors:0 dropped:0 overruns:0 frame:0
TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:87752 (85.6 KiB) TX bytes:3208 (3.1 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:8 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:728 (728.0 b) TX bytes:728 (728.0 b)

With the private IPs configured as above, both private interfaces sit in the same subnet, so the cluster verification reports an error.

The detailed error is as follows:

 
Node Connectivity - This is a prerequisite condition to test whether connectivity exists amongst all the nodes. The connectivity is being tested for the subnets "10.0.10.0,10.0.10.0,192.168.0.0"

Check Failed on Nodes: [rac2, rac1, rac4, rac3]

Verification result of failed node: rac2 - Details:

PRVG-11073 : Subnet on interface "eth2" of node "rac1" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac2" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac3" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac4" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. - Cause: Cause Of Problem Not Available - Action: User Action Not Available

Back to Top

Verification result of failed node: rac1 - Details:

PRVG-11073 : Subnet on interface "eth2" of node "rac1" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac2" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac3" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac4" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. - Cause: Cause Of Problem Not Available - Action: User Action Not Available

Back to Top

Verification result of failed node: rac4 - Details:

PRVG-11073 : Subnet on interface "eth2" of node "rac1" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac2" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac3" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac4" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. - Cause: Cause Of Problem Not Available - Action: User Action Not Available

Back to Top

Verification result of failed node: rac3 - Details:

PRVG-11073 : Subnet on interface "eth2" of node "rac1" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac2" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac3" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. PRVG-11073 : Subnet on interface "eth2" of node "rac4" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"]. - Cause: Cause Of Problem Not Available - Action: User Action Not Available

Back to Top

If you hit the error above, you can simply ignore it. If you would rather not ignore it, put the two private NICs into different subnets, for example eth2 as 10.0.10.1 and eth3 as 10.0.11.2; a sketch of that variant follows below.
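As a minimal sketch of that alternative (my own illustration, not part of the original post), ifcfg-eth3 on rac1 would move to the 10.0.11.0/24 subnet while everything else stays the same:

# /etc/sysconfig/network-scripts/ifcfg-eth3 -- hypothetical non-overlapping variant
DEVICE=eth3
HWADDR=08:00:27:59:1e:79
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.11.2
NETMASK=255.255.255.0
IPV6INIT=no
USERCTL=no

On rac2 the same file would then use an address such as 10.0.11.4, and the subnets reported by the cluster verification would no longer overlap.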

 

7. Ping the private addresses locally.

 
[root@rac1 network-scripts]# ping 10.0.10.1
PING 10.0.10.1 (10.0.10.1) 56(84) bytes of data.
64 bytes from 10.0.10.1: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 10.0.10.1: icmp_seq=2 ttl=64 time=0.030 ms
^C
--- 10.0.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1893ms
rtt min/avg/max/mdev = 0.024/0.027/0.030/0.003 ms
[root@rac1 network-scripts]# ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=64 time=0.038 ms
^C
--- 10.0.10.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 0.023/0.031/0.038/0.006 ms
[root@rac1 network-scripts]#

 

8. OK, the private network configuration is fine.

 

9. Confirm that this node's hostname is rac1.

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=rac1

 

NOZEROCONF=yes

 

Command:

hostname rac1

This changes the hostname immediately.

 

10. Configure node rac1 to resolve names through the DNS server.

vi /etc/resolv.conf

options attempts:2

options timeout:1

search taryartar.com

nameserver 192.168.0.88

nameserver 192.168.0.1

nameserver 8.8.8.8

 

11. Verify that DNS resolution works.

Forward lookup:

host rac1

nslookup rac1

Reverse lookup:

nslookup 192.168.0.51

Both work fine.

 
[root@rac1 network-scripts]# nslookup rac1
Server: 192.168.0.88
Address: 192.168.0.88#53

Name: rac1.taryartar.com
Address: 192.168.0.51

[root@rac1 network-scripts]# nslookup 192.168.0.51
Server: 192.168.0.88
Address: 192.168.0.88#53

51.0.168.192.in-addr.arpa name = rac1.taryartar.com.

[root@rac1 network-scripts]# host rac1
rac1.taryartar.com has address 192.168.0.51
[root@rac1 network-scripts]#

 

12. Create the users and groups.

 
[root@rac1 network-scripts]# groupadd -g 1000 oinstall
[root@rac1 network-scripts]# groupadd -g 1031 dba
[root@rac1 network-scripts]# groupadd -g 1032 asmdba
[root@rac1 network-scripts]# useradd -u 1101 -g oinstall -G dba,asmdba oracle
[root@rac1 network-scripts]# useradd -u 1100 -g oinstall -G asmdba grid
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/grid_base
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/grid_home
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/db_base/db_home
[root@rac1 network-scripts]# chown -R grid:oinstall /taryartar/12c/grid_base
[root@rac1 network-scripts]# chown -R grid:oinstall /taryartar/12c/grid_home
[root@rac1 network-scripts]# chown -R oracle:oinstall /taryartar/12c/db_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/db_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/grid_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/grid_home
[root@rac1 network-scripts]#
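A quick sanity check (my own addition, not in the original) that the accounts, group memberships and directory ownership came out as intended:

id grid       # expect uid=1100(grid) gid=1000(oinstall) groups=oinstall,asmdba
id oracle     # expect uid=1101(oracle) gid=1000(oinstall) groups=oinstall,dba,asmdba
ls -ld /taryartar/12c/grid_base /taryartar/12c/grid_home /taryartar/12c/db_base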

 

13. System requirements check

uname -a

lsb_release

Each system must meet the following minimum memory requirements:

At least 4 GB of RAM for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC.

Swap space equivalent to a multiple of the available RAM, as indicated in the following table:

Table 2-1 Swap Space Required for 64-bit Linux and Linux on System z

Available RAM              Swap Space Required
Between 4 GB and 16 GB     Equal to RAM
More than 16 GB            16 GB of RAM
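Before moving on it is worth seeing where the current box stands against that table; a hedged check (my addition):

free -m                                    # compare the Mem and Swap totals
grep -E 'MemTotal|SwapTotal' /proc/meminfo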

 

14. If the swap space is insufficient, it can be extended as follows.

[root@rac1 selinux]# dd if=/dev/zero of=/root/swapfile1 bs=1M count=1024
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.34808 s, 389 MB/s
[root@rac1 selinux]# chmod 600 /root/swapfile1
[root@rac1 selinux]# mkswap /root/swapfile1
Setting up swapspace version 1, size = 511996 KiB
no label, UUID=a44009c0-9d5e-4ea5-b9db-f8ae81b16a0b
[root@rac1 selinux]# swapon /root/swapfile1

[root@rac1 selinux]# free -m
             total       used       free     shared    buffers     cached
Mem:          1500       1428         71          0         20       1238
-/+ buffers/cache:        169       1331
Swap:         4007          0       4007

To make this survive a reboot, add it to the following configuration file:

vi /etc/fstab

Append this line:

/root/swapfile1 swap swap defaults 0 0
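To confirm (my own addition, not one of the original steps) that the swap file is active now and will be picked up again after a reboot:

swapon -s                    # /root/swapfile1 should be listed as active swap
grep swapfile1 /etc/fstab    # the line added above should be present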


15. Note that the cluster installation can be disturbed by the firewall and SELinux. It is best to disable both; otherwise you may run into puzzling problems that are painful to troubleshoot.

Stop the firewall and disable it at boot:

[root@rac1 yum.repos.d]# service iptables stop
iptables: Flushing firewall rules:  [  OK  ]
iptables: Setting chains to policy ACCEPT: filter  [  OK  ]
iptables: Unloading modules:  [  OK  ]
[root@rac1 yum.repos.d]# chkconfig iptables off


16. Disable SELinux.

[root@rac1 yum.repos.d]# getenforce
Enforcing
[root@rac1 yum.repos.d]# setenforce 0
[root@rac1 yum.repos.d]# getenforce
Permissive

vi /etc/selinux/config

Change the value of SELINUX to permissive:

SELINUX=permissive
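If you prefer not to edit the file by hand, a hedged equivalent (my addition) is:

sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
grep ^SELINUX= /etc/selinux/config    # confirm the change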

17. Disable the IPv6 firewall (ip6tables).

[root@rac1 selinux]# service ip6tables stop
ip6tables: Flushing firewall rules:  [  OK  ]
ip6tables: Setting chains to policy ACCEPT: filter  [  OK  ]
ip6tables: Unloading modules:  [  OK  ]
[root@rac1 selinux]# chkconfig ip6tables off
[root@rac1 selinux]#

 

18. Adjust the SSH daemon configuration.

vi /etc/ssh/sshd_config

Add:

LoginGraceTime 0

This keeps remote connections to the node from being dropped during login (no login grace timeout).
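For the new setting to be picked up, sshd has to re-read its configuration; a hedged sketch of that step (my addition):

grep -i logingracetime /etc/ssh/sshd_config    # confirm the new line is in place
service sshd restart                           # existing sessions stay open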


19. Install the required RPM packages.

For Red Hat 6 (and thus CentOS 6), the official documentation specifies the following packages:

Packages for Oracle Linux 6 and Red Hat Enterprise Linux 6

The following packages (or later versions) must be installed:

binutils-2.20.51.0.2-5.11.el6 (x86_64)

compat-libcap1-1.10-1 (x86_64)

compat-libstdc++-33-3.2.3-69.el6 (x86_64)

compat-libstdc++-33-3.2.3-69.el6.i686

gcc-4.4.4-13.el6 (x86_64)

gcc-c++-4.4.4-13.el6 (x86_64)

glibc-2.12-1.7.el6 (i686)

glibc-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6 (x86_64)

glibc-devel-2.12-1.7.el6.i686

ksh

libgcc-4.4.4-13.el6 (i686)

libgcc-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6 (x86_64)

libstdc++-4.4.4-13.el6.i686

libstdc++-devel-4.4.4-13.el6 (x86_64)

libstdc++-devel-4.4.4-13.el6.i686

libaio-0.3.107-10.el6 (x86_64)

libaio-0.3.107-10.el6.i686

libaio-devel-0.3.107-10.el6 (x86_64)

libaio-devel-0.3.107-10.el6.i686

libXext-1.1 (x86_64)

libXext-1.1 (i686)

libXtst-1.0.99.2 (x86_64)

libXtst-1.0.99.2 (i686)

libX11-1.3 (x86_64)

libX11-1.3 (i686)

libXau-1.0.5 (x86_64)

libXau-1.0.5 (i686)

libxcb-1.5 (x86_64)

libxcb-1.5 (i686)

libXi-1.3 (x86_64)

libXi-1.3 (i686)

make-3.81-19.el6

sysstat-9.0.4-11.el6 (x86_64)

nfs-utils-1.2.3-15.0.1


Set up the yum repository properly and install them one by one; the relevant links are:

Configuring the yum repository:

http://blog.csdn.net/kadwf123/article/details/78231694

Installing the dependency packages:

http://blog.csdn.net/kadwf123/article/details/78238022
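As a hedged shortcut (my own sketch, not the linked procedure), most of the packages in the list above can be pulled in with a single yum transaction; the 32-bit variants need the explicit .i686 suffix:

yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
    gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh \
    libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
    libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
    libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
    make sysstat nfs-utils

Afterwards, rpm -q <package> can be used to check individual packages against the list.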

 

20. Set up the environment variables.

Set the environment variables for the grid user:

su - grid
echo $SHELL
vi /home/grid/.bash_profile

Append the following:

export ORACLE_SID=+ASM1
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022

 

 

[grid@rac1 ~]$ vi .bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

#add by wufan 2017/10/15
export ORACLE_SID=+ASM1
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022

Make the environment variables take effect immediately:

source .bash_profile

 

 

21. Set the oracle user's environment variables.

su - oracle
echo $SHELL
vi /home/oracle/.bash_profile

Append the following:

export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar1
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall
umask 022

 

 
[oracle@rac1 ~]$ vi .bash_profile

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

#add by wufan 2017/10/15
export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar1
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall
umask 022

Make the environment variables take effect immediately:

source .bash_profile

 

22. Adjust the kernel parameters.

su - root
vi /etc/sysctl.conf

First comment out the existing kernel.shmmax and kernel.shmall entries, then append the block below (a hedged sketch for sizing shmmax from physical RAM follows the block):

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
# 50-60% of physical RAM
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
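The shmmax value above was sized from this VM's physical RAM (the comment calls for 50-60%); a hedged one-liner (my addition) to compute 60% of MemTotal in bytes on another box:

awk '/MemTotal/ {printf "kernel.shmmax = %d\n", $2 * 1024 * 0.6}' /proc/meminfo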

[root@rac1 ssh]# vi /etc/sysctl.conf

# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
#kernel.shmall = 4294967296

##add by wufan 2017/10/15

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
# 50-60% of physical RAM
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

Apply the kernel parameters immediately:

 
[root@rac1 ssh]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586

Check that the parameter took effect:

 
[root@rac1 ssh]# sysctl -a |grep shmmax
kernel.shmmax = 1046898278


23. Adjust the user resource limits.

[root@rac1 ssh]# vi /etc/security/limits.conf

Add the following:

 
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 65536
grid hard nofile 65536
grid soft stack 10240
grid hard stack 10240

nproc limits the maximum number of processes; nofile limits the maximum number of open file descriptors. A quick check is sketched below.
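A hedged way (my addition) to confirm that fresh sessions actually pick up the new limits:

su - grid -c 'ulimit -u -n -s'    # nproc, nofile and stack for grid
su - oracle -c 'ulimit -u -n'     # nproc and nofile for oracle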


24. At this point the pre-installation preparation of node 1 is essentially complete. After rebooting node 1, run through the checks once more:

http://blog.csdn.net/kadwf123/article/details/78241445

 

25. Export the first node's VM (note that the node must be shut down before the export).

Import the exported file to create the new node, start it, and log in with SecureCRT using node 1's IP for now (a hedged VBoxManage sketch of the export/import follows below).
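The original relies on screenshots for this step; as an assumption about how it can be done from the command line (a hedged sketch, not necessarily the author's exact method), the VM can be exported as an appliance and re-imported under a new name on the Windows host:

VBoxManage export rac1-12c -o E:\rac1-12c.ova
VBoxManage import E:\rac1-12c.ova --vsys 0 --vmname rac2-12c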

 

26. Then change the hostname and IP addresses.

 
[oracle@rac2 ~]$ vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=rac2

[root@rac2 ~]# hostname rac2
[root@rac2 ~]# hostname
rac2

The hostname is now rac2.

 
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
IPADDR=192.168.0.52
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
GATEWAY=192.168.0.1
IPV6INIT=no
TYPE=Ethernet
#DNS1=192.168.0.1

Change the public IPADDR to 192.168.0.52; everything else stays the same.

Edit the first private NIC, eth2:

 
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2

DEVICE=eth2
HWADDR=08:00:27:18:29:48
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
IPADDR=10.0.10.3
NETMASK=255.255.255.0
GATEWAY=192.168.0.1

Change IPADDR to 10.0.10.3; everything else stays the same.

Edit the second private NIC, eth3:

 
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
HWADDR=08:00:27:59:1e:79
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.10.4
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
IPV6INIT=no
USERCTL=no

Change IPADDR to 10.0.10.4; everything else stays the same.

27. Restart the network service.

[root@rac2 ~]# service network restart

After the restart, reconnect SecureCRT to the new node's IP, 192.168.0.52.

28. After logging in, check that DNS resolution works on rac2.

 
[root@rac2 ~]# nslookup rac3
Server: 192.168.0.88
Address: 192.168.0.88#53

Name: rac3.taryartar.com
Address: 192.168.0.53

[root@rac2 ~]# nslookup rac2.taryartar.com
Server: 192.168.0.88
Address: 192.168.0.88#53

Name: rac2.taryartar.com
Address: 192.168.0.52

[root@rac2 ~]# nslookup 192.168.0.54
Server: 192.168.0.88
Address: 192.168.0.88#53

54.0.168.192.in-addr.arpa name = rac4.taryartar.com.

[root@rac2 ~]#

OK, both forward and reverse resolution work fine.

29. Adjust the grid user's environment variables.

[root@rac2 ~]# su - grid
[grid@rac2 ~]$ vi .bash_profile

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH

#add by wufan 2017/1015
export ORACLE_SID=+ASM2
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac2.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022

Only the ORACLE_SID and ORACLE_HOSTNAME values need to change.

To make the changes take effect immediately, run:

source .bash_profile

Check that the changes took effect:

 
[grid@rac2 ~]$ env |grep -i oracle_sid
ORACLE_SID=+ASM2
[grid@rac2 ~]$ env|grep -i oracle_hostname
ORACLE_HOSTNAME=rac2.taryartar.com
[grid@rac2 ~]$


30. Adjust the oracle user's environment variables.

[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ vi .bash_profile

PATH=$PATH:$HOME/bin

export PATH

#add by wufan 2017/10/15
export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar2
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac2.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall

Only the ORACLE_SID and ORACLE_HOSTNAME values need to change.

To make the changes take effect immediately, run:

source .bash_profile

Check that the changes took effect:

 

 
[oracle@rac2 ~]$ env|grep -i oracle_sid
ORACLE_SID=tar2
[oracle@rac2 ~]$ env|grep -i oracle_hostname
ORACLE_HOSTNAME=rac2.taryartar.com
[oracle@rac2 ~]$

31. That completes the changes on the second node.

32. Build nodes 3 and 4 in the same way.

 

33. Next, configure the shared storage.

There are seven disks in total: two larger ones (planned as 10 GB, created as 12 GB below) plus five of 2 GB each.

34. Create the disks with the VirtualBox command line.

Create the first disk:

E:\Program Files\Oracle\VirtualBox\VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --size 12288 --format VDI --variant Fixed

If, like me, your VirtualBox installation directory contains a space (Program Files), you may run into a "command not found" problem. For convenience I simply added the VirtualBox installation directory to the PATH environment variable, so the commands can be run directly.

That worked (screenshot omitted). Now create the remaining six disks; the script is below.

 
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --size 12288 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --size 12288 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --size 2048 --format VDI --variant Fixed

Command notes:

VBoxManage is the command-line tool in the VirtualBox installation directory (the VBoxManage.exe executable).

--filename specifies the location and name of the disk file to create.

--size specifies the disk size in MB.

--format specifies the disk format.

 

35. After the disks are created, they need to be attached to the virtual machines.

Shut down the virtual machines before attaching the disks.

Attach the first disk to the VM rac1-12c with the following command:

VBoxManage storageattach rac1-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable

The VM name and the SATA port number used here are the ones shown in the VirtualBox GUI (screenshots omitted).

 
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable

After all seven disks have been attached to rac1-12c, they show up in the VirtualBox GUI (screenshot omitted).

 

 

36. Once attached, the seven disks must be switched to the shareable type.

 
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --type shareable

After that, attach all seven disks to the other three virtual machines as well.

 

37. The script is as follows:

 
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable

VBoxManage storageattach rac3-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable

VBoxManage storageattach rac4-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable

 

38. With the shared storage in place, start the four virtual machines and log in with SecureCRT.

 

Use fdisk -l | grep Di to look at the disks on each host.

 

 
[root@rac1 ~]# fdisk -l|grep Di
Disk /dev/sda: 32.2 GB, 32212254720 bytes
Disk identifier: 0x000ca0fb
Disk /dev/sdb: 12.9 GB, 12884901888 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 12.9 GB, 12884901888 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rac1-lv_root: 28.5 GB, 28529655808 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rac1-lv_swap: 3154 MB, 3154116608 bytes
Disk identifier: 0x00000000

If all four virtual machines show output like the above, everything is fine.

sda is each VM's local disk. sd[b-h] are the newly added disks: sd[b-c] are the two large shared disks and sd[d-h] are the five 2 GB shared disks.


39. Partition the seven shared disks, with a single partition per disk. Note that the partitioning only needs to be done on one node; once it is finished, the other nodes can see the new partitions.

The partitioning is done on node rac1, as described here (a hedged fdisk sketch follows the link):

http://blog.csdn.net/kadwf123/article/details/78244863
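For reference, a hedged sketch (my addition, not the linked walkthrough) of creating one primary partition spanning each shared disk non-interactively; double-check the device names before running anything like it:

for d in /dev/sd{b,c,d,e,f,g,h}; do
    # n = new partition, p = primary, 1 = partition number,
    # two blank answers accept the default first/last cylinder, w = write
    printf 'n\np\n1\n\n\nw\n' | fdisk "$d"
done
partprobe    # ask the kernel to re-read the partition tables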

After the partitioning completes, run fdisk -l on every node; each of sd[b-h] should now have one partition. I partitioned on node 1; below is what node 2 sees:

 
[root@rac2 ~]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ca0fb

Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 3917 30944256 8e Linux LVM

Disk /dev/sdb: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x06b87c6d

Device Boot Start End Blocks Id System
/dev/sdb1 1 1566 12578863+ 83 Linux

Disk /dev/sdc: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x15863b60

Device Boot Start End Blocks Id System
/dev/sdc1 1 1566 12578863+ 83 Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x12264bdf

Device Boot Start End Blocks Id System
/dev/sdd1 1 261 2096451 83 Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb60ea544

Device Boot Start End Blocks Id System
/dev/sde1 1 261 2096451 83 Linux

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe73a738f

Device Boot Start End Blocks Id System
/dev/sdf1 1 261 2096451 83 Linux

Disk /dev/sdg: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x41119c7e

Device Boot Start End Blocks Id System
/dev/sdg1 1 261 2096451 83 Linux

Disk /dev/sdh: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xeac8d462

Device Boot Start End Blocks Id System
/dev/sdh1 1 261 2096451 83 Linux

Disk /dev/mapper/vg_rac1-lv_root: 28.5 GB, 28529655808 bytes
255 heads, 63 sectors/track, 3468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_rac1-lv_swap: 3154 MB, 3154116608 bytes
255 heads, 63 sectors/track, 383 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@rac2 ~]#

Output like this means the shared disks and their partitions are fine.

 

40. Configure the raw devices.

 
[root@rac1 ~]# fdisk -l|grep Linux
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3917 30944256 8e Linux LVM
/dev/sdb1 1 1566 12578863+ 83 Linux
/dev/sdc1 1 1566 12578863+ 83 Linux
/dev/sdd1 1 261 2096451 83 Linux
/dev/sde1 1 261 2096451 83 Linux
/dev/sdf1 1 261 2096451 83 Linux
/dev/sdg1 1 261 2096451 83 Linux
/dev/sdh1 1 261 2096451 83 Linux
[root@rac1 ~]#

[root@rac2 ~]# fdisk -l|grep Linux
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3917 30944256 8e Linux LVM
/dev/sdb1 1 1566 12578863+ 83 Linux
/dev/sdc1 1 1566 12578863+ 83 Linux
/dev/sdd1 1 261 2096451 83 Linux
/dev/sde1 1 261 2096451 83 Linux
/dev/sdf1 1 261 2096451 83 Linux
/dev/sdg1 1 261 2096451 83 Linux
/dev/sdh1 1 261 2096451 83 Linux
[root@rac2 ~]#

[root@rac3 ~]# fdisk -l|grep Linux
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3917 30944256 8e Linux LVM
/dev/sdb1 1 1566 12578863+ 83 Linux
/dev/sdc1 1 1566 12578863+ 83 Linux
/dev/sdd1 1 261 2096451 83 Linux
/dev/sde1 1 261 2096451 83 Linux
/dev/sdf1 1 261 2096451 83 Linux
/dev/sdg1 1 261 2096451 83 Linux
/dev/sdh1 1 261 2096451 83 Linux
[root@rac3 ~]#

[root@rac4 ~]# fdisk -l|grep Linux
/dev/sda1 * 1 64 512000 83 Linux
/dev/sda2 64 3917 30944256 8e Linux LVM
/dev/sdb1 1 1566 12578863+ 83 Linux
/dev/sdc1 1 1566 12578863+ 83 Linux
/dev/sdd1 1 261 2096451 83 Linux
/dev/sde1 1 261 2096451 83 Linux
/dev/sdf1 1 261 2096451 83 Linux
/dev/sdg1 1 261 2096451 83 Linux
/dev/sdh1 1 261 2096451 83 Linux
[root@rac4 ~]#

Those are the partition names.

On every node, change into the following directory:

cd /etc/udev/rules.d

Then create a rules file; the suffix must be .rules:

vi 99-ASM.rules

Put the following content in it:

ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
KERNEL=="raw[1-7]*", OWNER="grid", GROUP="asmdba", MODE="775"

 

 

41. Now start udev. The command below has to be run on all four nodes, preferably one node at a time.

This is what starting it looks like on the four nodes:

 
[root@rac1 rules.d]# start_udev
Starting udev:  [  OK  ]
[root@rac1 rules.d]#

[root@rac2 rules.d]# start_udev
Starting udev:  [  OK  ]
[root@rac2 rules.d]#

[root@rac3 rules.d]# start_udev
Starting udev:  [  OK  ]
[root@rac3 rules.d]#

[root@rac4 rules.d]# start_udev
Starting udev:  [  OK  ]
[root@rac4 rules.d]#

Once udev has started successfully everywhere, check the raw devices on the four nodes (a hedged check is sketched below; the original screenshots are omitted).
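A hedged way (my addition) to list the raw-device bindings and check the ownership applied by the rules file:

raw -qa          # should show /dev/raw/raw1 .. /dev/raw/raw7 bound to the sd[b-h]1 partitions
ls -l /dev/raw/  # owner grid, group asmdba, mode 775 as set in 99-ASM.rules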



In my case only node 1 could see raw[1-7]; the other three nodes only showed rawctl.

Check the log on any one of the other three nodes:

view /var/log/messages

Look for the udev entries written around the time start_udev was run.

The following error appears:

 
Oct 15 21:46:36 rac2 kdump: No crashkernel parameter specified for running kernel
Oct 15 21:46:36 rac2 acpid: starting up
Oct 15 21:46:36 rac2 acpid: 1 rule loaded
Oct 15 21:46:36 rac2 acpid: waiting for events: event logging is off
Oct 15 21:46:37 rac2 acpid: client connected from 1669[68:68]
Oct 15 21:46:37 rac2 acpid: 1 client rule loaded
Oct 15 21:46:39 rac2 automount[1690]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
Oct 15 21:46:39 rac2 mcelog: mcelog read: No such device
Oct 15 21:46:39 rac2 abrtd: Init complete, entering main loop
Oct 15 22:37:15 rac2 kernel: udev: starting version 147
Oct 15 22:37:16 rac2 udevd-work[2074]: error changing netif name 'eth1' to 'eth0': Device or resource busy




In this situation, simply reboot the other three nodes.

After the reboot, nodes 2-4 show the raw devices as expected (screenshots omitted).

OK, the raw device configuration is complete.

That concludes this walkthrough of installing Oracle 12c RAC GI + UDEV + ASM on CentOS 6; hopefully it is helpful.


