This article walks through the detailed steps for installing Oracle 12c RAC Grid Infrastructure (GI) with UDEV and ASM on CentOS 6. Hopefully it serves as a useful reference.
Configuring a network yum repository on CentOS:
http://blog.csdn.net/kadwf123/article/details/78231694
Installing the VNC client and server on CentOS:
http://blog.csdn.net/kadwf123/article/details/78232672
Configuring a DNS server on CentOS:
http://blog.csdn.net/kadwf123/article/details/78232853
Bonding two public NICs into bond0 on CentOS:
http://blog.csdn.net/kadwf123/article/details/78234727
Detailed pre-GI-installation preparation steps on CentOS:
http://blog.csdn.net/kadwf123/article/details/78235488
Installing the dependency packages required by Oracle 12c on CentOS:
http://blog.csdn.net/kadwf123/article/details/78238022
Post-preparation checks on node 1 for Oracle 12c on CentOS:
http://blog.csdn.net/kadwf123/article/details/78241445
Partitioning the shared disks once all four nodes see them:
http://blog.csdn.net/kadwf123/article/details/78244863
/etc/resolv.conf being reset after a network restart on CentOS 6.4:
http://blog.csdn.net/kadwf123/article/details/78786947
Operating system: CentOS 6.4.
Network planning before the installation:
1. Log in to the host over CRT using the public bond0 IP configured earlier.
2. Edit the configuration file for eth2:
[root@rac1 network-scripts]# vi ifcfg-eth2
DEVICE=eth2
HWADDR=08:00:27:18:29:48
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
IPADDR=10.0.10.1
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
3. Edit the configuration file for eth3:
[root@rac1 network-scripts]# vi ifcfg-eth3
DEVICE=eth3
HWADDR=08:00:27:59:1e:79
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.10.2
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
IPV6INIT=no
USERCTL=no
4. BOOTPROTO=none means the IP address is set manually; ONBOOT=yes brings the interface up at boot.
Make sure the IPADDR, GATEWAY and NETMASK entries are all set.
5. After editing, restart the network service.
[root@rac1 network-scripts]# service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down interface eth2:                              [  OK  ]
Shutting down interface eth3:                              [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:                               [  OK  ]
Bringing up interface eth2:                                [  OK  ]
Bringing up interface eth3:                                [  OK  ]
6. Run ifconfig and check that eth2 and eth3 carry the correct IP addresses:
[root@rac1 network-scripts]# ifconfig
bond0     Link encap:Ethernet  HWaddr 08:00:27:FC:7E:5B
          inet addr:192.168.0.51  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fefc:7e5b/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:5185 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2571 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:463220 (452.3 KiB)  TX bytes:319388 (311.9 KiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:FC:7E:5B
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:4262 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1912 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:377145 (368.3 KiB)  TX bytes:247430 (241.6 KiB)

eth1      Link encap:Ethernet  HWaddr 08:00:27:FC:7E:5B
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:923 errors:0 dropped:0 overruns:0 frame:0
          TX packets:659 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:86075 (84.0 KiB)  TX bytes:71958 (70.2 KiB)

eth2      Link encap:Ethernet  HWaddr 08:00:27:18:29:48
          inet addr:10.0.10.1  Bcast:10.0.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe18:2948/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1036 errors:0 dropped:0 overruns:0 frame:0
          TX packets:123 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:92021 (89.8 KiB)  TX bytes:13614 (13.2 KiB)

eth3      Link encap:Ethernet  HWaddr 08:00:27:59:1E:79
          inet addr:10.0.10.2  Bcast:10.0.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe59:1e79/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:937 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:87752 (85.6 KiB)  TX bytes:3208 (3.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:728 (728.0 b)  TX bytes:728 (728.0 b)
With the private IPs configured as above, both private interfaces sit in the same subnet, so the cluster verification will report an error.
The details look like this:
Node Connectivity - This is a prerequisite condition to test whether connectivity exists amongst all the nodes. The connectivity is being tested for the subnets "10.0.10.0,10.0.10.0,192.168.0.0"
Check Failed on Nodes: [rac2, rac1, rac4, rac3]
Verification result of failed node: rac2  Details:
PRVG-11073 : Subnet on interface "eth2" of node "rac1" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"].
PRVG-11073 : Subnet on interface "eth2" of node "rac2" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"].
PRVG-11073 : Subnet on interface "eth2" of node "rac3" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"].
PRVG-11073 : Subnet on interface "eth2" of node "rac4" is overlapping with the subnet on interface "eth3". IP address range ["10.0.10.0"-"10.0.10.255"] is overlapping with IP address range ["10.0.10.0"-"10.0.10.255"].
- Cause: Cause Of Problem Not Available
- Action: User Action Not Available
The verification results for the other failed nodes (rac1, rac3, rac4) contain exactly the same four PRVG-11073 messages.
If you hit this error you can simply ignore it. If you would rather fix it, put the two private NICs in different subnets, for example eth2 at 10.0.10.1 and eth3 at 10.0.11.2.
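For example, following the eth3 = 10.0.11.2 suggestion above, only the address-related lines in the eth3 configuration need to change (a minimal sketch; the rest of the file stays exactly as shown earlier):
[root@rac1 network-scripts]# vi ifcfg-eth3
# change only these two lines, then restart the network service
IPADDR=10.0.11.2
NETMASK=255.255.255.0
[root@rac1 network-scripts]# service network restart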
7. Ping the private addresses locally:
[root@rac1 network-scripts]# ping 10.0.10.1
PING 10.0.10.1 (10.0.10.1) 56(84) bytes of data.
64 bytes from 10.0.10.1: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 10.0.10.1: icmp_seq=2 ttl=64 time=0.030 ms
^C
--- 10.0.10.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1893ms
rtt min/avg/max/mdev = 0.024/0.027/0.030/0.003 ms
[root@rac1 network-scripts]# ping 10.0.10.2
PING 10.0.10.2 (10.0.10.2) 56(84) bytes of data.
64 bytes from 10.0.10.2: icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from 10.0.10.2: icmp_seq=2 ttl=64 time=0.032 ms
64 bytes from 10.0.10.2: icmp_seq=3 ttl=64 time=0.038 ms
^C
--- 10.0.10.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 0.023/0.031/0.038/0.006 ms
[root@rac1 network-scripts]#
8. OK, the private network configuration is fine.
9. Confirm that this node's hostname is rac1:
vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1
NOZEROCONF=yes
Run:
hostname rac1
so that the new hostname takes effect immediately.
10. Configure node rac1 to resolve names through the DNS server:
vi /etc/resolv.conf
options attempts:2
options timeout:1
search taryartar.com
nameserver 192.168.0.88
nameserver 192.168.0.1
nameserver 8.8.8.8
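On CentOS 6, /etc/resolv.conf can be rewritten every time the network service restarts (see the resolv.conf link near the top of this article). Two commonly used workarounds, offered here as assumptions on my part rather than the author's own fix, are shown below:
# stop the interface scripts from touching resolv.conf (per-interface setting)
echo "PEERDNS=no" >> /etc/sysconfig/network-scripts/ifcfg-bond0
# optional, more drastic: make the file immutable until you want to change it again
chattr +i /etc/resolv.conf
# undo later with: chattr -i /etc/resolv.conf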
11. Verify that DNS resolution works.
Forward lookup:
host rac1
nslookup rac1
Reverse lookup:
nslookup 192.168.0.51
Both should succeed.
[root@rac1 network-scripts]# nslookup rac1
Server:         192.168.0.88
Address:        192.168.0.88#53
Name:   rac1.taryartar.com
Address: 192.168.0.51
[root@rac1 network-scripts]# nslookup 192.168.0.51
Server:         192.168.0.88
Address:        192.168.0.88#53
51.0.168.192.in-addr.arpa       name = rac1.taryartar.com.
[root@rac1 network-scripts]# host rac1
rac1.taryartar.com has address 192.168.0.51
[root@rac1 network-scripts]#
12. Create the users, groups and directories:
[root@rac1 network-scripts]# groupadd -g 1000 oinstall
[root@rac1 network-scripts]# groupadd -g 1031 dba
[root@rac1 network-scripts]# groupadd -g 1032 asmdba
[root@rac1 network-scripts]# useradd -u 1101 -g oinstall -G dba,asmdba oracle
[root@rac1 network-scripts]# useradd -u 1100 -g oinstall -G asmdba grid
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/grid_base
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/grid_home
[root@rac1 network-scripts]# mkdir -p /taryartar/12c/db_base/db_home
[root@rac1 network-scripts]# chown -R grid:oinstall /taryartar/12c/grid_base
[root@rac1 network-scripts]# chown -R grid:oinstall /taryartar/12c/grid_home
[root@rac1 network-scripts]# chown -R oracle:oinstall /taryartar/12c/db_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/db_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/grid_base
[root@rac1 network-scripts]# chmod -R 775 /taryartar/12c/grid_home
[root@rac1 network-scripts]#
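A quick sanity check of the users, groups and directory ownership just created (expected values follow directly from the commands above):
id grid                      # expect uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1032(asmdba)
id oracle                    # expect uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1031(dba),1032(asmdba)
ls -ld /taryartar/12c/*      # grid_base and grid_home owned by grid:oinstall, db_base by oracle:oinstall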
13. System configuration.
Check the kernel and distribution with:
uname -a
lsb_release -a
Each system must meet the following minimum memory requirements:
- At least 4 GB of RAM for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC.
- Swap space equivalent to the multiple of the available RAM, as indicated in the following table:
Table 2-1 Swap Space Required for 64-bit Linux and Linux on System z
Available RAM            | Swap Space Required
Between 4 GB and 16 GB   | Equal to RAM
More than 16 GB          | 16 GB of RAM
14. If the swap partition is too small, it can be extended as follows:
[root@rac1 selinux]# dd if=/dev/zero of=/root/swapfile1 bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 1.34808 s, 389 MB/s
[root@rac1 selinux]# chmod 600 /root/swapfile1
[root@rac1 selinux]# mkswap /root/swapfile1
Setting up swapspace version 1, size = 511996 KiB
no label, UUID=a44009c0-9d5e-4ea5-b9db-f8ae81b16a0b
[root@rac1 selinux]# swapon /root/swapfile1
[root@rac1 selinux]# free -m
             total       used       free     shared    buffers     cached
Mem:          1500       1428         71          0         20       1238
-/+ buffers/cache:        169       1331
Swap:         4007          0       4007
To make the swap file survive a reboot, add it to the following configuration file:
vi /etc/fstab
Append this line:
/root/swapfile1 swap swap defaults 0 0
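After adding the fstab entry, a quick check that the swap file is active and recorded for the next boot (a sketch; the sizes shown will be whatever your system reports):
swapon -s                    # /root/swapfile1 should be listed with type "file"
grep swapfile1 /etc/fstab    # confirm the fstab entry was saved
free -m                      # total swap should include the new file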
15. Note that the cluster installation can be disrupted by the firewall and SELinux. It is best to turn both off, otherwise you may run into puzzling problems that are painful to track down.
Stop the firewall and disable it at boot:
[root@rac1 yum.repos.d]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@rac1 yum.repos.d]# chkconfig iptables off
16. Disable SELinux:
[root@rac1 yum.repos.d]# getenforce
Enforcing
[root@rac1 yum.repos.d]# setenforce 0
[root@rac1 yum.repos.d]# getenforce
Permissive
vi /etc/selinux/config
Set SELINUX to permissive:
SELINUX=permissive
17. Disable the IPv6 firewall (ip6tables):
[root@rac1 selinux]# service ip6tables stop
ip6tables: Flushing firewall rules:                        [  OK  ]
ip6tables: Setting chains to ACCEPT policy: filter         [  OK  ]
ip6tables: Unloading modules:                              [  OK  ]
[root@rac1 selinux]# chkconfig ip6tables off
[root@rac1 selinux]#
18. Edit the SSH daemon configuration:
vi /etc/ssh/sshd_config
Add:
LoginGraceTime 0
so that long-running remote sessions are not disconnected during the installation.
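The change only takes effect after sshd is restarted; on CentOS 6 that is:
service sshd restart
grep -i LoginGraceTime /etc/ssh/sshd_config    # confirm the new setting is in place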
19. Install the required RPM packages.
For Red Hat Enterprise Linux 6 (and CentOS 6), Oracle's documentation lists the following packages (or later versions) as required:
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
libXext-1.1 (x86_64)
libXext-1.1 (i686)
libXtst-1.0.99.2 (x86_64)
libXtst-1.0.99.2 (i686)
libX11-1.3 (x86_64)
libX11-1.3 (i686)
libXau-1.0.5 (x86_64)
libXau-1.0.5 (i686)
libxcb-1.5 (x86_64)
libxcb-1.5 (i686)
libXi-1.3 (x86_64)
libXi-1.3 (i686)
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)
nfs-utils-1.2.3-15.0.1
Set up the yum repository properly and install them one by one; the relevant links are:
Configuring the yum repository:
http://blog.csdn.net/kadwf123/article/details/78231694
Installing the dependency packages:
http://blog.csdn.net/kadwf123/article/details/78238022
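If the yum repository is already configured, the whole list can also be pulled in with a single command (a sketch; package names as they appear in the CentOS 6 repositories, with the 32-bit variants added explicitly where Oracle asks for them):
yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
  gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh \
  libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
  libaio libaio.i686 libaio-devel libaio-devel.i686 \
  libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
  libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
  make sysstat nfs-utils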
20. Set the environment variables.
For the grid user:
su - grid
echo $SHELL
vi /home/grid/.bash_profile
Add the following:
export ORACLE_SID=+ASM1
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022
[grid@rac1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
#add by wufan 2017/10/15
export ORACLE_SID=+ASM1
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022
Apply the changes immediately:
source .bash_profile
21. Set the oracle user's environment variables:
su - oracle
echo $SHELL
vi /home/oracle/.bash_profile
Add the following:
export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar1
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall
umask 022
[oracle@rac1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
#add by wufan 2017/10/15
export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar1
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac1.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall
umask 022
Apply the changes immediately:
source .bash_profile
22. Modify the kernel parameters.
su - root
vi /etc/sysctl.conf
First comment out the existing kernel.shmmax and kernel.shmall entries,
then append the following block:
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
# 50-60% of physical RAM
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
[root@rac1 ssh]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
#kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
#kernel.shmall = 4294967296
##add by wufan 2017/10/15
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
# 50-60% of physical RAM
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Apply the kernel parameters immediately:
[root@rac1 ssh]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key
kernel.msgmnb = 65536
kernel.msgmax = 65536
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1046898278
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
Check that the new values are in effect:
[root@rac1 ssh]# sysctl -a |grep shmmax
kernel.shmmax = 1046898278
23. Modify the user resource limits:
[root@rac1 ssh]# vi /etc/security/limits.conf
Add the following entries:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 65536
grid hard nofile 65536
grid soft stack 10240
grid hard stack 10240
nproc limits the maximum number of processes; nofile limits the maximum number of open file descriptors.
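The new limits apply to fresh login sessions; a quick way to confirm them for both users (assuming PAM applies /etc/security/limits.conf to su/login sessions, as it does by default on CentOS 6):
su - grid   -c "ulimit -u -n -s"   # max user processes, open files, stack size
su - oracle -c "ulimit -u -n"      # should reflect the oracle entries above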
24. The pre-installation preparation on node 1 is now essentially complete. Reboot node 1 and run through the checks once more:
http://blog.csdn.net/kadwf123/article/details/78241445
25. Export the first node (note: the node must be shut down first).
Import the exported file, start the new node, and log in with CRT through node 1's IP.
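The export/import can also be done from the VBoxManage command line on the Windows host; a sketch of one way to do it (the .ova file name and the new VM name rac2-12c are assumptions, not taken from the original):
rem export the stopped VM rac1-12c, then import it as a new VM
VBoxManage export rac1-12c -o E:\实验环境\12CRAC\rac1-12c.ova
VBoxManage import E:\实验环境\12CRAC\rac1-12c.ova --vsys 0 --vmname rac2-12c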
26. Then change the hostname and the IP addresses:
[oracle@rac2 ~]$ vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2
[root@rac2 ~]# hostname rac2
[root@rac2 ~]# hostname
rac2
The hostname is now rac2. Next, edit the public interface bond0:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.0.52
NETMASK=255.255.255.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
GATEWAY=192.168.0.1
IPV6INIT=no
TYPE=Ethernet
#DNS1=192.168.0.1
Change the public IPADDR to 192.168.0.52; everything else stays the same.
Edit the first private NIC, eth2:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=08:00:27:18:29:48
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
IPADDR=10.0.10.3
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
Change IPADDR to 10.0.10.3; everything else stays the same.
Edit the second private NIC, eth3:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=08:00:27:59:1e:79
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.10.4
NETMASK=255.255.255.0
GATEWAY=192.168.0.1
IPV6INIT=no
USERCTL=no
Change IPADDR to 10.0.10.4; everything else stays the same.
27. Restart the network service:
[root@rac2 ~]# service network restart
After the restart, reconnect CRT to the new node's IP, 192.168.0.52.
28. After logging in, check that DNS works on rac2:
[root@rac2 ~]# nslookup rac3
Server:         192.168.0.88
Address:        192.168.0.88#53
Name:   rac3.taryartar.com
Address: 192.168.0.53
[root@rac2 ~]# nslookup rac2.taryartar.com
Server:         192.168.0.88
Address:        192.168.0.88#53
Name:   rac2.taryartar.com
Address: 192.168.0.52
[root@rac2 ~]# nslookup 192.168.0.54
Server:         192.168.0.88
Address:        192.168.0.88#53
54.0.168.192.in-addr.arpa       name = rac4.taryartar.com.
[root@rac2 ~]#
OK, forward and reverse lookups both work.
29. Update the grid user's environment variables:
[root@rac2 ~]# su - grid
[grid@rac2 ~]$ vi .bash_profile
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
#add by wufan 2017/1015
export ORACLE_SID=+ASM2
export ORACLE_BASE=/taryartar/12c/grid_base
export ORACLE_HOME=/taryartar/12c/grid_home
export GRID_HOME=$ORACLE_HOME
export ORACLE_TERM=xterm
export TMP=/tmp
export TMPDIR=$TMP
#PATH=$PATH:$HOME/bin
export PATH=/usr/sbin:$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export ORACLE_HOSTNAME=rac2.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK_GRP=oinstall
umask 022
Only the ORACLE_SID and ORACLE_HOSTNAME values need to change.
Apply the changes immediately:
source .bash_profile
Check that they took effect:
[grid@rac2 ~]$ env |grep -i oracle_sid
ORACLE_SID=+ASM2
[grid@rac2 ~]$ env|grep -i oracle_hostname
ORACLE_HOSTNAME=rac2.taryartar.com
[grid@rac2 ~]$
30. Update the oracle user's environment variables:
[root@rac2 ~]# su - oracle
[oracle@rac2 ~]$ vi .bash_profile
PATH=$PATH:$HOME/bin
export PATH
#add by wufan 2017/10/15
export ORACLE_BASE=/taryartar/12c/db_base
export ORACLE_HOME=$ORACLE_BASE/db_home
export ORACLE_SID=tar2
export ORACLE_OWNER=oracle
export ORACLE_TERM=vt100
export PATH=$PATH:$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/Apache/Apache/bin:$PATH
export BASE_PATH=/usr/sbin:$PATH:$BASE_PATH
export PATH=$BASE_PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
CLASSPATH=$CLASSPATH:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_HOSTNAME=rac2.taryartar.com
export DB_UNIQUE_NAME=tar
export CVUQDISK=oinstall
Again, only ORACLE_SID and ORACLE_HOSTNAME need to change.
Apply the changes immediately:
source .bash_profile
Check that they took effect:
[oracle@rac2 ~]$ env|grep -i oracle_sid
ORACLE_SID=tar2
[oracle@rac2 ~]$ env|grep -i oracle_hostname
ORACLE_HOSTNAME=rac2.taryartar.com
[oracle@rac2 ~]$
31. The second node is now done.
32. Build nodes 3 and 4 the same way.
33. Next, configure the shared storage.
There are seven disks in total: two of 12 GB and five of 2 GB each.
34. Create the disks with the VirtualBox VBoxManage command.
Create the first disk:
E:\Program Files\Oracle\VirtualBox\VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --size 12288 --format VDI --variant Fixed
If, like mine, your VirtualBox installation directory contains a space (Program Files), the command may not be found from an arbitrary working directory. For convenience I simply added the VirtualBox installation directory to the PATH environment variable, after which the command can be run directly.
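On Windows this can be done from a command prompt; a sketch (the install path is the one used in this article):
rem for the current cmd session only
set PATH=%PATH%;E:\Program Files\Oracle\VirtualBox
rem or persistently for the current user (new sessions pick it up)
setx PATH "%PATH%;E:\Program Files\Oracle\VirtualBox"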
That works. Now create the remaining six disks; the full script is below.
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --size 12288 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --size 12288 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --size 2048 --format VDI --variant Fixed
VBoxManage createhd --filename E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --size 2048 --format VDI --variant Fixed
Notes on the command:
VBoxManage is the VBoxManage.exe executable in the VirtualBox installation directory.
--filename specifies the location and name of the disk file to create.
--size specifies the disk size in MB.
--format specifies the disk image format.
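To confirm the disks were created, VirtualBox can list the registered disk images and show the details of a single one:
VBoxManage list hdds
VBoxManage showhdinfo E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi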
35. Once the disks are created, attach them to the virtual machines.
Shut the virtual machines down before attaching.
Attach the first disk to VM rac1-12c with:
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac1-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable
Once all seven disks are attached to rac1-12c, they show up under the VM's SATA controller in the VirtualBox GUI.
36. After attaching, mark all seven disks as shareable:
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --type shareable
VBoxManage modifyhd E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --type shareable
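A quick way to confirm the attachment and the shareable type before moving on (a sketch; showhdinfo reports the medium type, showvminfo lists what is attached to each SATA port):
VBoxManage showhdinfo E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi | findstr /i "Type"
VBoxManage showvminfo rac1-12c | findstr /i "SATA"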
Once that is done, attach all seven disks to the other three virtual machines as well.
37. The script is as follows:
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac2-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac3-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 4 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar1.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 5 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar2.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 6 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar3.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 7 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar4.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 8 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar5.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 9 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar6.vdi --mtype shareable
VBoxManage storageattach rac4-12c --storagectl "SATA" --port 10 --device 0 --type hdd --medium E:\实验环境\12CRAC\V_SHARES\taryartar7.vdi --mtype shareable
38. The shared storage is now attached. Start the four virtual machines and log in with CRT.
Use fdisk -l | grep Di to list the disks on each host:
[root@rac1 ~]# fdisk -l|grep Di
Disk /dev/sda: 32.2 GB, 32212254720 bytes
Disk identifier: 0x000ca0fb
Disk /dev/sdb: 12.9 GB, 12884901888 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 12.9 GB, 12884901888 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rac1-lv_root: 28.5 GB, 28529655808 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rac1-lv_swap: 3154 MB, 3154116608 bytes
Disk identifier: 0x00000000
If all four virtual machines show the same output, everything is in order.
sda is the VM's local disk. sd[b-h] are the newly added disks: sd[b-c] are the two 12 GB shared disks and sd[d-h] are the five 2 GB shared disks.
39. Partition the seven shared disks, one partition per disk. The partitioning only needs to be done on one node; once it is done the other nodes will see the new partitions.
Do the partitioning on node rac1, as described here:
http://blog.csdn.net/kadwf123/article/details/78244863
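For reference, the interactive fdisk session (n, p, 1, accept the default start/end cylinders, w) can also be scripted. A hedged sketch that creates one primary partition spanning each shared disk, to be run on rac1 only and only while the disks carry no data:
# run on rac1 only; assumes sdb..sdh are the empty shared disks
for d in sdb sdc sdd sde sdf sdg sdh; do
    echo -e "n\np\n1\n\n\nw" | fdisk /dev/$d
done
# have the other nodes re-read the partition tables (or simply reboot them)
# partprobe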
After partitioning, run fdisk -l on all nodes.
Each of sd[b-h] should now show one partition. I partitioned on node 1; below is what node 2 sees:
[root@rac2 ~]# fdisk -l
Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ca0fb
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        3917    30944256   8e  Linux LVM

Disk /dev/sdb: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x06b87c6d
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1566    12578863+  83  Linux

Disk /dev/sdc: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x15863b60
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1        1566    12578863+  83  Linux

Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x12264bdf
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1         261     2096451   83  Linux

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb60ea544
   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         261     2096451   83  Linux

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe73a738f
   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1         261     2096451   83  Linux

Disk /dev/sdg: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x41119c7e
   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1         261     2096451   83  Linux

Disk /dev/sdh: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xeac8d462
   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1         261     2096451   83  Linux

Disk /dev/mapper/vg_rac1-lv_root: 28.5 GB, 28529655808 bytes
255 heads, 63 sectors/track, 3468 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/vg_rac1-lv_swap: 3154 MB, 3154116608 bytes
255 heads, 63 sectors/track, 383 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

[root@rac2 ~]#
Output like the above means the shared disks and partitions are fine.
40. Configure the raw devices:
[root@rac1 ~]# fdisk -l|grep Linux
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        3917    30944256   8e  Linux LVM
/dev/sdb1               1        1566    12578863+  83  Linux
/dev/sdc1               1        1566    12578863+  83  Linux
/dev/sdd1               1         261     2096451   83  Linux
/dev/sde1               1         261     2096451   83  Linux
/dev/sdf1               1         261     2096451   83  Linux
/dev/sdg1               1         261     2096451   83  Linux
/dev/sdh1               1         261     2096451   83  Linux
[root@rac1 ~]#
[root@rac2 ~]# fdisk -l|grep Linux
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        3917    30944256   8e  Linux LVM
/dev/sdb1               1        1566    12578863+  83  Linux
/dev/sdc1               1        1566    12578863+  83  Linux
/dev/sdd1               1         261     2096451   83  Linux
/dev/sde1               1         261     2096451   83  Linux
/dev/sdf1               1         261     2096451   83  Linux
/dev/sdg1               1         261     2096451   83  Linux
/dev/sdh1               1         261     2096451   83  Linux
[root@rac2 ~]#
[root@rac3 ~]# fdisk -l|grep Linux
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        3917    30944256   8e  Linux LVM
/dev/sdb1               1        1566    12578863+  83  Linux
/dev/sdc1               1        1566    12578863+  83  Linux
/dev/sdd1               1         261     2096451   83  Linux
/dev/sde1               1         261     2096451   83  Linux
/dev/sdf1               1         261     2096451   83  Linux
/dev/sdg1               1         261     2096451   83  Linux
/dev/sdh1               1         261     2096451   83  Linux
[root@rac3 ~]#
[root@rac4 ~]# fdisk -l|grep Linux
/dev/sda1   *           1          64      512000   83  Linux
/dev/sda2              64        3917    30944256   8e  Linux LVM
/dev/sdb1               1        1566    12578863+  83  Linux
/dev/sdc1               1        1566    12578863+  83  Linux
/dev/sdd1               1         261     2096451   83  Linux
/dev/sde1               1         261     2096451   83  Linux
/dev/sdf1               1         261     2096451   83  Linux
/dev/sdg1               1         261     2096451   83  Linux
/dev/sdh1               1         261     2096451   83  Linux
[root@rac4 ~]#
This shows the partition names.
On every node, change to the udev rules directory:
cd /etc/udev/rules.d
Then create a rules file (the extension must be .rules):
vi 99-ASM.rules
Put the following content in it:
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
KERNEL=="raw[1-7]*", OWNER="grid", GROUP="asmdba", MODE="775"
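For reference, these rules bind each shared partition (sdb1..sdh1) to a raw device node /dev/raw/raw1 through /dev/raw/raw7 and give the raw nodes grid:asmdba ownership with mode 775, which is what ASM disk discovery will use later. Once udev has been started (next step), the bindings can be checked with:
raw -qa              # lists the raw device -> block device bindings
ls -l /dev/raw/      # raw1..raw7 should be owned by grid:asmdba with mode 775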
41. Now start udev. Run the command on all four nodes, preferably one node at a time.
Here is the output on the four nodes:
[root@rac1 rules.d]# start_udev
Starting udev:                                             [  OK  ]
[root@rac1 rules.d]#
[root@rac2 rules.d]# start_udev
Starting udev:                                             [  OK  ]
[root@rac2 rules.d]#
[root@rac3 rules.d]# start_udev
Starting udev:                                             [  OK  ]
[root@rac3 rules.d]#
[root@rac4 rules.d]# start_udev
Starting udev:                                             [  OK  ]
[root@rac4 rules.d]#
Once udev has started successfully everywhere, check the raw devices on all four nodes.
In my case only node 1 could see raw[1-7]; the other three nodes only had rawctl.
Check the log on any of the other three nodes:
view /var/log/messages
Search for "udev" around the time start_udev was run.
The following errors show up:
Oct 15 21:46:36 rac2 kdump: No crashkernel parameter specified for running kernel
Oct 15 21:46:36 rac2 acpid: starting up
Oct 15 21:46:36 rac2 acpid: 1 rule loaded
Oct 15 21:46:36 rac2 acpid: waiting for events: event logging is off
Oct 15 21:46:37 rac2 acpid: client connected from 1669[68:68]
Oct 15 21:46:37 rac2 acpid: 1 client rule loaded
Oct 15 21:46:39 rac2 automount[1690]: lookup_read_master: lookup(nisplus): couldn't locate nis+ table auto.master
Oct 15 21:46:39 rac2 mcelog: mcelog read: No such device
Oct 15 21:46:39 rac2 abrtd: Init complete, entering main loop
Oct 15 22:37:15 rac2 kernel: udev: starting version 147
Oct 15 22:37:16 rac2 udevd-work[2074]: error changing netif name 'eth1' to 'eth0': Device or resource busy
If this happens, simply reboot the other three nodes.
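As a side note (an assumption on my part, not part of the original walkthrough): the "error changing netif name" message is typical of cloned VMs whose /etc/udev/rules.d/70-persistent-net.rules still carries the source VM's MAC-to-interface mappings. If a reboot alone does not clear it, the stale rules file can be inspected and regenerated:
cat /etc/udev/rules.d/70-persistent-net.rules   # inspect the cached MAC/interface mappings
rm -f /etc/udev/rules.d/70-persistent-net.rules # udev rebuilds it from the current NICs on reboot
reboot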
After the reboot, nodes 2-4 show raw1 through raw7 as well.
OK, the raw device configuration is complete.
That wraps up this walkthrough of installing Oracle 12c RAC GI + UDEV + ASM on CentOS 6; I hope it proves helpful.