RedHat Linux, 64-bit operating system
RAC has a reputation for being hard to install largely because incomplete preparation leads to repeated errors and rework, which wastes a great deal of time.
1. Preparation
- Prepare three disks of the same size, 1 GB each. They serve both as OCR (which stores the RAC configuration) and as voting disks; OCR and the voting disks share the same disks. Three disks provide normal redundancy; five may be used for high redundancy.
- Confirm the system configuration:
a) Memory > 1.5 GB, free memory > 50 MB
b) SWAP > 3 GB
c) /tmp > 1 GB
- Check the kernel version with uname -r and make sure an asmlib matching that kernel exists at http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html ; otherwise the kernel must be upgraded. My kernel was initially 2.6.18-229, for which no asmlib build existed, so the module could never be loaded; upgrading the kernel to 2.6.18-238 solved the problem.
For the kernel upgrade procedure, see Appendix 1.
- Set up yum, because third-party packages such as unixODBC-devel must be installed, and without yum a lot of time is wasted. Note that yum does not work if the RedHat system is unregistered.
For the yum setup procedure, see Appendix 2.
With the above preparation complete, RAC configuration can begin.
2. Configuration (perform on both nodes)
2.1. Create the grid and oracle users and groups.
- grid installs the RAC base software, including Clusterware and ASM.
- oracle installs the database software.
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 oper
groupadd -g 504 asmadmin
groupadd -g 505 asmoper
groupadd -g 506 asmdba
mkdir -p /oracle/app/{grid,oracle,oraInventory}
useradd -u 501 -g oinstall -G dba,asmdba,oper -d /oracle/app/oracle oracle
useradd -u 502 -g oinstall -G asmadmin,asmdba,asmoper -m -d /oracle/app/grid grid
chmod -R 775 /oracle/app/
chown -R grid:oinstall /oracle/app/
chown -R grid:oinstall /oracle/app/oraInventory
chown -R grid:oinstall /oracle/app/grid
chown -R oracle:oinstall /oracle/app/oracle
2.2. Configure system parameters
1. Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
2. vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
3. vi /etc/pam.d/login
session required pam_limits.so
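The limits.conf entries above are easy to get wrong with a typo; a quick sanity check can be sketched as below. This is a minimal sketch: the entries are copied into a temp file so the check reads standalone, and on a real node you would point the awk command at /etc/security/limits.conf instead.

```shell
# Validate limits.conf-style entries: each line must be
# "<user> soft|hard nproc|nofile <number>".
limits=$(mktemp)
cat > "$limits" <<'EOF'
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
# count lines that fail the format check
bad=$(awk 'NF != 4 || ($2 != "soft" && $2 != "hard") || ($3 != "nproc" && $3 != "nofile") || $4 !~ /^[0-9]+$/' "$limits" | wc -l)
echo "malformed lines: $bad"
rm -f "$limits"
```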
4. vi /etc/sysctl.conf
kernel.shmmax = 536870912
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.aio-max-nr=1048576
fs.file-max = 6815744
net.ipv4.ip_local_port_range=9000 65500
net.core.rmem_default=262144
net.core.rmem_max=4194304
net.core.wmem_default=262144
net.core.wmem_max=1048576
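The memory-related values above follow the common sizing rule that kernel.shmmax is set to about half of physical RAM. A hedged sketch of that arithmetic (mem_kb is hard-coded for illustration; on a real host read it with awk '/MemTotal/ {print $2}' /proc/meminfo):

```shell
# Rule of thumb: kernel.shmmax = half of physical RAM, in bytes.
mem_kb=2097152                     # stand-in for a 2 GB machine
shmmax=$(( mem_kb * 1024 / 2 ))    # kB -> bytes, then halve
echo "suggested kernel.shmmax = $shmmax"
# After editing /etc/sysctl.conf, apply without a reboot: sysctl -p
```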
5. Clock synchronization
Grid has its own clock synchronization, so the existing NTP synchronization must be disabled.
Settings required for grid time synchronization (a new check item in 11gR2):
# Network Time Protocol Setting
/sbin/service ntpd stop
chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf.org
2.3. Configure the environment files for the grid and oracle accounts.
a. .bash_profile for the grid account
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_BASE=/oracle/app/grid; export ORACLE_BASE
ORACLE_HOME=/oracle/app/grid/product/11.2.0; export ORACLE_HOME
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
THREADS_FLAG=native; export THREADS_FLAG
PATH=$ORACLE_HOME/bin:$PATH; export PATH

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
b. .bash_profile for the oracle account
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_SID=racdb1; export ORACLE_SID
ORACLE_BASE=/oracle/app/oracle; export ORACLE_BASE
ORACLE_HOME=/oracle/app/oracle/product/11.2.0; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=AMERICAN_AMERICA.ZHS16GBK; export NLS_LANG
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
2.4. Configure the node names and /etc/hosts
This step is critical; many errors trace back to it.
1. Pay attention to HOSTNAME.
2. Remember to append the domain suffix.
vi /etc/sysconfig/network
HOSTNAME=rac2.domain.com
vi /etc/hosts
192.168.24.204 rac1.domain.com rac1
192.168.24.203 rac2.domain.com rac2
192.168.19.204 rac1priv.domain.com rac1priv
192.168.19.203 rac2priv.domain.com rac2priv
192.168.24.206 rac1vip.domain.com rac1vip
192.168.24.205 rac2vip.domain.com rac2vip
192.168.24.207 racscan.domain.com racscan
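Since so many problems trace back to this file, a format check is worth sketching. This is a minimal, standalone sketch: the entries are copied into a temp file, and each line must carry an IPv4 address, an FQDN, and a short alias, in that order.

```shell
# Validate /etc/hosts-style entries for the cluster.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
192.168.24.204 rac1.domain.com rac1
192.168.24.203 rac2.domain.com rac2
192.168.19.204 rac1priv.domain.com rac1priv
192.168.19.203 rac2priv.domain.com rac2priv
192.168.24.206 rac1vip.domain.com rac1vip
192.168.24.205 rac2vip.domain.com rac2vip
192.168.24.207 racscan.domain.com racscan
EOF
# count lines missing one of: IPv4 address, dotted FQDN, short alias
bad=$(awk 'NF != 3 || $1 !~ /^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$/ || $2 !~ /\./' "$hosts" | wc -l)
echo "malformed host entries: $bad"
rm -f "$hosts"
```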
2.5. Configure SSH equivalence between the two nodes for the grid and oracle accounts
This allows the installer to copy the grid and oracle directories to the other node during installation.
1). On the primary node RAC1, generate the public and private keys as the grid and oracle users:
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
2). Repeat the same steps on the secondary node RAC2 to ensure unobstructed communication:
# su - oracle
$ mkdir ~/.ssh
$ ssh-keygen -t rsa
$ ssh-keygen -t dsa
3). As the oracle user on the primary node RAC1, run:
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
4). Verify from the primary node RAC1:
$ ssh rac1 date
$ ssh rac2 date
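What the steps above build can be sketched with temp directories standing in for each node's ~/.ssh: the merged authorized_keys must end up holding every node's public key. The key material below is a placeholder, not a real key.

```shell
# Simulate the authorized_keys merge across two nodes.
n1=$(mktemp -d); n2=$(mktemp -d)
echo "ssh-rsa AAAAB3...node1 oracle@rac1" > "$n1/id_rsa.pub"   # placeholder key
echo "ssh-rsa AAAAB3...node2 oracle@rac2" > "$n2/id_rsa.pub"   # placeholder key
auth="$n1/authorized_keys"
cat "$n1/id_rsa.pub" "$n2/id_rsa.pub" >> "$auth"               # both nodes' keys
keys=$(wc -l < "$auth")
echo "authorized_keys holds $keys keys"
rm -rf "$n1" "$n2"
```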
2.6. ASM setup and ASM disk creation
- Download and install the following packages (on both nodes):
http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html
- oracleasm-support-2.1.7-1.el5.x86_64.rpm
- oracleasm-2.6.18-238.el5xen-2.0.5-1.el5.x86_64.rpm
- oracleasmlib-2.0.4-1.el5.x86_64.rpm
- ASMLib configuration (on both nodes):
[root@ora1 ~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.
The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Log location: /var/log/oracleasm
- Initialize the disks (on one node only):
fdisk /dev/sdd
Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 261 2096451 83 Linux
… …
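The interactive fdisk dialog above can be scripted for all three disks. The sketch below only prints the command for each disk as a dry run; on the real machine you would run the printed line itself, which pipes the keystrokes (n, p, 1, two defaults, w) into fdisk to create one primary partition spanning the disk.

```shell
# Dry run: generate the scripted-fdisk command for each candidate disk.
count=0
for disk in /dev/sdd /dev/sde /dev/sdh; do
  echo "printf 'n\np\n1\n\n\nw\n' | fdisk $disk"
  count=$((count + 1))
done
echo "disks to partition: $count"
```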
- Create the ASM disks (on one node only)
If a disk previously belonged to a disk group, run deletedisk on it first, then create.
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdd1
Marking disk "CRS1" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS2 /dev/sde1
Marking disk "CRS2" as an ASM disk: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm createdisk CRS3 /dev/sdh1
Marking disk "CRS3" as an ASM disk: [ OK ]
- Load the ASM disks (on both nodes):
[root@ora2 asm]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@ora2 asm]# /etc/init.d/oracleasm listdisks
CRS1
CRS2
CRS3
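A check that all three CRS disks are visible after scandisks can be sketched as below. The listdisks output is simulated with printf so the check runs standalone; on a real node replace it with the actual /etc/init.d/oracleasm listdisks call.

```shell
# Verify every expected ASM disk appears in the listdisks output.
listed=$(printf 'CRS1\nCRS2\nCRS3\n')   # stand-in for: /etc/init.d/oracleasm listdisks
missing=0
for d in CRS1 CRS2 CRS3; do
  echo "$listed" | grep -qx "$d" || missing=$((missing + 1))
done
echo "missing ASM disks: $missing"
```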
2.7. Pre-installation checks for RAC
Install the cvuqdisk package:
cd grid/rpm
rpm -ivh cvuqdisk-1.0.7-1.rpm
Run the pre-installation checks as the grid user:
export CVUQDISK_GRP=oinstall
export LANG=C
Verify the cluster installation requirements:
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Verify the hardware and operating system setup:
./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
At this step some packages such as unixODBC may still be missing; download and install them with yum.
3. Installation (on one node only)
Choose "Install and Configure Grid Infrastructure for a Cluster".
Choose "Advanced Installation".
3.1. Create the SCAN IP
Use the SCAN name configured in /etc/hosts.
3.2. Enter the node information
3.3. Create the ASM disk group
Normal redundancy: requires three candidate disks;
High redundancy: requires five candidate disks;
External redundancy: only one disk is needed if the storage array provides the redundancy.
The install/copy phase takes about 15 minutes.
The progress bar pauses at 65%; at that point the software is being copied to the other nodes.
3.4. Run the root scripts
orainstRoot.sh: run it on the nodes in the order from section 3.2 (rac1, then rac2); be careful never to run it in parallel.
root.sh: run it on the nodes in the same order (rac1, then rac2); again, do not run it in parallel.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
FATAL: Module oracleoks not found.
FATAL: Module oracleadvm not found.
FATAL: Module oracleacfs not found.
acfsroot: ACFS-9121: Failed to detect /dev/asm/.asm_ctl_spec.
acfsroot: ACFS-9310: ADVM/ACFS installation failed.
acfsroot: ACFS-9311: not all components were detected after the installation.
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group OCR created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 4fb3851c39cc4f3ebf4d12c1d2050474.
Successful addition of voting disk 9bf84fbf94894f88bf05fbd37bc45f04.
Successful addition of voting disk 505604630a784fe6bfafa7d81c2eadb5.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4fb3851c39cc4f3ebf4d12c1d2050474 (ORCL:CRS1) [OCR]
2. ONLINE 9bf84fbf94894f88bf05fbd37bc45f04 (ORCL:CRS2) [OCR]
3. ONLINE 505604630a784fe6bfafa7d81c2eadb5 (ORCL:CRS3) [OCR]
Located 3 voting disk(s).
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
rac1 2012/08/04 17:22:22 /oracle/app/grid/product/11.2.0/cdata/rac1/backup_20120804_172222.olr
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
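The ordering rule from section 3.4 can be sketched as a strictly serial loop: the loop body must finish on one node before the next begins, which is exactly the no-parallel-execution constraint. echo stands in for the real "ssh root@node script" invocation, so this is a dry run only.

```shell
# Run the root scripts on the nodes one at a time, in order.
order=""
for node in rac1 rac2; do
  echo "run root.sh on $node"   # real version: ssh root@$node <path-to-root.sh>
  order="$order $node"
done
echo "executed in order:$order"
```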
3.5. Oracle database software installation
As the oracle user, run runInstaller.
The Oracle installation is essentially the same as a single-instance install, so it is not repeated here.
Advanced installation
Enterprise Edition
The progress bar stalls at 85% for a long time.
3.6. Uninstalling Oracle & RAC
Stop all services first.
- Uninstall Oracle
Run as the oracle user:
[oracle@london1 deinstall]$ oracle/product/11.2.0/deinstall/deinstall
Enter the two nodes, RAC, ASM
Do you want to continue (y - yes, n - no)? [n]: y
- Uninstall grid
Run as the grid user:
[grid@london1 deinstall]$ grid/product/11.2.0/deinstall/deinstall
- Then on both nodes, run as root:
/tmp/deinstall2012-08-04_02-28-58-下午/perl/bin/perl -I/tmp/deinstall2012-08-04_02-28-58-下午/perl/lib -I/tmp/deinstall2012-08-04_02-28-58-下午/crs/install /tmp/deinstall2012-08-04_02-28-58-下午/crs/install/rootcrs.pl -force -delete -paramfile /tmp/deinstall2012-08-04_02-28-58-下午/response/deinstall_Ora11g_gridinfrahome1.rsp
CRS management
Starting and stopping cluster nodes:
[root@london1]# crsctl start cluster -all
[root@london1]# crsctl stop cluster -all
Alternatively, you can use the -n switch to start Grid Infrastructure on a specific (not local) node.
To check the current status of all nodes in the cluster:
[root@london1]# crsctl check cluster -all
crsctl start crs
Starting services
[node1:grid]$ srvctl enable oc4j
[node1:grid]$ srvctl start oc4j
[node1:grid]$ srvctl enable nodeapps
[node1:grid]$ srvctl start nodeapps
Status queries and management
srvctl enable servname
srvctl start servname
crs_stat -t
crsctl status resource -t
olsnodes -l
Check the voting disks:
crsctl query css votedisk
Disable CRS autostart: crsctl disable crs
Start CRS manually: crsctl start crs
Check the OLR: ocrcheck -local
Check the OCR: ocrcheck
Note that the ORACLE_SID differs between rac1 and rac2; the instance names differ: +ASM1/+ASM2, racdb11/racdb12.
sqlplus / as sysdba
sqlplus / as sysasm
Querying v$datafile as sysasm under the grid account raises a NOMOUNT error, because that information must be queried as sysdba under the oracle account.
Modifying a system parameter:
alter system set log_archive_dest_2='location=/data/oradata/jssdbn1'
Modifying a system parameter for one instance only:
alter system set log_archive_dest_2='location=/data/oradata/jssdbn1' sid='jssdbn1';
1. List all databases:
[grid@rac02 ~]$ srvctl config database
2. View the configuration of a database:
[grid@rac02 ~]$ srvctl config database -d racdb -a
3. View all Oracle instances (database status):
[grid@rac02 ~]$ srvctl status database -d racdb
4. Check the status of a single instance:
[grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
5. TNS listener status and configuration:
[grid@rac02 ~]$ srvctl status listener
6. SCAN status and configuration:
[grid@rac02 ~]$ srvctl status scan
7. Start/stop all instances with SRVCTL:
[oracle@rac01 ~]$ srvctl stop database -d racdb
[oracle@rac01 ~]$ srvctl start database -d racdb
8. All running instances in the cluster (SQL, as sysasm):
SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
database_status db_status, active_state state, host_name host FROM gv$instance ORDER BY inst_id;
9. All database files and the ASM disk groups they reside in (SQL, as sysdba):
v$datafile, v$logfile, v$tempfile, v$controlfile
10. Check the cluster status:
[grid@rac02 ~]$ crsctl check cluster
11. Node application status:
[grid@rac02 ~]$ srvctl status nodeapps
12. VIP status and configuration for a node:
[grid@rac02 ~]$ srvctl status vip -n rac01
13. Node application configuration (VIP, GSD, ONS, listener):
[grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
14. Verify clock synchronization across all cluster nodes:
[grid@rac02 ~]$ cluvfy comp clocksync -verbose
15. Stop the Oracle Clusterware stack on the local server:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster
To force the stop, add -f.
Stop Clusterware on all servers:
[root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
16. ASM status and configuration:
[grid@rac02 ~]$ srvctl status asm
ASM is running on rac01,rac02
[grid@rac02 ~]$ srvctl config asm -a
SQL> create tablespace datacfg datafile size 2g extent management local segment space management auto;
By default the datafile is created in the location specified by DB_CREATE_FILE_DEST.
SQL> create tablespace users datafile '+DATA' size 1g extent management local segment space management auto;
SQL> alter database datafile '+DATA/PROD/DATAFILE/users.259.679156903' resize 10G;
Dropping a tablespace:
SQL> drop tablespace dataflow including contents and datafiles cascade constraints;
ASM management
Create disk groups: asmca
Manage disks: asmcmd
Related URLs
1. Cluster management: http://candon123.blog.51cto.com/704299/336023
2. ASM management (asmcmd offers shell-like operations): http://space.itpub.net/25574072/viewspace-712245
alter diskgroup dg2 drop disk disk13;
http://blog.csdn.net/wyzxg/article/details/4902439
3. Fragmentation: http://blog.csdn.net/hijk139/article/details/7224768
4. device-mapper multipath management: http://www.cyberciti.biz/tips/rhel-linux4-setup-device-mapper-multipathing-devicemapper.html
FAQ:
Problem: root.sh fails, and
#/oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl -deconfig cannot clean it up.
Solution:
Force the cleanup:
#/oracle/app/grid/product/11.2.0/crs/install/rootcrs.pl -delete -force -verbose
Reference URL:
Problem: perl-DBD installation fails with
Unable to locate an oracle.mk, proc.mk or other suitable *.mk
Solution:
# perl Makefile.PL -l
# make
# make install
Problem: PL/SQL Developer, TOAD and other clients cannot connect through the SCAN IP.
Solution: Connect using the fully qualified domain name (FQDN) of the SCAN, or its IP; short names do not work.
rac1:
SQL> show parameter local_listener
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                 DRESS=(PROTOCOL=TCP)(HOST=rac1
                                                 -vip)(PORT=1521))))
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.111)(PORT=1521))))' scope=both sid='orcl1';
SQL> alter system register;
rac2:
SQL> show parameter local_listener
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string      (DESCRIPTION=(ADDRESS_LIST=(AD
                                                 DRESS=(PROTOCOL=TCP)(HOST=rac2
                                                 -vip)(PORT=1521))))
SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.2.112)(PORT=1521))))' scope=both sid='orcl2';
SQL> alter system register;
Appendix 1: RedHat Linux kernel upgrade
Kernel source download locations:
Other Linux: http://www.kernel.org/pub/linux/kernel/v2.6
RedHat: ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/kernel-2.6.18-238.el5.src.rpm
# rpm -ivh kernel-2.6.9-22.EL.src.rpm
The source is unpacked into /usr/src/redhat/SOURCES, and a kernel-2.6.spec file is created in /usr/src/redhat/SPECS.
# cd /usr/src/redhat/SPECS/
# vi kernel-2.6.spec
%define buildup 1
%define buildsmp 1
%define buildsource 1
%define buildhugemem 1
Change the value of buildsource from 0 to 1.
Build the kernel:
# rpmbuild -ba --target=x86_64 ./kernel-2.6.spec
Double-check the target parameter of the rpmbuild command against the architecture of the machine the kernel will be installed on: i686, i386, or 64-bit. Verify it with the uname command.
3. Resulting directory layout
After a successful build, the files are laid out as follows:
- All kernel config files are generated under /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9/configs:
kernel-2.6.9-x86_64.config
kernel-2.6.9-x86_64-smp.config
- The kernel tree is generated under /usr/src/redhat/BUILD/kernel-2.6.9/linux-2.6.9.
- The kernel RPM packages are generated under /usr/src/redhat/RPMS/{arch}:
kernel-2.6.9-22.EL.x86_64.rpm
kernel-debuginfo-2.6.9-22.EL.x86_64.rpm
kernel-devel-2.6.9-22.EL.x86_64.rpm
kernel-smp-2.6.9-22.EL.x86_64.rpm
kernel-smp-devel-2.6.9-22.EL.x86_64.rpm
kernel-sourcecode-2.6.9-22.EL.x86_64.rpm
4. Install the kernel: rpm -ivh kernel-2.6.9-22.EL.x86_64.rpm
The kernel is installed into /boot, and grub.conf is updated automatically.
Q: rpmbuild fails with "Not enough random bytes available. Please do some other work to give..."
A: You can see the entropy value using the following command:
# cat /proc/sys/kernel/random/entropy_avail
Now start the 'rngd' daemon using the following command and monitor the entropy on the system:
# rngd -r /dev/urandom -o /dev/random -f -t 1
# watch -n 1 cat /proc/sys/kernel/random/entropy_avail
The 'rngd' daemon is installed by the 'kernel-utils' package on RHEL 4 and the 'rng-utils' package on RHEL 5.
In practice this just means finding the rng-utils package on the RedHat installation ISO and installing it.
Appendix 2: YUM installation
RedHat Linux is usually unregistered, which leaves yum unusable; the fix is to replace it with the CentOS yum packages.
1. Download the yum packages. Because of architecture differences and package updates, the directory and file-name versions below may need adjusting.
# wget
# wget
# wget
2. List the current yum packages and remove them:
# rpm -qa | grep yum
# rpm -e yum-3.2.22-20.el5 --nodeps
# rpm -e yum-updatesd-0.9-2.el5 --nodeps
# rpm -e yum-security-1.1.16-13.el5 --nodeps
# rpm -e yum-metadata-parser-1.1.2-3.el5 --nodeps
# rpm -e yum-rhn-plugin-0.5.4-13.el5 --nodeps
3. Download and import the GPG key:
# cd /etc/pki/rpm-gpg/
# wget
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY*
4. Install the yum packages:
rpm -ivh yum-3.2.22-39.el5.centos.noarch.rpm \
    yum-fastestmirror-1.1.16-21.el5.centos.1.noarch.rpm \
    yum-metadata-parser-1.1.2-3.el5.centos.i386.rpm
5. Edit the configuration file:
vi /etc/yum.repos.d/rhel-debuginfo.repo
[base]
name=Red Hat Enterprise Linux $releasever - Base
baseurl=http://mirrors.sohu.com/centos/5.5/os/$basearch/
gpgcheck=1
[update]
name=Red Hat Enterprise Linux $releasever - Updates
baseurl=http://mirrors.sohu.com/centos/5.5/updates/$basearch/
gpgcheck=1
[extras]
name=Red Hat Enterprise Linux $releasever - Extras
baseurl=http://mirrors.sohu.com/centos/5.5/extras/$basearch/
gpgcheck=1
[addons]
name=Red Hat Enterprise Linux $releasever - Addons
baseurl=http://mirrors.sohu.com/centos/5.5/addons/$basearch/
gpgcheck=1
With that, yum is ready; packages can now be installed with yum install.
Appendix 3: Handling insufficient /dev/shm shared memory
Solution:
For example, to increase /dev/shm to 1 GB, change this line in /etc/fstab from the default:
none /dev/shm tmpfs defaults 0 0
to:
none /dev/shm tmpfs defaults,size=1024m 0 0
The size parameter also accepts G as a unit: size=1G.
Remount /dev/shm for the change to take effect:
# mount -o remount /dev/shm
or:
# umount /dev/shm
# mount -a
The change can be checked immediately with "df -h".
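A check that /dev/shm is at least as large as the size= value set in fstab can be sketched as below. have_kb is hard-coded for illustration; on a real host read it with: df -k /dev/shm | awk 'NR==2 {print $2}'

```shell
# Verify /dev/shm meets the 1 GiB minimum configured above.
need_kb=$((1024 * 1024))   # 1 GiB in kB, matching size=1024m
have_kb=2048000            # stand-in for the real df output
if [ "$have_kb" -ge "$need_kb" ]; then
  shm_status="OK"
else
  shm_status="too small, remount needed"
fi
echo "/dev/shm: $shm_status"
```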
http://docs.oracle.com/cd/E14072_01/rac.112/e10717/intro.htm
/etc/inittab h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
/etc/init.d/ohasd->$GRID_HOME/bin/ohasd.bin $GRID_HOME/log/hostname/ohasd/ohasd.log
Oracle High Availability Services (OHAS)
The Grid Plug And Play (GPnP) daemon
The Grid Interprocess Communication (GIPC) daemon
The multicast DNS (mDNS) service
The Grid Naming Service (GNS):
Cluster Ready Services (CRS):
Cluster Synchronization Services (CSS) service
The Cluster Synchronization Services Agent (cssdagent):
The Cluster Synchronization Services Monitor (cssdmonitor) process
The Disk Monitor (diskmon) daemon:
The Oracle Clusterware Kill (oclskd) daemon
The Cluster Time Synchronization Service (CTSS):
The Event Manager (EVM) service
The Event Manager Logger (EVMLOGGER) daemon
The Oracle Notification Service (ONS, eONS):
Do not use crsctl to change the state of resources whose names begin with "ora."; use srvctl instead.
/oracle/crs/product/10.2.0.4/log/nmg-nms-db/cssd/ocssd.log
/oracle/app/grid/product/11.2.0/auth/css/