
Oracle 10gR2 RAC + VMware Server 1.0.1 Installation

Posted by ELM on 2006-12-30 23:51. Source: 榆树社区

This article draws on many posts from this forum; my thanks to all their authors!

****************************************************************************
* CentOS 4.4 + RHCS (DLM) + GFS + Oracle 10gR2 RAC + VMware Server 1.0.1   *
****************************************************************************

I. Test environment

      Host: one PC with a 64-bit AMD CPU and 4 GB of RAM, running CentOS-4.4-x86_64.

      Two virtual machines are installed on this host, both also running CentOS-4.4-x86_64, with no kernel customization and packages updated to the latest from the network.

二、安装VMWareServer1.0.1forlinux

III. Create the shared disks

      vmware-vdiskmanager -c -s 6Gb -a lsilogic -t 2 "/vmware/share/ohome.vmdk"    | for the shared Oracle Home
      vmware-vdiskmanager -c -s 10Gb -a lsilogic -t 2 "/vmware/share/odata.vmdk"   | for datafiles and indexes
      vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo1.vmdk"   | for node1 redo logs and undo tablespaces
      vmware-vdiskmanager -c -s 3Gb -a lsilogic -t 2 "/vmware/share/oundo2.vmdk"   | for node2 redo logs and undo tablespaces
      vmware-vdiskmanager -c -s 512Mb -a lsilogic -t 2 "/vmware/share/oraw.vmdk"   | for the Oracle Cluster Registry file and the CRS voting disk

      Both virtual machines use the same set of shared disks.
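The disk-creation commands above can also be generated from a small name:size table, which makes it easy to re-run or adjust sizes. This is a dry-run sketch: it only prints the commands (pipe the output to `sh` to execute them), and the name:size pairs simply mirror the five disks above.

```shell
#!/bin/bash
# Dry-run sketch: print one vmware-vdiskmanager invocation per shared disk.
# The name:size pairs mirror the five disks created in step III.
disks="ohome:6Gb odata:10Gb oundo1:3Gb oundo2:3Gb oraw:512Mb"
out=$(for d in $disks; do
  name=${d%%:*}   # vmdk base name
  size=${d#*:}    # argument to -s (disk size)
  echo "vmware-vdiskmanager -c -s $size -a lsilogic -t 2 \"/vmware/share/$name.vmdk\""
done)
echo "$out"
```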

      

IV. Install the virtual machines

      1. In the VMware console, create a VMware guest OS named gfs-node1: choose Custom create -> Red Hat Enterprise Linux 4 64-bit, and accept the defaults for everything else.

        Give it 1 GB of memory (above 800 MB you no longer see the memory warning) and a 12 GB disk, not pre-allocated.

      2. After the guest OS has been created, add a second NIC (network card) to the guest.

      3. Close the VMware console, open gfs-node1.vmx in the node1 directory, and append the following at the end:

scsi1.present="TRUE"
scsi1.virtualDev="lsilogic"
scsi1.sharedBus="virtual"
scsi1:1.present="TRUE"
scsi1:1.mode="independent-persistent"
scsi1:1.filename="/vmware/share/ohome.vmdk"
scsi1:1.deviceType="disk"
scsi1:2.present="TRUE"
scsi1:2.mode="independent-persistent"
scsi1:2.filename="/vmware/share/odata.vmdk"
scsi1:2.deviceType="disk"
scsi1:3.present="TRUE"
scsi1:3.mode="independent-persistent"
scsi1:3.filename="/vmware/share/oundo1.vmdk"
scsi1:3.deviceType="disk"
scsi1:4.present="TRUE"
scsi1:4.mode="independent-persistent"
scsi1:4.filename="/vmware/share/oundo2.vmdk"
scsi1:4.deviceType="disk"
scsi1:5.present="TRUE"
scsi1:5.mode="independent-persistent"
scsi1:5.filename="/vmware/share/oundo3.vmdk"
scsi1:5.deviceType="disk"
scsi1:6.present="TRUE"
scsi1:6.mode="independent-persistent"
scsi1:6.filename="/vmware/share/oraw.vmdk"
scsi1:6.deviceType="disk"
disk.locking="false"
diskLib.dataCacheMaxSize="0"
diskLib.dataCacheMaxReadAheadSize="0"
diskLib.DataCacheMinReadAheadSize="0"
diskLib.dataCachePageSize="4096"
diskLib.maxUnsyncedWrites="0"

      This block defines how VMware handles the shared disks. Most people know to set disk.locking="false" but miss the dataCache settings.

      After saving and exiting, reopen your vmware-console and you will see all of these disks in each VMware guest OS's configuration.
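Because the scsi1:N entries are so repetitive, a typo in one of them is easy to miss. As a sketch, the whole block can be generated instead of hand-typed; the vmdk list here mirrors the entries in gfs-node1.vmx, and the script name and output redirection in the usage line are just illustrative.

```shell
#!/bin/bash
# Sketch: emit the scsi1 controller block and one scsi1:N entry per vmdk.
# The vmdk order mirrors the .vmx listing above; adjust paths as needed.
vmdks="ohome odata oundo1 oundo2 oundo3 oraw"
out='scsi1.present="TRUE"
scsi1.virtualDev="lsilogic"
scsi1.sharedBus="virtual"'
n=1
for v in $vmdks; do
  out="$out
scsi1:$n.present=\"TRUE\"
scsi1:$n.mode=\"independent-persistent\"
scsi1:$n.filename=\"/vmware/share/$v.vmdk\"
scsi1:$n.deviceType=\"disk\""
  n=$((n+1))
done
out="$out
disk.locking=\"false\""
echo "$out"
```

Usage would be to append the output to each guest's configuration, e.g. `./gen-scsi1.sh >> gfs-node1.vmx` (hypothetical script name); the diskLib lines from the listing above still need to be added as well.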

V. Packages to install, and the installation order

      You can install with yum:

      1. Upgrade CentOS 4.4:

          yum update

      2. Install csgfs:

          yum install yumex
          cd /etc/yum.repos.d
          wget http://mirror.centos.org/centos/4/csgfs/CentOS-csgfs.repo
          yumex

      Or you can install manually with rpm. The packages can be downloaded from:

http://mirror.centos.org/centos/4/csgfs/x86_64/RPMS/

      1. Install the required packages on every node. For the complete package list, see the GFS 6.1 user manual.

rgmanager                 — Manages cluster services and resources
system-config-cluster     — Contains the Cluster Configuration Tool, used to graphically configure the cluster and display the current status of the nodes, resources, fencing agents, and cluster services
ccsd                      — Contains the cluster configuration services daemon (ccsd) and associated files
magma                     — Contains an interface library for cluster lock management
magma-plugins             — Contains plugins for the magma library
cman                      — Contains the Cluster Manager (CMAN), which is used for managing cluster membership, messaging, and notification
cman-kernel               — Contains required CMAN kernel modules
dlm                       — Contains the distributed lock management (DLM) library
dlm-kernel                — Contains required DLM kernel modules
fence                     — The cluster I/O fencing system that allows cluster nodes to connect to a variety of network power switches, fibre channel switches, and integrated power management interfaces
iddev                     — Contains libraries used to identify the file system (or volume manager) in which a device is formatted

Also, you can optionally install Red Hat GFS on your Red Hat Cluster Suite. Red Hat GFS consists of the following RPMs:

GFS                       — The Red Hat GFS module
GFS-kernel                — The Red Hat GFS kernel module
lvm2-cluster              — Cluster extensions for the logical volume manager
GFS-kernheaders           — GFS kernel header files

      2. Software installation order

Install script, install.sh:

#!/bin/bash
rpm -ivh kernel-smp-2.6.9-42.EL.x86_64.rpm
rpm -ivh kernel-smp-devel-2.6.9-42.EL.x86_64.rpm
rpm -ivh perl-Net-Telnet-3.03-3.noarch.rpm
rpm -ivh magma-1.0.6-0.x86_64.rpm
rpm -ivh magma-devel-1.0.6-0.x86_64.rpm
rpm -ivh ccs-1.0.7-0.x86_64.rpm
rpm -ivh ccs-devel-1.0.7-0.x86_64.rpm
rpm -ivh cman-kernel-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-kernheaders-2.6.9-45.4.centos4.x86_64.rpm
rpm -ivh cman-1.0.11-0.x86_64.rpm
rpm -ivh cman-devel-1.0.11-0.x86_64.rpm
rpm -ivh dlm-kernel-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-kernheaders-2.6.9-42.12.centos4.x86_64.rpm
rpm -ivh dlm-1.0.1-1.x86_64.rpm
rpm -ivh dlm-devel-1.0.1-1.x86_64.rpm
rpm -ivh fence-1.32.25-1.x86_64.rpm
rpm -ivh GFS-6.1.6-1.x86_64.rpm
rpm -ivh GFS-kernel-2.6.9-58.2.centos4.x86_64.rpm
rpm -ivh GFS-kernheaders-2.6.9-58.2.centos4.x86_64.rpm
rpm -ivh iddev-2.0.0-3.x86_64.rpm
rpm -ivh iddev-devel-2.0.0-3.x86_64.rpm
rpm -ivh magma-plugins-1.0.9-0.x86_64.rpm
rpm -ivh rgmanager-1.9.53-0.x86_64.rpm
rpm -ivh system-config-cluster-1.0.25-1.0.noarch.rpm
rpm -ivh ipvsadm-1.24-6.x86_64.rpm
rpm -ivh piranha-0.8.2-1.x86_64.rpm --nodeps

Note: some of these packages have interdependencies; where installation fails on a dependency, add the --nodeps switch.

      3. Edit the /etc/hosts file on every node (identical on each node), as follows:

      [root@gfs-node1 etc]# cat hosts
      # Do not remove the following line, or various programs
      # that require network functionality will fail.
      127.0.0.1       localhost.localdomain localhost
      192.168.154.211 gfs-node1
      192.168.154.212 gfs-node2
      192.168.10.1    node1-prv
      192.168.10.2    node2-prv
      192.168.154.201 node1-vip
      192.168.154.202 node2-vip

      Note: the host name, the cluster host name, and the public node name used by OCS should preferably be identical.
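Since the hosts file must be identical on every node, a quick sanity check after copying it around can catch a missed entry. This sketch checks against a temp copy built from the table above; on a real node you would set HOSTS_FILE=/etc/hosts instead.

```shell
#!/bin/bash
# Sketch: verify every cluster name has an entry in the hosts file.
# Here a temp copy of the table above stands in for /etc/hosts.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
127.0.0.1       localhost.localdomain localhost
192.168.154.211 gfs-node1
192.168.154.212 gfs-node2
192.168.10.1    node1-prv
192.168.10.2    node2-prv
192.168.154.201 node1-vip
192.168.154.202 node2-vip
EOF
missing=""
for h in gfs-node1 gfs-node2 node1-prv node2-prv node1-vip node2-vip; do
  grep -qw "$h" "$HOSTS_FILE" || missing="$missing $h"
done
if [ -z "$missing" ]; then
  echo "all cluster names present"
else
  echo "missing:$missing"
fi
rm -f "$HOSTS_FILE"
```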

VI. Run system-config-cluster to configure the cluster

Add the two nodes, each with a vote weight of 1, i.e. the quorum value is set to 1.

The two nodes are named:

gfs-node1
gfs-node2

Edit the cluster.conf file, as follows:

[root@gfs-node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
...

[Note] Use fence_bladecenter. This will require that you have telnet enabled on your management module (which may require a firmware update).
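For orientation, a cluster.conf for this two-node layout would look roughly like the following sketch. It is illustrative only: the cluster name is arbitrary, and the fence-device parameters (ipaddr, login, passwd, blade numbers) are placeholders for your own BladeCenter management module.

```xml
<?xml version="1.0"?>
<cluster name="gfs-cluster" config_version="1">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="gfs-node1" votes="1">
      <fence>
        <method name="1">
          <device name="bladefence" blade="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="gfs-node2" votes="1">
      <fence>
        <method name="1">
          <device name="bladefence" blade="2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="bladefence" agent="fence_bladecenter"
                 ipaddr="192.168.154.250" login="USERID" passwd="PASSW0RD"/>
  </fencedevices>
</cluster>
```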

Use the scp command to copy this configuration file to the node2 node.

VII. Start the dlm, ccsd, fence and related services on nodes 01/02

      After the installation and configuration steps above, some of these services may already be running.

      1. Load the dlm module on both nodes:

      [root@gfs-node1 cluster]# modprobe lock_dlm
      [root@gfs-node2 cluster]# modprobe lock_dlm

      2. Start the ccsd service:

      [root@gfs-node1 cluster]# ccsd
      [root@gfs-node2 cluster]# ccsd

      3. Start the cluster manager (cman):

      root@gfs-node1# /sbin/cman_tool join
      root@gfs-node2# /sbin/cman_tool join

      4. Test the ccsd service (note: run this test only after cman has finished starting):

      [root@gfs-node1 cluster]# ccs_test connect
      [root@gfs-node2 cluster]# ccs_test connect

      ccs_test connect returns the following on each node:

      node1:

      [root@gfs-node1 cluster]# ccs_test connect
      Connect successful.
      Connection descriptor = 0

      node2:

      [root@gfs-node2 cluster]# ccs_test connect
      Connect successful.
      Connection descriptor = 30
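Because the startup order matters (the dlm module must be loaded before ccsd, and cman must be up before ccs_test), the commands above can be bundled into a small helper script on each node. This sketch only writes the file (the /tmp path and script name are placeholders); review it, then run it as root on both nodes.

```shell
#!/bin/bash
# Sketch: write a helper script that runs the startup steps in order.
# This just generates the file; it does not start any services itself.
cat > /tmp/start-cluster.sh <<'EOF'
#!/bin/bash
set -e
modprobe lock_dlm      # 1. load the DLM lock module
ccsd                   # 2. start the cluster configuration daemon
/sbin/cman_tool join   # 3. join the cluster
ccs_test connect       # 4. should print "Connect successful."
EOF
chmod +x /tmp/start-cluster.sh
echo "wrote /tmp/start-cluster.sh"
```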

      5. Check node status:

      cat /proc/cluster/nodes should return:

      [root@gfs-node1 cluster]# cat /proc/cluster/nodes
      Node  Votes Exp Sts  Name
         1      1   3   M  gfs-node1
         2      1   3   M  gfs-node2
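In that listing, the Sts column shows each node's state ("M" meaning it is a cluster member). As a sketch, the check can be scripted with awk; a sample of the expected output stands in for the real file here, and on a node you would pipe `cat /proc/cluster/nodes` into the awk instead.

```shell
#!/bin/bash
# Sketch: parse /proc/cluster/nodes-style output and report each node's state.
# The sample text mirrors the expected output shown above.
nodes_output='Node  Votes Exp Sts  Name
1     1     3   M    gfs-node1
2     1     3   M    gfs-node2'
# Skip the header row; column 4 is the state (Sts), column 5 the node name.
echo "$nodes_output" | awk 'NR > 1 { print $5, ($4 == "M" ? "is a member" : "is NOT a member") }'
```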

