Oracle RAC Quick Reference
If you choose the policy-managed deployment model, using a per-node installation of software, then you must deploy the software on all nodes in the cluster, because the dynamic allocation of servers to server pools, in principle, does not predict on which server a database instance can potentially run. To avoid instance startup failures on servers that do not host the respective database home, Oracle strongly recommends that you deploy the database software on all nodes in the cluster.
When you use a shared Oracle Database home, accessibility to this home from all nodes in the cluster is assumed and the setup needs to ensure that the respective file system is mounted on all servers, as required. Oracle Universal Installer will only allow you to deploy an Oracle Database home across nodes in the cluster if you previously installed and configured Oracle Grid Infrastructure for the cluster.
If Oracle Universal Installer does not give you an option to deploy the database home across all nodes in the cluster, then check the prerequisites stated by Oracle Universal Installer. During installation, you can choose to create a database during the database home installation. Before you create a database, a default listener must be running in the Oracle Grid Infrastructure home. By default, the Oracle Database software installation process installs the Oracle RAC option when it recognizes that you are performing the installation on a cluster.
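You can verify the Grid Infrastructure listener with SRVCTL before creating the database; a sketch (run as the Grid Infrastructure software owner; output format varies by release):

```shell
# Check the Grid Infrastructure default listener before database creation
srvctl status listener
srvctl config listener
```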
Oracle Universal Installer installs Oracle RAC into a directory structure referred to as the Oracle home, which is separate from the Oracle home directory for other Oracle software running on the system. Because Oracle Universal Installer is cluster aware, it installs the Oracle RAC software on all of the nodes that you defined to be part of the cluster.
You can choose to create a database as part of the database software deployment, or you can choose to only deploy the database software, first, and then, subsequently, create any database that is meant to run out of the newly created Oracle home by using DBCA.
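As an illustration, a software-only deployment might be followed by a silent DBCA run along these lines (a sketch; the template, database name, and node list are placeholders, and required options such as passwords are omitted):

```shell
# Hypothetical silent database creation across two cluster nodes with DBCA
dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbname orcl -sid orcl \
  -nodelist node1,node2
```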
In either case, you must consider the management style that you plan to use for the Oracle RAC databases. For administrator-managed databases, you must ensure that the database software is deployed on the nodes on which you plan to run the respective database instances. You must also ensure that these nodes have access to the storage in which you want to store the database files.
Oracle recommends that you select Oracle ASM during database installation to simplify storage management. Oracle ASM automatically manages the storage of all database files within disk groups. For policy-managed databases, you must ensure that the database software is deployed on all nodes on which database instances can potentially run, given your active server pool setup.
Oracle recommends using Oracle ASM, as previously described for administrator-managed databases. There are different ways you can set up server pools on the Oracle Clusterware level, and Oracle recommends you create server pools for database management before you create the respective databases.
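Creating a server pool ahead of database creation can be done with SRVCTL; a sketch using 12c-style options (the pool name and sizing are examples only):

```shell
# Create and verify a server pool for a future policy-managed database
srvctl add srvpool -serverpool prod_pool -min 2 -max 4 -importance 10
srvctl config srvpool -serverpool prod_pool
```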
DBCA, however, will present you with a choice of either using precreated server pools or creating a new server pool, when you are creating a policy-managed database. Whether you can create a new server pool during database creation depends on the server pool configuration that is active at the time. During database creation, a default database service, named after the database, is created; this default service should not be used for user connectivity.
The default service is available on all instances in an Oracle RAC environment, unless the database is in restricted mode. Oracle recommends that you reserve the default database service for maintenance operations and create dynamic database services for user or application connectivity as a post-database-creation step, using either SRVCTL or Oracle Enterprise Manager.
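Creating such a dynamic service with SRVCTL might look as follows (a sketch for an administrator-managed database; all names are placeholders, and for a policy-managed database you would specify a server pool instead of instance lists):

```shell
# Add and start a dynamic service for application connectivity
srvctl add service -db orcl -service oltp_svc \
  -preferred orcl1 -available orcl2
srvctl start service -db orcl -service oltp_svc
```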
If you want to extend the Oracle RAC cluster and add nodes to the existing environment after your initial deployment (for example, by cloning), then you must do this on multiple layers, considering the management style that you currently use in the cluster. Oracle provides various means of extending an Oracle RAC cluster.
In principle, you can choose from the following approaches to extend the current environment:

- Cloning an existing Oracle RAC installation
- Adding nodes using the addnode script

Both approaches are applicable, regardless of how you initially deployed the environment.
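The addnode approach can be sketched as follows (run from an existing node's Grid Infrastructure home; GRID_HOME and the node names are placeholders, and a similar step is then repeated from the database home):

```shell
# Extend the cluster by adding a new node with the addnode script
cd $GRID_HOME/addnode
./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
```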
Both approaches copy the required Oracle software on to the node that you plan to add to the cluster. Software that gets copied to the node includes the Oracle Grid Infrastructure software and the Oracle database homes. For Oracle database homes, you must consider the management style deployed in the cluster.
In either case, you must first deploy Oracle Grid Infrastructure on all nodes that are meant to be part of the cluster. Oracle cloning is not a replacement for cloning using Oracle Enterprise Manager as part of the Provisioning Pack. When you clone Oracle RAC using Oracle Enterprise Manager, the provisioning process includes a series of steps where details about the home you want to capture, the location to which you want to deploy, and various other parameters are collected.
For new installations or if you install only one Oracle RAC database, use the traditional automated and interactive installation methods, such as Oracle Universal Installer, Fleet Patching and Provisioning, or the Provisioning Pack feature of Oracle Enterprise Manager.
The cloning process assumes that you successfully installed an Oracle Clusterware home and an Oracle home with Oracle RAC on at least one node. In addition, all root scripts must have run successfully on the node from which you are extending your cluster database. Oracle RAC One Node adds to the flexibility that Oracle offers for database consolidation while reducing management overhead by providing a standard deployment for Oracle databases in the enterprise.
With Oracle RAC One Node, there is no limit to server scalability and, if applications grow to require more resources than a single node can supply, then you can upgrade your applications online to Oracle RAC. If the node that is running Oracle RAC One Node becomes overloaded, then you can relocate the instance to another node in the cluster. Alternatively, you can limit the CPU consumption of individual database instances per server within the cluster using Resource Manager Instance Caging and dynamically change this limit, if necessary, depending on the demand scenario.
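Instance Caging, as described above, combines a per-instance CPU limit with an active resource manager plan; a sketch (the plan name, CPU count, and instance SID are examples):

```shell
# Cap the CPU usage of one instance via Instance Caging;
# both parameters are dynamic and can be changed on demand
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SID = 'orcl1';
ALTER SYSTEM SET cpu_count = 2 SID = 'orcl1';
EOF
```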
Relocating an Oracle RAC One Node instance is therefore mostly transparent to the client, depending on the client connection. Oracle recommends using either Application Continuity with Oracle Fast Application Notification, or Transparent Application Failover, to minimize the impact of a relocation on the client. For administrator-managed Oracle RAC One Node databases, you must monitor the candidate node list and make sure a server is always available for failover, if possible.
Candidate servers reside in the Generic server pool and the database and its services will fail over to one of those servers. For policy-managed Oracle RAC One Node databases, you must ensure that the server pools are configured such that a server will be available for the database to fail over to in case its current node becomes unavailable. In this case, the destination node for online database relocation must be located in the server pool in which the database is located.
Alternatively, you can use a server pool of size 1 (one server in the server pool), setting the minimum size to 1 and the importance high enough relative to all other server pools used in the cluster, to ensure that, upon failure of the one server in the pool, a new server from another server pool or the Free server pool is moved into it, as required.
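The size-1 server pool described above might be created like this (a sketch; the pool name and importance value are examples that must be weighed against your other pools):

```shell
# Server pool of exactly one server with high relative importance
srvctl add srvpool -serverpool onenode_pool -min 1 -max 1 -importance 15
```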
Oracle Clusterware provides a complete, integrated clusterware management solution on all Oracle Database platforms.
This clusterware functionality provides all of the features required to manage your cluster database including node membership, group services, global resource management, and high availability functions. Oracle Database features, such as services, use the underlying Oracle Clusterware mechanisms to provide advanced capabilities. Oracle Database also continues to support select third-party clusterware products on specified platforms.
You can use Oracle Clusterware to manage high-availability operations in a cluster. Resources managed by Oracle Clusterware are automatically started when the node starts and are automatically restarted if they fail. The Oracle Clusterware daemons run on each node. Oracle Clusterware also provides the framework that enables you to create CRS resources to manage any process running on servers in the cluster that is not predefined by Oracle.
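Registering a custom process as a CRS resource might look as follows (a sketch; the resource name, action script path, and attribute values are placeholders):

```shell
# Create a Clusterware-managed resource for a process not predefined by Oracle
crsctl add resource myapp -type cluster_resource \
  -attr "ACTION_SCRIPT=/u01/app/scripts/myapp.scr,CHECK_INTERVAL=30"
```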
Oracle Clusterware stores the information that describes the configuration of these components in the Oracle Cluster Registry (OCR), which you can administer. The following topics provide an overview of Oracle Flex Clusters, reader nodes, and local temporary tablespaces.
Oracle Flex Clusters provide a platform for a variety of applications, including Oracle RAC databases with large numbers of nodes. Oracle Flex Clusters also provide a platform for other service deployments that require coordination and automation for high availability. This architecture centralizes policy decisions for deployment of resources based on application needs, to account for various service levels, loads, failure responses, and recovery. Reader nodes are instances of an Oracle RAC database that provide read-only access, primarily for reporting and analytical purposes.
You can create services to direct queries to read-only instances running on reader nodes. These services can use parallel query to further speed up performance. Oracle recommends that you size the memory in these reader nodes as high as possible so that parallel queries can use the memory for best performance. While it is possible for a reader node to host a writable database instance, Oracle recommends that reader nodes be dedicated to hosting read-only instances to achieve the best performance.
Oracle uses local temporary tablespaces to write spill-overs to local (non-shared) temporary files created on local disks on the reader nodes. It is still possible for SQL operations, such as hash aggregation, sort, hash join, creation of cursor-duration temporary tables for the WITH clause, and star transformation, to spill over to disk, specifically to the global temporary tablespace on shared disks.
Management of local temporary tablespaces is similar to that of existing temporary tablespaces. The following topics are covered:

- Local Temporary Tablespace Organization
- Temporary Tablespace Hierarchy
- Local Temporary Tablespace Features
- Metadata Management of Local Temporary Files
- Local Temporary Tablespaces for Users
- Atomicity Requirement for Commands
- Local Temporary Tablespace and Dictionary Views

The temporary tablespaces created for the WITH clause and star transformation exist in the temporary tablespace on the shared disk. A set of parallel query child processes load intermediate query results into these temporary tablespaces, which are then read later by a different set of child processes. There is no restriction on how these child processes reading these results are allocated, as any parallel query child process on any instance can read the temporary tablespaces residing on the shared disk.
For read-write and read-only instance architecture, as the parallel query child processes load intermediate results to the local temporary tablespaces of these instances, the parallel query child processes belonging to the instance where the intermediate results are stored share affinity with the reads for the intermediate results and can thus read them. Creation of a local temporary tablespace results in the creation of local temporary files on every instance and not a single file, as is currently true for shared global temporary tablespaces.
You can create local temporary tablespaces for both read-only and read-write instances. When you define both local and shared temporary tablespaces, there is a hierarchy in which they are used. To understand the hierarchy, remember that there can be multiple shared temporary tablespaces in a database, such as the default shared temporary tablespace for the database and multiple temporary tablespaces assigned to individual users.
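Creating local temporary tablespaces might look as follows (a sketch: the FOR ALL and FOR LEAF keywords follow the 12.2 syntax and vary by release, with some releases using FOR RIM, and all names and paths are placeholders):

```shell
sqlplus / as sysdba <<'EOF'
-- Local temporary tablespace usable by all instances
CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_ts_for_all
  TEMPFILE '/u01/temp/temp_file_all' SIZE 1G AUTOEXTEND ON;
-- Local temporary tablespace for read-only instances only
CREATE LOCAL TEMPORARY TABLESPACE FOR LEAF temp_ts_for_leaf
  TEMPFILE '/u01/temp/temp_file_leaf' SIZE 1G AUTOEXTEND ON;
EOF
```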
If a user has a shared temporary tablespace assigned, then that tablespace is used first, otherwise the database default temporary tablespace is used. Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace.
For example, if a user has a shared temporary tablespace assigned and during spilling it runs out of space, then there is no switching to an alternative tablespace. The spilling, in that case, will result in an error. Additionally, remember that shared temporary tablespaces are shared among instances. The allocation of temporary space for spilling to a local temporary tablespace differs between read-only and read-write instances.
For read-only instances, the priority of selecting which temporary location to use for spills is as follows:

1. The user's local temporary tablespace
2. The database default local temporary tablespace
3. The user's shared temporary tablespace
4. The database default shared temporary tablespace

For read-write instances, the priority of allocation differs from the preceding order, because shared temporary tablespaces are given priority:

1. The user's shared temporary tablespace
2. The database default shared temporary tablespace
3. The user's local temporary tablespace
4. The database default local temporary tablespace
Instances cannot share local temporary tablespace, hence one instance cannot take local temporary tablespace from another. If an instance runs out of temporary tablespace during spilling, then the statement results in an error. To address contention issues arising from having only one BIGFILE-based local temporary tablespace, multiple local temporary tablespaces can be assigned to different users as their defaults:

- A local temporary tablespace is used when the user is connected to a read-only instance running on a reader node.
- A shared temporary tablespace is used when the same user is connected to a read-write instance running on a Hub Node.

Currently, temporary file information, such as the file name, creation size, creation SCN, temporary block size, and file status, is stored in the control file, along with the initial and maximum file sizes and the auto-extend attributes.
However, the information about local temporary files in the control file is common to all applicable instances. Instance-specific information, such as bitmap for allocation, current size for a temporary file, and the file status, is stored in the SGA on instances and not in the control file because this information can be different for different instances. When an instance starts up, it reads the information in the control file and creates the temporary files that constitute the local temporary tablespace for that instance.
If there are two or more instances running on a node, then each instance will have its own local temporary files. For local temporary tablespaces, there is a separate file for each involved instance. The local temporary file names follow a naming convention such that the instance numbers are appended to the temporary file names specified while creating the local temporary tablespace.
For example, assume that a read-only node, N1, runs two Oracle read-only database instances with numbers 3 and 4. If the temporary file name specified at creation were, hypothetically, temp_file, then these instances would create local temporary files such as temp_file_3 and temp_file_4 (the exact naming convention may vary by release). All DDL commands related to local temporary tablespace management and creation are run from the read-write instances. Running all other DDL commands will affect all instances in a homogeneous manner. For local temporary tablespaces, Oracle supports the allocation options and their restrictions currently active for temporary files. To run a DDL command on a local temporary tablespace for read-only instances, there must be at least one read-only instance in the cluster.
A database administrator can specify the default temporary tablespace when creating the database. When you create a database, its default local temporary tablespace initially points to the default shared temporary tablespace.
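Changing the database defaults might look as follows (a sketch; the tablespace names are placeholders, and the LOCAL variant is assumed 12.2+ syntax that should be verified for your release):

```shell
sqlplus / as sysdba <<'EOF'
-- Default shared temporary tablespace (long-standing syntax)
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts;
-- Default local temporary tablespace (assumed 12.2+ syntax)
ALTER DATABASE DEFAULT LOCAL TEMPORARY TABLESPACE temp_ts_for_leaf;
EOF
```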
Local Temporary Tablespace for Users. When you create a user without explicitly specifying a shared or local temporary tablespace, the user inherits the shared and local temporary tablespaces from the corresponding database defaults. You can also specify a default local temporary tablespace for a user. As previously mentioned, the default user local temporary tablespace can be a shared temporary tablespace.
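Assigning a user default might be sketched as follows (the user and tablespace names are placeholders; the LOCAL TEMPORARY TABLESPACE clause is assumed 12.2+ syntax):

```shell
sqlplus / as sysdba <<'EOF'
ALTER USER scott LOCAL TEMPORARY TABLESPACE temp_ts_for_leaf;
EOF
```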
You can change the user default local temporary tablespace to any existing local temporary tablespace. If you want to set the user default local temporary tablespace to a shared temporary tablespace, T , then T must be the same as the default shared temporary tablespace. If a default user local temporary tablespace points to a shared temporary tablespace, then, when you change the default shared temporary tablespace of the user, you also change the default local temporary tablespace to that tablespace.
Some read-only instances may be down when you run any of the preceding commands. This does not prevent the commands from succeeding because, when a read-only instance starts up later, it creates the temporary files based on information in the control file.
Creation is fast because Oracle reformats only the header block of the temporary file, recording information about the file size, among other things. If any of the temporary files cannot be created, then the read-only instance stays down. Commands that were submitted from a read-write instance are replayed immediately on all open read-only instances. All the commands that you run from the read-write instances are performed in an atomic manner, which means a command succeeds only when it succeeds on all live instances.
Oracle extended the dictionary views to display information about local temporary tablespaces. All the diagnosability information related to temporary tablespaces and temporary files, exposed through AWR, SQL monitor, and other utilities, is also available for local temporary tablespaces and local temporary files.
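For example, the per-file information, including the size in bytes reported in the BYTES column, can be inspected through the DBA_TEMP_FILES view:

```shell
sqlplus / as sysdba <<'EOF'
-- File name, owning tablespace, and size in bytes for each temporary file
SELECT file_name, tablespace_name, bytes
FROM   dba_temp_files
ORDER  BY tablespace_name, file_name;
EOF
```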
For local temporary files, this information is reported for each temporary file per instance, including the size of the file in bytes (the BYTES column). At a minimum, Oracle RAC requires Oracle Clusterware software infrastructure that provides:

- Concurrent access to the same storage and the same set of data files from all nodes in the cluster
- A communications protocol for enabling interprocess communication (IPC) across the nodes in the cluster
- A mechanism that enables multiple database instances to process data as if the data resided in a logically combined, single cache
- A mechanism for monitoring and communicating the status of the nodes in the cluster
Understanding Cluster-Aware Storage Solutions. An Oracle RAC database is a shared everything database. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks, so that all of the cluster database instances can access these storage components.
In Oracle RAC, the Oracle Database software manages disk access and is certified for use on a variety of storage architectures. It is your choice how to configure your storage, but you must use a supported cluster-aware storage solution.
A third-party cluster file system on a cluster-aware volume manager that is certified for Oracle RAC is one supported storage option. All nodes in an Oracle RAC environment must connect to at least one Local Area Network (LAN), commonly referred to as the public network, to enable users and applications to access the database.
In addition to the public network, Oracle RAC requires private network connectivity used exclusively for communication between the nodes and the database instances running on those nodes. This network is commonly referred to as the interconnect. The interconnect network is a private network that connects all of the servers in the cluster.
The interconnect network must use at least one switch and at least a Gigabit Ethernet adapter. Oracle supports interfaces with higher bandwidth, but does not support using crossover cables for the interconnect, because they can cause logical corruption of data.

If you want to change the voting disk configuration, you can do so as follows. To be safe, bring the CRS stack down on all nodes except the one from which you are going to add the voting disk. The crsctl delete css votedisk command removes a given voting disk from the cluster configuration.
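A sketch of the voting disk commands, assuming voting files on raw devices (the device paths are placeholders; on releases that store voting files in Oracle ASM, use crsctl replace votedisk instead):

```shell
# Remove a voting disk from the cluster configuration, then add a new one
crsctl delete css votedisk /dev/raw/raw3
crsctl add css votedisk /dev/raw/raw4
```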
You can get the OCR backup information by running the ocrconfig -showbackup command. You can export the contents of the OCR (a logical backup) using ocrconfig -export; shut down CRS on all nodes before taking the export. An Oracle database administrator (DBA) needs useful scripts to monitor, analyze, and check the Oracle database during routine operations.
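A minimal sketch of these OCR backup commands (run as root; the export path is a placeholder):

```shell
# Show automatically taken OCR backups, then take a logical export
ocrconfig -showbackup
ocrconfig -export /backup/ocr_export.dmp
```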
To stop Clusterware on a specific node, run crsctl stop crs. Set the ASM profile before executing crsctl commands. To start Clusterware on a specific node, run crsctl start crs. To disable automatic Clusterware startup on a specific node, run crsctl disable crs; to re-enable it, run crsctl enable crs.
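These per-node Clusterware commands can be summarized as follows (run as root from the Grid Infrastructure home):

```shell
crsctl stop crs      # stop the Clusterware stack on this node
crsctl start crs     # start the Clusterware stack on this node
crsctl disable crs   # prevent automatic restart at boot
crsctl enable crs    # re-enable automatic restart at boot
```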
To query the voting disk locations, run crsctl query css votedisk. To delete a voting disk, run crsctl delete css votedisk.
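For example (a sketch; the output format varies by release, and on 11.2 and later each voting file is identified by a File Universal Id rather than only a path):

```shell
# List the configured voting files with their identifiers and locations
crsctl query css votedisk
```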