Mapping ASM disks to physical devices.

 

Original post:

https://blogs.oracle.com/AlejandroVargas/entry/mapping_asm_disks_to_physical

Mapping ASM disks to Physical Devices

Sometimes you may need to map ASM disks to their physical devices.
If they are based on ASMLib you will see their ASM name, e.g. ORCL:VOL1, when querying v$asm_disk.

When running oracleasm querydisk VOL1 you will also get the major and minor device numbers, which can be used to match the physical device, e.g.:

[root@orcldb2 ~]# /etc/init.d/oracleasm querydisk VOL1
Disk "VOL1" is a valid ASM disk on device [8, 97]

[root@orcldb2 ~]# ls -l /dev | grep 8, | grep 97
brw-rw----   1 root disk     8,      97 Nov  4 13:02 sdg1

This script can do the job for a group of ASM Disks:

---------- start here ------------
#!/bin/ksh
for i in `/etc/init.d/oracleasm listdisks`
do
v_asmdisk=`/etc/init.d/oracleasm querydisk $i | awk '{print $2}'`
v_major=`/etc/init.d/oracleasm querydisk $i | awk -F[ '{print $2}' | awk -F] '{print $1}' | awk '{print $1}'`
v_minor=`/etc/init.d/oracleasm querydisk $i | awk -F[ '{print $2}' | awk -F] '{print $1}' | awk '{print $2}'`
v_device=`ls -la /dev | grep $v_major | grep $v_minor | awk '{print $10}'`
echo "ASM disk $v_asmdisk based on /dev/$v_device  [$v_major $v_minor]"
done
---------- finish here ------------

The output looks like this:

ASM disk "VOL1" based on /dev/sdg1  [8, 97]
ASM disk "VOL10" based on /dev/sdp1  [8, 241]
ASM disk "VOL2" based on /dev/sdh1  [8, 113]
ASM disk "VOL3" based on /dev/sdk1  [8, 161]
ASM disk "VOL4" based on /dev/sdi1  [8, 129]
ASM disk "VOL5" based on /dev/sdl1  [8, 177]
ASM disk "VOL6" based on /dev/sdj1  [8, 145]
ASM disk "VOL7" based on /dev/sdn1  [8, 209]
ASM disk "VOL8" based on /dev/sdo1  [8, 225]
ASM disk "VOL9" based on /dev/sdm1  [8, 193]

If you are using multipathing, you will need an additional step to map the physical device to the multipath device. For instance, with EMC PowerPath, if you want to map sdf1:

[root@orclp ~]# /etc/init.d/oracleasm querydisk vol1
Disk "VOL1" is a valid ASM disk on device [8, 81]

[root@orclp ~]# ls -l /dev | grep 8,| grep 81
brw-rw----   1 root disk     8,      81 Oct 29 20:42 sdf1

[root@orclp ~]# powermt display dev=all


Pseudo name=emcpowerg
Symmetrix ID=000290101698
Logical device ID=0214
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path -   -- Stats ---
### HW Path                 I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
1 qla2xxx                   sdf       FA  7bB   active  alive      0      0
2 qla2xxx                   sdq       FA 10bB   active  alive      0      0

The last step is to check the partition assigned to the emcpower device, e.g.:

[root@orclp ~]# ls -l /dev/emcpowerg*
brw-------  1 root root 120, 96 Oct 29 20:41 /dev/emcpowerg
brw-------  1 root root 120, 97 Nov 15 13:08 /dev/emcpowerg1
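
As an aside, on modern kernels you can skip the ls parsing used by the script above entirely and resolve the major/minor pair through sysfs. A minimal sketch, assuming ASMLib is installed and a kernel new enough (2.6.27+) to expose /sys/dev/block:

#!/bin/bash
# Resolve each ASMLib disk to its kernel device via sysfs,
# avoiding the "ls | grep" matching used above.
for d in $(/etc/init.d/oracleasm listdisks); do
  # querydisk output ends in "[8, 97]"; extract it as "8:97"
  majmin=$(/etc/init.d/oracleasm querydisk "$d" | sed -n 's/.*\[\(.*\), \(.*\)\]/\1:\2/p')
  # /sys/dev/block/MAJ:MIN is a symlink whose resolved basename is the device name
  dev=$(basename "$(readlink -f "/sys/dev/block/$majmin")")
  echo "ASM disk $d is based on /dev/$dev [$majmin]"
done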

Streaming Data through Oracle GoldenGate to Elasticsearch
14 April 2016 on elasticsearch, goldengate, kafka, logstash, oracle

Recently added to the oracledi project over at java.net is an adaptor enabling Oracle GoldenGate (OGG) to send data to Elasticsearch. This adds a powerful alternative to [micro-]batch extract via JDBC from Oracle to Elasticsearch, which I wrote about recently over at the Elastic blog.

Elasticsearch is a ‘document store’ widely used for both search and analytics. It’s something I’ve written a lot about (here and here for archives), as well as spoken about – preaching the good word, as it were, since the Elastic stack as a whole is very very good at what it does and a pleasure to work with. So, being able to combine that with my “day job” focus of Oracle is fun. Let’s get started!

From the adaptor page, download the zip to your machine. I’m using Oracle’s BigDataLite VM which already has GoldenGate installed and configured, and which I’ve also got Elasticsearch already on following on from this earlier post. If you’ve not got Elasticsearch already, head over to elastic.co to download it. I’m using version 2.3.1, installed in /opt/elasticsearch-2.3.1.
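
For reference, a minimal sketch of fetching and starting that version, assuming the standard Elastic download URL layout for 2.x releases:

curl -O https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.3.1/elasticsearch-2.3.1.tar.gz
tar -xzf elasticsearch-2.3.1.tar.gz -C /opt
/opt/elasticsearch-2.3.1/bin/elasticsearch -d    # -d backgrounds the process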
Ready …

Once you've got the OGG adaptor zip, you'll want to unzip it. A word of advice here: specify the destination folder, because there's no containing root within the archive, so otherwise you'll end up with a mess of folders and files in amongst your download folder:

unzip OGG_elasticsearch_v1.0.zip -d /u01/OGG_elasticsearch_v1.0

Copy the provided .prm and .props files to your OGG dirprm folder:

cp /u01/OGG_elasticsearch_v1.0/dirprm/elasticsearch.props /u01/ogg-bd/dirprm/
cp /u01/OGG_elasticsearch_v1.0/dirprm/res.prm /u01/ogg-bd/dirprm/

Edit the elasticsearch.props file (i.e. /u01/ogg-bd/dirprm/elasticsearch.props) to set:

gg.classpath, to pick up both the Elasticsearch jars and the OGG adaptor jar. On my installation this is:

gg.classpath=/opt/elasticsearch-2.3.1/lib/*:/u01/OGG_elasticsearch_v1.0/bin/ogg-elasticsearch-adapter-1.0.jar:

gg.handler.elasticsearch.clusterName, which is the name of your Elasticsearch cluster. If you don't know it, you can check with:

[oracle@bigdatalite ~]$ curl -s localhost:9200 | grep cluster_name
"cluster_name" : "elasticsearch",

So mine is the default – elasticsearch:

gg.handler.elasticsearch.clusterName=elasticsearch

For gg.handler.elasticsearch.host and gg.handler.elasticsearch.port I left the defaults (localhost / 9300) unchanged – update these for your Elasticsearch instance as required. Note that Elasticsearch listens on two ports, with 9200 by default for HTTP traffic, and 9300 for Java clients which is what we’re using here.

Steady …

Run ggsci to add and start the replicat using the provided res configuration (res = Replicat, ElasticSearch, I’m guessing) and sample trail file (i.e. we don’t need a live extract running to try this thing out):

$ cd /u01/ogg-bd
$ rlwrap ./ggsci

Oracle GoldenGate Command Interpreter
Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2
Linux, x64, 64bit (optimized), Generic on Nov 10 2015 16:18:12
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (bigdatalite.localdomain) 1> start mgr
Manager started.

GGSCI (bigdatalite.localdomain) 2> add replicat res, exttrail AdapterExamples/trail/tr
REPLICAT added.

Go!

GGSCI (bigdatalite.localdomain) 3> start res

Sending START request to MANAGER …
REPLICAT RES starting

Yay!

GGSCI (bigdatalite.localdomain) 5> info res

REPLICAT RES Initialized 2016-04-14 22:03 Status STOPPED

STOPPED? Oh …

Time for debug. Open up /u01/ogg-bd/ggserr.log, and the error (Error loading shared library ggjava.dll) is nice and clear to see:

2016-04-14 22:04:25 INFO OGG-00987 Oracle GoldenGate Command Interpreter: GGSCI command (oracle): start res.
2016-04-14 22:04:25 INFO OGG-00963 Oracle GoldenGate Manager, mgr.prm: Command received from GGSCI on host [127.0.0.1]:13379 (START REPLICAT RES ).
2016-04-14 22:04:25 INFO OGG-00960 Oracle GoldenGate Manager, mgr.prm: Access granted (rule #6).
2016-04-14 22:04:25 INFO OGG-00975 Oracle GoldenGate Manager, mgr.prm: REPLICAT RES starting.
2016-04-14 22:04:25 INFO OGG-00995 Oracle GoldenGate Delivery, res.prm: REPLICAT RES starting.
2016-04-14 22:04:25 INFO OGG-03059 Oracle GoldenGate Delivery, res.prm: Operating system character set identified as UTF-8.
2016-04-14 22:04:25 INFO OGG-02695 Oracle GoldenGate Delivery, res.prm: ANSI SQL parameter syntax is used for parameter parsing.
2016-04-14 22:04:25 ERROR OGG-02554 Oracle GoldenGate Delivery, res.prm: Error loading shared library ggjava.dll: 2 No such file or directory.
2016-04-14 22:04:25 ERROR OGG-01668 Oracle GoldenGate Delivery, res.prm: PROCESS ABENDING.

But hang on … ggjava.dll ? dll? This is Linux, not Windows.

So, a quick change to the prm is in order, switching .dll for .so:

[oracle@bigdatalite ogg-bd]$ diff dirprm/res.prm dirprm/res.prm.bak
5c5
< TARGETDB LIBFILE libggjava.so SET property=dirprm/elasticsearch.props
---
> TARGETDB LIBFILE ggjava.dll SET property=dirprm/elasticsearch.props

Second time lucky?

Redefine the replicat:

GGSCI (bigdatalite.localdomain) 7> delete res
Deleted REPLICAT RES.

GGSCI (bigdatalite.localdomain) 8> add replicat res, exttrail AdapterExamples/trail/tr
REPLICAT added.

And start it again:

GGSCI (bigdatalite.localdomain) 9> start res

Sending START request to MANAGER …
REPLICAT RES starting

Now it looks better:

GGSCI (bigdatalite.localdomain) 14> info res

REPLICAT RES Last Started 2016-04-14 22:10 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:02 ago)
Process ID 15101
Log Read Checkpoint File AdapterExamples/trail/tr000000000
2015-11-05 18:45:39.000000 RBA 5660

Result!

Let’s check out what’s happened in Elasticsearch. The console log looks promising, showing that an index with two mappings has been created:

[2016-04-14 22:10:08,709][INFO ][cluster.metadata ] [Abner Jenkins] [qasource] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings [tcustmer, tcustord]
[2016-04-14 22:10:09,458][INFO ][cluster.routing.allocation] [Abner Jenkins] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[qasource][4]] …]).
[2016-04-14 22:10:09,488][INFO ][cluster.metadata ] [Abner Jenkins] [qasource] update_mapping [tcustmer]
[2016-04-14 22:10:09,658][INFO ][cluster.metadata ] [Abner Jenkins] [qasource] update_mapping [tcustord]

We can confirm that with the Elasticsearch REST API:

$ curl --silent -XGET http://localhost:9200/_cat/indices?pretty=true
yellow open qasource 5 1 8 6 19.6kb 19.6kb

And see how many documents (“rows”) have been loaded (8):

$ curl -s -XGET 'http://localhost:9200/qasource/_search?search_type=count&pretty=true'
{
  "took" : 1,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 8,
    "max_score" : 0.0,
    "hits" : [ ]
  }
}

You can even see the mappings (“schema”) defined within each index:

$ curl -XGET 'http://localhost:9200/_mapping?pretty=true'
{
  ".kibana" : {
    "mappings" : {
      "config" : {
        "properties" : {
          "buildNum" : {
            "type" : "string",
            "index" : "not_analyzed"
          }
        }
      }
    }
  },
  "qasource" : {
    "mappings" : {
      "tcustord" : {
        "properties" : {
          "CUST_CODE" : {
            "type" : "string"
          },
          "ORDER_DATE" : {
            "type" : "string"
          },
          "ORDER_ID" : {
            "type" : "string"
[…]

All this faffing about with curl is fine, but if you’re doing proper poking with Elasticsearch you may well find kopf handy:

[screenshot: browsing the index in kopf]

It's easy to install (modify the path if your Elasticsearch binary is in a different location):

/opt/elasticsearch-2.3.1/bin/plugin install lmenezes/elasticsearch-kopf

After installation, restart Elasticsearch and then go to http://localhost:9200/_plugin/kopf

If you’re using Elasticsearch, you may well be doing so for the whole Elastic experience, using Kibana to view the data:

[screenshot: viewing the data in Kibana]

and even start doing quick profiling:

[screenshot: quick profiling of the data in Kibana]

One issue with the data that’s come through in this example is that it is all string – even the dates and numerics (AMOUNT, PRICE), which makes instant-analysis in Kibana less possible.
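
One way around that, if you control the index, is to put an index template in place before the replicat first writes to it, so the numeric fields get proper types. A minimal sketch in Elasticsearch 2.x syntax, using the MOVIE columns that appear later in this post (the template name and index pattern are my own choices):

curl -XPUT 'http://localhost:9200/_template/moviedemo' -d '{
  "template": "moviedemo*",
  "mappings": {
    "movie": {
      "properties": {
        "BUDGET": { "type": "long" },
        "GROSS":  { "type": "long" },
        "YEAR":   { "type": "integer" }
      }
    }
  }
}'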
Streaming data from Oracle to Elasticsearch

Now that we’ve tested and proven the replicat load into Elasticsearch, let’s do the full end-to-end. I’m going to use the same Extract as the BigDataLite Oracle by Example (you can see my notes on it here if you’re interested).

Reset & recreate the Extract, in the first OGG instance (/u01/ogg)

$ cd /u01/ogg/
$ rlwrap ./ggsci

GGSCI (bigdatalite.localdomain as system@cdb/CDB$ROOT) 1> obey dirprm/reset_bigdata.oby

[…]

GGSCI (bigdatalite.localdomain as system@cdb/CDB$ROOT) 2> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

GGSCI (bigdatalite.localdomain) 3> obey dirprm/bigdata.oby

[…]

GGSCI (bigdatalite.localdomain as system@cdb/CDB$ROOT) 9> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EMOV 00:00:03 00:00:00

Now define a new replicat parameter file, over in the second OGG instance (that we used above for the res test):

cat > /u01/ogg-bd/dirprm/relastic.prm <<EOF
[…]
EOF

Then stop and delete the old test replicat:

GGSCI (bigdatalite.localdomain) 2> stop res

Sending STOP request to REPLICAT RES …
Request processed.

GGSCI (bigdatalite.localdomain) 3> delete res
Deleted REPLICAT RES.

GGSCI (bigdatalite.localdomain) 4> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

Add the new one (relastic):

GGSCI (bigdatalite.localdomain) 1> add replicat relastic, exttrail /u01/ogg/dirdat/tm
REPLICAT added.

And start it:

GGSCI (bigdatalite.localdomain) 2> start relastic

Sending START request to MANAGER …
REPLICAT RELASTIC starting

GGSCI (bigdatalite.localdomain) 4> info relastic

REPLICAT RELASTIC Last Started 2016-04-14 22:55 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Process ID 17564
Log Read Checkpoint File /u01/ogg/dirdat/tm000000000
First Record RBA 1406

If we head over to Elasticsearch, we'll see that …

$ curl --silent -XGET http://localhost:9200/_cat/indices?pretty=true
yellow open qasource 5 1 8 6 19.6kb 19.6kb

… nothing’s changed! Because, of course, nothing’s changed on the source Oracle table that the Extract is set up against.

Let’s rectify that:

$ rlwrap sqlplus system/welcome1@orcl

SQL*Plus: Release 12.1.0.2.0 Production on Thu Apr 14 23:01:57 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Thu Apr 14 2016 22:48:35 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> INSERT INTO "MOVIEDEMO"."MOVIE" (MOVIE_ID, TITLE, YEAR, BUDGET, GROSS, PLOT_SUMMARY) VALUES ('42444', 'never gonna', '2014', '500000', '20000000', 'give you up');

1 row created.

SQL> COMMIT;

Commit complete.

Check Elasticsearch again:

$ curl --silent -XGET http://localhost:9200/_cat/indices?pretty=true
yellow open qasource 5 1 8 6 19.6kb 19.6kb
yellow open moviedemo 5 1 1 0 4.5kb 4.5kb

Much better – a new index! We’ve got a new index because the replicat is handling a different schema this time – moviedemo, not qasource.

We can look at the data in the index directly:

$ curl -XGET 'http://localhost:9200/moviedemo/_search?q=*&pretty=true'
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "moviedemo",
      "_type" : "movie",
      "_id" : "42444",
      "_score" : 1.0,
      "_source" : {
        "PLOT_SUMMARY" : "give you up",
        "YEAR" : "2014",
        "MOVIE_ID" : "42444",
        "BUDGET" : "500000",
        "TITLE" : "never gonna",
        "GROSS" : "20000000"
      }
    } ]
  }
}

You'll note that the primary key (MOVIE_ID) has been correctly identified as the unique document _id field. The _id is now where things begin to get interesting, because this field enables the new OGG-Elasticsearch adaptor to apparently perform "UPSERT" on documents that already exist.
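
As a quick check, since MOVIE_ID became the document _id, the row can now be fetched directly by key (using the index, type and id created above) rather than searched for:

$ curl -XGET 'http://localhost:9200/moviedemo/movie/42444?pretty=true'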

To doublecheck this apparent method of handling the data, I first wanted to validate what was coming through from OGG in terms of the data flowing through from the extract. To do this I hooked up a second replicat, to Kafka and on to Logstash into Elasticsearch (using this method), and then compared the doc count in the two relevant indices (or strictly speaking, the mapping types, corresponding to each index).

To start with, I deleted all my Elasticsearch data, as this empty aggregation shows:

$ curl "localhost:9200/*/_search?search_type=count&pretty=true" -d '{
  "aggs": {
    "count_by_type": {
      "terms": {
        "field": "_type"
      }
    }
  }
}'
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "failed" : 0
  },
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "count_by_type" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [ ]
    }
  }
}

Then I insert a row on "MOVIEDEMO"."MOVIE" in Oracle (having previously truncated it):

SQL> INSERT INTO "MOVIEDEMO"."MOVIE" (MOVIE_ID, TITLE, YEAR, BUDGET, GROSS, PLOT_SUMMARY) VALUES ('1', 'never gonna', '2014', '500000', '20000000', 'give you up');

1 row created.

SQL> commit;

Commit complete.

and see it shows up in both Elasticsearch indices:

$ curl "localhost:9200/*/_search?search_type=count&pretty=true" -d '{
  "aggs": {
    "count_by_type": {
      "terms": {
        "field": "_type"
      }
    }
  }
}'
[…]
    }, {
      "key" : "logs",
      "doc_count" : 1
    }, {
      "key" : "movie",
      "doc_count" : 1

logs is the index mapping loaded through OGG -> Kafka -> Logstash -> Elasticsearch
movie is the index mapping loaded through the new adaptor, OGG -> Elasticsearch

So far, so good. Now, let’s add a second row in Oracle:

SQL> INSERT INTO "MOVIEDEMO"."MOVIE" (MOVIE_ID, TITLE, YEAR, BUDGET, GROSS, PLOT_SUMMARY) VALUES ('2', 'foo', '2014', '500000', '20000000', 'bar');

1 row created.

SQL> commit;

Commit complete.

Both indices match count:

"buckets" : [ {
  "key" : "logs",
  "doc_count" : 2
}, {
  "key" : "movie",
  "doc_count" : 2
}, {

What about an update?

SQL> UPDATE "MOVIEDEMO"."MOVIE" SET TITLE ='Foobar' where movie_id = 1;

1 row updated.

SQL> commit;

Commit complete.

Hmmmm…

"buckets" : [ {
  "key" : "logs",
  "doc_count" : 3
}, {
  "key" : "movie",
  "doc_count" : 2
}, {

The index loaded from the OGG-Elasticsearch Adaptor has only two documents still, whilst the other route has three. If we look at what’s in the first of these (movie, loaded by OGG-Elasticsearch) for movie_id=1:

[oracle@bigdatalite ogg-bd]$ curl -XGET 'http://localhost:9200/moviedemo/_search?q=_id=1&pretty=true'
{
  "took" : 2,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 1,
    "max_score" : 0.014065012,
    "hits" : [ {
      "_index" : "moviedemo",
      "_type" : "movie",
      "_id" : "1",
      "_score" : 0.014065012,
      "_source" : {
        "PLOT_SUMMARY" : "give you up",
        "YEAR" : "2014",
        "MOVIE_ID" : "1",
        "BUDGET" : "500000",
        "TITLE" : "Foobar",
        "GROSS" : "20000000"
      }
    } ]
  }
}

You can see it’s the latest version of the row (TITLE=Foobar). In the second index, loaded from the change record sent to Kafka and then on through Logstash, there are both the before and after record for this key:

[oracle@bigdatalite ogg-bd]$ curl -XGET 'http://localhost:9200/logstash*/_search?q=*&pretty=true'
[…]
"_source" : {
  "table" : "ORCL.MOVIEDEMO.MOVIE",
  "op_type" : "I",
  "op_ts" : "2016-04-14 22:34:43.000000",
  "current_ts" : "2016-04-14T23:34:45.131000",
  "pos" : "00000000000000003514",
  "primary_keys" : [ "MOVIE_ID" ],
  "tokens" : { },
  "before" : null,
  "after" : {
    "MOVIE_ID" : "1",
    "MOVIE_ID_isMissing" : false,
    "TITLE" : "never gonna",
    "TITLE_isMissing" : false,

[…]

"_source" : {
  "table" : "ORCL.MOVIEDEMO.MOVIE",
  "op_type" : "U",
  "op_ts" : "2016-04-14 22:39:37.000000",
  "current_ts" : "2016-04-14T23:39:39.583000",
  "pos" : "00000000000000004097",
  "primary_keys" : [ "MOVIE_ID" ],
  "tokens" : { },
  "before" : {
    […]
    "TITLE" : "never gonna",
    […]
  },
  "after" : {
    […]
    "TITLE" : "Foobar",
    […]

Finally, if I delete a record in Oracle:

SQL> delete from "MOVIEDEMO"."MOVIE" where MOVIE_ID = 1;

1 row deleted.

SQL> commit;

Commit complete.

My document counts reflect what I'd expect: the OGG-Elasticsearch adaptor deleted the record from Elasticsearch, whilst the Kafka route just recorded another change record, of op_type='D' this time.

"key" : "logs",
"doc_count" : 4
}, {
"key" : "movie",
"doc_count" : 1

Summary

This adaptor is a pretty smart way of mirroring a table's contents into Elasticsearch from any of the many RDBMSs that GoldenGate supports as an extract source.

If you want to retain history of changed records, then using OGG->Kafka->Logstash->Elasticsearch is an option.

And, if you don't have the spare cash for OGG, you can use Logstash's JDBC input mechanism to pull data periodically from your RDBMS. This has the additional benefit of being able to specify custom SQL queries with joins etc., which is useful when pulling denormalised datasets into Elasticsearch for analytics.
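
For completeness, a minimal sketch of what that Logstash JDBC input can look like; the connection string, driver path, credentials and schedule here are illustrative assumptions, not values from this post:

input {
  jdbc {
    jdbc_connection_string => "jdbc:oracle:thin:@localhost:1521/orcl"
    jdbc_user => "moviedemo"
    jdbc_password => "welcome1"
    jdbc_driver_library => "/u01/ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    schedule => "*/5 * * * *"
    statement => "SELECT * FROM movie"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}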


How to configure DDL replication with Oracle GoldenGate

Tutorial 5: How to Configure Goldengate DDL Replication?

February 23, 2016 by Natik Ameen

Goldengate supports the replication of DDL commands, operating at a schema level, from one database to another.

By default, DDL replication is disabled on the source database (extract side) but enabled on the target database (replicat side). Here is how to configure GoldenGate DDL replication.

Configure Goldengate DDL Replication

Prerequisite Setup
Navigate to the directory where the Oracle Goldengate software is installed.

Connect to the Oracle database as sysdba.

sqlplus sys/password as sysdba

For the DDL synchronization setup, run the marker_setup.sql script. Provide the OGG_USER schema name when prompted.

Here OGG_USER is the name of the database user assigned to support the DDL replication feature in Oracle GoldenGate.

SQL> @marker_setup.sql

Then run the ddl_setup.sql script and provide the setup details shown below.

SQL> @ddl_setup.sql

For 10g:

Schema Name : OGG_USER
Installation mode : initialsetup
To proceed with the installation : yes

For 11g:

Start the installation : yes
Schema Name : OGG_USER
Installation mode : initialsetup

For 12c:

In Oracle database 12c, DDL replication does not require any setup of triggers as it is natively supported at the database level.

So none of the marker, ddl_setup or any of the other scripts need to be run. All that is required is including the “DDL INCLUDE MAPPED” parameter in the Extract parameter file as shown in the last step.

Run the role_setup.sql script. Provide the OGG_USER schema name when prompted.

SQL> @role_setup.sql

Then grant the ggs_ggsuser_role to the OGG_USER.

SQL> grant ggs_ggsuser_role to OGG_USER;

Run the ddl_enable.sql script as shown below:

SQL> @ddl_enable.sql

Run the ddl_pin.sql script as shown below.

SQL> @ddl_pin OGG_USER;

Configure Extract Process with DDL Replication

The following extract ESRC01 was configured previously. Adding "DDL INCLUDE MAPPED" enables extraction of the DDL run in the database. Here "MAPPED" covers all tables specified in the TABLE [schema_name].* clause.

On restart of the ESRC01 process, all DDL on the specified tables will be picked up and placed in the trail file for applying to the destination database.

EXTRACT ESRC01
USERID OGG_USER, PASSWORD OGG_USER
EXTTRAIL ./dirdat/st
TRANLOGOPTIONS EXCLUDEUSER OGG_USER
DDL INCLUDE MAPPED
TABLE APPOLTP01.*;

Don’t forget to add DDL INCLUDE MAPPED in the Pump and Replicat processes.
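
For reference, a minimal sketch of a matching Replicat parameter file; the group name RTRG01 and the target schema mapping are illustrative, not from this tutorial:

REPLICAT RTRG01
USERID OGG_USER, PASSWORD OGG_USER
DDL INCLUDE MAPPED
MAP APPOLTP01.*, TARGET APPOLTP01.*;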

DDL replication setup on source completed!


Goldengate 12.2 Heartbeat Table

http://blog.perftuning.com/oracle-goldengate-12-2-heartbeat-table-issue-resolved/

Oracle GoldenGate 12.2 Heartbeat Table Issue Resolved

Monitoring lag has always been an important part of monitoring GoldenGate, and lags are reported in several ways. The ggsci lag command reports only the latest lag, with a one-second resolution. That isn't very accurate and provides no history. In the past, GoldenGate implementers have created heartbeat tables manually.

In Oracle GoldenGate 12.2 a built-in heartbeat table feature has been added. This heartbeat table allows for more accurate heartbeats and heartbeat history. It works by creating an artificial transaction every minute that contains timing information that is used for heartbeats. The heartbeat tables are accessed via views that provide accurate lag data and lag history.

Recently I set up a pair of Oracle databases with Oracle GoldenGate 12.2, in the hope of testing the new heartbeat table feature introduced in GoldenGate version 12.2. The internal heartbeat mechanism is a great improvement in that it provides automatic and accurate lag times and includes a lag history.

I implemented GoldenGate 12.2 between two Oracle 11.2.0.4 databases using standard parameter files that I typically use as a starting point for GoldenGate projects or testing. In addition, database connectivity was set up using the GoldenGate credentialstore. In this case, for testing I have set up GoldenGate to replicate the HR example tables. Unfortunately the heartbeat mechanism failed to work. This blog entry describes the issue that I had and potential solutions.

Configuring GoldenGate

In order to configure replication I used the following GLOBALS and extract parameter files:

GLOBALS

GGSCHEMA ggadmin

EXT1HR

------------------------------------
-- Local extract for HR schema
------------------------------------
Extract EXT1HR
SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)
USERIDALIAS ggadm
ReportCount Every 30 Minutes, Rate
Report at 01:00
ReportRollover at 01:15
DiscardFile dirrpt/EXT1HR.dsc, Append
DiscardRollover at 02:00 On Sunday
DDL INCLUDE MAPPED
TRANLOGOPTIONS EXCLUDEUSER ggadmin
GETTRUNCATES
ExtTrail dirdat/la
Table HR.*;

At first look I thought that this should work. The GLOBALS parameter GGSCHEMA ggadmin is required for both DDL replication and the heartbeattable. Unfortunately this turned out to be part of the problem.

The pump parameter file was configured for passthrough as shown here:

PUMP1HR
------------------------------------
-- Pump extract for HR schema
------------------------------------
Extract PUMP1HR
PASSTHRU
USERIDALIAS ggadm
ReportCount Every 1000 Records, Rate
Report at 01:00
ReportRollover at 01:15
DiscardFile dirrpt/PUMP1HR.dsc, Append
DiscardRollover at 02:00 ON SUNDAY
RmtHost target, MgrPort 7809
RmtTrail dirdat/ra
Table HR.*;

The initial extract and pump were registered using an obey file containing the following commands:

dblogin useridalias ggadm
add extract EXT1HR, tranlog, begin now
add exttrail dirdat/la, extract EXT1HR, megabytes 100
add extract PUMP1HR, exttrailsource dirdat/la
add rmttrail dirdat/ra, extract PUMP1HR, megabytes 100

On the target, GoldenGate 12.2 was set up using the following GLOBALS and replicat parameter files:

GLOBALS
GGSCHEMA ggadmin
REP1HR

------------------------------------
-- Replicat for HR schema
------------------------------------
replicat REP1HR
SETENV (NLS_LANG = AMERICAN_AMERICA.AL32UTF8)
USERIDALIAS ggadm
BATCHSQL
HandleCollisions
-- Only one of these is used at a time
AssumeTargetDefs
ReportCount Every 30 Minutes, Rate
Report at 01:00
ReportRollover at 01:15

DiscardFile dirrpt/REP1HR.dsc, Append
DiscardRollover at 02:00 ON SUNDAY

Map HR.COUNTRIES, Target HR.COUNTRIES ;
Map HR.DEPARTMENTS, Target HR.DEPARTMENTS ;
Map HR.EMPLOYEES, Target HR.EMPLOYEES ;
Map HR.JOBS, Target HR.JOBS ;
Map HR.JOB_HISTORY, Target HR.JOB_HISTORY ;
Map HR.LOCATIONS, Target HR.LOCATIONS ;
Map HR.REGIONS, Target HR.REGIONS ;

This replicat was registered using an obey file that contained the following commands:

dblogin USERIDALIAS ggadm
add checkpointtable ggadmin.rep1hr_chkpt
add replicat REP1HR, exttrail dirdat/ra, checkpointtable ggadmin.rep1hr_chkpt

All of the GoldenGate processes were started and test transactions run. Once GoldenGate was verified to be running correctly it was time to set up the 12.2 heartbeat table and see if it worked.

Setting up the Built-In Heartbeat Table

The heartbeat table was set up first on the target system and then on the source; the results are shown here:

ADD HEARTBEATTABLE on target

GGSCI > add heartbeattable

2016-02-18 10:00:13 INFO OGG-14001 Successfully created heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-02-18 10:00:13 INFO OGG-14032 Successfully added supplemental logging for heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-02-18 10:00:13 INFO OGG-14000 Successfully created heartbeat table ["GG_HEARTBEAT"].
2016-02-18 10:00:13 INFO OGG-14033 Successfully added supplemental logging for heartbeat table ["GG_HEARTBEAT"].
2016-02-18 10:00:13 INFO OGG-14016 Successfully created heartbeat history table ["GG_HEARTBEAT_HISTORY"].
2016-02-18 10:00:13 INFO OGG-14023 Successfully created heartbeat lag view ["GG_LAG"].
2016-02-18 10:00:13 INFO OGG-14024 Successfully created heartbeat lag history view ["GG_LAG_HISTORY"].
2016-02-18 10:00:13 INFO OGG-14003 Successfully populated heartbeat seed table with [ORCLS].
2016-02-18 10:00:13 INFO OGG-14004 Successfully created procedure ["GG_UPDATE_HB_TAB"] to update the heartbeat tables.
2016-02-18 10:00:13 INFO OGG-14017 Successfully created procedure ["GG_PURGE_HB_TAB"] to purge the heartbeat history table.
2016-02-18 10:00:13 INFO OGG-14005 Successfully created scheduler job ["GG_UPDATE_HEARTBEATS"] to update the heartbeat tables.
2016-02-18 10:00:13 INFO OGG-14018 Successfully created scheduler job ["GG_PURGE_HEARTBEATS"] to purge the heartbeat history table.

ADD HEARTBEATTABLE on source

GGSCI > add heartbeattable

2016-02-18 10:02:36 INFO OGG-14001 Successfully created heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-02-18 10:02:37 INFO OGG-14032 Successfully added supplemental logging for heartbeat seed table ["GG_HEARTBEAT_SEED"].
2016-02-18 10:02:37 INFO OGG-14000 Successfully created heartbeat table ["GG_HEARTBEAT"].
2016-02-18 10:02:37 INFO OGG-14033 Successfully added supplemental logging for heartbeat table ["GG_HEARTBEAT"].
2016-02-18 10:02:37 INFO OGG-14016 Successfully created heartbeat history table ["GG_HEARTBEAT_HISTORY"].
2016-02-18 10:02:37 INFO OGG-14023 Successfully created heartbeat lag view ["GG_LAG"].
2016-02-18 10:02:37 INFO OGG-14024 Successfully created heartbeat lag history view ["GG_LAG_HISTORY"].
2016-02-18 10:02:37 INFO OGG-14003 Successfully populated heartbeat seed table with [ORCLP].
2016-02-18 10:02:37 INFO OGG-14004 Successfully created procedure ["GG_UPDATE_HB_TAB"] to update the heartbeat tables.
2016-02-18 10:02:37 INFO OGG-14017 Successfully created procedure ["GG_PURGE_HB_TAB"] to purge the heartbeat history table.
2016-02-18 10:02:37 INFO OGG-14005 Successfully created scheduler job ["GG_UPDATE_HEARTBEATS"] to update the heartbeat tables.
2016-02-18 10:02:37 INFO OGG-14018 Successfully created scheduler job ["GG_PURGE_HEARTBEATS"] to purge the heartbeat history table.

Once the heartbeat table was created it should have been an easy matter to go to the target system and query the ggadmin.gg_heartbeat and ggadmin.gg_heartbeat_history tables to see the automated heartbeats. Unfortunately at this point there were no rows in these tables. It would take a little bit of investigation in order to determine what the issue was.

Debugging the Issue

Since there are a number of parts to the heartbeat mechanism, including the extract, pump and replicat, I had to decide where to start. I knew that the heartbeat mechanism took advantage of the replication that was already configured in order to move its heartbeat information from the source to the target, using GoldenGate replication itself.

In order to see if anything was moving in the trail file I used the GoldenGate stats command against the extract. I only saw the transactions that I had run for my own testing of the replication. This led me to believe that it was a problem at the source side. I also ran logdump and looked at the trail files and I saw no “heartbeat” records in the trail.

I then looked at the database scheduler to see if the GG_UPDATE_HEARTBEATS job was running. It was, and it was updating the HEARTBEAT_TIMESTAMP column of GG_HEARTBEAT_SEED. So the scheduler job was running and the column was being updated, yet nothing appeared in the trail file; it was most likely an issue with replication.

Looking back at the extract parameter file, it became apparent that this might be related to the TRANLOGOPTIONS EXCLUDEUSER ggadmin parameter. So I commented out that parameter, and suddenly the GG_HEARTBEAT and GG_HEARTBEAT_HISTORY tables began populating on the target side. In addition, after a while a stats command against the extract showed updates to the GG_HEARTBEAT_SEED table.

*** Daily statistics since 2016-02-18 00:00:00 ***

Total inserts      0.00
Total updates    603.00
Total deletes      0.00
Total discards     0.00
Total operations 603.00

In addition the gg_lag_history view showed the data that I was looking for:

column heartbeat_received_ts format a30
column incoming_path format a40
column incoming_lag format 9.99999999

select heartbeat_received_ts, incoming_path, incoming_lag from gg_lag_history;

18-FEB-16 11.41.41.893670 AM EXT1HR ==> PUMP1HR ==> REP1HR 4.868161000
18-FEB-16 11.42.42.944752 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.925239000
18-FEB-16 11.43.41.993427 AM EXT1HR ==> PUMP1HR ==> REP1HR 4.970112000
18-FEB-16 11.44.42.041501 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.021219000
18-FEB-16 11.45.43.091402 AM EXT1HR ==> PUMP1HR ==> REP1HR 6.065255000
18-FEB-16 11.46.42.140396 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.121159000
18-FEB-16 11.47.41.189519 AM EXT1HR ==> PUMP1HR ==> REP1HR 4.136698000
18-FEB-16 11.48.43.240700 AM EXT1HR ==> PUMP1HR ==> REP1HR 6.215541000
18-FEB-16 11.49.43.289275 AM EXT1HR ==> PUMP1HR ==> REP1HR 6.268965000
18-FEB-16 11.50.42.338042 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.312789000
18-FEB-16 11.51.42.386426 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.368242000
18-FEB-16 11.52.42.435722 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.412282000
18-FEB-16 11.53.43.487633 AM EXT1HR ==> PUMP1HR ==> REP1HR 6.470509000
18-FEB-16 11.54.42.538333 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.523112000
18-FEB-16 11.55.42.588896 AM EXT1HR ==> PUMP1HR ==> REP1HR 5.563893000

This was now successful.

Problem Recap

Because I was setting up for bi-directional replication and using the ggadmin user for both source and target, I had configured the excludeuser parameter. This keeps GoldenGate from re-replicating replicated transactions by ignoring transactions submitted by the ggadmin user. This was fine for normal transactions, but I didn't expect that the heartbeat transactions would be excluded as well.

Excluding ggadmin caused updates to the GG_HEARTBEAT_SEED table to not be replicated; in other words, you cannot exclude the heartbeat user in the extract. The heartbeat schema is defined by the GGSCHEMA parameter in the GLOBALS file, and the GGSCHEMA parameter also defines the schema for DDL replication. This causes a bit of a problem for bi-directional replication when you want to use the ggadmin user for both extract and replicat.

Solutions

I thought about a number of different solutions to this problem and consulted some of my colleagues. We decided that the best approach to this problem was to simply use a different Oracle database user for extract and replicat. This would allow us to still maintain the same GGSCHEMA user for heartbeat and DDL replication. The new user account used for the replicat would be excluded in the TRANLOGOPTIONS EXCLUDEUSER parameter and everything should work well.

This would allow the ggadmin user at the source to submit both DDL and heartbeats, since you can only have one setting for GGSCHEMA, which both must use. The different user that is used for replicating back to the source will be excluded via TRANLOGOPTIONS EXCLUDEUSER.
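
In other words, the extract side would end up looking something like this sketch, where ggrep is a hypothetical user reserved for the replicat applying data back from the target:

-- ggadmin remains the GGSCHEMA user and is NOT excluded;
-- only the hypothetical replicat user ggrep is filtered out
Extract EXT1HR
USERIDALIAS ggadm
DDL INCLUDE MAPPED
TRANLOGOPTIONS EXCLUDEUSER ggrep
ExtTrail dirdat/la
Table HR.*;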

I really would like to see the GoldenGate developers take a look at this and internally exempt replication of the GoldenGate heartbeat tables from the excludeuser option.

Managing Heartbeat Data

As seen above the heartbeat table is created via the ADD HEARTBEATTABLE command within ggsci. By default a heartbeat is generated every minute, retained for 30 days then purged. The frequency of the heartbeat, the history retention and how often the purge process runs is configurable. This is done via the ALTER HEARTBEATTABLE command.
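
For example, a sketch of tuning those defaults from GGSCI; the values here (a 10-second heartbeat, 7 days of retention, a daily purge) are purely illustrative:

GGSCI > ALTER HEARTBEATTABLE, FREQUENCY 10, RETENTION_TIME 7, PURGE_FREQUENCY 1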

Viewing Heartbeat Data

Viewing the heartbeat data is done via the two heartbeat views, GG_LAG and GG_LAG_HISTORY. These views provide information on lag for each ext -> pump -> replicat path that is configured. This information, as well as the history, is valuable for monitoring the performance of the GoldenGate configuration.

I have implemented viewing these tables via the following scripts. The scripts and output are shown here:

lag.sql

col local_database format a10
col current_local_ts format a30
col remote_database format a10
col incoming_path format a30
col incoming_lag format 999,999.999999

select local_database, current_local_ts, remote_database, incoming_path, incoming_lag from gg_lag;

Output of lag.sql

TARGET on GG16B:ggadmin > @lag

LOCAL_DATA CURRENT_LOCAL_TS               REMOTE_DAT INCOMING_PATH                      INCOMING_LAG
---------- ------------------------------ ---------- ------------------------------ ---------------
ORCLS      19-FEB-16 08.23.40.715171 PM   ORCLP      EXT1HR ==> PUMP1HR ==> REP1HR         5.848900

Lag_history.sql
set pagesize 100
col local_database format a10
col heartbeat_received_ts format a30
col remote_database format a10
col incoming_path format a32
col incoming_lag format 999,999.999999

select local_database, heartbeat_received_ts, remote_database, incoming_path, incoming_lag from gg_lag_history;

Output of lag_history.sql

ORCLS 19-FEB-16 08.30.40.678817 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 3.656291
ORCLS 19-FEB-16 08.31.41.702019 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.681604
ORCLS 19-FEB-16 08.32.41.724873 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.700153
ORCLS 19-FEB-16 08.33.41.747616 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.729306
ORCLS 19-FEB-16 08.34.41.770055 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.748563
ORCLS 19-FEB-16 08.35.41.793370 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.769183
ORCLS 19-FEB-16 08.36.41.816100 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.798405
ORCLS 19-FEB-16 08.37.41.839139 PM ORCLP EXT1HR ==> PUMP1HR ==> REP1HR 4.817937

The output of the lag history can be used to monitor lags over long periods of time and be used for alerting and monitoring. The lag history can be imported into a spreadsheet and graphed as well.
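
A simple way to produce that spreadsheet-ready extract is to spool the history view to CSV from SQL*Plus; a minimal sketch against the same gg_lag_history view:

set colsep ','
set pagesize 0
set feedback off
spool lag_history.csv
select heartbeat_received_ts, incoming_path, incoming_lag from gg_lag_history order by heartbeat_received_ts;
spool off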

[graph: GoldenGate lag history plotted from the view output]


GoldenGate 12.2 Big Data Adapters: part 1 – HDFS

https://www.pythian.com/blog/goldengate-12-2-big-data-adapters-part-1-hdfs/

by Gleb Otochkin, February 29, 2016
Posted in: Big Data, Hadoop, Oracle, Technical Track
Tags: Big Data, Data Integration, dba, GoldenGate, Hadoop, HDFS, Oracle

December 2015 brought us a new version of GoldenGate, and a new version of the Big Data adapters for GoldenGate. Let's have a look at what we have now and how it works. I am going to start with the HDFS adapter.

As a first step, we need to prepare our source database for replication. It becomes easier with every new GoldenGate version. We will need to perform several steps:
a) Enable archive logging on our database. This particular step requires downtime.
orcl> alter database mount;

Database altered.

orcl> alter database archivelog;

Database altered.

orcl> alter database open;

b) Enable force logging and minimal supplemental logging. No need to shut down the database for this.

orcl> alter database add supplemental log data;

Database altered.

orcl> alter database force logging;

Database altered.

orcl> SELECT supplemental_log_data_min, force_logging FROM v$database;

SUPPLEME FORCE_LOGGING
-------- ---------------------------------------
YES YES

c) Switch parameter “enable_goldengate_replication” to “TRUE”. Can be done online.

orcl> alter system set enable_goldengate_replication=true sid='*' scope=both;

System altered.

orcl>

And we are almost done. Now we can create a schema for a GoldenGate administrator and grant the required privileges. I've just granted the DBA role to the user to simplify the process; in any case you will need it for integrated capture. For a production installation I advise you to have a look at the documentation to verify the necessary privileges and roles.

orcl> create user ogg identified by welcome1 default tablespace users temporary tablespace temp;
orcl> grant connect, dba to ogg;

Let's create a test schema to be replicated. We will call the schema ggtest on the source, and I will name the destination schema bdtest. This will also let us check how mapping works in our replication.

orcl> create tablespace ggtest; -- optional step
orcl> create user ggtest identified by welcome1 default tablespace ggtest temporary tablespace temp;
orcl> grant connect, resource to ggtest;

Everything is ready on our source database for the replication.
Now we install Oracle GoldenGate for Oracle on our database server. We can get the software from the Oracle site on the download page, in the Middleware section, under GoldenGate, Oracle GoldenGate for Oracle databases. We are going to use version 12.2.0.1.1 of the software. The installation is easy: you unzip the software and run the installer, which will guide you through a couple of simple steps. The installer will unpack the software to the destination location, create subdirectories, and register GoldenGate in the Oracle global registry.

[oracle@sandbox distr]$ unzip fbo_ggs_Linux_x64_shiphome.zip
[oracle@sandbox distr]$ cd fbo_ggs_Linux_x64_shiphome/Disk1/
[oracle@sandbox Disk1]$ ./runInstaller

We continue by setting up parameters for Oracle GoldenGate (OGG) manager and starting it up. You can see that I’ve used a default blowfish encryption for the password. In a production environment you may consider another encryption like AES256. I’ve also used a non-default port for the manager since I have more than one GoldenGate installation on my test sandbox.

[oracle@sandbox ~]$ export OGG_HOME=/u01/oggora
[oracle@sandbox ~]$ cd $OGG_HOME
[oracle@sandbox oggora]$ ./ggsci

GGSCI (sandbox.localdomain) 1> encrypt password welcome1 BLOWFISH ENCRYPTKEY DEFAULT
Using Blowfish encryption with DEFAULT key.
Encrypted password: AACAAAAAAAAAAAIARIXFKCQBMFIGFARA
Algorithm used: BLOWFISH

GGSCI (sandbox.localdomain) 2> edit params mgr
PORT 7829
userid ogg@orcl,password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
purgeoldextracts /u01/oggora/dirdat/*, usecheckpoints

GGSCI (sandbox.localdomain) 3> start manager
Manager started.

Let's prepare everything for the initial load, and later the online replication.
I've decided to use a GoldenGate initial-load extract for the initial load, for the sake of consistency of the resulting dataset on Hadoop.
I prepared the parameter file to replicate my ggtest schema and upload all data to the trail file on the remote site. I've used a minimum number of options for all my processes, providing only the handful of parameters required for replication. Extract options are a subject deserving a dedicated blog post. Here is my simple initial-load extract:

[oracle@sandbox oggora]$ cat /u01/oggora/dirprm/ini_ext.prm
SOURCEISTABLE
userid ogg@orcl,password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST sandbox, MGRPORT 7839
RMTFILE /u01/oggbd/dirdat/initld, MEGABYTES 2, PURGE
--DDL include objname ggtest.*
TABLE ggtest.*;

Then we run the initial load extract in passive mode; it will create a trail file with the data. The trail file will be used later for our initial load on the target side.

[oracle@sandbox oggora]$ ./extract paramfile dirprm/ini_ext.prm reportfile dirrpt/ini_ext.rpt
[oracle@sandbox oggora]$ ll /u01/oggbd/dirdat/initld*
-rw-r-----. 1 oracle oinstall 3028 Feb 16 14:17 /u01/oggbd/dirdat/initld
[oracle@sandbox oggora]$

We can also prepare our online extract on the source site. I haven't used a datapump in my configuration, limiting the topology to the simplest, straightforward extract-to-replicat configuration. Of course, in any production configuration I would advise using a datapump on the source for staging our data; a sketch of one follows the extract registration below.
Here are my extract parameters, and how I added the extract. I am not starting it yet because I must have an Oracle GoldenGate manager running on the target, and the directory for the trail file should be created. You may have guessed that the Big Data GoldenGate will be located in the /u01/oggbd directory.

[oracle@sandbox oggora]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.1 OGGCORE_12.2.0.1.0_PLATFORMS_151211.1401_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Dec 12 2015 02:56:48
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (sandbox.localdomain) 1> edit params ggext

extract ggext
userid ogg@orcl,password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST sandbox, MGRPORT 7839
RMTFILE /u01/oggbd/dirdat/or, MEGABYTES 2, PURGE
DDL include objname ggtest.*
TABLE ggtest.*;

GGSCI (sandbox.localdomain) 2> dblogin userid ogg@orcl,password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
Successfully logged into database.

GGSCI (sandbox.localdomain as ogg@orcl) 3> register extract GGEXT database

2016-02-16 15:37:21 INFO OGG-02003 Extract GGEXT successfully registered with database at SCN 17151616.

GGSCI (sandbox.localdomain as ogg@orcl) 4> add extract ggext, INTEGRATED TRANLOG, BEGIN NOW
EXTRACT (Integrated) added.
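
For completeness, a datapump for this topology, as advised above for production, might look something like this sketch (the group name and trail locations are illustrative; the primary extract would then write to a local trail instead of the RMTFILE above):

Extract PGGEXT
PASSTHRU
RMTHOST sandbox, MGRPORT 7839
RMTTRAIL /u01/oggbd/dirdat/pr
TABLE ggtest.*;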

Let's leave our source site for a while and switch to the target. Our target is going to be a box where we have a Hadoop client and all the required Java classes.
I used the same box just to save resources in my sandbox environment; you may run different GoldenGate versions on the same box provided that the Manager ports for each of them are different.
Essentially we need a Hadoop client on the box which can connect to HDFS and write data there. Installation of the Hadoop client is out of scope for this article, but you can easily get all the necessary information from the Hadoop home page.
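
Before going further, a quick sanity check that this client can actually reach HDFS is worthwhile; a minimal sketch, assuming the hadoop binary is on the PATH and the cluster is up:

hadoop fs -mkdir -p /user/oracle/gg   # the rootFilePath we will point the adapter at below
hadoop fs -ls /user/oracle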

Having all the required Hadoop classes, we continue by installing Oracle GoldenGate for Big Data, configuring and starting it up. In the past I received several questions from people struggling to find the exact place where all the adapters could be downloaded. The adapters were well "hidden" on Oracle eDelivery, but now it is way simpler. You go to the GoldenGate download page on the Oracle site and find the section "Oracle GoldenGate for Big Data 12.2.0.1.0", where you can choose the OGG for Linux x86_64, Windows or Solaris. You will need an Oracle account to get it. We upload the file to our Linux box, unzip and unpack the tar archive. I created a directory /u01/oggbd as our GoldenGate home and unpacked the tar archive there.
The next step is to create all necessary directories. We start command line utility and create all subdirectories.

[oracle@sandbox ~]$ cd /u01/oggbd/
[oracle@sandbox oggbd]$ ./ggsci

Oracle GoldenGate Command Interpreter
Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2
Linux, x64, 64bit (optimized), Generic on Nov 10 2015 16:18:12
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.

GGSCI (sandbox.localdomain) 1> create subdirs

Creating subdirectories under current directory /u01/oggbd

Parameter files /u01/oggbd/dirprm: created
Report files /u01/oggbd/dirrpt: created
Checkpoint files /u01/oggbd/dirchk: created
Process status files /u01/oggbd/dirpcs: created
SQL script files /u01/oggbd/dirsql: created
Database definitions files /u01/oggbd/dirdef: created
Extract data files /u01/oggbd/dirdat: created
Temporary files /u01/oggbd/dirtmp: created
Credential store files /u01/oggbd/dircrd: created
Masterkey wallet files /u01/oggbd/dirwlt: created
Dump files /u01/oggbd/dirdmp: created

GGSCI (sandbox.localdomain) 2>

We are changing the port for our manager process from the default and starting it up. I've already mentioned that the port was changed due to the existence of several GoldenGate managers running from different directories.

GGSCI (sandbox.localdomain) 2> edit params mgr
PORT 7839
…..

GGSCI (sandbox.localdomain) 3> start manager
Manager started.

Now we have to prepare parameter files for our replicat processes. Let’s assume the environment variable OGGHOME represents the GoldenGate home and in our case it is going to be /u01/oggbd.
Examples for the parameter files can be taken from $OGGHOME/AdapterExamples/big-data directories. There you will find examples for flume, kafka, hdfs, hbase and for metadata providers. Today we are going to work with HDFS adapter.
I copied files to my parameter files directory ($OGGHOME/dirprm) and modified them accordingly:

oracle@sandbox oggbd]$ cp /u01/oggbd/AdapterExamples/big-data/hdfs/* /u01/oggbd/dirprm/
oracle@sandbox oggbd]$ vi /u01/oggbd/dirprm/hdfs.props

Here are my values for the hdfs.props file:


[oracle@bigdata dirprm]$ cat hdfs.props

gg.handlerlist=hdfs

gg.handler.hdfs.type=hdfs
gg.handler.hdfs.includeTokens=false
gg.handler.hdfs.maxFileSize=1g
gg.handler.hdfs.rootFilePath=/user/oracle/gg
gg.handler.hdfs.fileRollInterval=0
gg.handler.hdfs.inactivityRollInterval=0
gg.handler.hdfs.fileSuffix=.txt
gg.handler.hdfs.partitionByTable=true
gg.handler.hdfs.rollOnMetadataChange=true
gg.handler.hdfs.authType=none
gg.handler.hdfs.format=delimitedtext
#gg.handler.hdfs.format.includeColumnNames=true

gg.handler.hdfs.mode=tx

goldengate.userexit.timestamp=utc
goldengate.userexit.writers=javawriter
javawriter.stats.display=TRUE
javawriter.stats.full=TRUE

gg.log=log4j
gg.log.level=INFO

gg.report.time=30sec

gg.classpath=/usr/lib/hadoop/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/etc/hadoop/:/usr/lib/hadoop/lib/native/*

javawriter.bootoptions=-Xmx512m -Xms32m -Djava.class.path=ggjava/ggjava.jar

You can find information about all those parameters in the Oracle documentation, but here are the ones you will most likely need to change from the defaults:

gg.handler.hdfs.rootFilePath – tells the adapter where the directories and files are to be written on HDFS.
gg.handler.hdfs.format – lets you set one of the four formats supported by the adapter.
goldengate.userexit.timestamp – depends on your preference for the transaction timestamps written to your HDFS files.
gg.classpath – depends on the location of your Hadoop jar classes and native libraries.

You can see I've mentioned the gg.handler.hdfs.format.includeColumnNames parameter. It is supposed to put the column name before each value in the output file on HDFS. It may be helpful in some cases, but at the same time it makes the file bigger. If you are planning to create an external Hive table, you may consider leaving it commented out, as I have.
The next parameter file is for our data initialization replicat. You may consider using Sqoop or another method for the initial load of your tables, but I think it makes sense to use a GG replicat if the table size is relatively small. It makes the resulting file-set more consistent with the following replication, since it will be using the same engine and format. So, here is my replicat for the initial load:

[oracle@sandbox dirprm]$ cat /u01/oggbd/dirprm/irhdfs.prm
--passive REPLICAT for initial load irhdfs
-- Trail file for this example is located in "dirdat/initld"
-- Command to run REPLICAT:
-- ./replicat paramfile dirprm/irhdfs.prm reportfile dirrpt/ini_rhdfs.rpt
SPECIALRUN
END RUNTIME
EXTFILE /u01/oggbd/dirdat/initld
--DDLERROR default discard
setenv HADOOP_COMMON_LIB_NATIVE_DIR=/usr/lib/hadoop/lib/native
DDL include all
TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;

I was running the initial load in passive mode, without creating a managed process, just running it from the command line. Here is an example:

[oracle@sandbox oggbd]$ ./replicat paramfile dirprm/irhdfs.prm reportfile dirrpt/ini_rhdfs.rpt
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/gg/
Found 2 items
drwxr-xr-x - oracle oracle 0 2016-02-16 14:37 /user/oracle/gg/bdtest.test_tab_1
drwxr-xr-x - oracle oracle 0 2016-02-16 14:37 /user/oracle/gg/bdtest.test_tab_2
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/gg/bdtest.test_tab_1
Found 1 items
-rw-r--r-- 1 oracle oracle 624 2016-02-16 14:37 /user/oracle/gg/bdtest.test_tab_1/bdtest.test_tab_1_2016-02-16_14-37-43.376.txt
[oracle@sandbox oggbd]$ hadoop fs -tail /user/oracle/gg/bdtest.test_tab_1/bdtest.test_tab_1_2016-02-16_14-37-43.376.txt
IBDTEST.TEST_TAB_12016-02-16 19:17:40.7466992016-02-16T14:37:43.37300000000000-100000020121371O62FX2014-01-24:19:09:20RJ68QYM52014-01-22:12:14:30
IBDTEST.TEST_TAB_12016-02-16 19:17:40.7466992016-02-16T14:37:44.89600000000000-100000021552371O62FX2014-01-24:19:09:20HW82LI732014-05-11:05:23:23
IBDTEST.TEST_TAB_12016-02-16 19:17:40.7466992016-02-16T14:37:44.89600100000000-100000022983RXZT5VUN2013-09-04:23:32:56RJ68QYM52014-01-22:12:14:30
IBDTEST.TEST_TAB_12016-02-16 19:17:40.7466992016-02-16T14:37:44.89600200000000-100000024414RXZT5VUN2013-09-04:23:32:56HW82LI732014-05-11:05:23:23
[oracle@sandbox oggbd]$

You can see the Hadoop directories and files created by the initial load.
As soon as the initial load has run, we can start our extract and replicat to keep the destination side updated.
We move to the source and start the extract prepared earlier.

GGSCI (sandbox.localdomain as ogg@orcl) 6>start extract ggext

Sending START request to MANAGER …
EXTRACT GGEXT starting

So, moving back to the target and preparing our replicat. I used the replicat with the following parameters:

[oracle@sandbox oggbd]$ cat /u01/oggbd/dirprm/rhdfs.prm
REPLICAT rhdfs
-- Trail file for this example is located in "dirdat/or" directory
-- Command to add REPLICAT
-- add replicat rhdfs, exttrail dirdat/or
--DDLERROR default discard
setenv HADOOP_COMMON_LIB_NATIVE_DIR=/usr/lib/hadoop/lib/native
DDL include all
TARGETDB LIBFILE libggjava.so SET property=dirprm/hdfs.props
REPORTCOUNT EVERY 1 MINUTES, RATE
GROUPTRANSOPS 10000
MAP ggtest.*, TARGET bdtest.*;

We are adding the replicat to our configuration and it is going to carry on the replication.

GGSCI (sandbox.localdomain) 1> add replicat rhdfs, exttrail dirdat/or
REPLICAT added.

GGSCI (sandbox.localdomain) 2> start replicat rhdfs

Sending START request to MANAGER …
REPLICAT RHDFS starting

GGSCI (sandbox.localdomain) 3>

Our replication is up and running, the initial load worked fine, and we can test and see what we have on source and target.
Here is the data in one of our source tables:

orcl> select * from ggtest.test_tab_1;

PK_ID RND_STR USE_DATE RND_STR_1 ACC_DATE
---------------- ---------- ----------------- ---------- -----------------
1 371O62FX 01/24/14 19:09:20 RJ68QYM5 01/22/14 12:14:30
2 371O62FX 01/24/14 19:09:20 HW82LI73 05/11/14 05:23:23
3 RXZT5VUN 09/04/13 23:32:56 RJ68QYM5 01/22/14 12:14:30
4 RXZT5VUN 09/04/13 23:32:56 HW82LI73 05/11/14 05:23:23

orcl>

I've created an external Hive table for the table test_tab_1 to have a better look.

hive> CREATE EXTERNAL TABLE BDTEST.TEST_TAB_1 (tran_flag string, tab_name string, tran_time_utc timestamp, tran_time_loc string,something string, something1 string,
> PK_ID INT, RND_STR VARCHAR(10),USE_DATE string,RND_STR_1 string, ACC_DATE string)
> stored as textfile location '/user/oracle/gg/bdtest.test_tab_1';
OK
Time taken: 0.327 seconds
hive> select * from BDTEST.TEST_TAB_1;
OK
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:43.373000 00000000-10000002012 1 371O62FX 2014-01-24:19:09:20 RJ68QYM5 2014-01-22:12:14:30
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896000 00000000-10000002155 2 371O62FX 2014-01-24:19:09:20 HW82LI73 2014-05-11:05:23:23
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896001 00000000-10000002298 3 RXZT5VUN 2013-09-04:23:32:56 RJ68QYM5 2014-01-22:12:14:30
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896002 00000000-10000002441 4 RXZT5VUN 2013-09-04:23:32:56 HW82LI73 2014-05-11:05:23:23
Time taken: 0.155 seconds, Fetched: 4 row(s)
hive>

You can see the table definition is a bit different from what we have on the source site, and you will see why: we got additional columns on the destination side. Interestingly, while some of them have a pretty clear purpose, others are not totally clear and have null values.
The first column is a flag for the operation, showing what kind of operation we have gotten in this row: it can be "I" for insert, "D" for delete and "U" for update. The second column is the table name. The third one is a timestamp in UTC showing when the transaction occurred. The next one is another timestamp, in the local timezone, giving the commit time, and the next column holds a commit sequence number. Those columns can help you to construct the proper data set for any given time.
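
As an example of using those columns, here is a hedged HiveQL sketch that reconstructs the latest image of each key from the change stream, using the column names from the external table DDL above (I'm assuming the commit sequence number landed in the column I called something1; verify against your own data):

-- latest row version per primary key, skipping keys whose last change was a delete
SELECT pk_id, rnd_str, use_date, rnd_str_1, acc_date
FROM (
  SELECT t.*,
         row_number() OVER (PARTITION BY pk_id
                            ORDER BY tran_time_utc DESC, something1 DESC) rn
  FROM BDTEST.TEST_TAB_1 t
) x
WHERE rn = 1
  AND tran_flag <> 'D';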
Let’s insert and update some row(s) on source and see how it will show up on the target:

orcl> insert into ggtest.test_tab_1 values (5,'TEST_1',sysdate,'TEST_1',sysdate);

1 row created.

orcl> commit;

orcl> update ggtest.test_tab_1 set RND_STR='TEST_1_1' where PK_ID=5;

1 row updated.

orcl> commit;

Commit complete.

orcl>

Let’s check how it is reflected on the destination site. We see two new rows, where each row represents a DML operation. One was for the insert and the second one was for the update.

hive> select * from BDTEST.TEST_TAB_1;
OK
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:43.373000 00000000-10000002012 1 371O62FX 2014-01-24:19:09:20 RJ68QYM5 2014-01-22:12:14:30
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896000 00000000-10000002155 2 371O62FX 2014-01-24:19:09:20 HW82LI73 2014-05-11:05:23:23
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896001 00000000-10000002298 3 RXZT5VUN 2013-09-04:23:32:56 RJ68QYM5 2014-01-22:12:14:30
I BDTEST.TEST_TAB_1 2016-02-16 19:17:40.746699 2016-02-16T14:37:44.896002 00000000-10000002441 4 RXZT5VUN 2013-09-04:23:32:56 HW82LI73 2014-05-11:05:23:23
I BDTEST.TEST_TAB_1 2016-02-16 20:43:32.000231 2016-02-16T15:43:37.199000 00000000000000002041 5 TEST_1 2016-02-16:15:43:25 TEST_1 2016-02-16:15:43:25
U BDTEST.TEST_TAB_1 2016-02-16 20:43:53.000233 2016-02-16T15:43:56.056000 00000000000000002243 5 TEST_1_1
Time taken: 2.661 seconds, Fetched: 6 row(s)
hive>

It works for deletes too; the flag will simply be "D" instead of "I" for inserts or "U" for updates.
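
As a quick sketch (output omitted), a delete on the source such as:

orcl> delete from ggtest.test_tab_1 where pk_id = 5;
orcl> commit;

would produce one more record in HDFS for that key, this time carrying the "D" flag.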

What about DDL support? Let’s truncate the table.

orcl> truncate table ggtest.test_tab_1;

Table truncated.

orcl>

And here, there is nothing in our HDFS files. Maybe I’ve missed something, but it looks like the truncate operation does not create any record. I need to dig a bit more; I will try to write a separate blog post about DDL support for the Big Data Adapters.
It works pretty well when we create a new table and insert new rows.

It also works if we change one of the existing tables, adding or dropping a column. Let’s try to create a new table here:

orcl> create table ggtest.test_tab_4 (pk_id number, rnd_str_1 varchar2(10),acc_date date);

Table created.

orcl> insert into ggtest.test_tab_4 select * from ggtest.test_tab_2;

1 row created.

orcl> commit;

Commit complete.

orcl>

You can see that it has created a new directory and file for the new table. Additionally, if you add a column, a new file will be used for all subsequent DML operations on the altered table. This helps to separate data for tables with different structures.

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/gg/
Found 3 items
drwxr-xr-x - oracle oracle 0 2016-02-16 15:43 /user/oracle/gg/bdtest.test_tab_1
drwxr-xr-x - oracle oracle 0 2016-02-16 14:37 /user/oracle/gg/bdtest.test_tab_2
drwxr-xr-x - oracle oracle 0 2016-02-16 15:56 /user/oracle/gg/bdtest.test_tab_4
[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/gg/bdtest.test_tab_4/
Found 1 items
-rw-r--r-- 1 oracle oracle 127 2016-02-16 15:56 /user/oracle/gg/bdtest.test_tab_4/bdtest.test_tab_4_2016-02-16_15-56-50.373.txt
[oracle@sandbox oggbd]$ hadoop fs -tail /user/oracle/gg/bdtest.test_tab_4/bdtest.test_tab_4_2016-02-16_15-56-50.373.txt
IBDTEST.TEST_TAB_42016-02-16 20:56:47.0009532016-02-16T15:56:50.371000000000000000000068327IJWQRO7T2013-07-07:08:13:52
[oracle@sandbox oggbd]$

At first glance it looks good, but let’s try to create a table as select.

orcl> create table ggtest.test_tab_3 as select * from ggtest.test_tab_2;

Table created.

orcl>

Not sure if it is expected behavior or a bug, but apparently it is not working: our Replicat is broken and complains that "DDL Replication is not supported for this database".

[oracle@sandbox oggbd]$ tail -5 ggserr.log
2016-02-16 15:43:37 INFO OGG-02756 Oracle GoldenGate Delivery, rhdfs.prm: The definition for table GGTEST.TEST_TAB_1 is obtained from the trail file.
2016-02-16 15:43:37 INFO OGG-06511 Oracle GoldenGate Delivery, rhdfs.prm: Using following columns in default map by name: PK_ID, RND_STR, USE_DATE, RND_STR_1, ACC_DATE.
2016-02-16 15:43:37 INFO OGG-06510 Oracle GoldenGate Delivery, rhdfs.prm: Using the following key columns for target table bdtest.TEST_TAB_1: PK_ID.
2016-02-16 15:48:49 ERROR OGG-00453 Oracle GoldenGate Delivery, rhdfs.prm: DDL Replication is not supported for this database.
2016-02-16 15:48:49 ERROR OGG-01668 Oracle GoldenGate Delivery, rhdfs.prm: PROCESS ABENDING.
[oracle@sandbox oggbd]$

[oracle@sandbox oggbd]$ hadoop fs -ls /user/oracle/gg/
Found 2 items
drwxr-xr-x - oracle oracle 0 2016-02-16 15:43 /user/oracle/gg/bdtest.test_tab_1
drwxr-xr-x - oracle oracle 0 2016-02-16 14:37 /user/oracle/gg/bdtest.test_tab_2
[oracle@sandbox oggbd]$

What can we say in summary? The replication works and supports all DML and some DDL commands. To build consistent datasets for any given point in time you will need to use the operation flag and the transaction time, as shown above. In my next few posts, I will cover the other Big Data adapters for GoldenGate.


Installing Oracle GoldenGate for Oracle 12.1.2 on Linux EL 6.x/RHEL 6.x/CentOS 6.x

http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/12c/OGG12c_Installation/index.html


Automating Oracle GoldenGate 11g

http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/goldengate/11g/ogg_automate/index.html


New GoldenGate 12c Views

Oracle 12c GoldenGate View

Show data dictionary views and x$ tables matching the expression "v$GG"...
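
A sketch of producing such a listing yourself, using the DICTIONARY view (which maps each view name to its comment):

select table_name, comments
from dictionary
where table_name like 'V$GG%' or table_name like 'GV$GG%' or table_name like '%GOLDENGATE%';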

TABLE_NAME COMMENTS
—————————— ——————————————————————————–
V$GG_APPLY_COORDINATOR Synonym for V_$GG_APPLY_COORDINATOR
V$GG_APPLY_READER Synonym for V_$GG_APPLY_READER
V$GG_APPLY_RECEIVER Synonym for V_$GG_APPLY_RECEIVER
V$GG_APPLY_SERVER Synonym for V_$GG_APPLY_SERVER
GV$GG_APPLY_COORDINATOR Synonym for GV_$GG_APPLY_COORDINATOR
GV$GG_APPLY_READER Synonym for GV_$GG_APPLY_READER
GV$GG_APPLY_RECEIVER Synonym for GV_$GG_APPLY_RECEIVER
GV$GG_APPLY_SERVER Synonym for GV_$GG_APPLY_SERVER

8 rows selected.

TABLE_NAME USED_IN
—————————— ——————————
GV$GG_APPLY_RECEIVER V$GG_APPLY_RECEIVER
V$GG_APPLY_RECEIVER V$GG_APPLY_RECEIVER
GV$GG_APPLY_COORDINATOR V$GG_APPLY_COORDINATOR
V$GG_APPLY_COORDINATOR V$GG_APPLY_COORDINATOR
GV$GG_APPLY_SERVER V$GG_APPLY_SERVER
V$GG_APPLY_SERVER V$GG_APPLY_SERVER
GV$GG_APPLY_READER V$GG_APPLY_READER
V$GG_APPLY_READER V$GG_APPLY_READER

V$GOLDENGATE_CAPABILITIES Synonym for V_$GOLDENGATE_CAPABILITIES
V$GOLDENGATE_CAPTURE Synonym for V_$GOLDENGATE_CAPTURE
V$GOLDENGATE_MESSAGE_TRACKING Synonym for V_$GOLDENGATE_MESSAGE_TRACKING
V$GOLDENGATE_TABLE_STATS Synonym for V_$GOLDENGATE_TABLE_STATS
V$GOLDENGATE_TRANSACTION Synonym for V_$GOLDENGATE_TRANSACTION
GV$GOLDENGATE_CAPABILITIES Synonym for GV_$GOLDENGATE_CAPABILITIES
GV$GOLDENGATE_CAPTURE Synonym for GV_$GOLDENGATE_CAPTURE
GV$GOLDENGATE_MESSAGE_TRACKING Synonym for GV_$GOLDENGATE_MESSAGE_TRACKING
GV$GOLDENGATE_TABLE_STATS Synonym for GV_$GOLDENGATE_TABLE_STATS
GV$GOLDENGATE_TRANSACTION Synonym for GV_$GOLDENGATE_TRANSACTION

10 rows selected.

TABLE_NAME USED_IN
—————————— ——————————
GV$GOLDENGATE_CAPABILITIES V$GOLDENGATE_CAPABILITIES
V$GOLDENGATE_CAPABILITIES V$GOLDENGATE_CAPABILITIES
GV$GOLDENGATE_TABLE_STATS V$GOLDENGATE_TABLE_STATS
V$GOLDENGATE_TABLE_STATS V$GOLDENGATE_TABLE_STATS
GV$GOLDENGATE_CAPTURE V$GOLDENGATE_CAPTURE
V$GOLDENGATE_CAPTURE V$GOLDENGATE_CAPTURE
GV$GOLDENGATE_TRANSACTION V$GOLDENGATE_TRANSACTION
V$GOLDENGATE_TRANSACTION V$GOLDENGATE_TRANSACTION
GV$GOLDENGATE_MESSAGE_TRACKING V$GOLDENGATE_MESSAGE_TRACKING
V$GOLDENGATE_MESSAGE_TRACKING V$GOLDENGATE_MESSAGE_TRACKING

TABLE_NAME COMMENTS
—————————— ——————————————————————————–
DBA_HIST_APPLY_SUMMARY Streams/Goldengate/XStream Apply Historical Statistics Information
DBA_HIST_REPLICATION_TBL_STATS Replication Table Stats For GoldenGate/XStream Sessions
DBA_HIST_REPLICATION_TXN_STATS Replication Transaction Stats For GoldenGate/XStream Sessions
DBA_HIST_CAPTURE Streams/GoldenGate/XStream Capture Historical Statistics Information
DBA_HIST_SESS_SGA_STATS SGA Usage Stats For High Utilization GoldenGate/XStream Sessions
DBA_HIST_SESS_TIME_STATS CPU And I/O Time For High Utilization Streams/GoldenGate/XStream sessions

Replication System Resource Usage

System resource usage of GoldenGate/XStream processes aggregated by Session Type and Session Module
Data is ordered by CPU Time in descending order, followed by Session Type and Session Module in ascending order

Session Type Session Module First Logon CPU Time(s) User IO Wait Time(s) System IO Wait Time(s)
Capture GoldenGate 25-Oct-13 13:54:43 17.60 2.20 0.00
Apply Receiver GoldenGate 25-Oct-13 11:51:56 4.33 0.60 0.00
Apply Reader GoldenGate 25-Oct-13 11:51:56 3.60 0.00 0.00
Logminer Preparer GoldenGate 25-Oct-13 13:54:44 0.40 0.11 0.00
Apply Server GoldenGate 25-Oct-13 11:51:56 0.34 0.13 0.00
Logminer Builder GoldenGate 25-Oct-13 13:54:44 0.32 0.04 0.00
Logminer Reader GoldenGate 25-Oct-13 13:54:44 0.20 0.00 0.02
Apply Coordinator GoldenGate 25-Oct-13 11:51:56 0.00 0.00 0.00


Replication SGA Usage

SGA usage of Streams Pool for GoldenGate/XStream processes ordered by SGA Used %Diff, SGA Allocated %Diff, Component in descending order
For each Capture (Object Name), reporting memory used by Capture and Logminer processes at the Begin and End snapshots
For each Apply (Object Name), reporting memory used by each Apply process at the Begin and End snapshots
% SGA Util refers to the percentage of allocated memory used at the End snapshot
Memory usage is displayed in Megabytes

Component Session Module Object Name SGA Used Begin Snap SGA Used End Snap SGA Used %Diff SGA Allocated Begin Snap SGA Allocated End Snap SGA Allocated %Diff % SGA Util
Logminer GoldenGate OGG$CAP_EXT1 4.17 4.25 1.92 199.00 199.00 0.00 2.14
Capture GoldenGate OGG$CAP_EXT1 4.18 4.26 1.91 199.00 199.00 0.00 2.14
Apply GoldenGate OGG$RPEE1 3.01 3.00 -0.33 3.02 3.04 0.66 98.


Oracle Dynamic Performance Views Version 12.1.0.2

http://www.morganslibrary.org/reference/dyn_perf_view.html


SQL Tuning Advisor in Oracle SQL Developer 3.0

link original: http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/sqldev/r30/TuningAdvisor/TuningAdvisor.htm



Purpose

This tutorial shows you how to use the SQL Tuning Advisor feature in Oracle SQL Developer 3.0.

Time to Complete

Approximately 20 minutes.

Overview

The SQL Tuning Advisor analyzes high-volume SQL statements and offers tuning recommendations. It takes one or more SQL statements as an input and invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements. It can run against any given SQL statement. The SQL Tuning Advisor provides advice in the form of precise SQL actions for tuning the SQL statements along with their expected performance benefits. The recommendation or advice provided relates to the collection of statistics on objects, creation of new indexes, restructuring of the SQL statement, or creation of a SQL profile. You can choose to accept the recommendation to complete the tuning of the SQL statements.

Oracle Database can automatically tune SQL statements by identifying problematic SQL statements and implementing tuning recommendations using the SQL Tuning Advisor. You can also run the SQL Tuning Advisor selectively on a single or a set of SQL statements that have been identified as problematic.

In this tutorial, you learn how to run and review the recommendations of the SQL Tuning Advisor.

Note: Tuning Advisor is part of the Tuning Pack, one of the Oracle management packs and is available for purchase with Enterprise Edition. For more information see The Oracle Technology Network or the online documentation.

Software and Hardware Requirements

The following is a list of software requirements:

  • Oracle Database 11g Enterprise Edition with access to the Tuning and Diagnostic management packs and with the sample schema installed.
  • Oracle SQL Developer 3.0.

Prerequisites

Before starting this tutorial, you should:

1. Install Oracle SQL Developer 3.0 from OTN. Follow the readme instructions.
2. Install Oracle Database 11g with the Sample schema.

Creating a Database Connection

The first step to managing database objects using Oracle SQL Developer 3.0 is to create a database connection.

Perform the following steps to create a database connection:

Note: If you already have database connections for SCOTT and SYSTEM, you do not need to perform the following steps. You can move to Providing Privileges to the Scott User topic.

1. If you have installed the SQL Developer icon on your desktop, click the icon to start your SQL Developer and move to Step 4. If you do not have the icon located on your desktop, perform the following steps to create a shortcut to launch SQL Developer 3.0 from your desktop.

Open the directory where the SQL Developer 3.0 is located, right-click sqldeveloper.exe (on Windows) or sqldeveloper.sh (on Linux) and select Send to > Desktop (create shortcut).


2. On the desktop, you will find an icon named Shortcut to sqldeveloper.exe. Double-click the icon to open SQL Developer 3.0.

Note: To rename it, select the icon and then press F2 and enter a new name.


3. Your Oracle SQL Developer opens up.


4. In the Connections navigator, right-click Connections and select New Connection.


5. The New / Select Database Connection dialog opens. Enter the connection details as follows and click Test.

Connection Name: system
Username: system
Password: <your_password> (Select Save Password)
Hostname: localhost
SID: <your_own_SID>


6. Check for the status of the connection on the left-bottom side (above the Help button). It should read Success. Click Save and then click Connect.


7. In the Connections navigator, to create a new connection to the scott schema, right-click Connections and select New Connection.


8. The New / Select Database Connection dialog opens. Enter the connection details as follows and click Test.

Connection Name: scott
Username: scott
Password: <your_password> (Select Save Password)
Hostname: localhost
SID: <your_own_SID>


9. Check for the status of the connection on the left-bottom side (above the Help button). It should read Success. Click Save and then click Connect.


10. The connection is saved and you can view the two newly created connections in the Connections list.


Providing Privileges and Removing the existing Statistics on the Scott User

A user requires certain privileges to run the SQL Tuning Advisor. Also, in order to collect and manage statistics on the SCOTT schema, the existing statistics have to be cleared. Below are the steps to grant SQL Tuning Advisor privileges and remove the existing statistics on the scott user.

1. Click SQL Worksheet and select system user.


2. To grant the scott user the privileges needed to run the SQL Tuning Advisor, enter the following statements and click Run Statement.

grant advisor to scott;

grant administer sql tuning set to scott;


3. The output for the statements is displayed.


4. The Oracle database allows you to collect statistics of many different kinds in order to improve performance. To illustrate some of the features the SQL Tuning Advisor offers, clear the existing statistics from the SCOTT schema.

To delete the schema statistics, enter the following line of code.

exec DBMS_STATS.DELETE_SCHEMA_STATS ('scott');

Select the statement and click Run Statement.

With the DBMS_STATS package you can view and modify optimizer statistics gathered for database objects. The DELETE_SCHEMA_STATS procedure deletes statistics for an entire schema.

 

5. The output for the statement appears.


Running the SQL Tuning Advisor on a SQL statement

In this topic, you run the SQL Tuning Advisor on a SQL statement. Four types of analysis are performed by the SQL Tuning Advisor on the SQL statement.

All the recommendations are displayed in the Overview. You can also view each recommendation individually.

1. Open the SQL Worksheet for the scott user by clicking SQL Worksheet.


2. Enter the following SQL statement in the worksheet.

select sum(e.sal), avg(e.sal), count(1), e.deptno from dept d, emp e group by e.deptno order by e.deptno;


Note that the above SQL statement has an unused reference to the "dept" table.

3. Select the SQL statement and click SQL Tuning Advisor.


4. The SQL Tuning Advisor output appears.


5. In the left navigator, click Statistics. In this analysis, objects with stale or missing statistics are identified and appropriate recommendations are made to remedy the problem.


6. In the left navigator, click SQL Profile. Here, the SQL Tuning Advisor recommends improving the execution plan by generating a SQL Profile.


7. Click the Detail tabbed page to view the SQL Profile Finding.


8. In the left navigator, click Indexes. This analysis determines whether the SQL statement might benefit from an index. If necessary, new indexes that can significantly enhance query performance are identified and recommended.


9. Click the Overview tabbed page. In this case, there are no index recommendations.


10. In the left navigator, click Restructure SQL. In this analysis, relevant suggestions are made to restructure selected SQL statements for improved performance.


Implementing SQL Tuning Advisor recommendations

You can implement the recommendations made by the SQL Tuning Advisor. This will enable you to update the statistics in the scott schema. Perform the following steps to implement the SQL Tuning Advisor recommendations:

1. In the Connections navigator, right-click scott and select Gather Schema Statistics….


2. In Gather Schema Statistics, select Estimate Percent as 100 from the drop-down list so that all rows in each table are read. This ensures that the statistics are as accurate as possible.
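
Equivalently, a minimal sketch of gathering the same statistics from the SQL Worksheet, assuming the SCOTT schema:

exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', estimate_percent => 100);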


3. Click Apply.


4. A confirmation message appears. Click OK.


5. To run the SQL Tuning Advisor on the SQL statement again, select the SQL statement and click SQL Tuning Advisor.


6. The SQL Tuning Advisor output appears. Because statistics have now been gathered, the Statistics and SQL Profile advice has been removed.


7. In the left navigator, click each of the SQL Tuning Advisor implement types to check whether all the recommendations have been implemented.


Note the issues reported to you in the output.

Note that the Restructure SQL recommendation to remove an unused table remains.

 

8. Remove the "dept" table from the SQL statement and click SQL Tuning Advisor again.


9. The output appears. All of the advice recommendations have been removed.



Oracle GoldenGate for Big Data 12.2.0.1 is Generally Available Now!

By Thomas Vengal-Oracle on Dec 22, 2015

https://blogs.oracle.com/dataintegration/entry/oracle_goldengate_for_big_data

The much-awaited Oracle GoldenGate for Big Data 12.2 was released today and is available for download at OTN.

Let me give you a quick recap of Oracle GoldenGate for Big Data. Oracle GoldenGate for Big Data streams transactional data into big data systems in real time, raising the quality and timeliness of business insights. It also provides a flexible and extensible solution to support all major big data systems.

Oracle GoldenGate for Big Data

  • Same trusted Oracle GoldenGate architecture used by thousands of customers
  • Data delivery to Big Data targets including NoSQL databases
  • Support for Polyglot, Lambda and Kappa architectures for streaming data

Key Benefits

  • Less invasive on source databases when compared to batch processing such as Sqoop or ETL processes
  • Simple ingestion for 1:1 data architecture for populating “raw data” zones
  • Real-time data delivery for streaming analytics/apps
  • Reliable, proven at scale with high performance

Architecture – GoldenGate for Big Data 12.2 versus 12.1

New Features in 12.2.0.1:

New Java-based Replicat Process

The advantages of using the Java-based Replicat process are the following:

  1. Improved performance with Java based adapters
  2. Declarative design and configurable mapping
  3. Transaction grouping based on Operation count & Message size
  4. Improved check pointing functionality
    E.g.: CHECKPOINTSECS 1 (default 10 seconds)

Dynamic Data Handling

You no longer need to define SOURCEDEFS. DDL changes are automatically replicated to the target. For example, if a new column named "mycolumn" is added on the source database, it will be automatically replicated to the target without stopping and reconfiguring Oracle GoldenGate.

Pluggable Formatters

Oracle GoldenGate for Big Data can write into any Big Data targets in various data formats such as delimited text or XML or JSON or Avro or custom format. This can save users cost and time for staging data in ETL operations.

Example: gg.handler.name.format=<value>
Supported values are "delimitedtext", "xml", "json", "avro" ("avro_row" or "avro_op"), or a custom formatter class such as com.yourcompany.YourFormatter; for a custom formatter, the extended classpath needs to be included in the configuration file.
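
As a minimal sketch, the formatter is selected per handler in the properties file; the handler name gghdfs below is an assumption, mirroring the Kerberos example that follows:

gg.handlerlist=gghdfs
gg.handler.gghdfs.type=hdfs
gg.handler.gghdfs.format=json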

Security Enhancement

Native Kerberos support is available in the 12.2.0.1 binaries.

Example of configuration:
gg.handler.gghdfs.authType=Kerberos
gg.handler.gghdfs.kerberosKeytabFile=/keytab/file/path
gg.handler.gghdfs.kerberosPrincipal=user/FQDN@MY.REALM

Declarative Design

Oracle GoldenGate for Big Data provides mapping functionality between source and target tables and between source and target fields for HDFS/Hive, HBase, Flume and Kafka. The metadata is also validated against Hive or an Avro schema to ensure data correctness.

Example:
MAP GG.TCUSTOMER, TARGET GG.TCUSTMER2, COLMAP (USEDEFAULTS, "cust_code2"=cust_code, "city2"=city);

Kafka as target

Oracle GoldenGate for Big Data can write logical change record data to a Kafka topic. Operations such as insert, update, delete and primary key update can be handled. It can also handle native Kafka compression such as GZIP and Snappy.

Example of defining Kafka Handler Properties:
gg.handlerlist=ggkafka
gg.handler.ggkafka.type=kafka
gg.handler.ggkafka.topicName=gg_kafka
gg.handler.ggkafka.mode=tx

Other Enhancements

  • Partition data by Hive table and/or column, partitioning into a new file based on designated column values
    Example:

    • gg.handler.{name}.partitionByTable =true | false
    • gg.handler.{name}.partitioner.{fully qualified table name}={colname}
    • gg.handler.{name}.partitioner.{fully qualified table name}={colname1},{colname2}
    • gg.handler.<yourhandlername>.partitioner.dbo.TCUSTORD=region, rating
  • Configurable File Rolling Property for HDFS (file size, duration, inactivity timer, metadata change)
  • Configurable file output encoding into HDFS
  • Automatically create HBase table if it does not exist
  • Ability to treat primary key updates as a delete and then an insert in HBase
  • HBase row key generation
  • Treat Primary Key updates as delete and insert in Flume and HBase
  • New timestamping functionality with microsecond precision in ISO-8601 format
  • Availability on additional OS platforms: Windows and Solaris
  • Certification for newer versions: Apache HDFS 2.7.x, Cloudera 5.4.x, Hortonworks 2.3, Kafka 0.8.2.0 and 0.8.2.1

For more details about new product features, you may refer to Oracle GoldenGate for Big Data 12.2.0.1 Release Notes and User Documentation.

For more information, see the Oracle GoldenGate for Big Data product page.

Feel free to reach out to me for your queries by posting in this blog or tweeting @thomasvengal

 


GoldenGate 12c reading directly from Oracle Active Data Guard ("ADG")

Hello everyone.

The new GoldenGate 12c release can read directly from Active Data Guard. This is very good news and will help many companies.

Below is how to set up this new configuration:

Configuring Classic Capture in Oracle Active Data Guard Only Mode
You can configure classic Extract to access both redo data and metadata in real time to successfully replicate source database activities using Oracle Active Data Guard. This is known as Active Data Guard (ADG) mode. ADG mode enables Extract to use the production logs that are shipped to a standby database as the data source; the online logs are not used at all. Oracle GoldenGate connects to the standby database to get metadata and other required data as needed.
This mode is useful in load-sensitive environments where ADG is already in place or can be implemented. It can also be used as a cost-effective method to implement high availability, using the ADG Broker planned (switchover) and unplanned (failover) role changes. In an ADG configuration, switchover and failover are considered roles; when either operation occurs, it is considered a role change. For more information, see Oracle Data Guard Concepts and Administration and Oracle Data Guard Broker.

Limitations and Requirements for Using ADG Mode
Observe the following limitations and requirements when using Extract in ADG mode.

  • Extract in ADG mode will only apply redo data that has been applied to the standby database by the apply process. If Extract runs ahead of the standby database, it will wait for the standby database to catch up.
  • You must explicitly specify ADG mode in your classic Extract parameter file to run extract on the standby database.
  • You must specify the database user and password to connect to the ADG system because fetch and other metadata resolution occurs in the database.
  • The number of redo threads in the standby logs in the standby database must match the number of nodes from the primary database.
  • No new RAC instance can be added to the primary database after classic Extract has been created on the standby database. If you do add new instances, the redo data from the new thread will not be captured by classic Extract.
  • Archived logs and standby redo logs accessed from the standby database will be an exact duplicate of the primary database. The size and the contents will match, including redo data, transactional data, and supplemental data. This is guaranteed by a properly configured ADG deployment.
  • ADG role changes are infrequent and require user intervention in both cases.
  • With a switchover, there will be an indicator in the redo log file header (end of the redo log or EOR marker) to indicate end of log stream so that classic Extract on the standby can complete the RAC coordination successfully and ship all of the committed transactions to the trail file.
  • With a failover, a new incarnation is created on both the primary and the standby databases with a new incarnation ID, RESETLOG sequence number, and SCN value.
  • You must connect to the primary database from GGSCI to add TRANDATA or SCHEMATRANDATA because this is done on the primary database.
  • DDL triggers cannot be used on the standby database, in order to support DDL replication (except ADDTRANDATA). You must install the Oracle GoldenGate DDL package on the primary database.
  • DDL ADDTRANDATA is not supported in ADG mode; you must use ADDSCHEMATRANDATA for DDL replication.
  • When adding extract on the standby database, you must specify the starting position using a specific SCN value, timestamp, or log position. Relative timestamp values, such as NOW, become ambiguous and may lead to data inconsistency.
  • When adding extract on the standby database, you must specify the number of threads that will include all of the relevant threads from the primary database.
  • During or after failover or switchover, no thread can be added or dropped from either primary or standby databases.
  • Classic Extract will only use one intervening RESETLOG operation.
  • If you do not want to relocate your Oracle GoldenGate installation, then you must position it in a shared space where the Oracle GoldenGate installation directory can be accessed from both the primary and standby databases.
  • If you are moving capture off of an ADG standby database to a primary database, then you must point your net alias to the primary database and you must remove the TRANLOG options.
  • Only Oracle Database releases that are running with compatibility setting of 10.2 or higher (10g Release 2) are supported.

Configuring Extract for ADG Mode
To configure Extract for ADG mode, follow these steps as part of the overall process for configuring Oracle GoldenGate, as documented in Chapter 8, “Configuring Capture in Classic Mode.”

  1. Enable supplemental logging at the table level and the database level for the tables in the primary database using the ADD SCHEMATRANDATA parameter. If necessary, create a DDL capture. (See Section 3.2, “Configuring Logging Properties”.)
  2. When Oracle GoldenGate is running on a different server from the source database, make certain that SQL*Net is configured properly to connect to a remote server, such as providing the correct entries in a TNSNAMES file. Extract must have permission to maintain a SQL*Net connection to the source database.
  3. On the standby database, use the Extract parameter TRANLOGOPTIONS with the MINEFROMACTIVEDG option. This option forces Extract to operate in ADG mode against a standby database, as determined by a value of PRIMARY or LOGICAL STANDBY in the db_role column of the v$database view.
    Other TRANLOGOPTIONS options might be required for your environment. For example, depending on the copy program that you use, you might need to use the COMPLETEARCHIVEDLOGONLY option to prevent Extract errors.
  4. On the standby database, add the Extract group by issuing the ADD EXTRACT command specifying the number of threads active on the primary database at the given SCN. The timing of the switch depends on the size of the redo logs and the volume of database activity, so there might be a limited lag between when you start Extract and when data starts being captured. This can happen in both regular and RAC database configurations.
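
Putting steps 3 and 4 together, a minimal sketch (the user, group name, thread count and SCN are hypothetical; the ADD EXTRACT form mirrors the one used later in this post):

In ext1.prm, on the standby:
TRANLOGOPTIONS MINEFROMACTIVEDG

In GGSCI:
GGSCI> DBLOGIN USERID gguser@stansys, PASSWORD password
GGSCI> ADD EXTRACT ext1 THREADS 2 BEGIN SCN 725473
GGSCI> START EXTRACT ext1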

Migrating Classic Extract To and From an ADG Database
Your parameter files, checkpoint files, bounded recovery files, and trail files must be stored in shared storage or copied to the ADG database before attempting to migrate a classic Extract to or from an ADG database. Additionally, you must ensure that there has not been any intervening role change, or Extract will be mining the same branch of redo.

Use the following steps to move to an ADG database:

  1. Edit the parameter file ext1.prm to add the following parameters:
    DBLOGIN USERID userid@ADG PASSWORD password
    TRANLOGOPTIONS MINEFROMACTIVEDG
  2. Start Extract by issuing the START EXTRACT ext1 command.

Use the following steps to move from an ADG database:

  1. Edit the parameter file ext1.prm to remove the following parameters:
    DBLOGIN USERID userid@ADG PASSWORD password
    TRANLOGOPTIONS MINEFROMACTIVEDG
  2. Start Extract by issuing the START EXTRACT ext1 command.

Handling Role Changes In an ADG Configuration
In a role change involving a standby database, all sessions in the primary and the standby database are first disconnected including the connections used by Extract. Then both databases are shut down, then the original primary is mounted as a standby database, and the original standby is opened as the primary database.
The procedure for a role change is determined by the initial deployment of classic Extract and the deployment relation that you want, database or role. The following outlines the four possible role changes and is predicated on an ADG configuration comprised of two databases, prisys and stansys. The prisys system contains the primary database and the stansys system contains the standby database; prisys has two redo threads active, whereas stansys has four redo threads active.

Original deployment, Extract on the primary (prisys), ext1.prm contains:
DBLOGIN USERID userid@prisys, PASSWORD password

Original deployment, Extract on the ADG standby (stansys), ext1.prm contains:
DBLOGIN USERID userid@stansys, PASSWORD password
TRANLOGOPTIONS MINEFROMACTIVEDG

Role related: classic Extract stays classic Extract after the role transition (initial deployment on prisys):
1. Edit ext1.prm to change the database system to the standby system:
DBLOGIN USERID userid@stansys, PASSWORD password
2. If a failover, add TRANLOGOPTIONS USEPREVRESETLOGSID.
3. Start Extract: START EXTRACT ext1
Extract will abend once it reaches the role transition point; it then does an internal BR_RESET and moves both the I/O checkpoint and the current checkpoint to SCN s.
4. If a failover, edit the parameter file again and remove TRANLOGOPTIONS USEPREVRESETLOGSID.
5. Execute ALTER EXTRACT ext1 SCN #, where # is the SCN value from the role switch message.
6. Based on the thread counts, do one of the following:
- If the thread counts are the same between the databases, execute the START EXTRACT ext1 command.
- If the thread counts are different between the databases, execute:
DROP EXTRACT ext1
ADD EXTRACT ext1 THREADS t BEGIN SCN s
START EXTRACT ext1

Role related: ADG stays ADG after the role transition (initial deployment on stansys):
Follow the same steps, except that in step 1 you edit ext1.prm to point at the primary system instead:
DBLOGIN USERID userid@prisys, PASSWORD password

Database related: classic Extract becomes ADG after the role transition (initial deployment on prisys):
1. Edit the ext1.prm file to add:
TRANLOGOPTIONS MINEFROMACTIVEDG
Then follow steps 2 through 6 of the role-related procedure above.

Database related: ADG becomes classic Extract after the role transition (initial deployment on stansys):
1. Edit ext1.prm and remove:
TRANLOGOPTIONS MINEFROMACTIVEDG
Then follow steps 2 through 6 of the role-related procedure above.

GoldenGate 12c and the new tables created in the Oracle database

Hello everyone.

Did you know that GoldenGate 12c creates some tables inside the Oracle database?

Below are the new tables and views created by GoldenGate 12c for integrated Replicat.

GoldenGate Database Views
Integrated Delivery

Configuration
DBA_GOLDENGATE_PRIVILEGES
DBA_GOLDENGATE_INBOUND
DBA_GG_INBOUND_PROGRESS
DBA_APPLY, DBA_APPLY_PARAMETERS
Parameters are configured by Replicat at run-time
Runtime
V$GG_APPLY_RECEIVER
V$GG_APPLY_READER
V$GG_APPLY_COORDINATOR
V$GG_APPLY_SERVER
V$GOLDENGATE_TABLE_STATS
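
For example, a quick sketch of checking an integrated Replicat from SQL*Plus, using two of the views listed above:

select * from dba_goldengate_inbound;
select * from dba_gg_inbound_progress;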

That's it!


Migrating from GoldenGate 11 to GoldenGate 12 step by step

Hello friends.

Recently I had to do some GoldenGate migrations and write detailed documentation of the procedure, so I took the opportunity to share it with you. Have fun:

1. Detailed procedure for upgrading Oracle GoldenGate 11g to Oracle GoldenGate 12c.

  1. Download the 12c binaries from the eDelivery site and copy them to the machine where the upgrade will be performed.
  2. Define a standard installation home for GoldenGate 12c. Suggestion: /u01/app/oracle/product/goldengate12c
  3. Define a maintenance window of 1 to 2 hours with a full stop of production transactions to run the GoldenGate binary upgrade procedure.
  4. Capture the database SCN just before stopping the Extract:
    sqlplus / as sysdba
    select current_scn from v$database;
  5. Connect to GoldenGate 11:
    cd $GG_HOME
    ./ggsci
  6. Make sure there are no transactions left on the source database and that all of the last transactions have been replicated to the target before stopping the Extracts and Replicats:
    • Stop all applications that point to this database
    • Stop the listener
    • Stop the internal jobs
    • Stop crontab executions (if any)
    • Stop external jobs run by tools such as Control-M, Patrol, NetBackup, TSM, etc.
    • Confirm via SQL*Plus that there are no external users or active transactions left.
  7. Stop the Extract, edit its parameter file and remove the "FORMAT RELEASE 11.2" clause:
    GGSCI> INFO EXTRACT group, SHOWCH
    GGSCI> SEND EXTRACT group, SHOWTRANS
    GGSCI> SEND EXTRACT group, {SKIPTRANS | FORCETRANS} transaction_ID [THREAD n] [FORCE]
    GGSCI> SEND EXTRACT group, BR BRCHECKPOINT IMMEDIATE
    -- stop DML and DDL on the database; this is best done inside the maintenance window
    GGSCI> SEND EXTRACT group LOGEND
    GGSCI> STOP EXTRACT group
  8. Stop the pump, edit its parameter file and remove the "FORMAT RELEASE 11.2" clause:
    GGSCI> SEND EXTRACT group LOGEND
    GGSCI> STOP EXTRACT group
  9. Perform an ETROLLOVER on the pump to close the current trail file:
    GGSCI> ALTER EXTRACT group ETROLLOVER
    -- The command should return "Rollover performed."
    GGSCI> INFO EXTRACT group, DETAIL
    GGSCI> ALTER EXTRACT pump, EXTSEQNO seqno, EXTRBA RBA
  10. Stop the Replicat and, if necessary, reposition it on the new version 12 trail file written by the pump:
    GGSCI> INFO REPLICAT group
    GGSCI> STOP REPLICAT group
    GGSCI> ALTER REPLICAT group, EXTSEQNO seqno, EXTRBA RBA
  11. Check at the OS level whether any GoldenGate process is hung; if one exists, kill it:
    ps -ef | grep ggate
    kill -9
  12. Run in the database, as sysdba, the scripts that uninstall GoldenGate 11g:
    sqlplus / as sysdba
    @ddl_disable
    @ddl_remove (log >> ddl_remove_spool.txt)
    @marker_remove (log >> marker_remove_set.txt)
  13. Make sure no DDL or DML is running in the database.
    Script to check active transactions:
    select t.start_time, s.sid, s.serial#, s.username, s.status, s.schemaname, s.osuser, s.process, s.machine, s.terminal, s.program, s.module, s.type, to_char(s.logon_time,'DD/MON/YY HH24:MI:SS') logon_time from v$transaction t, v$session s where s.saddr = t.ses_addr and s.status = 'ACTIVE' order by start_time;
    Script to generate kill commands for the active sessions, if necessary:
    select 'alter system kill session '''||s.sid||','||s.serial#||''' immediate;', s.osuser, s.machine, s.module, CEIL(24*(SYSDATE-t.START_DATE)) from v$session s, v$transaction t WHERE S.SADDR=T.SES_ADDR and CEIL(24*(SYSDATE-t.START_DATE))>12;
  14. Back up the GoldenGate 11 binaries:
    tar -cvf backupgg11.tar
  15. Install the GoldenGate 12c binaries via the Universal Installer:

https://docs.oracle.com/goldengate/1212/gg-winux/GIORA/install.htm#GIORA162

(Screenshot 1: Oracle GoldenGate 12c using Universal Installer)
Enter the appropriate GoldenGate directory where you want to install the binaries. Provide the Oracle home location so that GoldenGate can use the shared library libnnz12.so. Modify the port if a different one is required. The Manager process can also be selected to start up after the installation is completed.

(Screenshot 2: Oracle GoldenGate 12c using Universal Installer)
Review the information provided before starting the install process.

(Screenshot 3: Oracle GoldenGate 12c using Universal Installer)
Installation in progress.

(Screenshot 4: Oracle GoldenGate 12c using Universal Installer)
Installation completed.

(Screenshot 5: Oracle GoldenGate 12c using Universal Installer)
Once the installation is complete, you can confirm by checking the contents of the inventory.

  16. Copy the GLOBALS file from the old home to the new GoldenGate 12c home.
  17. Copy the .prm parameter files from the old home to the new GoldenGate home.
  18. Update the environment variables to reflect the new 12c home ("GG_HOME").
  19. Check the current version and patches of the environment via "opatch lsinventory".
  20. Upgrade the checkpoint table to the GoldenGate 12c version:
    GGSCI> DBLOGIN [{SOURCEDB} data_source]|[, database@host:port] |{USERID {/| user id}[, PASSWORD password] [algorithm ENCRYPTKEY {keyname |DEFAULT}] |USERIDALIAS alias [DOMAIN domain]}
    GGSCI> UPGRADE CHECKPOINTTABLE [owner.table]
  21. Connect to the database as sysdba and run ulg.sql:
    sqlplus / as sysdba
    @ulg.sql
    Note: ulg.sql drops and recreates the trandata of the tables, taking an exclusive lock on them to do so. Run this script only when you are certain that no transaction or change will touch the tables involved in the replication during its execution. If ulg.sql completes successfully, move on; otherwise open a service request with My Oracle Support to resolve the problem.
  22. Create the DDL replication objects and GoldenGate control tables:
    @marker_setup
    -- to enable DDL replication there must be a tablespace dedicated to GoldenGate
    @ddl_setup
    @role_setup
    -- grant the generated role to the GoldenGate user
    @ddl_pin gguser
    @ddl_enable.sql
  23. Set the new GoldenGate-related database parameter (if on 11.2.0.4):
    alter system set ENABLE_GOLDENGATE_REPLICATION=TRUE
  24. Grant the privileges needed to use integrated Extract (prepare this in advance):
    exec dbms_goldengate_auth.grant_admin_privilege('GG_ADMIN')
  25. Adjust the Streams pool size parameter if you will use integrated Extract (1 GB to 5 GB):
    alter system set streams_pool_size=5G
  26. Recreate the Extracts, data pumps and Replicats. To make creating all the processes easier, an obey file can be used, for example gg11_OBEY_add.oby; see the sketch below. Put the NOUSEANSISQLQUOTES parameter in GLOBALS so that GoldenGate understands double quotes; if the parameter has no effect, the convprm utility must be used to perform the conversions between double and single quotes.
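
A minimal sketch of such an obey file (group, trail and checkpoint-table names are hypothetical):

-- gg11_OBEY_add.oby
ADD EXTRACT ext1, TRANLOG, BEGIN NOW
ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
ADD EXTRACT pump1, EXTTRAILSOURCE ./dirdat/aa
ADD RMTTRAIL ./dirdat/bb, EXTRACT pump1
ADD REPLICAT rep1, EXTTRAIL ./dirdat/bb, CHECKPOINTTABLE gguser.chkpt

It can then be run in one shot from GGSCI with: OBEY ./gg11_OBEY_add.oby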

Oracle Goldengate 12c New features

With the introduction of the new features in Oracle GoldenGate 12c, the configuration and management of replication has become much easier. It provides support for the Oracle 12c multitenant database, more secure authentication, and much better performance management options. Most of these features, however, require the database to be at version 11.2.0.4 or above.
Related Links
Oracle GoldenGate 12c: Silent Install
Oracle GoldenGate 12c: Installation Using Universal Installer
Oracle GoldenGate 12c: Integrated Capture Overview
Oracle GoldenGate 12c: Upgrade Classic Extract to Integrated Capture

New Features

Here are a few highlights of the new features available with GoldenGate 12c Software.
1. Replication: Coordinated Mode – For environments where there is a large number of transactions against a table or a group of tables, the RANGE function has traditionally been used to improve the performance of the Replicats. Managing this requires quite a bit of effort, especially when it is done on a group of tables that have relationships with each other. The need for this in most situations is eliminated by the introduction of the coordinated Replicat in GoldenGate 12c. It allows the Replicat to process transactions in parallel, similar to a multithreaded process, and it is able to handle referential integrity, applying the records to the tables in the correct order.
To enable coordinated-mode replication, the COORDINATED parameter needs to be specified. The THREADRANGE option is used in conjunction with COORDINATED to specify the thread numbers assigned to the process, and the SYNCHRONIZE REPLICAT command can be used to synchronize all these threads to the same position in the trail file.
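
As a sketch, a coordinated Replicat could be added and configured like this (group, alias, trail and key column names are hypothetical):

GGSCI> ADD REPLICAT rcoord, COORDINATED MAXTHREADS 5, EXTTRAIL ./dirdat/aa

-- rcoord.prm
REPLICAT rcoord
USERIDALIAS gguser
MAP ggtest.*, TARGET bdtest.*, THREADRANGE(1-3, PK_ID);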
2. Replicat: Integrated Mode – With Oracle GoldenGate 11g, the integrated Extract mode was made available on the source database. In GoldenGate 12c, an integrated Replicat mode has also been introduced for use on the target database. With integrated Replicat, logical change records (LCRs) are created and transferred to the inbound server, which applies them to the database.
The DBOPTIONS parameter with the INTEGRATEDPARAMS(parallelism n) option needs to be used to run the Replicat in integrated mode, where 'n' is the number of parallel processes to be used.
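
A corresponding sketch for an integrated Replicat (again with hypothetical names):

GGSCI> ADD REPLICAT rint, INTEGRATED, EXTTRAIL ./dirdat/aa

-- rint.prm
REPLICAT rint
USERIDALIAS gguser
DBOPTIONS INTEGRATEDPARAMS(parallelism 4)
MAP ggtest.*, TARGET bdtest.*;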
3. Multitenant Support: 12c Database – To support the specification of objects within a given PDB, Oracle GoldenGate now supports three-part object name specifications. Extraction and Replication is supported using the TABLE and MAP statements with other parameters and commands which accept the object names as input. The format for these parameters is container.schema.object.
Three-part names are required to capture from a source Oracle container database because one Extract group can capture from more than one container. Thus, the name of the container, as well as the schema, must be specified for each object or objects in an Extract TABLE statement.
Specify a three-part Oracle CDB name as follows:
Syntax: container.schema.object
Example: PDB1.HR.EMP
Oracle GoldenGate supports three-part names for the following databases:
• Oracle container databases (CDB)
• Informix Dynamic Server
• NonStop SQL/MX
Alternatively, for backward compatibility, the two-part naming scheme SCHEMA.OBJECT is still supported with the use of the SOURCECATALOG parameter to name the source container. Below is a sample of the entry required in the parameter file.
SOURCECATALOG plugdb1
MAP schema*.tab*, TARGET *1.*;
4. Security: Credential store – The username and password login credentials no longer need to be stored, encrypted, in parameter files; they can be stored securely in a database wallet. Reference to this login information is made via an alias.
5. Security: Wallet and Master Key – Data in the trail files and across the network is encrypted using the master-key wallet mechanism. With the creation of each trail file, a new encryption key is generated; it is used to encrypt the data, while the encryption key itself is encrypted by the master key. Secure network transfer is done by creating a session key using the master key and a cryptographic function.
6. DDL Replication: Native Capture – For capturing DDL operations, the DDL trigger mechanism has been replaced by a new triggerless capture method. This allows support of additional DDL statements which was not previously possible.
7. Installation: Using Oracle Universal Installer (OUI). The installation mechanism no longer uses the untarring of the binaries, rather it uses the OUI, much like most Oracle products.

8. Enhanced character set conversion: The conversion of the source character set to an Oracle target character set is now performed by the Replicat instead of the OCI. The name of the source character set is included in the trail and Replicat uses that character set for its session. This enhancement eliminates the requirement to set NLS_LANG on the target to support conversion. See the list of supported Oracle character sets in the Oracle GoldenGate For Windows and UNIX Administrator’s Guide. See SOURCECHARSET in Parameter Changes and Additions for additional information.
9. Remote task data type support: Remote task now supports all Oracle data types, including BLOB, CLOB, NCLOB, LONG, UDT, and XML.
A remote task is a special type of initial-load process in which Extract communicates directly with Replicat over TCP/IP. Neither a Collector process nor temporary disk storage in a trail or file is used. The task is defined in the Extract parameter file with the RMTTASK parameter.
This method supports standard character, numeric, and datetime data types, as well as CLOB, NCLOB, BLOB, LONG, XML, and user-defined datatypes (UDT) embedded with the following attributes: CHAR, NCHAR, VARCHAR, NVARCHAR, RAW, NUMBER, DATE, FLOAT, TIMESTAMP, CLOB, BLOB, XML, and UDT. Character sets are converted between source and target where applicable.
10. Enhanced timezone conversion: Extract now writes the source time zone to the trail. Replicat sets its session to this time zone. This eliminates the need to use SQLEXEC to alter the Replicat session time zone to apply the source database time zone to the target. See Parameter Changes and Additions for parameter changes related to this enhancement.
11. CSN-based transaction filtering: You can now start Extract at a specific CSN in the transaction log or trail.
Syntax: START EXTRACT group_name [ATCSN csn | AFTERCSN csn]
Example: START EXTRACT extfin ATCSN 725473
Example: START EXTRACT extfin AFTERCSN 725473
12. Automatic discard file creation: By default, Oracle GoldenGate processes now generate a discard file with default values whenever a process is started with the START command through GGSCI. However, if you currently specify the DISCARDFILE parameter in your parameter files, those specifications remain valid. If you did not specify DISCARDROLLOVER along with DISCARDFILE, however, your discard file will roll over automatically every time the process starts. This automatic rollover behavior contradicts the DISCARDFILE [APPEND/PURGE] option because the new default is to rollover.
The default discard file has the following properties:
• The file is named after the process that creates it, with a default extension of .dsc. Example: extfin.dsc.
• The file is created in the dirrpt sub-directory in the GoldenGate home.
• The maximum file size is 50 megabytes.
• At startup, if a discard file exists, it is purged before new data is written.
You can change these properties by using the DISCARDFILE parameter. You can disable the use of a discard file by using the NODISCARDFILE parameter.
If a process is started from the command line of the operating system, it does not generate a discard file by default. You can use the DISCARDFILE parameter to specify the use of a discard file and its properties.
External Links
Oracle GoldenGate 12c Documentation.


GoldenGate in high availability using RAC with ACFS and the XAG agents

This post is not mine; all credit for it goes to: https://mdinh.wordpress.com/2014/12/14/goldendate-ha-maa-rac-acfs-xag/

GOLDENDATE HA MAA RAC ACFS XAG

Filed under: 11g,GoldenGate,RAC — mdinh @ 11:39 pm

Purpose is to demonstrate how to create HA for Bi-Directional Replication Goldengate installed on ACFS with RAC cluster using XAG.

XAG simplifies the process since there are no requirements to create action scripts.

Please review REFERENCE section for versions used in test case and versions requirements.

Goldengate is installed on ACFS for simplicity; otherwise, at a minimum the following directories br, dirchk, dirdat, dirtmp will need to be on shared storage with symbolic links if installed on local storage. Keyword is minimum until you find out more directories are required.

Role separation was a huge pain; do not attempt to perform chmod -R 775 /u01, as it will break things since the setuid bit gets unset.
Even chmod 6751 oracle may prove to be ineffective, and in this case a relink was done.

# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),493(vboxsf),54323(asmadmin),54324(asmdba)

# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),493(vboxsf),54324(asmdba)

# id gguser
uid=54323(gguser) gid=54321(oinstall) groups=54321(oinstall),54322(dba),493(vboxsf)

$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 grid oinstall 209914519 Dec  7 20:57 /u01/app/11.2.0.4/grid/bin/oracle

Last, DO NOT stop Goldengate using ggsci. For cluster-aware Goldengate, use $XAG_HOME/bin/agctl.
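
For example, using the instance name configured later in this post:

$XAG_HOME/bin/agctl stop goldengate lax_ggate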

The configuration shown is for one cluster only and the same would have to be performed on the other clusters.

AS ROOT, CONFIGURE GOLDENGATE VIP (lax-ggate1-vip)
1. Determine the network number

# crsctl stat res -p|grep -ie .network -ie subnet|grep -ie name -ie ora_subnet
NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

2. Create GoldenGate VIP with naming Convention: LAX (closest airport code to data center), ggate1 (for Goldengate on network number 1), vip
Note IP address provided is 192.168.56.41

# appvipcfg create -network=1 -ip=192.168.56.41 -vipname=lax-ggate1-vip -user=root -group=oinstall
Production Copyright 2007, 2008, Oracle.All rights reserved
2014-12-14 10:58:41: Creating Resource Type
2014-12-14 10:58:41: Executing /u01/app/11.2.0.4/grid/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/11.2.0.4/grid/crs/template/appvip.type
2014-12-14 10:58:41: Executing cmd: /u01/app/11.2.0.4/grid/bin/crsctl add type app.appvip_net1.type -basetype ora.cluster_vip_net1.type -file /u01/app/11.2.0.4/grid/crs/template/appvip.type
2014-12-14 10:58:42: Create the Resource
2014-12-14 10:58:42: Executing /u01/app/11.2.0.4/grid/bin/crsctl add resource lax-ggate1-vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.41,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:root:r-x',HOSTING_MEMBERS=rac01.localdomain,APPSVIP_FAILBACK="
2014-12-14 10:58:42: Executing cmd: /u01/app/11.2.0.4/grid/bin/crsctl add resource lax-ggate1-vip -type app.appvip_net1.type -attr "USR_ORA_VIP=192.168.56.41,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,group:oinstall:r-x,user:root:r-x',HOSTING_MEMBERS=rac01.localdomain,APPSVIP_FAILBACK="

3. Set permission for users to start/stop VIP

# crsctl setperm resource lax-ggate1-vip -u user:oracle:r-x
# crsctl setperm resource lax-ggate1-vip -u user:grid:r-x
# crsctl setperm resource lax-ggate1-vip -u user:gguser:r-x
# crsctl start resource lax-ggate1-vip
CRS-2672: Attempting to start 'lax-ggate1-vip' on 'rac02'
CRS-2676: Start of 'lax-ggate1-vip' on 'rac02' succeeded

RETRIEVE INFORMATION TO CREATE THE GOLDENGATE AGENT
–nodes

$ olsnodes
rac01
rac02

–filesystems

$ crsctl stat res -w "TYPE = ora.acfs.type" -p|grep '^NAME'
NAME=ora.dg_acfs.vg_acfs.acfs
NAME=ora.dg_acfs.vg_acfs.acfs

–databases

$ crsctl stat res -w "TYPE = ora.database.type"
NAME=ora.emu.db
TYPE=ora.database.type
TARGET=ONLINE         , ONLINE
STATE=ONLINE on rac01, ONLINE on rac02

— extracts and replicats

GGSCI (rac02.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ELAX        00:00:00      00:00:02
EXTRACT     RUNNING     PLAX_DEN    00:00:00      00:00:03
REPLICAT    RUNNING     RDEN_LAX    00:00:00      00:00:06

AS GRID, CONFIGURE GOLDENGATE AGENT (lax_ggate)
Options for --instance_type are source|target|dual; I am interpreting dual as bi-directional.

$XAG_HOME/bin/agctl add goldengate lax_ggate \
--gg_home /acfsmount/ggs112 \
--instance_type dual \
--nodes rac01,rac02 \
--vip_name lax-ggate1-vip \
--filesystems ora.dg_acfs.vg_acfs.acfs \
--databases ora.emu.db \
--oracle_home /u01/app/oracle/product/11.2.0.4/db_1 \
--monitor_extracts ELAX,PLAX_DEN \
--critical_extracts ELAX,PLAX_DEN \
--monitor_replicats RDEN_LAX \
--critical_replicats RDEN_LAX

VERIFY RESULTS

$ $XAG_HOME/bin/agctl config goldengate lax_ggate
GoldenGate location is: /acfsmount/ggs112
GoldenGate instance type is: dual
Configured to run on Nodes: rac01 rac02
ORACLE_HOME location is: /u01/app/oracle/product/11.2.0.4/db_1
Databases needed: ora.emu.db
File System resources needed: ora.dg_acfs.vg_acfs.acfs
Extracts to monitor: ELAX,PLAX_DEN
Replicats to monitor: RDEN_LAX
Critical extracts: ELAX,PLAX_DEN
Critical replicats: RDEN_LAX
Autostart on DataGuard role transition to PRIMARY: no
Autostart JAgent: no

START GOLDENGATE

$XAG_HOME/bin/agctl start goldengate lax_ggate

CHECK GOLDENGATE STATUS

$XAG_HOME/bin/agctl status goldengate lax_ggate
Goldengate  instance 'lax_ggate' INTERMEDIATE on rac02

INTERMEDIATE here means the monitored extract/replicat processes are still starting; checking again a moment later shows the instance fully running:

$XAG_HOME/bin/agctl status goldengate lax_ggate
Goldengate  instance 'lax_ggate' is running on rac02
$ crsstat

Resource Name                            Resource Type  Target       State        Node            State Details
---------------------------------------- -------------- ------------ ------------ --------------- ---------------
lax-ggate1-vip                           appvip_net1    ONLINE       ONLINE       rac02
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac02
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac01
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac02
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac01
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac02
ora.LISTENER_SCAN1.lsnr                  SCAN Listener  ONLINE       ONLINE       rac01
ora.LISTENER_SCAN2.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.LISTENER_SCAN3.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.asm                                  ASM            ONLINE       ONLINE       rac01           Started
ora.asm                                  ASM            ONLINE       ONLINE       rac02           Started
ora.cvu                                  cvu            ONLINE       ONLINE       rac02
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac01           mounted on /acfsmount
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac02           mounted on /acfsmount
ora.emu.db                               database       ONLINE       ONLINE       rac01           Open
ora.emu.db                               database       ONLINE       ONLINE       rac02           Open
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac01
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac02
ora.oc4j                                 OC4J           ONLINE       ONLINE       rac02
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac01
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac02
ora.rac01.vip                            Cluster VIP    ONLINE       ONLINE       rac01
ora.rac02.vip                            Cluster VIP    ONLINE       ONLINE       rac02
ora.registry.acfs                        registry       ONLINE       ONLINE       rac01
ora.registry.acfs                        registry       ONLINE       ONLINE       rac02
ora.scan1.vip                            SCAN VIP       ONLINE       ONLINE       rac01
ora.scan2.vip                            SCAN VIP       ONLINE       ONLINE       rac02
ora.scan3.vip                            SCAN VIP       ONLINE       ONLINE       rac02
xag.lax_ggate.goldengate                 goldengate     ONLINE       ONLINE       rac02

CONNECT TO THE CORRECT NODE FOR GOLDENGATE

$ su - gguser
Password:
0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!!!!! PLEASE CONNECT TO NODE WHERE GOLDENGATE IS RUNNING !!!!!
!!!!!                                                    !!!!!
--+ Goldengate  instance 'lax_ggate' is running on rac02 +--
!!!!!                                                    !!!!!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

[gguser@rac01:/home/gguser]
$ ssh rac02
Last login: Sun Dec 14 13:23:48 2014 from rac01
1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~ GOLDENGATE IS RUNNING ON THIS NODE - GOOD TO GO    ~~~~~
--+ Goldengate  instance 'lax_ggate' is running on rac02 +--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[gguser@rac02:/home/gguser]
$

.bash_profile
# User specific environment and startup programs
# Warn at login when this is not the node where GoldenGate is running.
ogg=`/u01/app/grid/xag/bin/agctl status goldengate lax_ggate`
# grep -ic prints the match count (the stray "0"/"1" seen in the sessions above)
# and returns a non-zero exit status when this host is not in the status line.
/u01/app/grid/xag/bin/agctl status goldengate lax_ggate|grep -ic `hostname -f`
if [ "$?" != "0" ]; then
  clear
  echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
  echo "!!!!! PLEASE CONNECT TO NODE WHERE GOLDENGATE IS RUNNING !!!!!"
  echo "!!!!!                                                    !!!!!"
  echo "--+ $ogg +--"
  echo "!!!!!                                                    !!!!!"
  echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
else
  clear
  echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
  echo "~~~~~ GOLDENGATE IS RUNNING ON THIS NODE - GOOD TO GO    ~~~~~"
  echo "--+ $ogg +--"
  echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
fi
umask 0002

RELOCATE GOLDENGATE TO A DIFFERENT NODE

[grid@rac01:+ASM1:/home/grid]
$ crsstat


Resource Name                            Resource Type  Target       State        Node            State Details
---------------------------------------- -------------- ------------ ------------ --------------- ---------------
lax-ggate1-vip                           appvip_net1    ONLINE       ONLINE       rac02
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac02
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac01
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac02
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac01
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac02
ora.LISTENER_SCAN1.lsnr                  SCAN Listener  ONLINE       ONLINE       rac01
ora.LISTENER_SCAN2.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.LISTENER_SCAN3.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.asm                                  ASM            ONLINE       ONLINE       rac01           Started
ora.asm                                  ASM            ONLINE       ONLINE       rac02           Started
ora.cvu                                  cvu            ONLINE       ONLINE       rac02
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac01           mounted on /acfsmount
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac02           mounted on /acfsmount
ora.emu.db                               database       ONLINE       ONLINE       rac01           Open
ora.emu.db                               database       ONLINE       ONLINE       rac02           Open
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac01
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac02
ora.oc4j                                 OC4J           ONLINE       ONLINE       rac02
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac01
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac02
ora.rac01.vip                            Cluster VIP    ONLINE       ONLINE       rac01
ora.rac02.vip                            Cluster VIP    ONLINE       ONLINE       rac02
ora.registry.acfs                        registry       ONLINE       ONLINE       rac01
ora.registry.acfs                        registry       ONLINE       ONLINE       rac02
ora.scan1.vip                            SCAN VIP       ONLINE       ONLINE       rac01
ora.scan2.vip                            SCAN VIP       ONLINE       ONLINE       rac02
ora.scan3.vip                            SCAN VIP       ONLINE       ONLINE       rac02
xag.lax_ggate.goldengate                 goldengate     ONLINE       ONLINE       rac02

[grid@rac01:+ASM1:/home/grid]
$ $XAG_HOME/bin/agctl relocate goldengate lax_ggate --node rac01
[grid@rac01:+ASM1:/home/grid]
$ crsstat


Resource Name                            Resource Type  Target       State        Node            State Details
---------------------------------------- -------------- ------------ ------------ --------------- ---------------
lax-ggate1-vip                           appvip_net1    ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac02
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac01
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac02
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac01
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac02
ora.LISTENER_SCAN1.lsnr                  SCAN Listener  ONLINE       ONLINE       rac01
ora.LISTENER_SCAN2.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.LISTENER_SCAN3.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.asm                                  ASM            ONLINE       ONLINE       rac01           Started
ora.asm                                  ASM            ONLINE       ONLINE       rac02           Started
ora.cvu                                  cvu            ONLINE       ONLINE       rac02
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac01           mounted on /acfsmount
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac02           mounted on /acfsmount
ora.emu.db                               database       ONLINE       ONLINE       rac01           Open
ora.emu.db                               database       ONLINE       ONLINE       rac02           Open
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac01
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac02
ora.oc4j                                 OC4J           ONLINE       ONLINE       rac02
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac01
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac02
ora.rac01.vip                            Cluster VIP    ONLINE       ONLINE       rac01
ora.rac02.vip                            Cluster VIP    ONLINE       ONLINE       rac02
ora.registry.acfs                        registry       ONLINE       ONLINE       rac01
ora.registry.acfs                        registry       ONLINE       ONLINE       rac02
ora.scan1.vip                            SCAN VIP       ONLINE       ONLINE       rac01
ora.scan2.vip                            SCAN VIP       ONLINE       ONLINE       rac02
ora.scan3.vip                            SCAN VIP       ONLINE       ONLINE       rac02
xag.lax_ggate.goldengate                 goldengate     ONLINE       INTERMEDIATE rac01           ER(s) not running : ELAX,ELAX,RDEN_LAX

Immediately after the relocation the resource reports INTERMEDIATE while the extract/replicat processes restart on rac01; checking again once they are up shows it fully ONLINE:

[grid@rac01:+ASM1:/home/grid]
$ crsstat


Resource Name                            Resource Type  Target       State        Node            State Details
---------------------------------------- -------------- ------------ ------------ --------------- ---------------
lax-ggate1-vip                           appvip_net1    ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac01
ora.DATA2.dg                             diskgroup      ONLINE       ONLINE       rac02
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac01
ora.DG_ACFS.dg                           diskgroup      ONLINE       ONLINE       rac02
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac01
ora.LISTENER.lsnr                        Listener       ONLINE       ONLINE       rac02
ora.LISTENER_SCAN1.lsnr                  SCAN Listener  ONLINE       ONLINE       rac01
ora.LISTENER_SCAN2.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.LISTENER_SCAN3.lsnr                  SCAN Listener  ONLINE       ONLINE       rac02
ora.asm                                  ASM            ONLINE       ONLINE       rac01           Started
ora.asm                                  ASM            ONLINE       ONLINE       rac02           Started
ora.cvu                                  cvu            ONLINE       ONLINE       rac02
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac01           mounted on /acfsmount
ora.dg_acfs.vg_acfs.acfs                 acfs           ONLINE       ONLINE       rac02           mounted on /acfsmount
ora.emu.db                               database       ONLINE       ONLINE       rac01           Open
ora.emu.db                               database       ONLINE       ONLINE       rac02           Open
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.gsd                                  Gbl Svc Daemon OFFLINE      OFFLINE
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac01
ora.net1.network                         Network (VIP)  ONLINE       ONLINE       rac02
ora.oc4j                                 OC4J           ONLINE       ONLINE       rac02
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac01
ora.ons                                  Ora Notif Svc  ONLINE       ONLINE       rac02
ora.rac01.vip                            Cluster VIP    ONLINE       ONLINE       rac01
ora.rac02.vip                            Cluster VIP    ONLINE       ONLINE       rac02
ora.registry.acfs                        registry       ONLINE       ONLINE       rac01
ora.registry.acfs                        registry       ONLINE       ONLINE       rac02
ora.scan1.vip                            SCAN VIP       ONLINE       ONLINE       rac01
ora.scan2.vip                            SCAN VIP       ONLINE       ONLINE       rac02
ora.scan3.vip                            SCAN VIP       ONLINE       ONLINE       rac02
xag.lax_ggate.goldengate                 goldengate     ONLINE       ONLINE       rac01

STOP GOLDENGATE

[grid@rac01:+ASM1:/home/grid]
$ $XAG_HOME/bin/agctl stop goldengate lax_ggate
[grid@rac01:+ASM1:/home/grid]

DISCOVERY
It's easy to know where everything is when you are the architect, but what happens when you are not?
How do you find the GoldenGate VIP, and is it in DNS?

$ crsctl stat res lax-ggate1-vip -p|grep USR_ORA_VIP
GEN_USR_ORA_VIP=
USR_ORA_VIP=192.168.56.41

$ nslookup 192.168.56.41
Server:         127.0.0.1
Address:        127.0.0.1#53

41.56.168.192.in-addr.arpa      name = lax-ggate1-vip.localdomain.

$ crsctl stat res -w "TYPE = ora.acfs.type" -p|grep '^MOUNT'
MOUNTPOINT_PATH=/acfsmount
MOUNTPOINT_PATH=/acfsmount

REFERENCES:

Oracle GoldenGate Best Practices:
Configuring Oracle GoldenGate with Oracle Grid Infrastructure Bundled Agents (XAG) (Doc ID 1527310.1)

Oracle Clusterware: 

http://oracle.com/goto/clusterware

Oracle Grid Infrastructure (Bundled) Agents: 

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

Oracle Grid Infrastructure Agents Version 5.1
11.2.0.3 and later 
12.1.0.1 and later 

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/ogiba-2189738.pdf

SRDC - Data to Collect on GoldenGate Issues Related to XAG (Doc ID 1913048.1)	

Patch 16762459: ORACLE GOLDENGATE V11.2.1.0.7 FOR ORACLE 11G

[gguser@angel:/u01/app/ggs112]
$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.7 16934304 OGGCORE_11.2.1.0.7_PLATFORMS_130709.1600.1_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jul 18 2013 07:04:28

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

GGSCI (angel.local) 1> dblogin userid ggadmin, password ggadmin
Successfully logged into database.

GGSCI (angel.local) 2> versions
Operating System:
Linux
Version #1 SMP Fri Feb 22 18:16:18 PST 2013, Release 2.6.39-400.17.1.el6uek.x86_64
Node: angel.local
Machine: x86_64

Database:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

How to install GoldenGate in a cluster using XAG agents

An Implementation of GoldenGate Monitoring configured in a Real Application Cluster Environment

This post is not mine; all credit for it goes to: http://maazanjum.com/2014/06/19/an-implementation-of-goldengate-monitoring-configured-in-a-real-application-cluster-environment/

Nearly a year ago I wrote about GoldenGate monitoring using a Metric Extension, and later expanded on the creation of that Metric Extension, which I have installed and configured at several customer sites where GoldenGate runs on a standalone server.

Over the last few months, several people have approached me about the process, and some have improved on it. One such improvement is credited to Bobby Curtis (@dbasolved), who taught me how to buffer in Perl. Bobby also has a neat collection of monitoring scripts for GoldenGate; you can find them here.

The latest implementation is a collaboration led by my tenacious and talented colleague Tucker Thompson (LinkedIn). He is responsible for maintaining and managing the Enterprise Manager environment, from an operational perspective, for (among many others) a rather large retail corporation – let's call them Furry Feet (FF). Their environment contains multiple Exadata machines as well as several dozen non-Exadata environments.

A few weeks ago he approached me with a question on GoldenGate monitoring with Enterprise Manager that does not involve the GoldenGate Plugin. In his own words, Tucker described the problem below:

“The client was previously using [custom built] crontab scripts to monitor multiple items (including GoldenGate) in their large Exadata environment, despite having an OEM 12c implementation. Our desire was to move all of their crontab elements into a centralized strategy utilizing OEM 12c.

The current GoldenGate plugin for OEM 12c was tested, but seemed very buggy and the client was not ready to use it yet.

The client has AGCTL configured to assist in running multiple highly available GoldenGate instances in the same Exadata DBM.”

As per Oracle’s documentation; Agent Control (AGCTL) is the agent command line utility to manage bundled agents (XAG) for application High Availability (HA) using Oracle Grid Infrastructure.

Tucker explains why this solution wasn’t always reliable by stating:

“The crontab scripts operated per compute node to check the logs for errors, send a lag status to the elements, and check the AGCTL status to determine where the instance was running. However, we found that any alert in the alert.log triggered a critical alert through their ticketing system, as it grepped for any ORA-XXXXX error.

For instance, if there were any long running queries, we would get a ticket created. Another major issue was that we would encounter multiple cases where the status of GoldenGate in AGCTL did not accurately reflect its actual status. For example, an instance could be showing as down through AGCTL, but through GGSCI, the status was RUNNING.”

We seem to find a pattern with issues in monitoring of GoldenGate, don't we? This doesn't necessarily mean that the tool itself is at fault, but rather the available options. I had already come up with an adequate way to monitor GoldenGate using Metric Extensions in EM12c, but it was designed to run against a host target specifically, where GoldenGate runs in standalone mode. In FF's environment, there were several GoldenGate instances running across the various Exadata compute nodes, configured to fail over and restart GoldenGate seamlessly. This made for an interesting problem to resolve, because my initial script assumes a static GoldenGate home.

As an example, consider three nodes in a cluster, each with a different GoldenGate instance managed by XAG.


Tucker's innovative solution was to retrieve the information from the clusterware via AGCTL and run the GoldenGate check against the node where the instance is currently running.

“What this script does is execute against the Exadata Database Machine as a target. This means that it will first find a compute node available, run AGCTL to determine the names of the GoldenGate instances, and respective nodes they are configured to run on. This information is always available from any node, and the script does not take into account where AGCTL thinks GoldenGate is actually currently running.

Next, with that information registered, the script runs olsnodes to grab the host names of all compute nodes registered in the DBM. It then uses information pulled from the AGCTL configuration per GoldenGate instance to ssh to each compute node and grep for the manager process for that specific GG instance. With the manager found running on a certain node, the script then runs ggsci from that node against that GG instance, and parses the results to tell us if the different components are running, stopped, or abended. It will also take the lag into consideration and set warning thresholds, rather than a critical alert for any amount of lag. The script even goes as far to add the agctl status as an informational column, so it can be seen if agctl is showing as down, but GGSCI shows all processes running fine.


If the manager is not found anywhere in the Exadata environment, it extracts the first node that the instance is configured to run on from the AGCTL configuration, and runs ggsci from that node. This allows the script to still show all of the components as stopped or abended, and their respective lags.”

Another thing to note: if a GoldenGate instance relocates to a different node for some reason, instead of just getting an alert that the instance went down, we get that alert followed by a clear alert once the GoldenGate objects (manager, extract, replicat, pump) are back up and running on the other node.

Once tested via the Metric Extension setup screens, the output looks like:

[screenshot: Metric Extension test output]

This strategy allows one script to monitor multiple diagnostics across GoldenGate instances configured to run in a large, highly available environment.

It outputs the following information when run from a prompt.

ggate_baby077|MANAGER|MANAGER|RUNNING|0|0|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_E01|EXTRACT|RUNNING|22940|25|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R01|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R02|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R03|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R04|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R05|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R06|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R08|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R09|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R10|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R11|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby077|CODS_R12|REPLICAT|RUNNING|0|3|0|exababy01db02|Goldengate  instance 'ggate_baby077' is running on exababy01db02|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby077
ggate_baby19sss|MANAGER|MANAGER|RUNNING|0|0|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|MKCN_E01|EXTRACT|RUNNING|7|6|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|POMS_E01|EXTRACT|RUNNING|9|5|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|POMS_E02|EXTRACT|RUNNING|8|9|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|POMS_P02|EXTRACT|RUNNING|0|6|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|XFER_E01|EXTRACT|RUNNING|8|0|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|XFER_EM1|EXTRACT|RUNNING|7|1|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|XFER_P01|EXTRACT|RUNNING|0|8|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|XFER_PM1|EXTRACT|RUNNING|0|6|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|BASE_R01|REPLICAT|RUNNING|0|7|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|MKCN_R01|REPLICAT|RUNNING|0|7|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_baby19sss|POMS_R01|REPLICAT|RUNNING|0|5|0|exababy01db04|Goldengate  instance 'ggate_baby19sss' is running on exababy01db04|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_baby19ssa
ggate_test|MANAGER|MANAGER|STOPPED|0|0|1|exababy01db01|Goldengate  instance 'ggate_test' is not running|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_test
ggate_test|TEST_E01|EXTRACT|ABENDED|6|7|2|exababy01db01|Goldengate  instance 'ggate_test' is not running|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_test
ggate_test|TEST_R01|REPLICAT|ABENDED|0|0|2|exababy01db01|Goldengate  instance 'ggate_test' is not running|/u01/app/oracle/product/11.2.1.0.6/gghome_1/gg_test


The Perl script itself is fairly simple, and can be plugged into the Metric Extension example from earlier. The only caveat is that your GoldenGate environment must be an XAG resource.

Once the Metric Extension is deployed, its information is accessible in the “All Metrics” section for the Exadata Database Machine.

[screenshot: the "All Metrics" view for the Exadata Database Machine]

The point of this exercise was to solve a particular use case where GoldenGate instances are configured as clusterware resources that can be restarted on different nodes at any time. What I would like to see next is an adaptation of this GoldenGate monitoring script for clustered environments that do not necessarily use XAG.

Thanks again to Tucker for coming up with the idea to retrieve the information; it was fun to incorporate my original script into his version.

Cheers!


Exadata HealthCheck

HealthCheck_user_guide.txt
Version 1.2.2 – Release Date 05/16/2011

ALWAYS review My Oracle Support note 1070954.1 and download the latest scripting for this
HealthCheck before executing any of the scripting.

Current HealthCheck Version
---------------------------
As of 05/16/2011, the current HealthCheck version is 1.2.2, reflected in the filename 'HealthCheck_1_2_2_tar_gz'.
Version numbers are also contained in the headers of the uncompressed files.

Target Oracle Database Machine Impact
-------------------------------------
The Oracle Database Machine HealthCheck consists of read-only commands.  Other than the writing of the output files
and an empty locking file to help guard against more than one HealthCheck executing at a time, the impact to the
target machine is minimal.

The operating system, hardware, and firmware checks, running all options, take about 4 minutes on an HP quarter rack
and about 3 minutes 30 seconds on Sun hardware.

The asm checks take less than 10 seconds.

The manual InfiniBand switch commands execution time varies with typing skill.

Note Well:
==========
Execute only one HealthCheck at a time in a database machine.

For example, if you have a full rack configured with one cluster, then run one HealthCheck
on the first database server in the cluster.

For another example, if you have a full rack divided into two clusters, run one HealthCheck on the first database
server in the first cluster, wait for it to complete, then run one HealthCheck on the first database server
in the second cluster.

Environment and Configuration Settings
--------------------------------------
This HealthCheck assumes a deployment according to standard Oracle Database Machine naming and location conventions.
This section details some of those conventions, and other information regarding the command syntax and structures
in this HealthCheck.

DCLI Group Files
----------------
This HealthCheck requires the “root” userid to have the following dcli group files present in its home directory:

dbs_group, contains the Ethernet host names of the Oracle Database servers.
cell_group, contains the Ethernet host names of the Exadata Cells.
all_group, contains the Ethernet host names of both the Exadata Cells and the Oracle Database servers.
all_ib_group, contains the private InfiniBand host names of both the Exadata Cells and the Oracle Database servers.
cell_ib_group, contains the private InfiniBand host names of the Exadata Cells.
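
As an illustration, these group files are plain text with one host name per line; a hypothetical dbs_group might look like this (host names are placeholders, not from a real deployment):

# cat ~/dbs_group
dbnode01
dbnode02

all_group can then simply be the concatenation of dbs_group and cell_group:

# cat ~/dbs_group ~/cell_group > ~/all_group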

Linux Convention
----------------
The Linux ~/ convention is used to indicate the home directory of the current user.

Parameters
----------
This HealthCheck uses the following input parameters to simplify some of the command structure:

run_os_commands_as_root.sh:
---------------------------
-a <the location of the HealthCheck source files. eg: /home/oracle/HealthCheck>
-b <the location of the CRS home. eg: /u01/app/11.1.0/crs>
-c <the location of the ASM home. eg: /u01/app/oracle/product/11.1.0/asm_1>
-d <the location of the DB home. eg: /u01/app/oracle/product/11.1.0/db_1>
-e <>
-f <>
-g <>

Note: for 11.2.x deployments, enter the grid home location for both parameters -b and -c.

Note: the -e parameter takes no arguments.  When -e is added to the parameters, the scripting
adds -s -q to each dcli command to attempt to suppress SSH login banners.

Note: the -f parameter takes no arguments.

HealthCheck by default on HP hardware does not stop the MS Server on the storage cells in order
to run CheckHWnFWProfile or execute hpaducli commands for
“Determining SAS Backplane Version on storage cells:” and “Verifying disk health on storage cells:”.

If you wish to execute those HealthCheck sections and CheckHWnFWProfile on the storage cells,
it is recommended that you:

1) Schedule an outage.
2) Shutdown the entire Oracle stack running in the cluster.
3) Re-execute the HealthCheck with the “-f” input parameter.
4) Restart the entire Oracle stack running in the cluster.

Note: the -g parameter takes no arguments.

HealthCheck by default does not execute either CheckHWnFWProfile or CheckSWProfile.sh on the
database servers. They should only be executed immediately after the first build of the database
machine or a fresh image. Specify the “-g” parameter to execute CheckHWnFWProfile and
CheckSWProfile.sh on the database servers.

run_asm_commands_as_oracle.sh:
------------------------------
-a <the location of the HealthCheck source files. eg: /home/oracle/HealthCheck>

-b <the asm instance SID. eg: +ASM1>

Path
----
This HealthCheck requires the root user and the oracle user to include
/usr/local/bin:/usr/bin:/usr/sbin:/bin in the $PATH environment variable.
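
If one of those directories is missing, it can be appended in the user's profile, for example:

$ export PATH=$PATH:/usr/local/bin:/usr/bin:/usr/sbin:/bin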

Command Execution Location
--------------------------
The Automatic Storage Management HealthCheck script is recommended to be executed on the database server
where the +ASM1 instance exists, typically the first node in the target cluster.  Unless stated otherwise,
the HealthCheck assumes all commands are executed on the database node where the +ASM1 instance exists.

Command Line Prompts
--------------------
A command run by the “root” userid is indicated by a “#” prompt, or may be explicitly stated in the directions.

A command run by the “oracle” userid is indicated by a “$” prompt, or may be explicitly stated in the directions.

Note: When constructing commands, do not copy the “#” or “$” used in these examples.

Secure Shell Equivalence
------------------------
This HealthCheck requires that there is Secure Shell (SSH) equivalence configured for the “root” userid
between the first database server and all other database servers, and between the first database server
and the storage servers.  The scripts will not work without the required SSH equivalence.

Pre-execution Steps
-------------------
1) Verify ssh equivalence for the root user:
For a standard Oracle Database Machine, the required equivalence was created during the onsite deployment
and may have been left in place, if requested. If you are uncertain of your configuration, you can execute
the following two commands:
# dcli -g ~/all_group -l root hostname
# dcli -g ~/all_ib_group -l root hostname

If you are challenged to authenticate, ctrl-c out of the commands and establish the required SSH connectivity
as follows:

1.1) Create a private/public key file using the following command:
# ssh-keygen -t dsa

The output will be similar to the following:

Generating public/private dsa key pair.
Enter file in which to save the key (/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
09:79:e9:78:14:bf:29:40:66:ec:94:25:a9:7d:93:3e

The passphrase has been left empty so that when an SSH connection is established, a passphrase is not required.

Note:  Linux provides two public-key algorithms, RSA and DSA.  This HealthCheck uses DSA keys.

1.2) Use the dcli utility to distribute the public key in a reliable manner using the following commands:
# dcli -g ~/all_group -l root -k -s '-o StrictHostKeyChecking=no'
# dcli -g ~/all_ib_group -l root -k -s '-o StrictHostKeyChecking=no'

The output will be similar to the following:

dataab01s: Warning: Permanently added 'dataab01,152.68.120.251' (RSA) to the list of known hosts.
dataab01s: ssh key added
dataab02s: Warning: Permanently added 'dataab02,152.68.120.252' (RSA) to the list of known hosts.
dataab02s: ssh key added
dataab03s: Warning: Permanently added 'dataab03,152.68.120.253' (RSA) to the list of known hosts.
dataab03s: ssh key added

Note:  If you are prompted for a password on the remote server, enter the appropriate value.

1.3) Test SSH equivalence using the following commands:
# dcli -g ~/all_group -l root hostname
# dcli -g ~/all_ib_group -l root hostname

The output will be similar to the following:

dataas01s-priv: dataas01s.us.oracle.com
dataas02s-priv: dataas02s.us.oracle.com
dataas03s-priv: dataas03s.us.oracle.com

2) Download and uncompress the “HealthCheck_bundle.zip” file from MOS note 1070954.1 to your desktop or
laptop computer.  This is because there are example files and a spreadsheet that can be viewed there.

2.1) Transfer the HealthCheck_1_2_2_tar_gz file to the /home/oracle directory on the first database server
in the cluster where this HealthCheck is to be executed.

Note: Do not decompress the files onto a Windows environment, read the files in an editor, and then transfer
the decompressed files to your Linux environment.  This activity may insert stray characters into the scripts.
It is strongly recommended to decompress the tar file only in your Linux environment and read the files in vi,
if so desired.

Note: When you uncompress the files to your desktop / laptop, check to see if the file receives an extra “.gz”
file extension (e.g: HealthCheck_1_2_2_tar_gz.gz).  If it did, then rename the file to remove the extra “.gz”
file extension.

2.2) Extract the files using the following command:

Note: These instructions assume the HealthCheck is being installed for the first time. If you have been running
HealthCheck on your system, it is recommended that you save both the prior scripting and the output files before
you install a newer version of HealthCheck. If you do not, the older files will be overwritten. For example,
assuming that you wish to retain the prior HealthCheck installation online for reference, one method to preserve
the prior installation is to use the mv command in the /home/oracle directory to rename the existing installation.
Eg: mv HealthCheck HealthCheck_03182010.

$ tar -zpxvf HealthCheck_1_2_2_tar_gz

2.3) Verify file creation using this command in the /home/oracle directory:
$ ls -ltr | grep HealthCheck

The output should look similar to this (date and timestamp will vary):

drwxr-xr-x 3 oracle oinstall 4096 Mar 11 12:05 HealthCheck

HealthCheck is the base directory that contains the command files and the output_files subdirectory.

Note:  The operating system and Automatic Storage Management scripts, as well as the Voltaire commands screen
capture, write their output to the /home/oracle/HealthCheck/output_files directory with a date and timestamp
embedded in the file names, so that an output history can be easily maintained.

Note: Files written to the /home/oracle/HealthCheck/output_files directory by the root user are owned by the root user.  If file cleanup is desired, the root user will have to perform the actions.

Operating System Healthcheck
----------------------------
Execute the following command as the root user from the /home/oracle/HealthCheck directory on the first database
server in the cluster from which this HealthCheck is being driven:

Note: The following command is a sample, and you must substitute the correct parameter values as discussed earlier
in the “Parameters” section.  If you try this verbatim on your system, it may not work!

# ./run_os_commands_as_root.sh -a /home/oracle/HealthCheck -b /u01/app/11.2.0/grid -c /u01/app/11.2.0/grid -d /u01/app/oracle/product/11.2.0/dbhome_1

The output will scroll by on your screen as the scripting executes, and an output file will be written to the /home/oracle/HealthCheck/output_files directory.

Automatic Storage Management Healthcheck
----------------------------------------
Execute the following command as the oracle user from the /home/oracle/HealthCheck directory on the driving node
in the cluster from which this HealthCheck is being driven:

$ ./run_asm_commands_as_oracle.sh -a /home/oracle/HealthCheck -b +ASM1

The output will scroll by on your screen as the scripting executes, and an output file will be written to the /home/oracle/HealthCheck/output_files directory.

InfiniBand Switch Healthcheck
-----------------------------
It is not possible to script the execution of the InfiniBand switch commands.

To execute the InfiniBand commands and capture the output, perform the following steps on the driving node in the cluster from which this HealthCheck is being driven:

1) create a log file of your terminal activity with a date and timestamp in its name in the
/home/oracle/HealthCheck/output_files directory:

# script -a -q /home/oracle/HealthCheck/output_files/IB_switch_commands_`date +%m%d%y_%H%M%S`.lst

2) For each of the managed switches, connect by ip address and perform the following commands
(output deleted here for clarity):

For HP Oracle Database Machine:
# ssh 10.204.72.90 -l enable
enable@10.204.72.90's password: <default should be voltaire>
ISR9024D-36d6# version show
ISR9024D-36d6# config
ISR9024D-36d6(config)# sm
ISR9024D-36d6(config-sm)# sm-info show
ISR9024D-36d6(config-sm)# exit
ISR9024D-36d6(config)# ntp
ISR9024D-36d6(config-ntp)# ntp show
ISR9024D-36d6(config-ntp)# clock show
ISR9024D-36d6(config-ntp)# exit
ISR9024D-36d6(config)# exit
ISR9024D-36d6# exit
ISR9024D-36d6> exit
Connection to 10.204.72.90 closed.

3) When you have processed all of the managed switches, stop logging the terminal output and close
the output file using this command:

# exit

Output File Analysis
--------------------
In the output files, after the output from each individual command, there will be one of two types
of expected result provided:

Direct text
A link to another My Oracle Support note

The direct text is used when the expected output is fixed or simple, and the link is used if the
expected output interpretation is complex or varies over time (e.g. firmware versions for
different Exadata Storage Cell Software versions).

If you discover variances between the current values reported for your Oracle Database Machine and
the expected values detailed in the output or referenced files, and are uncertain of how to proceed,
contact Oracle Support for assistance.

The file "sample_output_files.zip" contains a sampling of output files for the operating
system scripts and the Automatic Storage Management scripts.

The file “HealthCheck_command_table.xls” is a spreadsheet listing the included checks.

History
V. Wagman 05/16/11 incorporated all directions here instead of the
MOS note.
V. Wagman 05/12/2011 Set version to 1.2.2 in file header section


How to extend or create a filesystem on Exadata

MOS NOTE: 1582139.1

Reclaim the 4th hard disk drive and extend the physical volume size

Check the sizes of the physical volume and the associated device(s). Note the volume group name (VGExaDb):
[root@exadb02 ~]# pvs
PV         VG      Fmt  Attr PSize   PFree
/dev/sda2  VGExaDb lvm2 a--  556.80G 372.80G

[root@exadb02 ~]# fdisk -l /dev/sda
Disk /dev/sda: 597.9 GB, 597998698496 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM

Check the logical volume sizes in the volume group:
[root@exadb02 ~]# lvs
LV        VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
LVDbOra1  VGExaDb -wi-ao 100.00G
LVDbSwap1 VGExaDb -wi-ao  24.00G
LVDbSys1  VGExaDb -wi-ao  30.00G

To reclaim the 4th hard disk, either run the reclaim disk script manually:
[root@exadb02 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -free -reclaim

or check the progress after running dbnodeupdate.sh:
[root@exadb02 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -check
[INFO] This is SUN FIRE X4170 M2 SERVER machine
[INFO] Number of LSI controllers: 1
[INFO] Physical disks found: 4 (252:0 252:1 252:2 252:3)
[INFO] Logical drives found: 1
[WARNING] Reconstruction on the logical disk 0 is in progress: Completed 83%, Taken 160 min.
[INFO] Continue later when reconstruction is done

The reclaim normally takes 3-4 hours.
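
To poll the reclaim progress without retyping the command, one option (assuming the watch utility is available) is:

# watch -n 600 /opt/oracle.SupportTools/reclaimdisks.sh -check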

Once the reclaim completes, all 4 disks are in the RAID 5 configuration. The check still reports an [ERROR], because it expects the pre-reclaim layout of RAID 5 across 3 disks plus a global hot spare, while the RAID 5 is now using all 4 disks.
[root@exadb02 ~]# /opt/oracle.SupportTools/reclaimdisks.sh -check
[INFO] This is SUN FIRE X4170 M2 SERVER machine
[INFO] Number of LSI controllers: 1
[INFO] Physical disks found: 4 (252:0 252:1 252:2 252:3)
[INFO] Logical drives found: 1
[INFO] Dual boot installation: no
[INFO] Linux logical drive: 0
[INFO] RAID Level for the Linux logical drive: 5
[INFO] Physical disks in the Linux logical drive: 4 (252:0 252:1 252:2 252:3)
[INFO] Dedicated Hot Spares for the Linux logical drive: 0
[INFO] Global Hot Spares: 0
[ERROR] Expected RAID 5 from 3 physical disks and 1 global hot spare and no dedicated hot spare

Note that the /dev/sda device size is still 600GB and it still has 2 partitions:
[root@exadb02 ~]# fdisk -l /dev/sda

Disk /dev/sda: 597.9 GB, 597998698496 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM

The RAID 5 device correctly shows the new size:
[root@exadb02 ~]# /opt/MegaRAID/MegaCli/MegaCli64 -CfgDsply -a0 | egrep "^RAID|^Size"
RAID Level          : Primary-5, Secondary-0, RAID Level Qualifier-3
Size                : 835.394 GB

The compute node needs to be rebooted in order for the new size of /dev/sda to take effect.

Shut down the clusterware first:
[root@exadb02 ~]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb02'
CRS-2673: Attempting to stop 'ora.crsd' on 'exadb02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'exadb02'

CRS-4133: Oracle High Availability Services has been stopped.

Reboot the server…
[root@exadb02 ~]# reboot

Broadcast message from root (pts/0) (Fri Sep  6 09:53:05 2013):

The system is going down for reboot NOW!

Verify the new size for /dev/sda after the reboot.
[root@exadb02 ~]# fdisk -l /dev/sda

Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM

Create the third partition (of type Linux LVM) on /dev/sda:

[root@exadb02 ~]# fdisk /dev/sda

The number of cylinders for this disk is set to 109053.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM

Command (m for help): n
Command action
e   extended
p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (72703-109053, default 72703):
Using default value 72703
Last cylinder or +size or +sizeM or +sizeK (72703-109053, default 109053):
Using default value 109053

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): 8e
Changed system type of partition 3 to 8e (Linux LVM)

Command (m for help): p

Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM
/dev/sda3           72703      109053   291989407+  8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

The /dev/sda now has three partitions.
[root@exadb02 ~]# fdisk -l /dev/sda

Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          16      128488+  83  Linux
/dev/sda2              17       72702   583850295   8e  Linux LVM
/dev/sda3           72703      109053   291989407+  8e  Linux LVM

Create the physical volume on the new partition and extend the volume group:
[root@exadb02 ~]# pvcreate /dev/sda3
Writing physical volume data to disk "/dev/sda3"
Physical volume "/dev/sda3" successfully created

[root@exadb02 ~]# vgextend VGExaDb /dev/sda3
Volume group "VGExaDb" successfully extended

[root@exadb02 ~]# vgdisplay
--- Volume group ---
VG Name               VGExaDb
System ID
Format                lvm2
Metadata Areas        2
Metadata Sequence No  9
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                4
Open LV               3
Max PV                0
Cur PV                2
Act PV                2
VG Size               835.26 GB
PE Size               4.00 MB
Total PE              213827
Alloc PE / Size       47104 / 184.00 GB
Free  PE / Size       166723 / 651.26 GB
VG UUID               a4MsSu-yB9U-5oxT-BuBC-mGjT-mSAc-V0pPb6

Resize an existing file system

To resize any of the existing file systems follow

Oracle® Exadata Database Machine Owner’s Guide 11g Release 2 (11.2)
Chapter 7 Maintaining Oracle Exadata Database Machine and Oracle Exadata Storage Expansion Rack
*******************************************************************************
Section 7.25 Resizing LVM Partitions
*******************************************************************************

Resize file system /u01:
[root@exadb02 ~]# df -kh
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1  30G  4.7G   24G  17% /
/dev/sda1                    124M   84M   35M  71% /boot
/dev/mapper/VGExaDb-LVDbOra1  99G   25G   70G  27% /u01
tmpfs                         81G     0   81G   0% /dev/shm

[root@exadb02 ~]# lvscan
ACTIVE            '/dev/VGExaDb/LVDbSys1' [30.00 GB] inherit
ACTIVE            '/dev/VGExaDb/LVDbSwap1' [24.00 GB] inherit
ACTIVE            '/dev/VGExaDb/LVDbOra1' [100.00 GB] inherit

[root@exadb02 ~]# lvdisplay /dev/VGExaDb/LVDbOra1
--- Logical volume ---
LV Name                /dev/VGExaDb/LVDbOra1
VG Name                VGExaDb
LV UUID                SNn8Wd-NZoK-zAIG-1fyv-GU98-EJPd-Zy7nQE
LV Write Access        read/write
LV Status              available
# open                 1
LV Size                100.00 GB
Current LE             25600
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:2

============================================================
There is plenty of free space in the volume group VGExaDb:
============================================================

[root@exadb02 ~]# vgdisplay VGExaDb -s
"VGExaDb" 835.26 GB [184.00 GB used / 651.26 GB free]

Shut down clusterware and OSW…
[root@exadb02 ~]# /u01/app/11.2.0.3/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exadb02'
CRS-2673: Attempting to stop 'ora.crsd' on 'exadb02'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'exadb02'

CRS-4133: Oracle High Availability Services has been stopped.

[root@exadb02 ~]# /opt/oracle.oswatcher/osw/stopOSW.sh

Unmount and check the file system to be resized (/u01):
[root@exadb02 ~]# umount /u01

[root@exadb02 ~]# e2fsck -f /dev/VGExaDb/LVDbOra1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
DBORA: 164726/13107200 files (0.9% non-contiguous), 6793630/26214400 blocks

Add 100GB to the logical volume (LVDbOra1), to make the total size 200GB:
[root@exadb02 ~]# lvextend -L+100G --verbose /dev/VGExaDb/LVDbOra1
Finding volume group VGExaDb
Archiving volume group "VGExaDb" metadata (seqno 9).
Extending logical volume LVDbOra1 to 200.00 GB
Found volume group "VGExaDb"
Loading VGExaDb-LVDbOra1 table (253:2)
Suspending VGExaDb-LVDbOra1 (253:2) with device flush
Found volume group "VGExaDb"
Resuming VGExaDb-LVDbOra1 (253:2)
Creating volume group backup "/etc/lvm/backup/VGExaDb" (seqno 10).
Logical volume LVDbOra1 successfully resized

Check the new size of logical volume (LVDbOra1):
[root@exadb02 ~]# lvscan | grep LVDbOra1
ACTIVE            '/dev/VGExaDb/LVDbOra1' [200.00 GB] inherit

Resize the file system:
[root@exadb02 ~]# resize2fs -p /dev/VGExaDb/LVDbOra1
resize2fs 1.39 (29-May-2006)
Resizing the filesystem on /dev/VGExaDb/LVDbOra1 to 52428800 (4k) blocks.
Begin pass 1 (max = 800)
Extending the inode table     XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 2 (max = 30)
Relocating blocks             XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 3 (max = 800)
Scanning inode table          XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Begin pass 5 (max = 15)
Moving inode table            XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The filesystem on /dev/VGExaDb/LVDbOra1 is now 52428800 blocks long.

Mount it back:
[root@exadb02 ~]# mount -t ext3 /dev/VGExaDb/LVDbOra1 /u01

Verify the new file system size:
[root@exadb02 ~]# df -kh
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1  30G  4.7G   24G  17% /
/dev/sda1                    124M   84M   35M  71% /boot
tmpfs                         81G     0   81G   0% /dev/shm
/dev/mapper/VGExaDb-LVDbOra1 197G   25G  163G  14% /u01

Restart the clusterware and OSW…
[root@exadb02 ~]# /u01/app/11.2.0.3/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[root@exadb02 ~]# /opt/oracle.cellos/vldrun -script oswatcher
Logging started to /var/log/cellos/validations.log
Command line is /opt/oracle.cellos/validations/bin/vldrun.pl -quiet -script oswatcher
Run validation oswatcher – PASSED
The each boot completed with SUCCESS

*******************************************************************************
Add a new file system using the free space in the extended volume
*******************************************************************************

There is still plenty of free space in the volume group:
[root@exadb02 ~]# vgdisplay | grep Free
Free  PE / Size       141123 / 551.26 GB

Create a new 200GB logical volume for a new file system:
[root@exadb02 ~]# pvs
PV         VG      Fmt  Attr PSize   PFree
/dev/sda2  VGExaDb lvm2 a--  556.80G 272.80G
/dev/sda3  VGExaDb lvm2 a--  278.46G 278.46G
[root@exadb02 ~]# lvs
LV        VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
LVDbOra1  VGExaDb -wi-ao 200.00G
LVDbSwap1 VGExaDb -wi-ao  24.00G
LVDbSys1  VGExaDb -wi-ao  30.00G

[root@exadb02 ~]# lvcreate -L200GB -n LVDbOra2 VGExaDb
Logical volume "LVDbOra2" created

[root@exadb02 ~]# lvs
LV        VG      Attr   LSize   Origin Snap%  Move Log Copy%  Convert
LVDbOra1  VGExaDb -wi-ao 200.00G
LVDbOra2  VGExaDb -wi-a- 200.00G
LVDbSwap1 VGExaDb -wi-ao  24.00G
LVDbSys1  VGExaDb -wi-ao  30.00G

Create a new file system (and name it /u02):
[root@exadb02 ~]# mkfs.ext3 -j -L u02 /dev/VGExaDb/LVDbOra2
mke2fs 1.39 (29-May-2006)
Filesystem label=u02
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
26214400 inodes, 52428800 blocks
2621440 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
1600 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Mount the new file system:
[root@exadb02 ~]# mkdir /u02
[root@exadb02 ~]# mount -t ext3 /dev/VGExaDb/LVDbOra2 /u02
[root@exadb02 ~]# df -kh
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1  30G  4.8G   24G  17% /
/dev/sda1                    124M   84M   35M  71% /boot
tmpfs                         81G     0   81G   0% /dev/shm
/dev/mapper/VGExaDb-LVDbOra1 197G   25G  163G  14% /u01
/dev/mapper/VGExaDb-LVDbOra2 197G  188M  187G   1% /u02

Note that there is still some free space in the volume group:
[root@exadb02 ~]# vgdisplay | grep Free
Free  PE / Size       89923 / 351.26 GB

[root@exadb02 ~]# pvs
PV         VG      Fmt  Attr PSize   PFree
/dev/sda2  VGExaDb lvm2 a--  556.80G  72.80G
/dev/sda3  VGExaDb lvm2 a--  278.46G 278.46G
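
One step the transcript above does not show: persisting the /u02 mount so it survives a reboot. A minimal sketch, assuming the default ext3 mount options (adjust the options and fsck pass number to your standards):

# Persist the new /u02 file system across reboots (assumption: ext3 defaults).
echo "/dev/VGExaDb/LVDbOra2   /u02   ext3   defaults   1 2" >> /etc/fstab
mount -a      # sanity-check the entry; already-mounted file systems are skipped
df -h /u02    # confirm /u02 is mounted with the expected size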


Exadata kernel update using YUM.

Kernel update using YUM.

It is mandatory to have at least 3 GB of free space on the root (/) file system.

1. Back up the OS

There are three methods for backing up the OS:

The first method uses the dbserver_backup.sh script.

To use this method, there must be enough space to create a new partition of the same size as the root (/) file system partition, and all non-local system partitions must be unmounted (e.g. NFS, Samba, etc.). The backup helper script dbserver_backup.sh can be used to back up the root (/) and /boot partitions. It can be downloaded for your release via patch 13741363.

The script checks whether there is enough space and places the backup on a new partition called /dev/VGExaDb/LVDbSys2. If there is not enough space, the script will not work.

To shorten the total time and reduce the chance of failure due to lack of space, it is recommended to remove unnecessary log and trace files before running the script, as well as any other large files that can easily be recovered from other sources (e.g. Oracle patches or zipped installation files).

NOTE: By default, the dbserver_backup.sh script creates a backup of partition 1 (/dev/mapper/VGExaDb-LVDbSys1), treated as the active partition in use, onto partition 2 (/dev/mapper/VGExaDb-LVDbSys2), treated as an inactive partition. After a successful rollback, the partitions end up swapped. Before attempting the upgrade again, a new backup must be created. If the active partition in use is no longer partition 1, the --backup-to-partition argument must be specified when running dbserver_backup.sh, to ensure the right partition is copied to the right place. For more information, see dbserver_backup.sh -help.

To find out which partition is active, run the imageinfo command as root. In the example below the active partition is partition 1 (/dev/mapper/VGExaDb-LVDbSys1), so the backup would go to partition 2:

# imageinfo

Kernel version: 2.6.18-238.12.2.0.2.el5 #1 SMP Tue Jun 28 05:21:19 EDT 2011 x86_64
Image version: 11.2.2.4.2.111221
Image activated: 2012-07-27 13:15:04 -0600
Image status: success

System partition on device: /dev/mapper/VGExaDb-LVDbSys1
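
To script that decision instead of reading the output by eye, a minimal sketch (an illustration, not from the original note; --backup-to-partition is the flag described above) could derive the backup target from imageinfo:

#!/bin/sh
# Derive the inactive partition to back up to from the active system partition.
ACTIVE=$(imageinfo | awk -F': ' '/System partition on device/ {print $2}')
if [ "$ACTIVE" = "/dev/mapper/VGExaDb-LVDbSys1" ]; then
    TARGET=2    # Sys1 is active, so back up onto Sys2
else
    TARGET=1    # Sys2 is active, so back up onto Sys1
fi
echo "Active partition: $ACTIVE"
echo "Run: ./dbserver_backup.sh --backup-to-partition $TARGET"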

The second method is via an LVM snapshot, as described in chapter 7 of the Database Machine Owner's Guide, in the section "Recovering a Linux-Based Database Server Using the Most-Recent Backup".

When taking LVM snapshots, be sure to give the snapshot a different label, or to remove the label after the backup is taken; otherwise the system will not come up after the next reboot.

To check whether the database server uses Linux Volume Manager (LVM) for its file systems, use the following command:

# mount | sed -e '/VGExaDb/!d;s/.*VGExaDb.*/VGExaDb/g;'

The expected output of this command is:

VGExaDb
VGExaDb

A database server fresh from the factory has exactly this partition layout and naming. Use the following command to verify:

# mount | sed -e '/VGExaDb/!d;s/.*on \/ type.*/\//g;s/.*on \/u01.*/u01/g;'

The expected output of this command is:

/
u01

If the server meets these requirements, this method can be used, following the steps described below.

Taking a Snapshot-based Backup

The following procedure describes how to take a snapshot-based backup. The values shown in the procedure are examples.

1.Prepare a destination to hold the backup, as follows. The destination can be a large, writable NFS location. The NFS location should be large enough to hold the backup tar files. For uncustomized partitions, 145 GB should be adequate.

a.Create a mount point for the NFS share using the following command:

mkdir -p /root/tar

b.Mount the NFS location using the following command:

mount -t nfs -o ro,intr,soft,proto=tcp,nolock ip_address:/nfs_location/ /root/tar

In the preceding command, ip_address is the IP address of the NFS server, and nfs_location is the NFS location.

2.Take a snapshot-based backup of the / (root), /u01, and /boot directories, as follows:

a.Create a snapshot named root_snap for the root directory using the following command:

lvcreate -L1G -s -n root_snap /dev/VGExaDb/LVDbSys1

b.Label the snapshot using the following command:

e2label /dev/VGExaDb/root_snap DBSYS_SNAP

c.Mount the snapshot using the following commands:

mkdir /root/mnt

mount /dev/VGExaDb/root_snap /root/mnt -t ext3

d.Create a snapshot named u01_snap for the /u01 directory using the following command:

lvcreate -L5G -s -n u01_snap /dev/VGExaDb/LVDbOra1

e.Label the snapshot using the following command:

e2label /dev/VGExaDb/u01_snap DBORA_SNAP

f.Mount the snapshot using the following commands:

mkdir -p /root/mnt/u01

mount /dev/VGExaDb/u01_snap /root/mnt/u01 -t ext3

g.Change to the directory for the backup using the following command:

cd /root/mnt

h.Create the backup file using one of the following commands:

- System does not have NFS mount points:

# tar -pjcvf /root/tar/mybackup.tar.bz2 * /boot --exclude \

tar/mybackup.tar.bz2 > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr

- System has NFS mount points:

# tar -pjcvf /root/tar/mybackup.tar.bz2 * /boot --exclude \

tar/mybackup.tar.bz2 --exclude nfs_mount_points > \

/tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr

In the preceding command, nfs_mount_points are the NFS mount points. Excluding the mount points prevents the generation of large files and long backup times.

i.Check the /tmp/backup_tar.stderr file for any significant errors. Errors about failing to tar open sockets, and other similar errors, can be ignored.

3.Unmount the snapshots and remove the snapshots for the root and /u01 directories using the following commands:

cd /

umount /root/mnt/u01

umount /root/mnt

/bin/rm -rf /root/mnt

lvremove /dev/VGExaDb/u01_snap

lvremove /dev/VGExaDb/root_snap

4.Unmount the NFS share using the following command:

umount /root/tar

Refer to the maintenance chapter in the owner's guide for information about backup and restore. Note that the backup created by this procedure works best both for rolling back software changes and for recovering an unbootable system.

The third method is to back up the root (/) file system using tar.

Change to the root (/) directory:

cd /

Create the backup using the tar command:

tar -pjcvf mybackup.tar.bz2 *
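
Run from /, that command will try to include the archive in itself and will descend into any NFS mounts. A safer variant along the lines of the owner's guide command shown earlier (the paths are illustrative):

cd /
# Write the archive outside the tree being backed up and exclude it explicitly.
tar -pjcvf /root/tar/mybackup.tar.bz2 * /boot \
    --exclude root/tar/mybackup.tar.bz2 \
    > /tmp/backup_tar.stdout 2> /tmp/backup_tar.stderr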

 

2. Steps to update the kernel

Run the commands below as root.

1) Import the RPM GPG key using the following command:

# rpm --import /usr/share/rhn/RPM-GPG-KEY

2) Run the up2date command in text mode as follows:

# up2date --nox --register

3) Create the directories needed to store the repository:

mkdir -p /mnt/iso/yum/unknown/EXADATA/dbserver/11.2/latest
4) Mount the ISO on the directory created earlier:

mount -o loop /mnt/rman/patch-16432033/112_latest_repo_130302.iso /mnt/iso/yum/unknown/EXADATA/dbserver/11.2/latest
5) Edit the Exadata-computenode.repo file:

vi /etc/yum.repos.d/Exadata-computenode.repo

Exadata-computenode.repo    before
================================
[exadata_dbserver_11.2.3.2.1_x86_64_base]
name=Oracle Exadata DB server 11.2.3.2.1 Linux $releasever - $basearch - base
baseurl=file:///media/iso/x86_64
gpgcheck=1
enabled=0

Exadata-computenode.repo    after
================================
[exadata_dbserver_11.2.3.2.1_x86_64_base]
name=Oracle Exadata DB server 11.2.3.2.1 Linux $releasever - $basearch - base
baseurl=file:///mnt/iso/yum/unknown/EXADATA/dbserver/11.2/latest/x86_64
gpgcheck=1
enabled=0

check
=====
[root@dbm01db01 ~]# yum repolist
exadata_dbserver_11.2.3.2.1_x86_64_base                                                                                                      | 1.9 kB     00:00
exadata_dbserver_11.2.3.2.1_x86_64_base/primary_db                                                                                            | 1.1 MB     00:00
Excluding Packages in global exclude list
Finished

repo id                              repo name                                     status
exadata_dbserver_11.2.3.2.1_x86_64_base  Oracle Exadata DB server 11.2.3.2.1 Linux 5 – x86_64 – base                                485+1
repolist: 485

Patch
======

/u01/app/11.2.0.3/grid/bin/crsctl disable crs

CRS-4621: Oracle High Availability Services autostart is disabled.

/u01/app/11.2.0.3/grid/bin/crsctl stop crs -f

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'dbm01db01'

CRS-4133: Oracle High Availability Services has been stopped.

yum clean all

Cleaning up Everything

Verify the yum repository using the following command:

# yum --enablerepo=<channel name as mentioned in the patch README> repolist

Update the database server using <channel as mentioned in the patch README>.

yum --enablerepo=<channel as mentioned in the patch README> update

exadata_dbserver_11.2.3.2.1_x86_64_base                                                                                                         | 1.9 kB     00:00
exadata_dbserver_11.2.3.2.1_x86_64_base/primary_db                                                                                              | 1.1 MB     00:00
Excluding Packages in global exclude list
Finished
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package device-mapper-multipath.x86_64 0:0.4.9-56.0.3.el5 set to be updated
---> Package device-mapper-multipath-libs.x86_64 0:0.4.9-56.0.3.el5 set to be updated
---> Package exadata-applyconfig.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-asr.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-base.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-commonnode.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-exachk.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-firmware-compute.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-ibdiagtools.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-ipconf.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-onecommand.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-oswatcher.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package exadata-sun-computenode.x86_64 0:11.2.3.2.1.130302-1 set to be updated
--> Processing Dependency: ofa-2.6.32-400.21.1.el5uek = 1.5.1-4.0.58 for package: exadata-sun-computenode
---> Package exadata-validations-compute.x86_64 0:11.2.3.2.1.130302-1 set to be updated
---> Package kernel-uek.x86_64 0:2.6.32-400.21.1.el5uek set to be installed
---> Package kernel-uek-debuginfo.x86_64 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kernel-uek-debuginfo-common.x86_64 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kernel-uek-devel.x86_64 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kernel-uek-doc.noarch 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kernel-uek-firmware.noarch 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kernel-uek-headers.x86_64 0:2.6.32-400.21.1.el5uek set to be updated
---> Package kexec-tools.x86_64 0:1.102pre-161.el5 set to be updated
---> Package kpartx.x86_64 0:0.4.9-56.0.3.el5 set to be updated
---> Package libbdevid-python.x86_64 0:5.1.19.6-79.0.1.el5 set to be updated
---> Package mkinitrd.x86_64 0:5.1.19.6-79.0.1.el5 set to be updated
---> Package nash.x86_64 0:5.1.19.6-79.0.1.el5 set to be updated
--> Running transaction check
---> Package ofa-2.6.32-400.21.1.el5uek.x86_64 0:1.5.1-4.0.58 set to be updated
exadata_dbserver_11.2.3.2.1_x86_64_base/filelists_db                                                                                            | 683 kB     00:00
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================
Package                                     Arch                  Version                                Repository                                     Size
=======================================================================================================================================================================
Installing:
kernel-uek                                  x86_64                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base               26 M
Updating:
device-mapper-multipath                     x86_64                0.4.9-56.0.3.el5                       exadata_dbserver_11.2.3.2.1_x86_64_base             104 k
device-mapper-multipath-libs                x86_64                0.4.9-56.0.3.el5                       exadata_dbserver_11.2.3.2.1_x86_64_base             179 k
exadata-applyconfig                         x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 28 k
exadata-asr                                 x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 33 k
exadata-base                                x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                852 k
exadata-commonnode                          x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 40 M
exadata-exachk                              x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                1.5 M
exadata-firmware-compute                    x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                135 M
exadata-ibdiagtools                         x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 98 k
exadata-ipconf                              x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 84 k
exadata-onecommand                          x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 16 M
exadata-oswatcher                           x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                355 k
exadata-sun-computenode                     x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                424 k
exadata-validations-compute                 x86_64                11.2.3.2.1.130302-1                    exadata_dbserver_11.2.3.2.1_x86_64_base                 55 k
kernel-uek-debuginfo                        x86_64                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base              320 M
kernel-uek-debuginfo-common                 x86_64                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base               38 M
kernel-uek-devel                            x86_64                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base              6.8 M
kernel-uek-doc                              noarch                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base              8.5 M
kernel-uek-firmware                         noarch                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base              3.8 M
kernel-uek-headers                          x86_64                2.6.32-400.21.1.el5uek                 exadata_dbserver_11.2.3.2.1_x86_64_base              775 k
kexec-tools                                 x86_64                1.102pre-161.el5                       exadata_dbserver_11.2.3.2.1_x86_64_base                588 k
kpartx                                      x86_64                0.4.9-56.0.3.el5                       exadata_dbserver_11.2.3.2.1_x86_64_base             468 k
libbdevid-python                            x86_64                5.1.19.6-79.0.1.el5                    exadata_dbserver_11.2.3.2.1_x86_64_base                 69 k
mkinitrd                                    x86_64                5.1.19.6-79.0.1.el5                    exadata_dbserver_11.2.3.2.1_x86_64_base                475 k
nash                                        x86_64                5.1.19.6-79.0.1.el5                    exadata_dbserver_11.2.3.2.1_x86_64_base                1.4 M
Installing for dependencies:
ofa-2.6.32-400.21.1.el5uek                  x86_64                1.5.1-4.0.58                           exadata_dbserver_11.2.3.2.1_x86_64_base             1.0 M

Transaction Summary
=======================================================================================================================================================================
Install       2 Package(s)
Upgrade      25 Package(s)

Total download size: 603 M

Is this ok [y/N]: y

Wait until the machine reboots:

[root@dbm01db01 ~]# w
23:37:01 up 2 min,  1 user,  load average: 1,39, 0,53, 0,19
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    10.1.0.194       23:36    0.00s  0.01s  0.01s w

rpm -qa | grep 'ofa-\|^kernel-' | grep -v 'uek\|^kernel-2\.6\.18' | xargs yum -y remove

Setting up Remove Process
Resolving Dependencies
--> Running transaction check
---> Package kernel-debuginfo.x86_64 0:2.6.18-308.24.1.0.1.el5 set to be erased
---> Package kernel-debuginfo-common.x86_64 0:2.6.18-308.24.1.0.1.el5 set to be erased
---> Package kernel-devel.x86_64 0:2.6.18-308.24.1.0.1.el5 set to be erased
---> Package kernel-doc.noarch 0:2.6.18-308.24.1.0.1.el5 set to be erased
---> Package ofa-2.6.18-238.12.2.0.2.el5.x86_64 0:1.5.1-4.0.53 set to be erased
---> Package ofa-2.6.18-274.18.1.0.1.el5.x86_64 0:1.5.1-4.0.58 set to be erased
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================================================
Package                                           Arch                         Version                                          Repository                       Size
=======================================================================================================================================================================
Removing:
kernel-debuginfo                                  x86_64                       2.6.18-308.24.1.0.1.el5                          installed                       610 M
kernel-debuginfo-common                           x86_64                       2.6.18-308.24.1.0.1.el5                          installed                       150 M
kernel-devel                                      x86_64                       2.6.18-308.24.1.0.1.el5                          installed                        16 M
kernel-doc                                        noarch                       2.6.18-308.24.1.0.1.el5                          installed                       8.0 M
ofa-2.6.18-238.12.2.0.2.el5                       x86_64                       1.5.1-4.0.53                                     installed                       3.6 M
ofa-2.6.18-274.18.1.0.1.el5                       x86_64                       1.5.1-4.0.58                                     installed                       3.5 M

Transaction Summary
=======================================================================================================================================================================
Remove        6 Package(s)
Reinstall     0 Package(s)
Downgrade     0 Package(s)

Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
Erasing        : kernel-doc                                                                                                                                     1/6
Erasing        : ofa-2.6.18-274.18.1.0.1.el5                                                                                                                     2/6
Erasing        : kernel-debuginfo                                                                                                                                3/6
Erasing        : kernel-devel                                                                                                                                    4/6
Erasing        : ofa-2.6.18-238.12.2.0.2.el5                                                                                                                     5/6
Erasing        : kernel-debuginfo-common                                                                                                                         6/6

Removed:
kernel-debuginfo.x86_64 0:2.6.18-308.24.1.0.1.el5    kernel-debuginfo-common.x86_64 0:2.6.18-308.24.1.0.1.el5    kernel-devel.x86_64 0:2.6.18-308.24.1.0.1.el5
kernel-doc.noarch 0:2.6.18-308.24.1.0.1.el5          ofa-2.6.18-238.12.2.0.2.el5.x86_64 0:1.5.1-4.0.53           ofa-2.6.18-274.18.1.0.1.el5.x86_64 0:1.5.1-4.0.58

Complete!

yum clean all

Cleaning up Everything

/u01/app/11.2.0.3/grid/bin/crsctl enable crs

CRS-4622: Oracle High Availability Services autostart is enabled.

/u01/app/11.2.0.3/grid/bin/crsctl start crs

CRS-4123: Oracle High Availability Services has been started.

Final verification

[root@dbm01db01 ~]# dcli -g dbs_group -l root uname -a
dbm01db01: Linux dbm01db01.gruposhark.com.br 2.6.32-400.21.1.el5uek #1 SMP Wed Feb 20 01:35:01 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
dbm01db02: Linux dbm01db02.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm01db03: Linux dbm01db03.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm01db04: Linux dbm01db04.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm02db01: Linux dbm02db01.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm02db02: Linux dbm02db02.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm02db03: Linux dbm02db03.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
dbm02db04: Linux dbm02db04.gruposhark.com.br 2.6.32-400.11.1.el5uek #1 SMP Thu Nov 22 03:29:09 PST 2012 x86_64 x86_64 x86_64 GNU/Linux
Relink the database binaries and the Grid Infrastructure binaries:

Connected as root:

cd $GRID_HOME/crs/install

perl rootcrs.pl -unlock

Connected as the GRID_HOME owner (depends on the customer; it may be grid, oracle, etc.):

export ORACLE_HOME=$GRID_HOME

$ORACLE_HOME/bin/relink all

make -C $ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

Connected as the database binary owner (depends on the customer; usually oracle):

export ORACLE_HOME={rdbms home}

$ORACLE_HOME/bin/relink all

make -C $ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle

Connected as root:

cd $GRID_HOME/crs/install

perl rootcrs.pl -patch

The last rootcrs.pl call will start GI and the instances.
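
For convenience, the whole sequence can be scripted per node. A minimal sketch (the DB home path and the grid/oracle owner names are assumptions; adjust them to the environment):

#!/bin/sh
# Run as root. Relink GI and database binaries after the kernel update.
GRID_HOME=/u01/app/11.2.0.3/grid
DB_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1   # hypothetical path

(cd $GRID_HOME/crs/install && perl rootcrs.pl -unlock)   # unlock the GI home

# Relink the GI home as its owner (grid here; may differ per customer).
su - grid -c "export ORACLE_HOME=$GRID_HOME; \$ORACLE_HOME/bin/relink all && \
  make -C \$ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle"

# Relink the database home as its owner (usually oracle).
su - oracle -c "export ORACLE_HOME=$DB_HOME; \$ORACLE_HOME/bin/relink all && \
  make -C \$ORACLE_HOME/rdbms/lib -f ins_rdbms.mk ipc_rds ioracle"

(cd $GRID_HOME/crs/install && perl rootcrs.pl -patch)    # locks GI and restarts the stack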


Exadata tuning parameters

#### PARAMETERS

## COMMON
alter system set log_buffer=134217728 sid='*' scope=spfile;
alter system set db_ultra_safe='DATA_ONLY' sid='*' scope=spfile;
alter system set fast_start_mttr_target=600 sid='*' scope=spfile;
alter system set parallel_adaptive_multi_user=FALSE sid='*' scope=spfile;
alter system set parallel_threads_per_cpu=1 sid='*' scope=spfile;
alter system set open_cursors=1000 sid='*' scope=spfile;
alter system set use_large_pages='ONLY' sid='*' scope=spfile;
alter system set "_enable_NUMA_support"=FALSE sid='*' scope=spfile;
alter system set sql92_security=TRUE sid='*' scope=spfile;
alter system set "_file_size_increase_increment" = 2044M sid='*' scope=spfile;
alter system set global_names=TRUE sid='*' scope=spfile;
alter system set db_create_online_log_dest_1='+DATA_MZFL' sid='*' scope=spfile;
alter system set os_authent_prefix='' sid='*' scope=spfile;
alter system set shared_servers=0 sid='*' scope=both;
alter system set DB_LOST_WRITE_PROTECT = 'TYPICAL' sid='*';

## OLTP
alter system set parallel_max_servers=240 sid='*' scope=spfile;
alter system set parallel_min_servers=0 sid='*' scope=spfile;
sga = 60%
pga = 40%
alter tablespace TEMP AUTOEXTEND ON NEXT 1G UNIFORM SIZE 16M;
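
To turn the 60/40 OLTP split above into concrete values, a minimal sketch (the 100 GB instance memory budget is a made-up figure; use the real host allocation):

#!/bin/sh
# OLTP memory split: sga = 60%, pga = 40% of the instance memory budget.
TOTAL_GB=100                        # hypothetical per-instance memory budget
SGA_GB=$(( TOTAL_GB * 60 / 100 ))
PGA_GB=$(( TOTAL_GB * 40 / 100 ))
sqlplus -S / as sysdba <<EOF
alter system set sga_target=${SGA_GB}G sid='*' scope=spfile;
alter system set pga_aggregate_target=${PGA_GB}G sid='*' scope=spfile;
EOF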

## DW
alter system set parallel_max_servers=240 sid='*' scope=spfile;
alter system set parallel_min_servers=96 sid='*' scope=spfile;
alter system set parallel_degree_policy=manual sid='*' scope=spfile;
alter system set parallel_degree_limit=16 sid='*' scope=spfile;
alter system set parallel_servers_target=128 sid='*' scope=spfile;
sga = 50%
pga = 50%

## X01DBFS
alter system set parallel_max_servers=2 sid='*' scope=spfile;
alter system set parallel_min_servers=0 sid='*' scope=spfile;
alter system set sga_target=1536M sid='*' scope=spfile;
alter system set pga_aggregate_target=6656M sid='*' scope=spfile;
alter system set db_recovery_file_dest='+DBFS_DG' sid='*' scope=spfile;
alter system set db_recovery_file_dest_size = 30G sid='*' scope=spfile;
alter system set cluster_interconnects = '10.199.11.1' sid='x01dbfs1' scope=spfile;
alter system set cluster_interconnects = '10.199.11.2' sid='x01dbfs2' scope=spfile;
alter tablespace SYSTEM autoextend on maxsize 5G;
alter tablespace SYSAUX autoextend on maxsize 10G;
alter tablespace TEMP autoextend on maxsize 20G;
alter tablespace UNDOTBS1 autoextend on maxsize 10G;
alter tablespace UNDOTBS2 autoextend on maxsize 10G;
alter tablespace USERS autoextend on maxsize 1G;
alter tablespace DBFS_CDCSP autoextend on maxsize 20G;

=======================================================================

alter system set cluster_interconnects = '192.168.32.5' sid='SIELP1' scope=spfile;
alter system set cluster_interconnects = '192.168.32.6' sid='SIELP2' scope=spfile;
alter system set cluster_interconnects = '192.168.32.7' sid='SIELP3' scope=spfile;
alter system set cluster_interconnects = '192.168.32.8' sid='SIELP4' scope=spfile;


How to enable Write Back flash cache on Exadata, step by step

[root@owtcel01 ~]# dcli -l root -g cell_group "cellcli -e list cell detail" | grep "flashCacheMode"
owtcel01: flashCacheMode: WriteThrough
owtcel02: flashCacheMode: WriteThrough
owtcel03: flashCacheMode: WriteThrough
owtcel04: flashCacheMode: WriteThrough
owtcel05: flashCacheMode: WriteThrough
owtcel06: flashCacheMode: WriteThrough

CellCLI> list cell detail
name: owtcel04
bbuTempThreshold: 60
bbuChargeThreshold: 800
bmcType: IPMI
cellVersion: OSS_11.2.3.2.1_LINUX.X64_130109
cpuCount: 24
diagHistoryDays: 7
fanCount: 8/8
fanStatus: normal
flashCacheMode: WriteThrough          <-- current mode
id: 1250FM5062
interconnectCount: 3
interconnect1: bondib0
iormBoost: 0.4
ipaddress1: 192.168.30.8/22
kernelVersion: 2.6.32-400.11.1.el5uek
locatorLEDStatus: off
makeModel: Oracle Corporation SUN FIRE X4270 M3 SAS
metricHistoryDays: 7
offloadEfficiency: 7,323.6
powerCount: 2/2
powerStatus: normal
releaseVersion: 11.2.3.2.1
releaseTrackingBug: 14522699
smtpFrom: "Exadata OWT HALF"
smtpFromAddr: unixasr@***
smtpServer: ***
smtpToAddr: unixasr@***
status: online
temperatureReading: 19.0
temperatureStatus: normal
upTime: 28 days, 14:34
cellsrvStatus: running
msStatus: running
rsStatus: running

CellCLI> drop flashcache
Flash cache owtcel03_FLASHCACHE successfully dropped

Make sure asmdeactivationoutcome is YES before deactivating the grid disks:
CellCLI> list griddisk attributes name,asmmodestatus,asmdeactivationoutcome
DATA_OWT_CD_00_owtcel03 ONLINE Yes
DATA_OWT_CD_01_owtcel03 ONLINE Yes
DATA_OWT_CD_02_owtcel03 ONLINE Yes
DATA_OWT_CD_03_owtcel03 ONLINE Yes
DATA_OWT_CD_04_owtcel03 ONLINE Yes
DATA_OWT_CD_05_owtcel03 ONLINE Yes
DATA_OWT_CD_06_owtcel03 ONLINE Yes
DATA_OWT_CD_07_owtcel03 ONLINE Yes
DATA_OWT_CD_08_owtcel03 ONLINE Yes
DATA_OWT_CD_09_owtcel03 ONLINE Yes
DATA_OWT_CD_10_owtcel03 ONLINE Yes
DATA_OWT_CD_11_owtcel03 ONLINE Yes
DBFS_DG_CD_02_owtcel03 ONLINE Yes
DBFS_DG_CD_03_owtcel03 ONLINE Yes
DBFS_DG_CD_04_owtcel03 ONLINE Yes
DBFS_DG_CD_05_owtcel03 ONLINE Yes
DBFS_DG_CD_06_owtcel03 ONLINE Yes
DBFS_DG_CD_07_owtcel03 ONLINE Yes
DBFS_DG_CD_08_owtcel03 ONLINE Yes
DBFS_DG_CD_09_owtcel03 ONLINE Yes
DBFS_DG_CD_10_owtcel03 ONLINE Yes
DBFS_DG_CD_11_owtcel03 ONLINE Yes
RECO_OWT_CD_00_owtcel03 ONLINE Yes
RECO_OWT_CD_01_owtcel03 ONLINE Yes
RECO_OWT_CD_02_owtcel03 ONLINE Yes
RECO_OWT_CD_03_owtcel03 ONLINE Yes
RECO_OWT_CD_04_owtcel03 ONLINE Yes
RECO_OWT_CD_05_owtcel03 ONLINE Yes
RECO_OWT_CD_06_owtcel03 ONLINE Yes
RECO_OWT_CD_07_owtcel03 ONLINE Yes
RECO_OWT_CD_08_owtcel03 ONLINE Yes
RECO_OWT_CD_09_owtcel03 ONLINE Yes
RECO_OWT_CD_10_owtcel03 ONLINE Yes
RECO_OWT_CD_11_owtcel03 ONLINE Yes

CellCLI> alter griddisk all inactive
GridDisk DATA_OWT_CD_00_owtcel03 successfully altered
GridDisk DATA_OWT_CD_01_owtcel03 successfully altered
GridDisk DATA_OWT_CD_02_owtcel03 successfully altered
GridDisk DATA_OWT_CD_03_owtcel03 successfully altered
GridDisk DATA_OWT_CD_04_owtcel03 successfully altered
GridDisk DATA_OWT_CD_05_owtcel03 successfully altered
GridDisk DATA_OWT_CD_06_owtcel03 successfully altered
GridDisk DATA_OWT_CD_07_owtcel03 successfully altered
GridDisk DATA_OWT_CD_08_owtcel03 successfully altered
GridDisk DATA_OWT_CD_09_owtcel03 successfully altered
GridDisk DATA_OWT_CD_10_owtcel03 successfully altered
GridDisk DATA_OWT_CD_11_owtcel03 successfully altered
GridDisk DBFS_DG_CD_02_owtcel03 successfully altered
GridDisk DBFS_DG_CD_03_owtcel03 successfully altered
GridDisk DBFS_DG_CD_04_owtcel03 successfully altered
GridDisk DBFS_DG_CD_05_owtcel03 successfully altered
GridDisk DBFS_DG_CD_06_owtcel03 successfully altered
GridDisk DBFS_DG_CD_07_owtcel03 successfully altered
GridDisk DBFS_DG_CD_08_owtcel03 successfully altered
GridDisk DBFS_DG_CD_09_owtcel03 successfully altered
GridDisk DBFS_DG_CD_10_owtcel03 successfully altered
GridDisk DBFS_DG_CD_11_owtcel03 successfully altered
GridDisk RECO_OWT_CD_00_owtcel03 successfully altered
GridDisk RECO_OWT_CD_01_owtcel03 successfully altered
GridDisk RECO_OWT_CD_02_owtcel03 successfully altered
GridDisk RECO_OWT_CD_03_owtcel03 successfully altered
GridDisk RECO_OWT_CD_04_owtcel03 successfully altered
GridDisk RECO_OWT_CD_05_owtcel03 successfully altered
GridDisk RECO_OWT_CD_06_owtcel03 successfully altered
GridDisk RECO_OWT_CD_07_owtcel03 successfully altered
GridDisk RECO_OWT_CD_08_owtcel03 successfully altered
GridDisk RECO_OWT_CD_09_owtcel03 successfully altered
GridDisk RECO_OWT_CD_10_owtcel03 successfully altered
GridDisk RECO_OWT_CD_11_owtcel03 successfully altered

CellCLI> alter cell shutdown services cellsrv

Stopping CELLSRV services…
The SHUTDOWN of CELLSRV services was successful

CellCLI> alter cell flashCacheMode=writeback
Cell owtcel03 successfully altered

CellCLI> alter cell startup services cellsrv

Starting CELLSRV services…
The STARTUP of CELLSRV services was successful

CellCLI> alter griddisk all active
GridDisk DATA_OWT_CD_00_owtcel03 successfully altered
GridDisk DATA_OWT_CD_01_owtcel03 successfully altered
GridDisk DATA_OWT_CD_02_owtcel03 successfully altered
GridDisk DATA_OWT_CD_03_owtcel03 successfully altered
GridDisk DATA_OWT_CD_04_owtcel03 successfully altered
GridDisk DATA_OWT_CD_05_owtcel03 successfully altered
GridDisk DATA_OWT_CD_06_owtcel03 successfully altered
GridDisk DATA_OWT_CD_07_owtcel03 successfully altered
GridDisk DATA_OWT_CD_08_owtcel03 successfully altered
GridDisk DATA_OWT_CD_09_owtcel03 successfully altered
GridDisk DATA_OWT_CD_10_owtcel03 successfully altered
GridDisk DATA_OWT_CD_11_owtcel03 successfully altered
GridDisk DBFS_DG_CD_02_owtcel03 successfully altered
GridDisk DBFS_DG_CD_03_owtcel03 successfully altered
GridDisk DBFS_DG_CD_04_owtcel03 successfully altered
GridDisk DBFS_DG_CD_05_owtcel03 successfully altered
GridDisk DBFS_DG_CD_06_owtcel03 successfully altered
GridDisk DBFS_DG_CD_07_owtcel03 successfully altered
GridDisk DBFS_DG_CD_08_owtcel03 successfully altered
GridDisk DBFS_DG_CD_09_owtcel03 successfully altered
GridDisk DBFS_DG_CD_10_owtcel03 successfully altered
GridDisk DBFS_DG_CD_11_owtcel03 successfully altered
GridDisk RECO_OWT_CD_00_owtcel03 successfully altered
GridDisk RECO_OWT_CD_01_owtcel03 successfully altered
GridDisk RECO_OWT_CD_02_owtcel03 successfully altered
GridDisk RECO_OWT_CD_03_owtcel03 successfully altered
GridDisk RECO_OWT_CD_04_owtcel03 successfully altered
GridDisk RECO_OWT_CD_05_owtcel03 successfully altered
GridDisk RECO_OWT_CD_06_owtcel03 successfully altered
GridDisk RECO_OWT_CD_07_owtcel03 successfully altered
GridDisk RECO_OWT_CD_08_owtcel03 successfully altered
GridDisk RECO_OWT_CD_09_owtcel03 successfully altered
GridDisk RECO_OWT_CD_10_owtcel03 successfully altered
GridDisk RECO_OWT_CD_11_owtcel03 successfully altered
CellCLI> list griddisk attributes name, asmmodestatus      <-- wait for all disks to become ONLINE
DATA_OWT_CD_00_owtcel03 SYNCING
DATA_OWT_CD_01_owtcel03 SYNCING
DATA_OWT_CD_02_owtcel03 SYNCING
DATA_OWT_CD_03_owtcel03 SYNCING
DATA_OWT_CD_04_owtcel03 SYNCING
DATA_OWT_CD_05_owtcel03 SYNCING
DATA_OWT_CD_06_owtcel03 SYNCING
DATA_OWT_CD_07_owtcel03 SYNCING
DATA_OWT_CD_08_owtcel03 SYNCING
DATA_OWT_CD_09_owtcel03 SYNCING
DATA_OWT_CD_10_owtcel03 SYNCING
DATA_OWT_CD_11_owtcel03 SYNCING
DBFS_DG_CD_02_owtcel03 SYNCING
DBFS_DG_CD_03_owtcel03 SYNCING
DBFS_DG_CD_04_owtcel03 SYNCING
DBFS_DG_CD_05_owtcel03 SYNCING
DBFS_DG_CD_06_owtcel03 SYNCING
DBFS_DG_CD_07_owtcel03 SYNCING
DBFS_DG_CD_08_owtcel03 SYNCING
DBFS_DG_CD_09_owtcel03 SYNCING
DBFS_DG_CD_10_owtcel03 SYNCING
DBFS_DG_CD_11_owtcel03 SYNCING
RECO_OWT_CD_00_owtcel03 SYNCING
RECO_OWT_CD_01_owtcel03 SYNCING
RECO_OWT_CD_02_owtcel03 SYNCING
RECO_OWT_CD_03_owtcel03 SYNCING
RECO_OWT_CD_04_owtcel03 SYNCING
RECO_OWT_CD_05_owtcel03 SYNCING
RECO_OWT_CD_06_owtcel03 SYNCING
RECO_OWT_CD_07_owtcel03 SYNCING
RECO_OWT_CD_08_owtcel03 SYNCING
RECO_OWT_CD_09_owtcel03 SYNCING
RECO_OWT_CD_10_owtcel03 SYNCING
RECO_OWT_CD_11_owtcel03 SYNCING
CellCLI> create flashcache all
Flash cache owtcel03_FLASHCACHE successfully created

CellCLI> list cell detail
name: owtcel03
bbuTempThreshold: 60
bbuChargeThreshold: 800
bmcType: IPMI
cellVersion: OSS_11.2.3.2.1_LINUX.X64_130109
cpuCount: 24
diagHistoryDays: 7
fanCount: 12/12
fanStatus: normal
flashCacheMode: writeback          <-- the mode is now WriteBack
id: 1110FMM1DG
interconnectCount: 3
interconnect1: bondib0
iormBoost: 0.0
ipaddress1: 192.168.30.7/22
kernelVersion: 2.6.32-400.11.1.el5uek
locatorLEDStatus: off
makeModel: Oracle Corporation SUN FIRE X4270 M2 SERVER SAS
metricHistoryDays: 7
notificationMethod: mail
notificationPolicy: critical,warning,clear
offloadEfficiency: 5,379.5
powerCount: 2/2
powerStatus: normal
releaseVersion: 11.2.3.2.1
releaseTrackingBug: 14522699
smtpFrom: "Exadata OWT quarter"
smtpFromAddr:***
smtpPort: 25
smtpPwd: ******
smtpServer:***
smtpToAddr: unixasr@**
smtpUser:
smtpUseSSL: FALSE
snmpSubscriber: host=10.**,port=162,community=public,type=asr
status: online
temperatureReading: 20.0
temperatureStatus: normal
upTime: 28 days, 13:52
cellsrvStatus: running
msStatus: running
rsStatus: running
####

Do these steps for all Exadata cells, then check the status:

[root@owtcel01 ~]# dcli -l root -g cell_group "cellcli -e list cell detail" | grep "flashCacheMode"
owtcel01: flashCacheMode: WriteBack
owtcel02: flashCacheMode: WriteBack
owtcel03: flashCacheMode: WriteBack
owtcel04: flashCacheMode: WriteBack
owtcel05: flashCacheMode: WriteBack
owtcel06: flashCacheMode: WriteBack
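
Because this is done cell by cell, do not start the next cell until every grid disk on the current one is back to ONLINE (not SYNCING). A minimal polling sketch, assumed to run locally on the cell just reactivated:

# Poll until every grid disk reports asmmodestatus = ONLINE.
while cellcli -e "list griddisk attributes name, asmmodestatus" | grep -qv ONLINE
do
    echo "`date`: grid disks still syncing, waiting..."
    sleep 60
done
echo "All grid disks ONLINE; safe to move to the next cell."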


===========================================================================================================================================

Steps to enable the flash cache in Write Back mode:

CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode = WriteBack
CellCLI> alter cell startup services cellsrv
CellCLI> create flashcache all

Write Back mode can be reverted to Write Through mode by first manually flushing all the dirty blocks back to disk:

CellCLI> alter flashcache all flush
CellCLI> drop flashcache
CellCLI> alter cell shutdown services cellsrv
CellCLI> alter cell flashCacheMode=Writethrough
CellCLI> alter cell startup services cellsrv
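
The flush of dirty blocks can take a long time on a busy cell. A sketch (an assumption on my part: FC_BY_DIRTY is the flash-cache dirty-data metric) for watching it drain before running drop flashcache:

# Amount of dirty data still in the flash cache; wait for it to reach 0 on every cell.
dcli -g cell_group -l root \
  "cellcli -e \"list metriccurrent attributes name, metricvalue where name like 'FC_BY_DIRTY.*'\""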


Resetting Exadata passwords

pam_tally2 -r

or

pam_tally2 -r -u root

or

pam_tally2 -r -u oracle
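
To reset the failed-login counter on every database node at once, a sketch reusing the dbs_group node-list file seen elsewhere in these notes:

dcli -g dbs_group -l root "pam_tally2 -r -u oracle"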


Default Exadata passwords

db nodes password:     welcome1
cell nodes password:   welcome1
InfiniBand password:   changeme


CLONE DATABASE HOME TO DBFS

1 - Copy the home, as root, on all 4 Exadata nodes:
cp -Rpf /u01/ora11g/app/oracle/product/11.2.0.3/SIEBELP2 /u01/ora11g/app/oracle/product/11.2.0.3/DBFS

2 - Fix the symbolic links

* Run the steps below as the "root" user

cd <ORACLE_HOME_DESTINO>/lib
rm libclntsh.so.10.1
ln -s libclntsh.so libclntsh.so.10.1

cd <ORACLE_HOME_DESTINO>/bin
rm lbuilder
ln -s ../nls/lbuilder/lbuilder  lbuilder

- Step 3: Fix the permissions

* Run the steps below as the "root" user
chown -R ora11g:oinstall /u01/ora11g/app/oracle/product/11.2.0.3/DBFS

- Step 4: Perform the clone

snelnxb66 {SIEBELP22} /home/ora11g > echo $ORACLE_BASE
/u01/ora11g/app
snelnxb66 {SIEBELP22} /home/ora11g > echo $ORACLE_HOME
/u01/ora11g/app/oracle/product/11.2.0.3/SIEBELP2
snelnxb66 {SIEBELP22} /home/ora11g >

Switch to ogg11g:

cd /u01/ora11g/app/oracle/product/11.2.0.3/DBFS/clone/bin
perl clone.pl ORACLE_HOME="/u01/ora11g/app/oracle/product/11.2.0.3/DBFS" ORACLE_HOME_NAME="HOME_DBFS" ORACLE_BASE="/u01/ora11g/app" OSDBA_GROUP=dba OSOPER_GROUP=oinstall

- Step 5: Run root.sh as root

cd /u01/ora11g/app/oracle/product/11.2.0.3/DBFS
./root.sh



Applying a Bundle Patch on SuperCluster

 

The activity is applying Bundle Patch JUL2014 - 11.2.0.4.9 on the SuperCluster server.

Activities:

- Application strategy.

- Problems encountered.

1. The patches were applied according to the documentation, following the procedures described in the patch application document, as laid out below.

To apply the patch, OPatch must be updated to version 11.2.0.3.6 or later.

Note: to check the OPatch version, use the command [opatch version].

Create the response file (ocm.rsp) using the command [$ORACLE_HOME/OPatch/ocm/bin/emocmrsp]. You will be prompted for an e-mail address (it may be left blank) and whether you want to continue [YES]/[NO]; answer [YES]. An ocm.rsp file will be created in the current directory. This file is used to apply the patch to both the Grid Infrastructure and the database homes, and it must have read and access permissions.

# /u01/app/11.2.0.4/grid/OPatch/ocm/bin/emocmrsp

OCM Installation Response Generator 10.3.4.0.0 - Production

Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.

Provide your email address to be informed of security issues, install and

initiate Oracle Configuration Manager. Easier for you if you use your My

Oracle Support Email address/User Name.

Visit http://www.oracle.com/support/policies.html for details.

Email address/User Name:

You have not provided an email address for notification of security issues.

Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]:

Email address/User Name:

You have not provided an email address for notification of security issues.

Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y

The OCM configuration response file (ocm.rsp) was successfully created.

• Check the inventory information using the command [opatch lsinventory -detail -oh $ORACLE_HOME]

Example:

Oracle Interim Patch Installer version 11.2.0.3.6

Copyright (c) 2013, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0.4/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc

OPatch version    : 11.2.0.3.6

OUI version       : 11.2.0.4.0

Log file location : /u01/app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2014-09-       08_20-48-03PM_1.log

Lsinventory Output file location :        /u01/app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-09-08_20-48-       03PM.txt

--------------------------------------------------------------------------------

Installed Top-level Products (1):

Oracle Database 11g                                                  11.2.0.4.0

There are 1 product(s) installed in this Oracle Home.

Interim patches (2) :

Patch  17984784     : applied on Tue Apr 15 15:08:32 BRT 2014

Unique Patch ID:  17080567

Patch description:  "CRS PATCH FOR EXADATA (JAN2014 - 11.2.0.4.3) : (17984784)"

Created on 10 Jan 2014, 03:57:42 hrs UTC

Bugs fixed:

16346413, 17065496, 16613232, 17551223, 14525998

Patch  17943261     : applied on Tue Apr 15 15:07:30 BRT 2014

Unique Patch ID:  17080567

Patch description:  "DATABASE PATCH FOR EXADATA (JAN2014 - 11.2.0.4.3) : (17943261)"

Created on 2 Jan 2014, 05:21:15 hrs UTC

Sub-patch  17741631; "DATABASE PATCH FOR EXADATA (DEC 2013 - 11.2.0.4.2) : (17741631)"

Sub-patch  17628006; "DATABASE PATCH FOR EXADATA (NOV 2013 - 11.2.0.4.1) : (17628006)"

Bugs fixed:

17288409, 13944971, 16450169, 17265217, 16180763, 16220077, 17465741

17614227, 16069901, 14010183, 16285691, 17726838, 13364795, 17088068

17612828, 17443671, 17080436, 17761775, 16721594, 16043574, 16837842

17446237, 16863422, 17332800, 13609098, 17610798, 17501491, 17239687

17468141, 17752121, 17602269, 16850630, 17346671, 17313525, 14852021

17783588, 17437634, 13866822, 12905058, 17546761

• Unzip the SuperCluster patch (JUL2014 - 11.2.0.4.9) into a directory.
  Example:

mkdir /u01/app/oracle/patches

cd /u01/app/oracle/patches

unzip p18840215_112040_SOLARIS64.zip

chown -R oracle:oinstall /u01/app/oracle/patches/18840215

 

• Check for conflicts using the commands below for the GI and DB homes; the result must be "passed".

      

For the Grid Infrastructure home, as the grid user:

% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18825509
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522515
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522514

      

For the database home, as the oracle DB user:

% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18825509
% $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522515/custom/server/18522515

• Run the CheckSystemSpace checks below for the GI and DB homes; the result must be "passed".

For the Grid Infrastructure home, as the grid user:

% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18825509
% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522515
% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522514

For the database home, as the oracle DB user:

% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18825509
% $ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseDir <UNZIPPED_PATCH_LOCATION>/18840215/18522515/custom/server/18522515

• The patch can be installed on one cluster node at a time; use the root user and add the OPatch path to the environment.

Apply the patch to the GI home first, as described below:

export PATH=$PATH:<GI_HOME>/OPatch
# opatch auto <PATH_TO_PATCH_DIRECTORY> -oh <GI_HOME> -ocmrf <path to ocm.rsp>

Notes:
The documentation does not mention the -ocmrf parameter (the response file). When opatch auto is run without -ocmrf, the tool prompts for the file path. This causes problems: the patch is not applied, yet the tool does not return an error and reports that the patch was installed successfully, as seen below:

opatch auto /u01/patches/18840215 -oh /u01/app/11.2.0.4/grid
Executing /u01/app/11.2.0.4/grid/perl/bin/perl /u01/app/11.2.0.4/grid/OPatch/crs/patch11203.pl -patchdir /u01/patches -patchn 18840215 -oh /u01/app/11.2.0.4/grid -paramfile /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
This is the main log file: /u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2014-09-07_13-11-15.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0.4/grid/cfgtoollogs/opatchauto2014-09-07_13-11-15.report.log
2014-09-07 13:11:15: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0.4/grid/crs/install/crsconfig_params
OPatch is bundled with OCM, Enter the absolute OCM response file path:
/u01/patches/ocm.rsp
Stopping CRS...
Stopped CRS successfully
patch /u01/patches/18840215/18825509  apply successful for home /u01/app/11.2.0.4/grid
patch /u01/patches/18840215/18522515  apply successful for home /u01/app/11.2.0.4/grid
patch /u01/patches/18840215/18522514  apply successful for home /u01/app/11.2.0.4/grid
Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.
opatch auto succeeded.

Running opatch lsinventory afterwards shows that the patch was not actually installed.

Apply the patch to the DB home as described below:

# opatch auto <PATH_TO_PATCH_DIRECTORY> -oh <DB_HOME> -ocmrf <path to ocm.rsp>

After installing the patch on all nodes of the cluster, run catbundle.sql in the database as the sys user via sqlplus:

SQL> @rdbms/admin/catbundle.sql exa apply

Notes:

Apply the patch to the GI and DB homes on all nodes of the cluster, then run the command:

SQL> @rdbms/admin/catbundle.sql exa apply

• Check the inventory information using the command [opatch lsinventory -detail -oh $ORACLE_HOME]

Oracle Interim Patch Installer version 11.2.0.3.6
Copyright (c) 2013, Oracle Corporation. All rights reserved.

Oracle Home       : /u01/app/oracle/product/11.2.0.4/dbhome_1

Central Inventory : /u01/app/oraInventory

from           : /u01/app/oracle/product/11.2.0.4/dbhome_1/oraInst.loc

OPatch version   : 11.2.0.3.6

OUI version       : 11.2.0.4.0

Log file location : /u01/app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/opatch2014-09-       09_10-29-31AM_1.log

Lsinventory Output file location :        /u01/app/oracle/product/11.2.0.4/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2014-09-09_10-29-       31AM.txt

--------------------------------------------------------------------------------

Installed Top-level Products (1):

Oracle Database 11g                                                 11.2.0.4.0

There are 1 product(s) installed in this Oracle Home.

Interim patches (2) :

Patch 18522515     : applied on Tue Sep 09 10:22:16 BRT 2014

Unique Patch ID: 17743109

Patch description: "OCW Patch Set Update : 11.2.0.4.3 (18522515)"

Created on 23 Jun 2014, 10:12:35 hrs UTC

Bugs fixed:

18024089, 18428146, 18328800, 18187697, 14525998, 18352845, 17391726

17750548, 17387214, 18414137, 17001914, 17927970, 16346413, 17551223

15832129, 17305100, 18272135, 18180541, 17985714, 17292250, 17378618

16206997, 17500165, 16876500, 16429265, 17065496, 18343490, 18848125

18336452, 13991403, 16613232, 17955615, 14693336, 17273003, 17273020

17238586, 17089344, 12928658, 18226143, 17531342, 17172091, 18229842

17155238, 17159489, 16543190, 17039197, 17483479, 17947785, 16317771

10052729, 17481314, 18199185, 17405302, 18399991

Patch 18825509     : applied on Tue Sep 09 10:20:13 BRT 2014

Unique Patch ID: 17743109

Patch description: "DATABASE PATCH FOR EXADATA (JUL2014 - 11.2.0.4.9) : (18825509)"

Created on 4 Jul 2014, 11:19:59 hrs UTC
Sub-patch 18642122; "DATABASE PATCH FOR EXADATA (JUN2014 - 11.2.0.4.8) : (18642122)"

Sub-patch 18552960; "DATABASE PATCH FOR EXADATA (MAY2014 - 11.2.0.4.7) : (18552960)"

Sub-patch 18293775; "DATABASE PATCH FOR EXADATA (APR2014 - 11.2.0.4.6) : (18293775)"

Sub-patch 18136151; "DATABASE PATCH FOR EXADATA (MAR2014 - 11.2.0.4.5) : (18136151)"

Sub-patch 18006299; "DATABASE PATCH FOR EXADATA (FEB2014 - 11.2.0.4.4) : (18006299)"

Sub-patch 17943261; "DATABASE PATCH FOR EXADATA (JAN2014 - 11.2.0.4.3) : (17943261)"

Sub-patch 17741631; "DATABASE PATCH FOR EXADATA (DEC 2013 - 11.2.0.4.2) : (17741631)"

Sub-patch 17628006; "DATABASE PATCH FOR EXADATA (NOV 2013 - 11.2.0.4.1) : (17628006)"

Bugs fixed:

17288409, 16188701, 16930924, 17811429, 17205719, 17501296, 17754782

17726838, 13364795, 17311728, 18418934, 17441661, 17284817, 16477664

13645875, 14193240, 16992075, 16542886, 17446237, 14015842, 14565184

18324129, 17071721, 18317132, 17610798, 17375354, 17397545, 17265093

18230522, 17982555, 16360112, 17235750, 13866822, 17478514, 12905058

14338435, 13944971, 17811789, 16929165, 12747740, 17230905, 17546973

14054676, 17088068, 16885125, 18780342, 17016369, 17042658, 14602788

18686405, 17158214, 14657740, 17775506, 17332800, 13951456, 16315398

18483595, 18744139, 17186905, 16850630, 18767554, 17561405, 17437634

19049453, 17883081, 17296856, 14333054, 18277454, 17232014, 17249711

16855292, 10136473, 17179434, 17997507, 17865671, 18554871, 17635021

17588480, 18304997, 17551709, 17344412, 18681862, 16306373, 18139690

13609098, 17501491, 17239687, 17752121, 17602269, 17313525, 18818847

18025431, 17600719, 17571306, 18094246, 17011832, 17165204, 16785708

17174582, 17477958, 16180763, 17465741, 18522509, 17323222, 16875449

16524926, 16980342, 14822091, 17596908, 17811438, 17811447, 18031668

16912439, 14373152, 18077632, 18061914, 17545847, 17082359, 17614134

17341326, 14034426, 18339044, 17716305, 18133214, 17752995, 16392068

17209410, 17767676, 12608451, 17205005, 18384391, 17614227, 17040764

17381384, 14084247, 17389192, 17006570, 17612828, 17721717, 13853126

18203837, 17390431, 18456874, 16043574, 16863422, 18325460, 17402822

14486653, 17468141, 17786518, 14460384, 18226122, 18244962, 18203838

16956380, 17478145, 17394950, 18619917, 17027426, 14000767, 16268425

18247991, 14458214, 17839474, 18436307, 12716670, 16618055, 17265217

13498382, 17786278, 17227277, 17734862, 16042673, 16314254, 17952061

17443671, 18154779, 16228604, 18783969, 16837842, 17393683, 18247351

13073613, 15861775, 18135678, 18614015, 16399083, 18191542, 18192858

18018515, 17082612, 16472716, 18830412, 17050888, 17325413, 14010183

18832544, 17036973, 16613964, 17080436, 17761775, 16721594, 15979965

13651346, 18203835, 17297939, 17811456, 16731148, 18205490, 14133975

17385178, 16450169, 17357979, 17655634, 10194190, 18160822, 17892268

17648596, 16220077, 16069901, 11733603, 16285691, 18180390, 17393915

18096714, 17238511, 13816053, 13877071, 14285317, 17622427, 16943711

17346671, 18996843, 14852021, 17783588, 16618694, 17672719, 17546761

Rac system comprising of multiple nodes

Local node = osc01r5z9db01cn1-client

Remote node = osc02r5z10db03cn3-client

  • Additional options for applying and rolling back the patch.

    Apply the patch to the GI home and all Oracle RAC database homes:

# opatch auto <UNZIPPED_PATCH_LOCATION> -ocmrf <ocm response file>

Roll back the patch from the GI home and all Oracle RAC database homes:

# opatch auto <UNZIPPED_PATCH_LOCATION> -rollback -ocmrf <ocm response file>

Roll back the patch from the GI home only:

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to GI home> -rollback -ocmrf <ocm response file>

Roll back the patch from the Oracle RAC database home only:

# opatch auto <UNZIPPED_PATCH_LOCATION> -oh <path to RAC database home> -rollback -ocmrf <ocm response file>


Prerequisites for installing GoldenGate

MEMORY: Each Extract, Replicat, and Data Pump process uses up to 55 MB of RAM. On the source we typically run two processes, so 2 x 55 = 110 MB. On the target it depends on how many Replicats will be used; assuming a maximum of 20, 20 x 55 = 1100 MB of RAM.

SWAP: At least 8 GB of swap.

DISK: At least 2 GB for the binaries, logs, parameter files, and reports. The size of the staging area that will hold the trail files depends on the volume of changes made in the database and on the maximum tolerable network downtime. One suggestion is to use 40% of the archive log volume generated per day, multiplied by 7. That calculation assumes GoldenGate will keep up to seven days of trail files; the number of days varies from customer to customer, depending on available disk space.
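The sizing rule above is simple arithmetic. The ksh sketch below illustrates it; the daily archive volume and retention values are assumptions, to be replaced with measured figures:

#!/bin/ksh
# Sketch: estimate the trail file area from daily archive generation.
# ARCHIVE_GB_PER_DAY and RETENTION_DAYS are example values (assumptions).
ARCHIVE_GB_PER_DAY=100
RETENTION_DAYS=7
# Rule of thumb from the text: 40% of the daily archive volume x retention days
TRAIL_GB=$(( ARCHIVE_GB_PER_DAY * 40 / 100 * RETENTION_DAYS ))
echo "Reserve about ${TRAIL_GB} GB for the trail file area"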

RAC

If the database is RAC, the GoldenGate installation location must be shared among all nodes. That way, if one node goes down, any other node in the cluster can manage GoldenGate.

NETWORK

Unrestricted, non-reserved TCP/IP ports for Oracle GoldenGate communication between the Manager and the other processes.

By default the range starts at port 7840 and can span up to 256 ports, or another custom range of up to 256 ports can be defined. The ports used by GoldenGate must be open in the firewall.
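As a sketch, the Manager port and the dynamic port range are declared in the Manager parameter file (dirprm/mgr.prm); the values below are examples only:

PORT 7809
DYNAMICPORTLIST 7840-7939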

OS PERMISSIONS

The GoldenGate user must have full permission (read, write, and delete on files and subdirectories) in the GoldenGate directories.

DATABASE

The installer needs SYSDBA access to the database to create and/or drop the schema containing the GoldenGate objects, and to grant all the required privileges on all database objects that GoldenGate will replicate.

If the database is Oracle 10g or later and is configured to use a Bequeath connection, the sqlnet.ora file must contain the parameter bequeath_detach=yes.
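For reference, this is a one-line entry in sqlnet.ora:

# sqlnet.ora on the database server
bequeath_detach=yes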

The table below lists the privileges required by the GoldenGate processes.

User privilege                                                     Extract  Replicat  Manager
-------------------------------------------------------------------------------------------
CREATE SESSION                                                        X        X
ALTER SESSION                                                         X        X
RESOURCE                                                              X        X
CONNECT                                                               X        X
SELECT ANY DICTIONARY                                                 X        X
FLASHBACK ANY TABLE or FLASHBACK ON                                   X
SELECT ANY TABLE or SELECT ON                                         X        X
SELECT on dba_clusters (Oracle 10gR2 and later)                       X
INSERT, UPDATE, DELETE ON                                                      X
CREATE TABLE                                                                   X
Privileges required to issue DDL operations to
target tables (DDL support only)                                               X
EXECUTE on DBMS_FLASHBACK package                                     X
GGS_GGSUSER_ROLE                                                      X
DELETE ON Oracle GoldenGate DDL objects                                        X
Oracle 10g ASM privileges                                             X
LOCK ANY TABLE                                                                 X
sys.dbms_internal_clkm                                                X
SELECT ANY TRANSACTION                                                X

For version 10.2 or later, some additional privileges, listed below, are required.

Oracle version: 10.2

1. Run the package to grant the Oracle Streams admin privilege:
   exec dbms_streams_auth.grant_admin_privilege('')
2. Grant INSERT on logmnr_restart_ckpt$:
   grant insert on system.logmnr_restart_ckpt$ to ;
3. Grant UPDATE on streams$_capture_process:
   grant update on sys.streams$_capture_process to ;
4. Grant the 'become user' privilege:
   grant become user to ;

Oracle version: 11.1 and 11.2.0.1

1. Run the package to grant the Oracle Streams admin privilege:
   exec dbms_streams_auth.grant_admin_privilege('')
2. Grant the 'become user' privilege:
   grant become user to ;

Oracle version: 11.2.0.2 or later

Run the package to grant the Oracle GoldenGate admin privilege:
   exec dbms_goldengate_auth.grant_admin_privilege('')
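As an illustration only, assuming a hypothetical GoldenGate schema named GGUSER (the grantee was left blank above), the 11.2.0.2+ grant would look like:

SQL> exec dbms_goldengate_auth.grant_admin_privilege('GGUSER')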


TCP commands to watch traffic on the GoldenGate ports (and others)

Command to watch port 7840 on the target and check whether traffic is arriving:

/usr/sbin/tcpdump port 7840

Command to watch the source and check whether connections are going out to the target:

while true; do netstat -an | grep 10.171.28.32; sleep 2; done
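tcpdump can also be narrowed to a single peer; in this sketch the interface name (eth0) is an assumption and the address is the same target used above:

/usr/sbin/tcpdump -i eth0 port 7840 and host 10.171.28.32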


Query to reconcile trandata in the database

SELECT * FROM dba_log_groups a WHERE a.owner not in ('SYS','SYSTEM','DBSNMP','PERFSTAT2','APEX_030200','SYSMAN','OLAPSYS','MDSYS','ORDDATA','CTXSYS','EXFSYS','FLOWS_FILES','SCOTT');

--Script to drop the trandata from all tables -> save time on a full import

SELECT 'ALTER TABLE "' || a.owner || '"."' || a.table_name || '" DROP SUPPLEMENTAL LOG GROUP ' || a.log_group_name || ';' FROM dba_log_groups a WHERE a.owner not in ('SYS','SYSTEM','DBSNMP','PERFSTAT2','APEX_030200','SYSMAN','OLAPSYS','MDSYS','ORDDATA','CTXSYS','EXFSYS','FLOWS_FILES','SCOTT');
select 'alter table ' || owner || '.' || table_name || ' drop supplemental log group ' || log_group_name || ';' from dba_log_groups where owner not in ('SYS','SYSTEM','DBSNMP','PERFSTAT2','APEX_030200','SYSMAN','OLAPSYS','MDSYS','ORDDATA','CTXSYS','EXFSYS','FLOWS_FILES','SCOTT');
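A minimal SQL*Plus sketch of capturing the generated statements into a script and running it (the spool file name drop_trandata.sql is an assumption):

set heading off feedback off pages 0 lines 300 trimspool on
spool drop_trandata.sql
select 'alter table ' || owner || '.' || table_name ||
       ' drop supplemental log group ' || log_group_name || ';'
from   dba_log_groups
where  owner not in ('SYS','SYSTEM','DBSNMP','SYSMAN');  -- owner list shortened for the example
spool off
@drop_trandata.sql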

Macros in GoldenGate

A Macro Library is a collection of OGG Macros used to externalize OGG parameters shared across multiple Groups. The library can be a single file containing multiple Macro definitions, or multiple files. Best practice is to create a directory "dirmac" as part of the OGG installation environment to hold the library files. Another best practice is to use the suffix ".mac" on all library files. This way Macro Library files can be recognized without having to open and read one.
Using these best practice tips, the following Macro Library would be stored inside the file "$OGG_BASE/dirmac/macrolib.mac".

MACRO #dbconnect
BEGIN
userid gguser, password AACAAAAAAAAAAAHAAIFBOIYAMCGIMARE, encryptkey default
END;

MACRO #bpsettings
BEGIN
-- The following are "best practice" runtime options which may be
-- used for workload accounting and load balancing purposes.
-- STATOPTIONS RESETREPORTSTATS ensures that process statistics
-- counters will be reset whenever a new report file is created.
STATOPTIONS RESETREPORTSTATS
-- Generate a report every day at 1 minute after midnight. This report
-- will contain the number of operations, by operation type, performed
-- on each table.
REPORT AT 00:01
-- Close the current report file and create a new one daily at 1 minute
-- after midnight. Eleven report files are maintained on disk in the
-- dirrpt directory/folder for each GoldenGate group. The current report
-- file names are <group name>.rpt. The older reports are
-- <group name>0.rpt through <group name>9.rpt, with the older report
-- files having larger numbers.
REPORTROLLOVER AT 00:01
-- REPORTCOUNT denotes that every 60 seconds the Replicat report file
-- in the dirrpt directory/folder will have a line added to it that
-- reports the total number of records processed since startup, along
-- with the rated number of records processed per second since startup,
-- and the change in rate, or "delta", since the last report. In a
-- production environment, this setting would typically be 1 hour.
REPORTCOUNT EVERY 60 SECONDS, RATE
-- End of "best practices" section
END;

MACRO #funcsmap
PARAMS (#src_table, #target_table)
BEGIN
-- Map the source table provided in the variable #src_table to the target
-- table listed in the variable #target_table. There are extra columns in
-- the target we need to populate, so get the data from either the
-- environment variable, or the user token data sent over from Extract.
MAP #src_table, TARGET #target_table,
colmap (usedefaults,
        orders_trans_ts = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
        trans_rec_loc8tr = @STRCAT (@GETENV ("RECORD", "FILESEQNO"),
                                    @GETENV ("RECORD", "FILERBA")),
        extract_lag_ms = @TOKEN ("TKN-EXTLAG-MSEC"),
        replicat_lag_ms = @GETENV ("LAG", "MSEC"),
        src_db_name = @TOKEN ("TKN-SRC-DBNAME"),
        src_db_version = @TOKEN ("TKN-SRC-DBVERSION"),
        src_txn_csn = @TOKEN ("TKN-TXN-CSN")
 );
END;
Macros are identified by the keyword "MACRO" followed by the macro name "#<name>". The Macro body is contained within the "BEGIN" and "END" statements. The three Macros provided in this example library file are #dbconnect, #bpsettings, and #funcsmap.

    The #dbconnect Macro provides a centralized location for storing database connection information. The database login password is Blowfish encrypted using the "DEFAULT" GoldenGate encryption key.
    #bpsettings activates best practices settings for generating hourly and daily activity counts.
    #funcsmap accepts input parameters and uses them to build a Replicat map statement.
Using the Macro Library
Consider the following Replicat parameter file:
nolist
include ./dirmac/macrolib.mac
list

replicat rfuncb
#dbconnect ()
#bpsettings ()
sourcedefs ./dirdef/mydefs.defs
discardfile ./dirrpt/rfuncb.dsc, purge
#funcsmap (amer.orders, euro.funcs_test)

IMPORTANT ---> If list is not included in the parameter file after using nolist, no information will be written to the Group report file.

Amount of redo generated by a given object

 

--
-- Redo generation top by object from AWR history
-- Usage: SQL> @redogen_obj_hist "03-Sep-13 16:00" "03-Sep-13 17:00" 3500000
--
 
set echo off feedback off heading on timi off pages 1000 lines 500 VERIFY OFF
 
col WHEN for a20
col object_name for a30
 
select * from (
SELECT to_char(begin_interval_time,'YY-MM-DD HH24:MI') as WHEN,
       dhso.object_name,
       sum(db_block_changes_delta) as db_block_changes,
       to_char(round((RATIO_TO_REPORT(sum(db_block_changes_delta)) OVER ())*100,2),'99.00') as PERCENT
  FROM dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhso,
       dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhso.obj#
    AND dhss.dataobj# = dhso.dataobj#
  AND begin_interval_time BETWEEN to_date('&1', 'DD-Mon-YY HH24:MI')
                              AND to_date('&2', 'DD-Mon-YY HH24:MI')
  GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24:MI'),
           dhso.object_name
  ORDER BY to_char(begin_interval_time,'YY-MM-DD HH24:MI'),
           db_block_changes desc
) where rownum <= &&3
/
set feedback on echo off VERIFY ON

XML trace - save the file with the name of the Replicat or Extract in uppercase, e.g.: gglog-EXT1.xml

<?xml version="1.0"?>
<configuration reset="true">
  <appender name="traceini">
    <param name="BufferedIO"     value="false"/>
    <param name="Append"         value="true"/>
    <param name="File"           value="traceLog_%I_%A"/>
    <param name="MaxBackupIndex" value="99"/>
    <param name="MaxFileSize"    value="10MB"/>

    <layout>
      <param name="Pattern" value="%d{%m/%d %H:%M:%S} [%C{1}:%L] %m%n"/>
    </layout>
  </appender>

  <logger name="gglog.std">
    <level value="all"/>
    <appender-ref name="traceini"/>
  </logger>

  <logger name="gglog.std.utility">
    <level value="off"/>
    <appender-ref name="traceini"/>
  </logger>

</configuration>


GoldenGate - monitoring processes with a while true loop

while true
do
./ggsci << EOF
info all
send E_TELPC, status
exit
EOF
sleep 30
done

while true
do
asmcmd lsdg
sleep 10
done


Traffic volume on the network interface card

sar -n DEV 1 10
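To keep watching a single interface, the output can be filtered; the interface name eth0 is an assumption:

sar -n DEV 1 | grep eth0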


Scripts to monitor GoldenGate

####################################################################################################
#script        :moncolor.sh
#purpose       :Highlights lags, abends and stops; also shows the time of the last checkpoint
#dependencies  :gerainfoacs1.sh and gerainfoacs.sh
#author        :Alexandre Pires
#created       :22/01/2014
#NOTE          :Must be created in the GoldenGate home; this script calls the dependent ones
#How To        :sh moncolor.sh
####################################################################################################
while true
do
echo "================================================================="
echo "INSTANCE -->" $ORACLE_SID
DATABKP=`date +%Y%m%d%H%M%S`
date
echo "================================================================="
echo "                     LAST CHECKPOINTS "
echo "================================================================="
sh gerainfoacs1.sh > gerainfoacs.txt
awk ' {     if ($1 == "REPLICAT" ) printf("%s ", $2" time of last checkpoint -->")
       else if ($3 == "RBA" ){print $2;}
}' gerainfoacs.txt
echo "================================================================="
echo "INFO ALL -- "
echo "PROCESS    STATUS NAME      LAG          CHECKPOINT"
echo "================================================================="
sh gerainfoacs.sh > gerainfoacs.txt
awk '
{  if ($5 > "00:05:00" && $2 == "RUNNING" )  {system("tput sgr0"); system("tput bold"); system("tput setaf 3");  print $0 "<- CHKPOINT!"; system("tput sgr0");}
else if ($4 >= "00:05:00" && $2 == "RUNNING" )  {system("tput sgr0"); system("tput bold"); system("tput setaf 3");  print $0 "<- HIGH LAG!"; system("tput sgr0");}
else if ($4 < "00:05:00"  && $2 == "RUNNING" )  {system("tput setaf 9"); print $0 ; system("tput sgr0");}
else if ($2 == "ABENDED" ){system("tput bold"); system("tput setaf 1");  print $0 "<- " $2; system("tput sgr0")}
else if ($2 == "STOPPED" ){system("tput bold"); system("tput setaf 1");  print $0 "<- " $2; system("tput sgr0")}
else system("tput sgr0");
}' gerainfoacs.txt
echo "================================================================="
sleep 10
done
####################################################################################################

####################################################################################################
#script        :gerainfoacs1.sh
#purpose       :shows the detailed status of all groups
#dependencies  :does not depend on other scripts
#author        :Alexandre Pires
#created       :22/01/2014
#NOTE          :must be created in the GoldenGate home
#How To        :called by moncolor.sh
####################################################################################################
./ggsci << EOF
info *
exit
EOF
####################################################################################################
####################################################################################################
#script        :gerainfoacs.sh
#purpose       :shows a summary status of all groups
#author        :Alexandre Pires
#dependencies  :does not depend on other scripts
#created       :22/01/2014
#NOTE          :must be created in the GoldenGate home
#How To        :called by moncolor.sh
####################################################################################################
./ggsci << EOF
info all
exit
EOF
####################################################################################################

############## nawk version for Solaris

####################################################################################################
#script        :moncolor.sh
#purpose       :Highlights lags, abends and stops; also shows the time of the last checkpoint
#dependencies  :gerainfoacs1.sh and gerainfoacs.sh
#author        :Alexandre Pires
#created       :22/01/2014
#NOTE          :Must be created in the GoldenGate home; this script calls the dependent ones
#How To        :sh moncolor.sh
#SOLARIS version
####################################################################################################
while true
do
echo "================================================================="
echo "INSTANCE -->" $ORACLE_SID
DATABKP=`date +%H:%M`
date
echo "================================================================="
echo "                     LAST CHECKPOINTS "
echo "================================================================="
sh gerainfoacs1.sh > gerainfoacs.txt
nawk -v DTHORA=$DATABKP '{ if ($1 == "REPLICAT" ) printf( "%s", $2" -->")
                                     else if ($3 == "RBA" )
                                                  if (substr($2,1,5) == DTHORA) {system("tput sgr0"); print substr($2,1,8)}
                                                  else {system("tput bold"); print substr($2,1,8) " CHECKPOINT LAG! " DTHORA}
                               }' gerainfoacs.txt
echo "================================================================="
echo "INFO ALL -- "
echo "PROCESS    STATUS  NAME        LAG          CHECKPOINT"
echo "================================================================="
sh gerainfoacs.sh > gerainfoacs.txt
nawk '
{  if ($5 > "00:05:00" && $2 == "RUNNING" )  {system("tput sgr0"); system("tput bold"); system("tput setaf 3");  print $0 "<- CHKPOINT!"; system("tput sgr0");}
else if ($4 >= "00:05:00" && $2 == "RUNNING" )  {system("tput sgr0"); system("tput bold"); system("tput setaf 3");  print $0 "<- HIGH LAG!"; system("tput sgr0");}
else if ($4 < "00:05:00"  && $2 == "RUNNING" )  {system("tput setaf 9"); print $0 ; system("tput sgr0");}
else if ($2 == "ABENDED" ){system("tput bold"); system("tput setaf 1");  print $0 "<- " $2; system("tput sgr0")}
else if ($2 == "STOPPED" ){system("tput bold"); system("tput setaf 1");  print $0 "<- " $2; system("tput sgr0")}
else system("tput sgr0");
}' gerainfoacs.txt
echo "================================================================="
sleep 10
done

Exadata InfiniBand Related Tools

# ibstat
# ibstatus                  -- To get the status of the InfiniBand services (/usr/sbin/ibstatus)
# iblinkinfo.pl -Rl         -- To check the status of the InfiniBand link (/usr/sbin/iblinkinfo.pl)
# ibswitches                -- /usr/sbin/ibswitches
# ibtracert <base lid of active interface> <sm lid>
# ibdiagnet                 -- Performs diagnostics upon the InfiniBand fabric and reports status.
# ibnetdiscover             -- Discovers and displays the InfiniBand fabric topology and connections.
# ibcheckerrors             -- Checks the entire InfiniBand fabric for errors.
# ibqueryerrors.pl -rR -s LinkDowned,RcvSwRelayErrors,XmtDiscards,XmtWait
# ibqueryerrors.pl -s RcvSwRelayErrors,RcvRemotePhysErrors,XmtDiscards,XmtConstraintErrors,RcvConstraintErrors,ExcBufOverrunErrors,VL15Dropped
                            -- A single invocation of this command reports on all switch ports on all switches. Run this check from a database server or a switch.
# ibclearerrors
# ibclearcounters
# ibnetdiscover -p          -- To identify spine switches
# /opt/oracle.SupportTools/ibdiagtools/verify-topology   -- To get the topology of the InfiniBand network inside Exadata


Ipmitool Oracle Exadata

IPMI (Intelligent Platform Management Interface) is an interface standard that allows remote management of one server from another over a standardized interface, and checking the status of its components.

# ipmitool
# ipmitool -h                                        -- Help
# ipmitool -help
# ipmitool -H cel01-ilom -U root chassis power on    -- To power on a cell or database server, issue this from another server

# ipmitool sel          --- To show System Event Log 
# ipmitool sel list     --- To know the details of the System Event Log 
# ipmitool sel list | grep ECC | cut -f1 -d : | sort -u 
# ipmitool sensor
# ipmitool sensor list
 # ipmitool sensor list | grep degree 
# ipmitool sdr | grep -v ok 
# ipmitool lan print 
# ipmitool chassis status 
# ipmitool power status 
# ipmitool sunoem cli       -- To run commands through the ILOM CLI 
# ipmitool sunoem cli "show /SYS/T_AMB value" 
# ipmitool sunoem cli "show /SYS product_serial_number"     -- To Print Product Serial Number 
# ipmitool sunoem cli "show /SYS/MB/BIOS"      -- To Print BIOS information dcli -g all_group -l root "ipmitool sensor list | grep "degrees" | grep " T_AMB" | grep "db0"

Exadata Wait Events

Here are some of the wait events in Oracle Exadata.
cell flash cache read hits
cell single block physical read
cell multiblock physical read
cell list of blocks physical read
cell smart file creation
cell smart index scan
cell smart table scan
cell smart incremental backup
cell smart restore from backup
cell smart flash unkeep
cell statistics gather
cell manager closing cell
cell manager discovering disks
cell manager opening cell
cell manager cancel work request
cell worker online completion
cell worker retry
cell worker idle

Exadata Statistics

Here is the list of Oracle Exadata statistics.

cell physical IO interconnect bytes 
cell physical IO interconnect bytes returned by smart scan 
cell physical IO bytes saved during optimized file creation 
cell physical IO bytes saved during optimized RMAN file restore 
cell physical IO bytes eligible for predicate offload 
cell physical IO bytes saved by storage index

cell smart IO session cache lookups 
cell smart IO session cache hits 
cell smart IO session cache soft misses 
cell smart IO session cache hard misses 
cell smart IO session cache hwm 
cell smart IO sessions hwm 
cell smart IO allocated memory bytes 
cell smart IO memory bytes hwm 
cell num smart IO sessions in rdbms block IO due to user 
cell num smart IO sessions in rdbms block IO due to no 
cell mem cell num smart IO sessions in rdbms block IO due to big payload 
cell num smart IO sessions using passthru mode due to user 
cell num smart IO sessions using passthru mode due to cellsrv 
cell num smart IO sessions using passthru mode due to timezone 
cell num smart file creation sessions using rdbms block IO mode 
cell num fast response sessions 
cell num fast response sessions continuing to smart scan 
cell num active smart IO sessions 
cell blocks processed by cache layer 
cell blocks processed by txn layer 
cell blocks processed by data layer 
cell blocks processed by index layer 
cell blocks helped by commit cache 
cell blocks helped by minscn optimization 
cell simulated physical IO bytes eligible for predicate offload 
cell simulated physical IO bytes returned by predicate offload 
cell CUs sent uncompressed 
cell CUs sent compressed 
cell CUs sent head piece 
cell CUs processed for uncompressed 
cell CUs processed for compressed 
cell IO uncompressed bytes 
cell scans 
cell index scans 
cell flash cache read hits 
cell commit cache queries 
cell transactions found in commit cache

dcli commands in Oracle Exadata

DCLI - Distributed Command Line Interface/Interpreter (in Oracle Exadata)

DCLI is a Python script that executes commands across multiple cells (Oracle Exadata Storage Servers) from a single node.

# dcli [Options] [Command]

Command - Can be a CellCLI command or a Linux command

 

Options

Option              Use
------              ---
-c CELLNAMES        Executes the commands only on the listed cells (storage servers).
-g GROUPFILE        Executes the command on the cells listed in the file.
-l USERNAME         The default user is celladmin, but any other user can be used for remote ssh execution. Make sure the user has ssh equivalency across all the cells where you run the command.
-n                  Shows an abbreviated output instead of a long output from each command execution.
-r REGEXP           Suppresses the output that matches the regular expression.
-t                  Displays the target cells where the command will run.
-x SCRIPT_NAME      The script will be executed on the target cells.
-k                  Establishes ssh user equivalency.
-f FILENAME         Copies the files to the other cells but does not execute them. Useful for copying files and executing them later.
-d DESTFILE         Destination directory or file.
-s SSHOPTIONS       String of options passed through to ssh.
--scp=SCPOPTIONS    String of options passed through to scp, if different from sshoptions.
--serial            Serialize execution over the cells.
--unkey             Drop keys from the target cells' authorized_keys files.
-v                  Verbose; print extra messages to stdout.
--vmstat=VMSTATOPS  vmstat command options.
--version           Show the dcli version number.
-h, --help          Show the help message.
$ dcli -h
# dcli -g cells.txt -k
# dcli -l root -g all_cells shutdown -h -y now
# dcli -l root -g cells.txt ps -aef | grep OSWatcher
# dcli -l root -g all_cells vmstat 2 2
$ dcli -c cell2,cell3 vmstat
$ dcli -g mycells.txt --vmstat="-a 5 2"
# dcli -g all_cells -l root "cellcli -e list cell"
# dcli -c cel02,cel04,cel06,cel08,cel12 -l root "cellcli -e list cell"
# dcli -g all_cells -t
# dcli -l celladmin -c cel08 cellcli -e "list physicaldisk 20:8 detail"
# dcli -l celladmin -g all_cells cellcli -e "list griddisk"
# dcli -l root -g cells cellcli -e "alter cell smtpToAddr=\'satya@roli.com\'"
# dcli -l root -g all_cells cellcli -e "list cell attributes name,smtpServer,smtpToAddr,smtpFrom"
# dcli -g mycells cellcli -e "alter cell validate mail"
# dcli -g all_cells -n cellcli -e "alter cell validate configuration"
# dcli -l root -g all_cells -x list_scr.dcl
$ dcli -g cells.txt -x mycommands.scl
# dcli -l root -g all_cells -r '.* 0' -x err.dcl
$ dcli -g cells.txt -r "reco" cellcli -e "list griddisk"
# dcli -r '.* active' -l root -g all_cells cellcli -e "list griddisk"
# dcli -l root -g /opt/oracle.SupportTools/onecommand/all_group -f /tmp/sundiag.zip -d /tmp
# dcli -l root -g /opt/oracle.SupportTools/onecommand/all_group "cd /tmp;unzip sundiag.zip;ls -l newsundiag.sh;md5sum newsundiag.sh"
# dcli -g all_group -l root /opt/oracle.SupportTools/sundiag.sh 2>&1
# dcli -g all_group -l root --serial 'ls -l /tmp/sundiag*'

 


cellcli commands in Oracle Exadata

CellCLI - Cell Command Line Interface/Interpreter (in Oracle Exadata)

CellCLI manages Exadata Storage Servers (cells). The scope of a CellCLI command is the cell where it is run, not other cells. To invoke CellCLI, log in to the Exadata cell as cellmonitor, celladmin, or root, and type "cellcli".

# cellcli

# cellcli -e list cell
# cellcli -x -n -e "list metrichistory where objectType='CELL'"
# cellcli <mycellci.commands >mycellci.output

Help

CellCLI> help

HELP [topic]

    Available Topics:
        ALTER
        ALTER ALERTHISTORY
        ALTER CELL
        ALTER CELLDISK
        ALTER GRIDDISK
        ALTER IBPORT
        ALTER IORMPLAN
        ALTER LUN
        ALTER PHYSICALDISK
        ALTER QUARANTINE
        ALTER THRESHOLD
        ASSIGN KEY
        CALIBRATE
        CREATE
        CREATE CELL
        CREATE CELLDISK
        CREATE FLASHCACHE
        CREATE GRIDDISK
        CREATE KEY
        CREATE QUARANTINE
        CREATE THRESHOLD
        DESCRIBE
        DROP
        DROP ALERTHISTORY
        DROP CELL
        DROP CELLDISK
        DROP FLASHCACHE
        DROP GRIDDISK
        DROP QUARANTINE
        DROP THRESHOLD
        EXPORT CELLDISK
        IMPORT CELLDISK
        LIST
        LIST ACTIVEREQUEST
        LIST ALERTDEFINITION
        LIST ALERTHISTORY
        LIST CELL
        LIST CELLDISK
        LIST FLASHCACHE
        LIST FLASHCACHECONTENT
        LIST GRIDDISK
        LIST IBPORT
        LIST IORMPLAN
        LIST KEY
        LIST LUN
        LIST METRICCURRENT
        LIST METRICDEFINITION
        LIST METRICHISTORY
        LIST PHYSICALDISK
        LIST QUARANTINE
        LIST THRESHOLD
        SET
        SPOOL
        START

CellCLI> help list ibport
CellCLI> help alter cell
Describe       -- Will display all attributes
CellCLI> describe cell
CellCLI> describe physicaldisk
CellCLI> describe lun
CellCLI> describe celldisk
CellCLI> describe griddisk
CellCLI> describe flashcache
CellCLI> describe flashcachecontent
CellCLI> describe metriccurrent
CellCLI> describe metricdefinition
CellCLI> describe metrichistory
List
CellCLI> help list

Enter HELP LIST <object_type> for specific help syntax.
    <object_type>:  {ACTIVEREQUEST | ALERTDEFINITION | ALERTHISTORY | CELL | CELLDISK | FLASHCACHE | FLASHCACHECONTENT | GRIDDISK | IBPORT | IORMPLAN | KEY | LUN | METRICCURRENT | METRICDEFINITION | METRICHISTORY | PHYSICALDISK | QUARANTINE | THRESHOLD }

CellCLI> list cell   -- Will display Oracle Exadata Storage Servers/Cells information

CellCLI> list cell detail
CellCLI> list cell attributes all
CellCLI> list cell attributes rsStatus
CellCLI> list physicaldisk           - Will display physical disks information
CellCLI> list physicaldisk detail
CellCLI> list physicaldisk 34:5
CellCLI> list physicaldisk 34:11 detail
CellCLI> list physicaldisk attributes all
CellCLI> list physicaldisk attributes name, id, slotnumber
CellCLI> list physicaldisk attributes name, disktype, makemodel, physicalrpm, physicalport, status
CellCLI> list physicaldisk attributes name, disktype, errCmdTimeoutCount, errHardReadCount, errHardWriteCount
CellCLI> list physicaldisk where diskType='Flashdisk'
CellCLI> list physicaldisk attributes name, id, slotnumber where disktype="flashdisk" and status != "not present"
CellCLI> list physicaldisk attributes name, physicalInterface, physicalInsertTime where disktype = 'Harddisk'
CellCLI> list physicaldisk where diskType=flashdisk and status='poor performance' detail
CellCLI> list lun             - Will display LUNs information
CellCLI> list lun detail
CellCLI> list lun 0_8 detail
CellCLI> list lun attributes all
CellCLI> list lun attributes name, cellDisk, raidLevel, status
CellCLI> list lun where disktype=flashdisk
CellCLI> list celldisk        - Will display cell disks information
CellCLI> list celldisk detail
CellCLI> list celldisk FD_01_cell07
CellCLI> list celldisk FD_01_cell13 detail
CellCLI> list celldisk attributes all
CellCLI> list celldisk attributes name, devicePartition
CellCLI> list celldisk attributes name, devicePartition where size>20G
CellCLI> list celldisk attributes name,interleaving where disktype=harddisk
CellCLI> list griddisk      -- Will display grid disks information
CellCLI> list griddisk detail
CellCLI> list griddisk DG_01_cell03 detail
CellCLI> list griddisk attributes all
CellCLI> list griddisk attributes name, size
CellCLI> list griddisk attributes name, cellDisk, diskType
CellCLI> list griddisk attributes name, ASMDeactivationOutcome, ASMModeStatus     -- the describe command does not show these two attributes
CellCLI> list griddisk attributes name,cellDisk,status where size=476.546875G
CellCLI> list griddisk attributes name where asmdeactivationoutcome != 'Yes'
CellCLI> list flashcache     -- Will display flash cache information
CellCLI> list flashcache detail
CellCLI> list flashcache attributes all
CellCLI> list flashcache attributes degradedCelldisks
CellCLI> help list FLASHCACHECONTENT
  Usage: LIST FLASHCACHECONTENT [<filters>] [<attribute_list>] [DETAIL]
  Purpose: Displays specified attributes for flash cache entries.
  Arguments:
   <filters>: An expression which determines the entries to be displayed.
   <attribute_list>: The attributes that are to be displayed. ATTRIBUTES {ALL | attr1 [, attr2]… }
   [DETAIL]: Formats the display as an attribute on each line, with an attribute descriptor preceding each value.

CellCLI> list flashcachecontent        -- Will display flash cache content information
CellCLI> list flashcachecontent detail

CellCLI> list flashcachecontent where objectnumber=161441 detail
CellCLI> list flashcachecontent where dbUniqueName like 'EX.?.?' and hitcount > 100 attributes dbUniqueName, objectNumber, cachedKeepSize, cachedSize
CellCLI> list flashcachecontent where dbUniqueName like 'EX.?.?' and objectNumber like '.*007'
CellCLI> list flashcachecontent where dbUniqueName like '.*X.?.?' and objectNumber like '.*456' detail
CellCLI> list metriccurrent    -- Will display metrics information
CellCLI> list metriccurrent gd_io_rq_w_sm
CellCLI> list metriccurrent n_nic_rcv_sec detail
CellCLI> list metriccurrent attributes name,metricObjectName,metricType, metricValue,objectType where alertState != 'normal'
CellCLI> list metriccurrent attributes name,metricObjectName,metricType, metricValue,alertState where objectType = 'HOST_INTERCONNECT'
CellCLI> list metriccurrent attributes all where objectType = 'CELL'
CellCLI> list metriccurrent attributes all where objectType = 'GRIDDISK' -
> and metricObjectName = 'DATA_CD_09_cell01' and metricValue > 0
CellCLI> list metricdefinition        -- Will display metric definitions
CellCLI> list metricdefinition cl_cput detail
CellCLI> list metricdefinition attributes all where objecttype='CELL'
CellCLI> list metrichistory           -- Will display metric history
CellCLI> list metrichistory cl_cput
CellCLI> list metrichistory where objectType = 'CELL'
CellCLI> list metrichistory where objectType = 'CELL' and name = 'CL_TEMP'
CellCLI> list metrichistory cl_cput where collectiontime > '*2011-10-15T22:56:04-04:00*'
# cellcli -x -n -e "list metrichistory where objectType='CELL' and name='CL_TEMP'"     -- -x suppresses the banner, -n suppresses the command line
CellCLI> list alertdefinition detail    -- Will display alert definitions
CellCLI> list alertdefinition attributes all where alertSource!='Metric'
CellCLI> list alerthistory        -- Will display alert history
CellCLI> list alerthistory detail
CellCLI> list alerthistory where notificationState like '[023]' and severity like '[warning|critical]' and examinedBy = NULL;
CellCLI> list activerequest
CellCLI> list ibport       -- Will display InfiniBand configuration details
CellCLI> list ibport detail
CellCLI> list iormplan       -- Will display IORM plan details
CellCLI> list key
CellCLI> list quarantine
CellCLI> list threshold      -- Will display threshold details
Create
CellCLI> CREATE CELL [cellname] [realmname=realmvalue,] [interconnect1=ethvalue,] [interconnect2=ethvalue,] [interconnect3=ethvalue,] [interconnect4=ethvalue,]
 ( ([ipaddress1=ipvalue,] [ipaddress2=ipvalue,] [ipaddress3=ipvalue,] [ipaddress4=ipvalue,]) | ([ipblock=ipblkvalue, cellnumber=numvalue]) )     -- To configure the Oracle Exadata cell network and start services.
CellCLI> create celldisk all harddisk
CellCLI> create celldisk all
CellCLI> create celldisk all harddisk interleaving='normal_redundancy'
    interleaving -- none (default), normal_redundancy, or high_redundancy
CellCLI> create celldisk all flashdisk
CellCLI> create griddisk RECO_CD_11_cell01 celldisk=CD_11_cell01
CellCLI> create griddisk RECO_CD_11_cell01 celldisk=CD_11_cell01 size=100M
CellCLI> create griddisk all prefix RECO
CellCLI> create griddisk all flashdisk prefix FLASH
CellCLI> create griddisk all harddisk prefix HARD
CellCLI> create griddisk all harddisk prefix='data', size='270g'
CellCLI> create griddisk all prefix='data', size='300g'
CellCLI> create griddisk all prefix='redo', size='150g'

CellCLI> create griddisk all harddisk prefix=systemdg
CellCLI> create flashcache celldisk='FD_00_cell01'
CellCLI> create flashcache celldisk='FD_13_cell01,FD_00_cell01,FD_10_cell01,FD_02_cell01,FD_06_cell01,FD_12_cell01,FD_05_cell01,FD_08_cell01,FD_15_cell01,FD_14_cell01,FD_07_cell01,FD_04_cell01,FD_03_cell01,FD_11_cell01,FD_09_cell01,FD_01_cell01'
CellCLI> create flashcache all
CellCLI> create flashcache all size=365.25G

CellCLI> create key
CellCLI> create quarantine

CellCLI> create threshold cd_io_errs_min.prodb comparison=">", critical=10
CellCLI> create threshold CD_IO_ERRS_MIN warning=1, comparison='>=', occurrences=1, observation=1
Alter
CellCLI> alter cell shutdown services rs       -- To shut down the Restart Server service
CellCLI> alter cell shutdown services MS       -- To shut down the Management Server service
CellCLI> alter cell shutdown services CELLSRV  -- To shut down the Cell Services
CellCLI> alter cell shutdown services all      -- To shut down the RS, CELLSRV, and MS services
CellCLI> alter cell restart services rs
CellCLI> alter cell restart services all

CellCLI> alter cell led on
CellCLI> alter cell led off
CellCLI> alter cell validate mail
CellCLI> alter cell validate configuration
CellCLI> alter cell smtpfromaddr='cell07@orac.com'
CellCLI> alter cell smtpfrom='Exadata Cell 07'
CellCLI> alter cell smtptoaddr='satya@orac.com'
CellCLI> alter cell emailFormat='text'
CellCLI> alter cell emailFormat='html'
CellCLI> alter cell validate snmp type=ASR     -- Automatic Service Requests (ASRs)
CellCLI> alter cell snmpsubscriber=((host='snmp01.orac.com',type=ASR))
CellCLI> alter cell restart bmc      -- BMC, Baseboard Management Controller, controls the components of the cell.
CellCLI> alter cell configure bmc

CellCLI> alter physicaldisk 34:2,34:3 serviceled on
CellCLI> alter physicaldisk 34:6,34:9 serviceled off
CellCLI> alter physicaldisk harddisk serviceled on
CellCLI> alter physicaldisk all serviceled on

CellCLI> alter lun 0_10 reenable
CellCLI> alter lun 0_04 reenable force
CellCLI> alter celldisk FD_01_cell07 comment='Flash Disk'
CellCLI> alter celldisk all harddisk comment='Hard Disk'
CellCLI> alter celldisk all flashdisk comment='Flash Disk'

CellCLI> alter griddisk RECO_CD_10_cell06 comment='Used for Reco'
CellCLI> alter griddisk all inactive
CellCLI> alter griddisk RECO_CD_11_cell12 inactive
CellCLI> alter griddisk RECO_CD_08_cell01 inactive force
CellCLI> alter griddisk RECO_CD_11_cell01 inactive nowait
CellCLI> alter griddisk DATA_CD_00_CELL01,DATA_CD_02_CELL01,...DATA_CD_11_CELL01 inactive
CellCLI> alter griddisk all active
CellCLI> alter griddisk RECO_CD_11_cell01 active
CellCLI> alter griddisk all harddisk comment='Hard Disk'

CellCLI> alter ibport ibp2 reset counters
CellCLI> alter iormplan active
CellCLI> alter quarantine

CellCLI> alter threshold DB_IO_RQ_SM_SEC.PRODB comparison=">", critical=100
CellCLI> alter alerthistory
Drop
CellCLI> drop cell      -- To reset the cell to its factory settings; removes the cell-related properties of the server, but does not actually remove the physical server.
CellCLI> drop cell force
CellCLI> drop celldisk CD_01_cell05
CellCLI> drop celldisk CD_00_cell09 force
CellCLI> drop celldisk harddisk
CellCLI> drop celldisk flashdisk
CellCLI> drop celldisk all
CellCLI> drop celldisk all flashdisk force

CellCLI> drop griddisk DBFS_DG_CD_02_cel14
CellCLI> drop griddisk RECO_CD_11_cell01 force
CellCLI> drop griddisk prefix=DBFS
CellCLI> drop griddisk flashdisk
CellCLI> drop griddisk harddisk
CellCLI> drop griddisk all
CellCLI> drop griddisk all prefix=temp_dg

CellCLI> drop flashcache
CellCLI> drop quarantine
CellCLI> drop threshold DB_IO_RQ_SM_SEC.PRODB

CellCLI> drop alerthistory
Export
CellCLI> export celldisk
Import
CellCLI> import celldisk
Assign
CellCLI> assign key
Calibrate
CellCLI> calibrate
CellCLI> calibrate force
Set
CellCLI> help set
  Usage: SET <variable> <value>
  Purpose: Sets a variable to alter the CELLCLI environment settings for your current session.
  Arguments: variable and value represent one of the following clauses:
    DATEFORMAT { STANDARD | LOCAL }
    ECHO { ON | OFF }

CellCLI> set dateformat local
CellCLI> set dateformat standard

CellCLI> set echo on
CellCLI> set echo off
Spool
CellCLI> spool myCellCLI.txt
CellCLI> spool myCellCLI.txt append
CellCLI> spool myCellCLI.txt replace
CellCLI> spool off
CellCLI> spool     --- Will give spool file name
Scripts execution
CellCLI> @listdisks.cli
CellCLI> start listdisks.cli
Comments
REM This is a comment
REMARK This is another comment
-- This is yet another comment
Continuation Character
CellCLI> list metriccurrent attributes name,metricObjectName,metricValue, -
objectType where alertState != 'normal'     -- the hyphen is the continuation character for queries spanning multiple lines
Exit/Quit
CellCLI> exit
CellCLI> quit

GoldenGate with Exadata - installation manual

Oracle GoldenGate on Oracle Exadata Database Machine Configuration
Oracle Maximum Availability Architecture White Paper
May 2011
Maximum Availability Architecture
Oracle Best Practices For High Availability
Contents

Executive Overview
Configuration Overview
    Oracle GoldenGate
    Oracle Exadata Database Machine
    Oracle Database File System
    Oracle Clusterware
Migrating to Oracle Exadata Database Machine
Configuration Best Practices
    Step 1: Set Up DBFS on Oracle Exadata Database Machine
    Step 2: Configure GoldenGate and Database Parameters
    Step 3: Install Oracle GoldenGate
    Step 4: Set Up Checkpoint Files and Trail Files in DBFS
    Step 5: Set Up Discard and Page Files on the Local File System
    Step 6: Configure Replicat Commit Behavior
    Step 7: Configure Autostart of Extract, Data Pump and Replicat Processes
    Step 8: Oracle Clusterware Configuration
Appendix: Example Agent Script
References
Executive Overview
The strategic integration of Oracle Exadata Database Machine and Oracle Maximum Availability Architecture (MAA) best practices (Exadata MAA) provides the best and most comprehensive Oracle Database availability solution.
This white paper describes best practices for configuring Oracle GoldenGate to work with Oracle Exadata Database Machine and Exadata storage. Oracle GoldenGate is instrumental for many reasons, including the following:
● To migrate to an Oracle Exadata Database Machine, incurring minimal downtime
● As part of an application architecture that requires Oracle Exadata Database Machine plus the flexible availability features provided by GoldenGate, such as active-active database for data distribution and continuous availability, and zero or minimal downtime during planned outages for system migrations, upgrades, and maintenance
● To implement a near real-time data warehouse or consolidated database on Oracle Exadata Database Machine, sourced from various, possibly heterogeneous source databases, populated by Oracle GoldenGate
● To capture from an OLTP application running on Oracle Exadata Database Machine to support further downstream consumption such as SOA type integration
This paper focuses on configuring Oracle GoldenGate to run on Oracle Exadata Database Machine. Oracle Exadata Database Machine can act as the source database, as the target database, or in some cases as both source and target databases for GoldenGate processing.
In addition, this paper covers the GoldenGate regular mode of continuously extracting logical changes from either online redo log files or archived redo log files.
Configuration Overview
This section introduces you to Oracle GoldenGate, Oracle Exadata Database Machine, and Oracle Database File System (DBFS). For more information about these features, see the References section at the end of this white paper.
Oracle GoldenGate
Oracle GoldenGate provides real-time, log-based change data capture, and delivery between heterogeneous systems. Using this technology, it enables a cost-effective and low-impact real-time data integration and continuous availability solution.
Oracle GoldenGate moves committed transactions with transaction integrity and minimal overhead on your existing infrastructure. The architecture supports multiple data replication topologies such as one-to-many, many-to-many, cascading, and bidirectional. Its wide variety of use cases includes real-time business intelligence; query offloading; zero-downtime upgrades and migrations; and active-active databases for data distribution, data synchronization, and high availability. Figure 1 shows the Oracle GoldenGate architecture.
Figure 1. Oracle GoldenGate Architecture
Oracle Exadata Database Machine
Oracle Exadata Database Machine is an easy to deploy, out-of-the-box solution for hosting the Oracle Database for all applications while delivering the highest levels of performance available.
Oracle Exadata Database Machine is a "grid in a box" composed of database servers, Oracle Exadata Storage Servers (Exadata), an InfiniBand fabric for storage networking and all the other components required for hosting an Oracle Database. Oracle Exadata Storage Server is a storage product optimized for use with Oracle Database applications and is the storage building block of Oracle Exadata Database Machine. Exadata delivers outstanding I/O and SQL processing performance for online transaction processing (OLTP), data warehousing (DW) and consolidation of mixed workloads. Extreme performance is delivered for all types of database applications by leveraging a massively parallel grid architecture using Oracle Real Application Clusters (Oracle RAC), Exadata storage, Exadata Smart Flash Cache, high-speed InfiniBand connectivity, and compression technology.
Oracle Database File System
The Oracle Database File System (DBFS) creates a file system interface to files stored in the database. DBFS is similar to NFS in that it provides a shared network file system that looks like a local file system. Because the data is stored in the database, the file system inherits all the HA/DR capabilities provided by the database.
With DBFS, the server is the Oracle Database. Files are stored as SecureFiles LOBs. PL/SQL procedures implement file system access primitives such as create, open, read, write, and list directory. The implementation of the file system in the database is called the DBFS SecureFiles Store. The DBFS SecureFiles Store allows users to create file systems that can be mounted by clients. Each file system has its own dedicated tables that hold the file system content.
Oracle Clusterware
Oracle Clusterware enables servers to communicate with each other, so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are standalone servers, each server has additional processes that communicate with other servers. In this way the separate servers appear as if they are one system to applications and end users.
Oracle Clusterware provides the infrastructure necessary to run Oracle Real Application Clusters (Oracle RAC). Oracle Clusterware also manages resources, such as virtual IP (VIP) addresses, databases, listeners, services, and so on.
There are APIs to register an application and instruct Oracle Clusterware regarding the way an application is managed in a clustered environment. You use the APIs to register the Oracle GoldenGate Manager process as an application managed through Oracle Clusterware. The Manager process should then be configured to automatically start or restart other GoldenGate processes.
Migrating to Oracle Exadata Database Machine
Figure 2. Migrating Oracle GoldenGate to Oracle Exadata Database Machine
Oracle GoldenGate supports an active-passive bidirectional configuration, where GoldenGate replicates data from an active primary database to a full replica database on a live standby system that is ready for failover during planned and unplanned outages. This provides the ability to migrate to Oracle Exadata Database Machine, allowing the new system to work in tandem until testing is completed and a switchover is planned.
This paper includes instructions for configuring a target system on Oracle Exadata Database Machine that will act as the standby database shown in Figure 2.
Configuration Best Practices
Step 1: Set Up DBFS on Oracle Exadata Database Machine
When setting up the configuration, the best practice is to store the GoldenGate trail files and checkpoint files in DBFS to provide recoverability and failover capabilities in the event of a system failure.
Using DBFS is fundamental to the continuing availability of the checkpoint and trail files in the event of a node failure. Ensuring the availability of the checkpoint files is essential to ensure that, after a failure occurs, the Extract process can continue mining from the last known archived redo log file position and Replicat processes can start applying from the same trail file position before a failure occurred. Using DBFS allows one of the surviving database instances to be the source of an Extract process or destination for the Replicat processes.
See My Oracle Support note 1054431.1 for configuring DBFS on the Oracle Exadata Database Machine. DBFS should be mounted using fstab.
For better performance of trail file and checkpoint file I/O operations it is recommended to change the storage parameters of the LOB segment used by DBFS:
-- Connect to the DBFS database
SQL> connect system/<passwd>@<dbfs_tns_alias>

-- View current LOB storage:
SQL> SELECT table_name, segment_name, logging
     FROM dba_lobs WHERE tablespace_name='<dbfs tablespace name>';

-- More than likely it will be something like this:
-- TABLE_NAME         SEGMENT_NAME           LOGGING  CACHE
-- ------------------ ---------------------- -------  ----------
-- T_GOLDENGATE       LOB_SFS$_FST_73        YES      NO

-- Alter the LOB segment:
SQL> ALTER TABLE DBFS.<TABLE_NAME> MODIFY LOB (FILEDATA) (CACHE LOGGING);

-- View the new LOB storage:
SQL> SELECT table_name, segment_name, logging FROM dba_lobs WHERE tablespace_name='<dbfs tablespace name>';

-- TABLE_NAME         SEGMENT_NAME           LOGGING  CACHE
-- ------------------ ---------------------- -------  ----------
-- T_GOLDENGATE       LOB_SFS$_FST_73        YES      YES
This paper focuses on using DBFS for the shared file system. Alternatively, you could place the GoldenGate trail files and checkpoint files on the database machine local file system or an NFS mount point hosted by a server outside of the database machine. Using the local file system provides limited bandwidth with a single drive and offers no high availability capabilities. NFS can be used as a suitable alternative to DBFS for storing the trail files and checkpoint files to allow sharing between the source and target GoldenGate hosts.
Starting with GoldenGate version 11.1.1, a new Bounded Recovery feature was added that guarantees an efficient recovery after Extract stops for any reason, planned or unplanned, no matter how many open (uncommitted) transactions there were at the time Extract stopped, nor how old they were. Bounded Recovery sets an upper boundary for the maximum amount of time that it would take Extract to recover to the point where it stopped and then resume normal processing. The Bounded Recovery checkpoint files should be placed on a shared file system so that, in the event of a failover when there are open long-running transactions, Extract can use Bounded Recovery to reduce the time taken to perform recovery. At this time, DBFS is not supported as a Bounded Recovery checkpoint file location, so using something like NFS is suggested. It is possible to store the checkpoint files on the local file system; when Extract performs recovery after a node failure, the standard checkpoint mechanism will be used until new local Bounded Recovery checkpoint files are subsequently created. This will only be noticeable if there are long-running transactions at the time of the failure.
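As a sketch, the Bounded Recovery checkpoint location is controlled by the BR parameter in the Extract parameter file; the NFS path below is an assumption:

-- Extract parameter file fragment; the directory is an example NFS mount
BR BRDIR /mnt/nfs/goldengate/br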
For more information on Bounded Recovery refer to the Oracle GoldenGate Reference Guide:
http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17791.pdf
Note: If you are using a GoldenGate Data Pump process to transfer the trail files from a source host on the database machine using DBFS, then contact Oracle Support to obtain the fix to Bug 10146318. This bug fix improves trail file creation performance on DBFS by the GoldenGate server/collector process.
Step 2: Configure GoldenGate and Database Parameters
When running GoldenGate in regular mode, there is no need to explicitly set additional database parameters. However, consider setting up the following options:
● Use the default Oracle Automatic Storage Manager (Oracle ASM) naming convention for the archived redo log files.
● Configure the GoldenGate Extract parameter for the new Oracle ASM log read API.
GoldenGate release 11.1.1 introduces a new method of reading log files stored in Oracle ASM. This new method uses the database server to access the redo and archived redo log files, instead of connecting directly to the Oracle ASM instance. The database must contain the libraries with the API modules. The libraries are currently included with Oracle Database releases 10.2.0.5 and 11.2.0.2. Contact Oracle Support to request a backport of Bug 9397368 to obtain the libraries for other releases (after release 10.2.0.5).
To successfully mine the Oracle archived redo log files located on the storage cells that are managed by Oracle ASM, configure the GoldenGate Extract parameter as follows:
1. Set the TRANLOGOPTIONS parameter to specify use of the new log read API. For example:
TRANLOGOPTIONS DBLOGREADER
● Configure the GoldenGate Extract parameter for the old Oracle ASM log read API.
It is also recommended to configure GoldenGate with the old log read API (below). This way, if the source database is unavailable, the Extract process can continue to mine the log files using the old API. Comment the Extract parameters to disable them until they are needed.
To set the Oracle ASM parameters with the old log read API:
1. Set the TRANLOGOPTIONS parameter to specify the Oracle ASM logon information in the Extract parameter file. For example:
TRANLOGOPTIONS ASMUSER sys@ASM, ASMPASSWORD g32adrwe, ENCRYPTKEY DEFAULT
Comment this parameter to disable the old log read API until it is needed.
2. If a password file is not already in use by the Oracle ASM instance, create one using orapwd and set the REMOTE_LOGIN_PASSWORDFILE initialization parameter to SHARED or EXCLUSIVE.
A password file is required for connection to the Oracle ASM instance by GoldenGate.
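As a sketch of these two steps (the file name, password and entries value are illustrative):
# As the Grid Infrastructure owner, create the Oracle ASM password file
orapwd file=$ORACLE_HOME/dbs/orapw+ASM password=g32adrwe entries=10
-- Then, in the Oracle ASM instance, set the initialization parameter
ALTER SYSTEM SET remote_login_passwordfile=EXCLUSIVE SCOPE=SPFILE SID='*';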
3. Configure the LISTENER.ORA and TNSNAMES.ORA files for remote connections to the ASM instance to work, as shown in the following examples:
Example listener.ora:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /app/oracle/grid)
(PROGRAM = extproc)
)
(SID_DESC =
(ORACLE_HOME = /app/oracle/grid)
(SID_NAME = +ASM1)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS=(PROTOCOL=TCP)(HOST=ggtest)(PORT=1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
Example tnsnames.ora:
ASM = (DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ggtest)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ASM)
(INSTANCE_NAME = +ASM1)
)
)
Step 3: Install Oracle GoldenGate
1. Download the GoldenGate software from Oracle Technology Network (OTN) at:
http://www.oracle.com/technetwork/middleware/goldengate/downloads/index.html
2. Install GoldenGate locally on the primary source and target nodes in the Oracle RAC configuration. Make sure the installation directory is the same on all nodes.
3. Once you have successfully configured GoldenGate on the primary source and/or target nodes, shut down Extract/Replicat and copy the entire GoldenGate home directory to the other source and target nodes.
4. Follow the generic installation instructions for the source and target machine installations available in Chapter 2: “Installing GoldenGate” at:
http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17799.pdf
Step 4: Set Up Checkpoint Files and Trail Files in DBFS
1. Set up checkpoint files
Checkpoint files contain the current read and write positions of the Extract and Replicat processes. Checkpoints provide fault tolerance by preventing the loss of data should the system, the network, or a GoldenGate process need to be restarted.
Placing the checkpoint files on the local file system will not provide high availability in the event of a database node failure. A checkpoint table can be used to record Replicat checkpoint information to provide an alternative method of fault tolerance.
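For example, a checkpoint table can be created from GGSCI as follows (the schema and table name are illustrative; the name can also be set centrally in the GLOBALS file):
GGSCI> DBLOGIN USERID ggs_admin, PASSWORD ggs_admin
GGSCI> ADD CHECKPOINTTABLE ggs_admin.ggs_checkpoint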
To store the checkpoint files on DBFS, the best practice is to create a symbolic link from the GoldenGate home directory to a directory in DBFS. For example:
# Ensure the DBFS file system is already mounted
# In this example, the DBFS mount point is /mnt/dbfs
% mkdir /mnt/dbfs/goldengate/dirchk
% cd /GoldenGate/v11_1_1_0
% rm -rf dirchk
% ln -s /mnt/dbfs/goldengate/dirchk dirchk
Note: GoldenGate uses file locking on the checkpoint files to determine if the Extract or Replicat processes are already running. This would normally prevent the process from being started a second time on another Oracle RAC node that has access to the checkpoint files. DBFS does not support this method of file locking. Mounting DBFS on a single Oracle RAC node prevents access to the checkpoint files from other nodes. This will in turn prevent the Extract or Replicat from being started concurrently on multiple nodes.
2. Set up trail files.
Trail files contain the data extracted from the archived redo log files. The trail files are automatically generated by the Extract process.
Store the trail files on DBFS. By mounting the same DBFS directory on both the source and target databases, much like an NFS mount, the Replicat process can read from the same trails created by the Extract process. This removes the need for the GoldenGate Data Pump if both the source and target databases run in the same Oracle Exadata Database Machine.
If the source and target databases are not part of the same Exadata Database Machine, then use a GoldenGate Data Pump to transfer the trail files between the hosts. Be sure to use DBFS to provide fault tolerance for the trail files.
To configure GoldenGate trail files on DBFS for the source database:
1. Create a DBFS directory:
# Ensure DBFS file system is already mounted
# In this example, the DBFS mount point is /mnt/dbfs
% mkdir /mnt/dbfs/goldengate/dirdat
2. Set the EXTTRAIL Extract parameter:
EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
3. After creating the Extract, use the same EXTTRAIL parameter value to add the local trail:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa, EXTRACT ext_db, Megabytes 500
Further instructions about creating the Extract are available in the Oracle GoldenGate Administration Guide at
http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17341.pdf
To configure GoldenGate trail files on DBFS for the target database:
1. Make sure the DBFS directory is already created on the source database
2. Set the EXTTRAIL Replicat parameter, as follows:
EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
3. When adding the Replicat, use the same EXTTRAIL parameter value:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD REPLICAT rep_db1, EXTTRAIL /mnt/dbfs/goldengate/dirdat/aa
Do not place trail files on the local file system, because doing so will lengthen restart times in the event of a node failure, reducing availability.
To configure Data Pump between a source and target database outside the same Exadata Database Machine:
1. Make sure Extract and Replicat are configured
2. Set the RMTHOST Data Pump parameter to the IP address or host name that will be used for connecting to the target. In Step 7 below, an Application Virtual IP address is created with Cluster Ready Services (CRS) so that a single IP address can be moved between compute nodes; this lets Data Pump continue to connect to the target host when the address moves from a failed node to a surviving node:
RMTHOST gg_dbmachine, MGRPORT 8901
3. Set the RMTTRAIL Data Pump parameter to the trail file location on the target host:
RMTTRAIL /mnt/dbfs/goldengate/dirdat/aa
4. Create a Data Pump process using the local trail file location on the source host:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD EXTRACT dpump_1, EXTTRAILSOURCE /mnt/dbfs/goldengate/dirdat/aa
5. Use the ADD RMTTRAIL command to specify the remote trail file location on the target host:
% ggsci
GGSCI (ggtest.oracle.com) 1> ADD RMTTRAIL /mnt/dbfs/goldengate/dirdat/aa EXTRACT dpump_1, MEGABYTES 500
Further instructions about creating the Data Pump process are available in the Oracle GoldenGate Administration Guide at
http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17341.pdf
Step 5: Set Up Discard and Page Files on the Local File System
Discard files are used by GoldenGate to record any operations that failed. The discard file is most useful for the Replicat process to help resolve data errors, such as an invalid column mapping.
Because GoldenGate only replicates transactions that are committed, the capture component stores the operations of each transaction in a virtual-memory pool known as a cache until it receives a commit or rollback for that transaction. When the cache becomes full, transactions can be paged out to disk. By default, the files are stored in the dirtmp sub-directory of the GoldenGate installation directory.
Page files cannot be stored in DBFS because they are memory-mapped files, a file type that is not currently supported by DBFS. Store both discard and page files on the database server local file system, within the GoldenGate installation directory structure.
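As a sketch, assuming GoldenGate is installed under /u01/app/goldengate (an illustrative path), the following parameters keep both file types on local storage:
-- Replicat parameter: discard file under the local installation directory
DISCARDFILE ./dirrpt/rep_db1.dsc, APPEND, MEGABYTES 100
-- Extract/Replicat parameter: cache paging directory on the local file system
CACHEMGR CACHEDIRECTORY /u01/app/goldengate/dirtmp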
For more details, see CACHEMGR in the Oracle GoldenGate Reference Guide at
http://download.oracle.com/docs/cd/E18101_01/doc.1111/e17791.pdf
Step 6: Configure Replicat Commit Behavior
If using GoldenGate version 11.1.1 or lower, it is recommended to set the Replicat commit behavior to COMMIT NOWAIT. The Replicat processes will no longer wait at each commit when applying transactions, increasing throughput performance. This should only be considered when using a checkpoint table, because the checkpoint table protects the recovery data at each checkpoint.
Set the Replicat parameter file to COMMIT NOWAIT as follows:
SQLEXEC "ALTER SESSION SET COMMIT_WRITE='NOWAIT'";
Step 7: Configure Autostart of Extract, Data Pump and Replicat Processes
Configure the Extract, Data Pump (if used) and Replicat processes to automatically start when the Manager process is started. Add the following parameter to the Manager parameter file:
AUTOSTART ER *
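Optionally, and as an illustrative restart policy, AUTORESTART can be added alongside AUTOSTART so that the Manager also restarts processes that abend:
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60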
Step 8: Oracle Clusterware Configuration
The following step-by-step procedure shows how to instruct Oracle Clusterware to start GoldenGate, check whether it is running, and stop it. It provides an example shell script to carry out these functions, including registering the application with Oracle Clusterware and managing switchover and failover between the Oracle RAC nodes.
1. Create an Application VIP
If the source system is outside of Oracle Exadata Database Machine and it uses GoldenGate Data Pump to transfer the trail file data to the target database machine, an application VIP is required to ensure the remote data pumps can communicate with the target database machine, regardless of which node is hosting GoldenGate.
An application virtual internet protocol address (VIP) is a cluster resource that Oracle Clusterware manages. The VIP is assigned to a cluster node and will be migrated to another node in the event of a node failure. This allows GoldenGate data pump to continue transferring data to the newly assigned target node.
If both the Extract and Replicat processes are running within the same database machine and Data Pump is not used, then there is no need to create the Application VIP and you can skip to step 2 to create an agent script. Otherwise, perform the following steps:
a) To create the application VIP, run the following as the root user:
$GRID_HOME/bin/appvipcfg create -network=1 \
-ip=10.1.41.93 \
-vipname=ggatevip \
-user=root
In the example:
• $GRID_HOME is the Oracle home in which Oracle 11g Release 2 Grid infrastructure components have been installed (for example: /u01/app/grid).
• network is the network number that you want to use. With Oracle Clusterware release 11.2.0.1, you can find the network number using the following command:
crsctl stat res -p | grep -ie .network -ie subnet | grep -ie name -ie subnet
Consider the following sample output:
NAME=ora.net1.network
USR_ORA_SUBNET=10.1.41.0
net1 in NAME=ora.net1.network indicates this is network 1, and the second line indicates the subnet on which the VIP will be created.
• ip is the IP address provided by your system administrator for the new Application VIP. This IP address must be in the same subnet as determined above.
• ggatevip is the name of the application VIP that you will create.
b) Run the following command to give the Oracle Database installation owner permission to start the VIP:
$GRID_HOME/bin/crsctl setperm resource ggatevip -u user:oracle:r-x
c) As the Oracle Database installation owner, start the VIP resource:
$GRID_HOME/bin/crsctl start resource ggatevip
d) To validate whether the VIP is running and on which node it is running, execute:
$GRID_HOME/bin/crsctl status resource ggatevip
See the Oracle Clusterware documentation for further details about creating an Application VIP:
http://download.oracle.com/docs/cd/E11882_01/rac.112/e10717/crschp.htm#BGBHIBEE
2. Create an agent script
Oracle Clusterware runs resource-specific commands through an entity called an agent. The agent script:
● Must be able to accept five parameter values: start, stop, check, clean and abort.
● Must be stored in the same location on all nodes. In this example, it is stored in the Grid Infrastructure home ($GRID_HOME) under the crs/script directory.
● Must be owned by the Oracle user and have execute permissions.
● Must be accessible at the same location on every node in the cluster.
● Must include environment variable settings for ORACLE_HOME, ORACLE_SID, PATH, TNS_ADMIN and LD_LIBRARY_PATH so that CRS will be able to find the correct program executables and Oracle Net configuration. If the correct sqlnet.ora, tnsnames.ora or dbfs_client executable cannot be found when CRS tries to mount DBFS, it will fail.
See the Appendix for an example agent script that mounts and unmounts a DBFS file system upon startup and failover, and starts and stops the GoldenGate Manager, Extract, Data Pump and Replicat processes.
It is important to manually test the agent script, starting and stopping the GoldenGate processes as well as mounting and unmounting DBFS, before moving on to the next step.
3. Register a resource in Oracle Clusterware
Register Oracle GoldenGate as a resource in Oracle Clusterware using the crsctl utility.
When using DBFS to store the GoldenGate trail and checkpoint files, there is a start dependency on the DBFS database. It is recommended to include the DBFS instance as the start dependency for the new Oracle Clusterware resource.
a. Use the Oracle Grid infrastructure user (oracle, in this example) to execute the following:
$GRID_HOME/bin/crsctl add resource ggateapp_ext \
-type cluster_resource \
-attr "ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/11gr2_gg_action.scr, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(ggappvip,ora.dbfs.db) pullup(ggappvip)', STOP_DEPENDENCIES='hard(ggappvip)', SCRIPT_TIMEOUT=300"
If an Application VIP is not used, issue the following command:
$GRID_HOME/bin/crsctl add resource ggateapp_ext \
-type cluster_resource \
-attr "ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/11gr2_gg_action.scr, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(ora.dbfs.db)', SCRIPT_TIMEOUT=300"
b. To determine the name of the DBFS resource for the start dependency:
crsctl status resource | grep -i dbfs
This paper assumes a single Oracle Exadata Database Machine is used for either a source (Extract) or target (Replicat) host. If the database machine is split into separate clusters such that the source and target run within the same database machine, then make sure the Extract and Replicat are restricted to the designated cluster nodes:
$GRID_HOME/bin/crsctl add resource ggateapp_ext \
-type cluster_resource \
-attr "ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/11gr2_gg_action.scr, CHECK_INTERVAL=30, START_DEPENDENCIES='hard(ora.dbfs.db)', HOSTING_MEMBERS='testbox03 testbox04', PLACEMENT='restricted', SCRIPT_TIMEOUT=300"
Be sure to add the appropriate start and stop dependencies if an Application VIP is used.
For more information about the crsctl add resource command and its options, see the Oracle Clusterware Administration and Deployment Guide at
http://download.oracle.com/docs/cd/E11882_01/rac.112/e10717/toc.htm
4. Start the resource
Once the resource has been added, you should always use Oracle Clusterware to start Oracle GoldenGate. Log in as the Oracle Grid infrastructure user (oracle) and execute the following:
$GRID_HOME/bin/crsctl start resource ggateapp_ext
To check the status of the application, enter the command:
$GRID_HOME/bin/crsctl status resource ggateapp_ext
For example:
[oracle@testbox04]$ crsctl status resource ggateapp_ext
NAME=ggateapp_ext
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on testbox04
5. Manage the application
To relocate Oracle GoldenGate onto a different cluster node, use the $GRID_HOME/bin/crsctl relocate resource command and include the force option. This command can be run on any node in the cluster as the Grid Infrastructure user (oracle).
For example:
[oracle@testbox04]$ crsctl relocate resource ggateapp_ext -f
CRS-2673: Attempting to stop 'ggateapp_ext' on 'testbox04'
CRS-2677: Stop of 'ggateapp_ext' on 'testbox04' succeeded
CRS-2672: Attempting to start 'ggateapp_ext' on 'testbox03'
CRS-2676: Start of 'ggateapp_ext' on 'testbox03' succeeded
To stop the Oracle GoldenGate resource, enter the following command:
$GRID_HOME/bin/crsctl stop resource ggateapp_ext
6. CRS Cleanup
To remove GoldenGate from Oracle Clusterware management, perform the following tasks:
a) Stop GoldenGate (log in as the Oracle Grid infrastructure (oracle) user):
$GRID_HOME/bin/crsctl stop resource ggateapp_ext
b) As the root user, delete the application ggateapp_ext:
$GRID_HOME/bin/crsctl delete resource ggateapp_ext
c) If no longer needed, delete the agent action script: 11gr2_gg_action.scr.
This does not delete the GoldenGate or DBFS configuration.
Note: Ensure the GoldenGate software was installed in the same directory on all nodes that may run the processes after a failure. Also ensure that the Manager, Extract, Data Pump, and Replicat parameter files are up to date on all nodes.
Recommendations When Deploying on Oracle RAC
When GoldenGate is configured in an Oracle RAC environment, follow these recommendations:
● Ensure the DBFS database has instances on all the database nodes involved in the Oracle RAC configuration.
This action provides access to GoldenGate if it is restarted after a node failure.
● Ensure the DBFS file system is mountable on all database nodes in the Oracle RAC configuration.
To prevent the Extract or Replicat processes from being started on multiple nodes concurrently, mount the file system only on the node where GoldenGate is running. Use the same mount point names on all the nodes to ensure seamless failover.
Appendix: Example Agent Script
The following example agent script mounts and unmounts a DBFS file system upon startup and failover, as well as starting and stopping the GoldenGate Manager, Extract and Replicat processes. The agent script accepts the parameter values: start, stop, check, clean and abort.
#!/bin/bash
#11gr2_gg_action.scr
# Edit the following environment variables:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/streams_0820
export ORACLE_SID=STRMSB1
GGS_HOME=/home/oracle/goldengate/latest
export PATH=$ORACLE_HOME/bin:$PATH
export TNS_ADMIN=$ORACLE_HOME/network/admin
#Include the GoldenGate home in the library path to start GGSCI
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib:${LD_LIBRARY_PATH}:${GGS_HOME}
# Edit this to indicate the DBFS mount point
DBFS_MOUNT_POINT=/dbfs_direct
# Edit this to indicate the file system mounted in the DBFS mount point
DBFS_FILE_SYSTEM=/dbfs_direct/goldengate
# Edit for correct Extract name if running this script on the source:
EXTRACT=EXT
# OR edit for current Replicat names if running script on the target:
REPLICAT=REP
#specify delay after start before checking for successful start
start_delay_secs=5
#check_process validates that the Manager/Extract/Replicat process is
#running at the PID that GoldenGate specifies.
check_process () {
PROCESS=$1
if [ ${PROCESS} = mgr ]
then
PFILE=MGR.pcm
elif [ ${PROCESS} = ext ]
then
PFILE=${EXTRACT}*.pce
else
PFILE=${REPLICAT}*.pcr
fi
if ( [ -f "${GGS_HOME}/dirpcs/${PFILE}" ] )
then
pid=`cut -f8 "${GGS_HOME}/dirpcs/${PFILE}"`
if [ ${pid} = `ps -e |grep ${pid} |grep ${PROCESS} |cut -d " " -f2` ]
then
#process is running on the PID - exit success
exit 0
else
if [ ${pid} = `ps -e |grep ${pid} |grep ${PROCESS} |cut -d " " -f1` ]
then
#process is running on the PID - exit success
exit 0
else
#process is not running on the PID
exit 1
fi
fi
else
#process is not running because there is no PID file
exit 1
fi
}
#call_ggsci is a generic routine that executes a ggsci command
call_ggsci () {
ggsci_command=$1
ggsci_output=`${GGS_HOME}/ggsci << EOF
${ggsci_command}
exit
EOF`
}
#mount_dbfs will mount the DBFS file system if it has not yet been
#mounted
mount_dbfs () {
if ( [ ! -d ${DBFS_FILE_SYSTEM} ] )
then
#this assumes the DBFS mount point was added to fstab
#will not mount automatically upon reboot because fuse does not
#support this; use Oracle wallet for automatic DBFS database login
mount ${DBFS_MOUNT_POINT}
fi
}
#unmount_dbfs will unmount the DBFS file system
unmount_dbfs () {
if ( [ -d ${DBFS_FILE_SYSTEM} ] )
then
fusermount -u ${DBFS_MOUNT_POINT}
fi
}
stop_everything () {
# Before starting, make sure everything is shutdown and process files are removed
#attempt a clean stop for all non-manager processes
call_ggsci 'stop er *'
#ensure everything is stopped
call_ggsci 'stop er *!'
#in case there are lingering processes
call_ggsci 'kill er *'
#stop Manager without (y/n) confirmation
call_ggsci 'stop manager!'
#Remove the process files:
rm -f $GGS_HOME/dirpcs/MGR.pcm
rm -f $GGS_HOME/dirpcs/*.pce
rm -f $GGS_HOME/dirpcs/*.pcr
}
case $1 in
'start')
#mount the DBFS file system if it is not yet mounted
mount_dbfs
# stop all GG processes and remove process files
stop_everything
sleep ${start_delay_secs}
#Now can start everything…
#start Manager
call_ggsci 'start manager'
#there is a small delay between issuing the start manager command
#and the process being spawned on the OS - wait before checking
sleep ${start_delay_secs}
#start Extract or Replicats
call_ggsci 'start er *'
#check whether Manager is running and exit accordingly
check_process mgr
sleep ${start_delay_secs}
#Check whether Extract is running
check_process ext
;;
'stop')
# stop all GG processes and remove process files
stop_everything
#unmount DBFS
unmount_dbfs
#exit success
exit 0
;;
'check')
check_process mgr
check_process ext
check_process rep
;;
'clean')
# stop all GG processes and remove process files
stop_everything
#unmount DBFS
unmount_dbfs
#exit success
exit 0
;;
'abort')
# stop all GG processes and remove process files
stop_everything
#unmount DBFS
unmount_dbfs
#exit success
exit 0
;;
esac
References
● Oracle GoldenGate Administration Guide version 11.1.1
● Oracle GoldenGate Oracle Installation and Setup Guide version 11.1.1
● Oracle GoldenGate Reference Guide version 11.1.1
● Oracle Database SecureFiles and Large Object Developer’s Guide (DBFS)
● Oracle Clusterware Administration and Deployment Guide
● Oracle Maximum Availability Architecture Web site http://www.otn.oracle.com/goto/maa
Oracle GoldenGate on Oracle Database Machine Configuration
May 2011
Author: Stephan Haisley
Contributing Authors: Mark Van de Wiel,
MAA team

Posted in EXADATA, GOLDENGATE | Tagged , , , | Leave a comment

INITIAL LOAD BY EXAMPLE

Performing a direct load between tables with different columns while filtering data.

REPLICAT
-----------------------------

First, create the parameter file for an initial-load Replicat:

=============================>>> REPLICAT <<<=============================
replicat repcarga
SETENV (ORACLE_SID = "COMP")
userid ggcomp@comp, password oracle
--handlecollisions
discardfile ./dirrpt/repcarga.dsc, append
SOURCEDEFS ./dirdef/defgen1.def
MAP hr.ale_col, TARGET hr_1.menos_col, COLMAP (dono=owner, nome=index_name, tipo=index_type, tabela=table_name);
==========================================================================

After creating the file, you will need to create the Replicat in GoldenGate using the SPECIALRUN parameter, which indicates that it will run only once and will perform the full load of the table based on the source Extract that we will see below.

ADD REPLICAT REPCARGA, SPECIALRUN

Note: an initial-load Replicat does not appear in the output of the info all command, and we also cannot start it ourselves, because the Extract will do that directly.

The command below can be used during the load to monitor the initial-load "SPECIALRUN" Replicat.

INFO REPLICAT *, TASKS
or
INFO REPLICAT REPCARGA, TASKS

EXTRACT
---------------------

The initial-load Extract does not use a pump and does not generate trail files; it automatically starts the Replicat on the target and sends the data directly to it. For this reason, the prm file contains target information such as: the IP address "RMTHOST", the port "MGRPORT" and the Replicat "GROUP".

The prm file of the initial-load Extract will look like this:

================================================================>>> EXTRACT <<<=====================================================================
EXTRACT extcarga
USERID ddl_ogg, PASSWORD ddl_ogg
RMTHOST 10.0.2.15, MGRPORT 7890
RMTTASK replicat, GROUP repcarga
TABLE hr.ale_col, FILTER (@STRFIND (OWNER, "HR") > 0), FILTER (@STRFIND (INDEX_TYPE, "NORMAL") > 0), FILTER (@STRFIND (TABLE_NAME, "EMPLOYEES") > 0);
======================================================================================================================================================

In the example above, we already apply a filter on three different columns during the initial load, making the load smaller and carrying only the necessary data to the target.

The command below, containing the SOURCEISTABLE parameter, creates the initial-load Extract in GoldenGate:

ADD EXTRACT EXTCARGA, SOURCEISTABLE
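The initial-load Extract can then be started from GGSCI, and its report file shows the progress and the number of rows processed:
GGSCI> START EXTRACT EXTCARGA
GGSCI> VIEW REPORT EXTCARGA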

After the Extract starts, we use the command below to monitor the initial-load "SOURCEISTABLE" Extract.

INFO EXTRACT EXTCARGA , TASKS
or
INFO EXTRACT *, TASKS

Posted in GOLDENGATE | Tagged , , , , | Leave a comment

MACRO TO MANAGE ERRORS THROUGH AN EXCEPTION TABLE

-- Replicat parameter file to apply changes
-- to tables

-- This starts the macro

MACRO #exception_handler
BEGIN
, TARGET ggs_admin.exceptions
, COLMAP ( rep_name = "RLOAD1"
, table_name = @GETENV ("GGHEADER", "TABLENAME")
, errno = @GETENV ("LASTERR", "DBERRNUM")
, dberrmsg = @GETENV ("LASTERR", "DBERRMSG")
, optype = @GETENV ("LASTERR", "OPTYPE")
, errtype = @GETENV ("LASTERR", "ERRTYPE")
, logrba = @GETENV ("GGHEADER", "LOGRBA")
, logposition = @GETENV ("GGHEADER", "LOGPOSITION")
, committimestamp = @GETENV ("GGHEADER", "COMMITTIMESTAMP"))
, INSERTALLRECORDS
, EXCEPTIONSONLY;
END;
-- END OF THE MACRO

REPLICAT RLOAD1
ASSUMETARGETDEFS
SETENV (ORACLE_SID=ORADB2)
USERID ggs_admin@oradb2, PASSWORD ggs_admin
DISCARDFILE ./dirrpt/rload.dsc, PURGE
REPERROR (DEFAULT, EXCEPTION)
REPERROR (DEFAULT2, ABEND)
REPERROR (-1, EXCEPTION)
MAP SCOTT.EMP, TARGET SCOTT.EMP;
MAP SCOTT.EMP #exception_handler()

--end of file

create table ggs_admin.exceptions
( rep_name varchar2(8)
, table_name varchar2(61)
, errno number
, dberrmsg varchar2(4000)
, optype varchar2(20)
, errtype varchar2(20)
, logrba number
, logposition number
, committimestamp timestamp
);

SQL> create tablespace MY_INDEXES
datafile '+data' size 200m;

ALTER TABLE ggs_admin.exceptions ADD (
CONSTRAINT PK_CTS
PRIMARY KEY
(logrba, logposition, committimestamp) USING INDEX PCTFREE 0 TABLESPACE MY_INDEXES
);

Posted in GOLDENGATE | Tagged , , , , | Leave a comment

Conflict management with GoldenGate

Oracle GoldenGate
Best Practice – Oracle GoldenGate Conflict Management
Version 1
Date: November 11, 2011
Ananth R. Tiru
Senior Solutions Architect
Center of Excellence
Table of Contents
Introduction
Requirements
Methodology
Approach
Implementation
Create Users
Create Tables
Add Supplemental Logging
Configure Extract, Replicat and Stored Procedure
Creating and starting extract on the source
Creating and starting replicat on target
ETAB.PRM – Implementation highlights
RTAB.prm – Implementation highlights
cm_procedure.sql – Implementation highlights
Testing
Update Tests
Insert Tests
Delete Tests
Recommendations
Conclusion
Key Reviewers
Appendix
Extract Param file – Etab.prm
Replicat Param file – Rtab.prm
Conflict Handling Stored Procedure – cm_procedure
Introduction:
In a multi-master database environment where the occurrence of conflicts is possible during replication, it is highly desirable to have a conflict management scheme which is enforced as an exception rather than as a norm. Typically, the chances of conflicts occurring during replication are low and hence, checking for conflicts every time before a record is applied on the target is not efficient. However, at the same time it is highly important to ensure that when a conflicting scenario is encountered it is handled in the right manner.
The goal of this document is to illustrate a generic approach which would handle conflicts as an exception. The approach eliminates the need to check for conflict every time before a record is applied to the target table. It will check and handle conflicts only if an exception occurs when attempting to apply a record on the target table.
The approach described is the key focus of this document. It illustrates the configuration and the procedure for handling conflicts as an exception using OGG. The sample implementation is provided to illustrate the approach and is specific to the requirements listed in this document and to the Oracle database. Depending on the actual requirements and use cases, a different implementation should be pursued. It is therefore highly recommended to evaluate the implementation provided in this document from functional, performance, and maintenance perspectives before adopting it in an actual use case.
Requirements:
It is highly desirable to have a robust conflict management mechanism which addresses the following requirements for a two-node multi-master database environment.
• Identify and handle conflicts due to inserts
o Potentially the same record could be inserted on the source and on the target. During replication it is required to keep the record with the latest timestamp. The rejected record should be moved to an exception table for auditing purposes.
• Identify and handle conflicts due to updates
o Potentially the same record could be modified on the source and on the target. During replication it is required to keep the record with the latest timestamp. The rejected record should be moved to an exception table for auditing purposes.
o Potentially the record modified on the source could have been deleted on the target. During replication the record should be moved to an exception table.
o The PK value of the record could be modified. If the PK is modified, the rules listed above apply.
• Identify and handle conflicts due to deletes
o Potentially, when propagating deletes from source to target, the record on the target could have already been deleted. During replication it is required to move the record to an exception table for auditing purposes.
Methodology:
Given the possibility of conflicts occurring during various database operations, and the desired resolution when these conflicts arise, the approach illustrated below provides a way to deal with conflicts as an exception which addresses all aspects of the requirements listed above.
OGG will always assume that there is no conflict when applying a record on the target and will rely on the DB to throw an exception when there is a conflict. The exception will then be handled based on the above requirements.
Approach:
• Use KEYCOLS in both the extract and the replicat. Include the table primary key(s) and the timestamp column in the KEYCOLS.
o By doing this OGG will automatically include the before image of the record, which will be used by the replicat to identify the record on the target table. If the before image matches the current image on the target table, replicat will apply the record. However, if the target record has changed, which is typically indicated by the timestamp column, replicat will not be able to find the record and an exception will be thrown.
o Note, supplemental logging must be turned on to include the timestamp in addition to the primary keys.
• Use multiple maps for a given source and target table. Typically, a table will have three map statements. The first map statement will attempt to apply the record; the second map statement will handle the DB exception that may be generated by the first map statement and invoke the conflict management logic. Finally, the third map statement will handle the exception, if any, from the second map statement and will insert the incoming record into the exception table.
• The map dealing with the DB exception can either invoke an external stored procedure or implement the logic in a SQLEXEC (within the replicat param file) to detect and resolve the conflict. The complexity of the stored procedure or the logic implemented in a SQLEXEC will depend on the requirements and use case.
Implementation:
The implementation illustrated below attempts to address the requirements using the approach described in a 2 node multi-master database environment.
Create Users:
The implementation was tested on a 2 node multi-master database with a source schema named atiru_east1 and a target schema named atiru_west1, with the password for both schemas set to 'oracle'. Two OGG users named atiru_gguser1 and atiru_gguser2 were created on the source and target, with the password for both users set to 'oracle'.
Create Tables
The following table, MMS_TEST, will be used as the sample table and is deployed in a 2 node multi-master database environment for which conflict management is desired. The table MMS_TEST_EXCEPTION is created to store records resulting from the conflict management strategy that is adopted based upon the requirements and use case.
Create table MMS_TEST
(CITY VARCHAR2(64),
CODE VARCHAR2(36),
INFORMATION VARCHAR2(500),
PAYLOAD BLOB,
create_timestamp number,
modify_date timestamp);
alter table MMS_TEST add constraint MD_CITY_PK primary key (CITY, CODE);
–Table to store the exception records
Create table MMS_TEST_EXCEPTION
(CITY VARCHAR2(64),
CODE VARCHAR2(36),
INFORMATION VARCHAR2(500),
PAYLOAD BLOB,
create_timestamp number,
modify_date timestamp);
Add Supplemental Logging
After successfully creating the table, turn on supplemental logging on both the source and target for the table MMS_TEST. The following box illustrates adding supplemental logging for the source table.
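The box itself is not reproduced here; the equivalent GGSCI commands, which also appear in the next section, are:
GGSCI> dblogin userid atiru_east1 password oracle
GGSCI> add trandata atiru_east1.mms_test, cols(modify_date)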
Configure Extract, Replicat and Stored Procedure
The following artifacts, which implement the approach described above, accompany this document. The appendix of this document also lists the code present in each of the following files.
o Etab.prm – Extract param file for the source.
o Rtab.prm – Replicat param file for the target.
o Cm_procedure.sql – Stored Procedure to resolve conflicts, invoked by replication when an exception occurs when applying the record on the target.
Modify the param files and cm_procedure to properly reflect the schema names in the actual environment. Import the cm_procedure into the target schema. Create and start the extract and replicat in the source and target OGG environments, respectively.
Using the extract and replicat param files provided, create another set of extract and replicat param files for replicating from the target to the source. Import the cm_procedure into the source schema. Register the extract and replicat and start them as described above. Note, performing this step is optional when the focus is getting a conceptual understanding of the approach and executing the test cases listed below, but it is mandatory in an actual production multi-master database scenario.
Creating and starting extract on the source
-- Log on to your source database.
GGSCI> dblogin userid atiru_east1 password oracle
-- This will add supplemental log data for columns city, code and modify_date on the source.
GGSCI> add trandata atiru_east1.mms_test, cols(modify_date)
-- Log into the source DB. Ensure the OGG user atiru_gguser1 is created on the source DB.
GGSCI> DBLOGIN USERID atiru_gguser1 PASSWORD oracle
--Configure an extract to read from the transaction logs.
GGSCI> ADD EXTRACT ETAB, TRANLOG, BEGIN NOW
--Configure a remote trail to which the extract will write.
GGSCI> ADD RMTTRAIL ./dirdat/ta, EXTRACT ETAB, MEGABYTES 5
--Start the extract.
GGSCI> start etab
Creating and starting replicat on target
-- Log into the target DB. Ensure the OGG user atiru_gguser2 is created on the target DB.
GGSCI> DBLOGIN USERID atiru_gguser2 PASSWORD oracle
-- Add a checkpoint table; use the GLOBALS file to specify the name of the checkpoint table.
GGSCI> add checkpointtable
-- Add the replicat on the target.
GGSCI> ADD REPLICAT RTAB, EXTTRAIL ./dirdat/ta
-- Start the replicat.
GGSCI> start rtab
ETAB.PRM – Implementation highlights
Some of the important points to be noted in the extract parameter file on the source (see the Appendix for the full listing):
--In a bidirectional replication configuration the following parameter ensures that the updates performed by the replicat logged on as 'atiru_gguser1' on the source DB will not be picked up by the extract.
TRANLOGOPTIONS EXCLUDEUSER atiru_gguser1
--Specify KEYCOLS and include the primary key and the timestamp column. Using KEYCOLS will facilitate conflict detection when applying the record on the target.
TABLE atiru_east1.mms_test, keycols (city, code, modify_date);
Note: Do not just specify the extra columns, because when a table has a primary key or unique index, the KEYCOLS specification will override them. Using KEYCOLS in this way ensures that before images are available for updates to the key or index columns.
RTAB.prm – Implementation highlights
Some of the important points to be noted in the replicat parameter file on the target (see the Appendix for the full listing):
Typically, for a given table that requires conflict management, there are three map statements. The first map statement assumes no conflicts and attempts to apply the DML on the target table. With the keycols specified, for an update operation the replicat will use the before image of the keycols to find the record on the target; if it is successful in finding the record, it will update it. However, if it cannot find the record, either because the record was updated on the target (in which case the before image of the incoming record will not match the record) or because the record was deleted, it will get an exception.
The second map statement will handle the exception thrown by the first map statement and will invoke the stored procedure. Based on the requirements, the stored procedure will implement an appropriate conflict management logic and return a value indicating whether the replicat should apply the record on the target or discard the record and throw an exception.
Notice that the second map statement uses a filter clause which evaluates the return value from the stored procedure and determines whether or not to apply the record. The filter will apply the incoming record to the target if the stored procedure returns a value equal to '1'; otherwise it will raise an exception.
The third map statement will handle the exception thrown by the second map statement and insert the incoming record into an exception table.
--First map statement for handling DML for the given table.
--Assume no conflicts and apply the record to the target.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test,
keycols (city, code, modify_date),
COLMAP ( USEDEFAULTS );
--Second map statement for handling DB exceptions, e.g. ORA-1403, thrown by the first map statement for the given table.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test,
EXCEPTIONSONLY,
SQLEXEC (SPNAME atiru_west1.cm_procedure, ID detect,
PARAMS (i_city=city, i_code=code, i_modify_date=modify_date, ib_city=before.city, ib_code=before.code, ib_modify_date=before.modify_date, i_opcode=@getenv ("GGHEADER", "OPTYPE")),
EXEC SOURCEROW,
BEFOREFILTER),
FILTER ((@getval (detect.o_result) = 1),
RAISEERROR 9999),
keycols (city, code),
COLMAP (USEDEFAULTS);
--Third map statement for handling the exception thrown by the second map statement as a result of the conflict management logic.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test_exception,
INSERTALLRECORDS,
EXCEPTIONSONLY,
COLMAP (USEDEFAULTS);
cm_procedure.sql – Implementation highlights
The implementation of the stored procedure invoked by the map statement to handle conflicts is highly dependent on the requirements. Also, depending on the requirements, implementing the logic in a SQLEXEC statement within the replicat param file can be considered. Based on the requirements listed above, it was deemed appropriate to use a stored procedure to implement the conflict management logic.
The procedure takes the current and before values of the primary key(s) and timestamp, and the database operation type, and returns either '0' or '1' to indicate to replicat whether to reject or apply the record, respectively.
First the procedure checks for the existence of the record in the target. If the record is found it continues with the processing. However, if the record is not found it handles the exception thrown by the database and, depending upon the DML, returns an appropriate value. For example, if the database operation is 'update' and the record is not found, it returns '0'.
CREATE OR REPLACE PROCEDURE "ATIRU_WEST1"."CM_PROCEDURE" (
o_result OUT NUMBER,
i_opcode IN VARCHAR2,
i_city IN VARCHAR2,
i_code IN VARCHAR2,
i_modify_date IN TIMESTAMP,
ib_city IN VARCHAR2 := NULL,
ib_code IN VARCHAR2 := NULL,
ib_modify_date IN TIMESTAMP := NULL
)
Then the procedure checks for a conflict based on the timestamp column and, per the requirement to retain the latest timestamp, sets the return value to either '0' or '1':
IF i_opcode='UPDATE' OR i_opcode = 'INSERT' THEN
SELECT * into mms_record
FROM mms_test
WHERE city=i_city
AND code=i_code for update;
END IF;
IF i_opcode='PK UPDATE' THEN
SELECT * INTO mms_record
FROM mms_test
WHERE city=ib_city
AND code=ib_code for UPDATE;
END IF;
t_last_change_ts := mms_record.modify_date;
IF i_opcode ='UPDATE' OR i_opcode ='PK UPDATE' OR i_opcode='INSERT' THEN
IF t_last_change_ts IS NULL THEN
o_result := 1;
ELSE
IF sys_extract_utc(i_modify_date) >= sys_extract_utc(t_last_change_ts) THEN
o_result := 1;
dbms_output.put_line('record will be processed by replicat');
ELSE
o_result := 0;
dbms_output.put_line('record will not be processed by replicat');
END IF;
END IF;
END IF;
EXCEPTION
when TOO_MANY_ROWS then
o_result :=0; return;
when NO_DATA_FOUND then
IF i_opcode ='UPDATE' OR i_opcode ='PK UPDATE' THEN
o_result := 0;
ELSE --insert
o_result := 1;
END IF;
Finally, it logs the record that will be replaced into the exception table and returns.
IF o_result = 1 THEN
IF i_opcode='UPDATE' or i_opcode='PK UPDATE' THEN
INSERT INTO mms_test_exception
values mms_record;
END IF;
IF i_opcode='INSERT' THEN
DELETE FROM mms_test WHERE city=i_city AND code=i_code;
END IF;
END IF;
Testing:
All the tests for inserts, updates and deletes can be simulated and verified on the replication flow from source to target. It is not required to have the replication from target to source to conduct the test. When conducting the tests it is assumed that the modify_date column is updated every time the record is modified.
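As a sketch of that assumption (this trigger is illustrative and not part of the original configuration; in a bidirectional setup the replicat session must suppress such triggers, for example with the Replicat parameter DBOPTIONS SUPPRESSTRIGGERS, so that the replicated timestamp is not overwritten):
-- Illustrative trigger to keep modify_date current on local changes
CREATE OR REPLACE TRIGGER atiru_east1.mms_test_ts
BEFORE INSERT OR UPDATE ON atiru_east1.mms_test
FOR EACH ROW
BEGIN
:NEW.modify_date := SYSTIMESTAMP;
END;
/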
Update Tests:
o Update a record on source and validate it gets replicated.
o Update a record on the target, update the same record on the source. Validate the source record is replicated to the target and the replaced record on the target is moved to the exception table.
o Stop the replicat ‘RTAB’. Update the record on the source, update the same record on the target. Start the replicat ‘RTAB’. Validate the target record is retained and the record from the source is moved to the exception table.
o Delete a record on the target. Update the same record on the source. Validate the record from the source is moved to the exception table.
o Perform all the above tests with PK update.
Insert Tests:
o Insert a record on source and validate it gets replicated.
o Insert a record on the target. Insert the same record (having the same keys) on the source. Validate the source record is replicated to the target and the replaced record on the target is moved to the exception table.
Delete Tests:
o On the source, delete a record that exists on both the source and the target. Validate the record is deleted on the target.
o Delete a record on the target. Delete the same record on the source. Validate the record from the source is moved to the exception table.
Recommendations:
The exception table can be enhanced, and additional code can be added in the replicat param file, to capture additional information about the records that were rejected because of conflict management rules. This information can be used to identify the cause of the conflicts and determine the actions that can be taken to minimize them.
Conclusion:
The key goal of this best practice document was to illustrate an approach to deal with conflict management during replication in an efficient manner. Using the approach, an implementation was presented to address a set of requirements for handling conflicts. It should be noted that while the scope of the implementation will depend upon the requirements, the approach presented provides an alternative and efficient method to build such implementations.
When constructing your own conflict management solution it is important to consider the consequences of "choosing" one DML operation over another based solely on primary key and timestamp. It is possible that different columns are updated for the same PK on both the source and target system. When this occurs, choosing one update over the other could result in some information loss. In our example that risk is not addressed, as the stated requirement was to use all the values from the DML operation with the most current timestamp.
Key Reviewers
G. Allen Pearson
Director, CoE
Steve George
CMTS
Appendix:
Extract Param file – Etab.prm
EXTRACT ETAB
USERID atiru_gguser1, PASSWORD oracle
--Exclude records updated by replicat
TRANLOGOPTIONS EXCLUDEUSER atiru_gguser1
--Comment out the following if not using ASM
TRANLOGOPTIONS ASMUSER sys@asm1, ASMPASSWORD coe123
--Turn Bounded Recovery off; if required, turn it on.
BR BROFF
RMTHOST localhost, MGRPORT 16000
RMTTRAIL ./dirdat/ta
TABLE atiru_east1.mms_test, keycols (city, code, modify_date);
Replicat Param file – Rtab.prm
REPLICAT RTAB
USERID atiru_gguser2, PASSWORD oracle
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/RTAB.DSC, PURGE
ALLOWDUPTARGETMAP
REPERROR (7777, DISCARD)
REPERROR (9999, EXCEPTION)
REPERROR (DEFAULT, EXCEPTION)
--Assume no conflicts and map DML to the target.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test,
keycols (city, code, modify_date),
COLMAP ( USEDEFAULTS );
--Handle DB exceptions, e.g. ORA-1403, thrown by the above map. Perform conflict management.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test,
EXCEPTIONSONLY,
SQLEXEC (SPNAME atiru_west1.cm_procedure, ID detect,
PARAMS (i_city=city, i_code=code, i_modify_date=modify_date, ib_city=before.city, ib_code=before.code, ib_modify_date=before.modify_date, i_opcode=@getenv ("GGHEADER", "OPTYPE")),
EXEC SOURCEROW,
BEFOREFILTER),
FILTER ((@getval (detect.o_result) = 1),
RAISEERROR 9999),
keycols (city, code),
COLMAP (USEDEFAULTS);
--Handle exception as a result of conflict management logic.
MAP atiru_east1.mms_test, TARGET atiru_west1.mms_test_exception,
INSERTALLRECORDS,
EXCEPTIONSONLY,
COLMAP (USEDEFAULTS);
Conflict Handling Stored Procedure – cm_procedure
--------------------------------------------------------
-- DDL for Procedure CM_PROCEDURE
--------------------------------------------------------
set define off;
CREATE OR REPLACE PROCEDURE "ATIRU_WEST1"."CM_PROCEDURE" (
o_result OUT number,
i_opcode IN VARCHAR2,
i_city IN VARCHAR2,
i_code IN VARCHAR2,
i_modify_date IN TIMESTAMP,
ib_city IN VARCHAR2 := NULL,
ib_code IN VARCHAR2 := NULL ,
ib_modify_date IN TIMESTAMP :=NULL
)
IS
t_last_change_ts TIMESTAMP;
mms_record mms_test%rowtype;
BEGIN
t_last_change_ts := NULL;
-- If the DML is 'delete', this means the record is already deleted
-- on the target; hence return 0
-- so replicat will put this record into the exception table.
IF i_opcode ='DELETE' THEN
o_result :=0;
RETURN;
END IF;
-- Check if the record on the target exists.
-- If the record is not found, or more than one record is found,
-- handle it as an exception.
-- When more than one record is found,
-- return 0 so replicat will put the record into the exception
-- table.
-- If the record is not found, return 0 if the opcode is update or
-- pk update, so replicat will put the record in the exception table.
-- Return 1 if the opcode is insert, so replicat will process the
-- record appropriately.
-- If the record is found, continue processing and resolve the
-- conflict.
-- Return 0 if the incoming record timestamp is less than the target
-- timestamp; otherwise return 1.
-- When performing conflict detection and resolution on a record,
-- lock the record until the resolution is completed.
IF i_opcode='UPDATE' OR i_opcode = 'INSERT' THEN
SELECT * into mms_record
FROM mms_test
WHERE city=i_city
AND code=i_code for update;
END IF;
IF i_opcode='PK UPDATE' THEN
SELECT * INTO mms_record
FROM mms_test
WHERE city=ib_city
AND code=ib_code for UPDATE;
END IF;
t_last_change_ts := mms_record.modify_date;
IF i_opcode ='UPDATE' OR i_opcode ='PK UPDATE' OR i_opcode='INSERT'
THEN
IF t_last_change_ts IS NULL THEN
o_result := 1;
ELSE
IF sys_extract_utc(i_modify_date) >=
sys_extract_utc(t_last_change_ts)
THEN
o_result := 1;
dbms_output.put_line('record will be processed by replicat');
ELSE
o_result := 0;
dbms_output.put_line('record will not be processed by replicat');
END IF;
END IF;
END IF;
-- Perform logging of the target record.
-- In the case of update, log the rejected target record into an
-- exception table.
-- In the case of insert, delete the target record.
IF o_result = 1 THEN
IF i_opcode='UPDATE' or i_opcode='PK UPDATE' THEN
INSERT INTO mms_test_exception
values mms_record;
END IF;
IF i_opcode='INSERT' THEN
DELETE FROM mms_test WHERE city=i_city AND code=i_code;
END IF;
END IF;
--Handle the exception when more than one record is found or no
--record is found.
EXCEPTION
when TOO_MANY_ROWS then
o_result :=0; return;
when NO_DATA_FOUND then
IF i_opcode ='UPDATE' OR i_opcode ='PK UPDATE' THEN
dbms_output.put_line('exception');
o_result := 0;
ELSE --insert
o_result := 1;
END IF;
RETURN;
END cm_procedure;
/

Posted in GOLDENGATE | Tagged , , , , , , , , | Leave a comment

Bidirectional Replication with Conflict Resolution

Using Oracle GoldenGate (OGG) 11gR2 for Conflict Detection and Resolution (CDR) based on balance and timestamp in a bidirectional active-active configuration


In this article we will look at an example of a CDR implementation based on a balance column and a timestamp column in a bidirectional active-active OGG setup. I will build an active-active bidirectional OGG replication between two sites (RACD, RACDB), each having identical tables (test5.account, test5.seat_assignment). I will emphasize the requirements for a CDR implementation, outline CDR concepts, and illustrate a step-by-step CDR implementation, testing and troubleshooting. I will cover two cases:

  • Use the delta method for account balance CDR – an initial balance of 1000 will be simultaneously credited 200 on site B and debited 100 on site A. The result will be 1100 on both site A and site B.
  • Use the USEMIN timestamp method for seat booking CDR – a seat '2A' will be booked first by John Smith on site A and at about the same time will be booked by Pier Cardin on site B. The result will be that the earlier booking is kept on both site A and site B.

Starting with OGG 11gR2 there are built-in options in the MAP replicat parameter, such as COMPARECOLS and RESOLVECONFLICT, and in the TABLE extract parameter, such as GETBEFORECOLS, allowing easy, automatic, OGG-driven CDR, compared with the methods involving SQL or PL/SQL code invoked from SQLEXEC that were used for CDR in versions of OGG prior to 11gR2.

In an active-active bidirectional configuration we have:

  • OGG configured for replication from site A to site B and from site B to site A
  • Application that can access both site A and site B

In this article site A is RACD and site B is RACDB.

Due to the asynchronous nature of OGG, conflicts can occur if both sites update the same record at or near the same time. For CDR there are different methods in use, such as:

  • Latest timestamp – a timestamp column is added to the table, and in the case of two contending operations against the same record, each issued on a different site, the record corresponding to the operation with the latest timestamp wins the contest and persists in the database.
  • Earliest timestamp – a timestamp column is added to the table, and in the case of two contending operations against the same record, each issued on a different site, the record corresponding to the operation with the earliest timestamp wins the contest and persists in the database.
  • Balance – in the case of two contending operations against the same record, each issued on a different site, the value that persists in the database for the column where balance is used is the difference (after minus before) from the source added to the current column value on the target. That is, it adds the difference between the before and after values in the trail record from site A to the current value of the column in the target database on site B.
  • Site priority
  • Etc…
  • Any combination of the above methods.

OGG 11gR2 introduced facilities for automatic handling of

  • Latest timestamp – using USEMAX option in RESOLVECONFLICT
  • Earliest timestamp – using USEMIN option in RESOLVECONFLICT
  • Balance (delta) – using the USEDELTA option in RESOLVECONFLICT

In OGG versions prior to 11gR2, custom code using SQLEXEC was required in order to implement the CDR functionality.

For CDR to operate, the before image of the changed record is required in addition to the before image of the table key. Force Oracle to log the before image of a changed non-key column in the transaction log by issuing ADD TRANDATA <tablename>, COLS (<changed columns>).
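For the two example tables in this article, a sketch of that command in GGSCI (run on both sites):
GGSCI> ADD TRANDATA test5.account, COLS (balance)
GGSCI> ADD TRANDATA test5.seat_assignment, COLS (latest_timestamp)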

The extract capture for CDR should include the following

  1. Force extract to capture the before image using GETBEFORECOLS in the TABLE parameter.
  2. Use NOCOMPRESSDELETES and NOCOMPRESSUPDATES in the extract parameter file so that extract writes the full record to the trail instead of only the changed columns (see the sketch after this list).
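A minimal sketch of a source extract parameter file combining these two points (the extract name, user, host and trail are illustrative):
EXTRACT extcdr
USERID gguser, PASSWORD gguser
NOCOMPRESSDELETES
NOCOMPRESSUPDATES
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/cd
TABLE test5.account, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);
TABLE test5.seat_assignment, GETBEFORECOLS (ON UPDATE ALL, ON DELETE ALL);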

For CDR replicat will be configured with mapping for:

  • A base table part of the replication configuration
  • An exception table corresponding to each replicated table, to store the record details only in the case of a conflict resolution or an error (similar to the way OGG REPERROR maps errors into an exception table using an exceptions MAP statement)

The CDR is handled first and the REPERROR is handled second. Thus, the exception table is populated in case of a CDR.

A replicat configured for CDR performs the following tasks in addition to the usual tasks of applying the records from the trail.

  1. Compares the before values of the record from the trail (per the MAP option COMPARECOLS) with the before values on the target, for updates, deletes, or both
  2. Uses the before images from the trail to calculate a value on the target in case of conflict if USEDELTA is used
  3. Uses the after images from the trail to calculate a value on the target in case of conflict if USEDELTA, USEMIN or USEMAX is used
  4. In case of a conflict, populates the table specified in the exceptions map for future reference (optional, if exceptions mapping is used)

The configuration is as follows.

Site A is database RACD and site B is database RACDB. The tables in the replication configuration are identical on both sites:

create table test5.seat_assignment (
id                      number(10) primary key,
passenger_name          varchar2(50),
latest_timestamp        timestamp,
flight_no               number(10),
seat_no                 varchar2(19),
flight_time             date);

create table test5.account (
account_id              number(10) primary key,
account_name            varchar2(50),
account_tel             varchar2(12),
account_address         varchar2(200),
balance                 number(10));

The exception tables, also created on both sites, hold the error details together with the current, before and after images of each conflicting record:

create table test5.seat_assignment_ex (
id_pk                   number(10) primary key,
res_date                date,
optype                  varchar2(100),
dberrnum                varchar2(100),
dberrmsge               varchar2(400),
tablename               varchar2(20),
id_curr                 number(10),
passenger_name_curr     varchar2(50),
latest_timestamp_curr   timestamp,
flight_no_curr          number(10),
seat_no_curr            varchar2(19),
flight_time_curr        date,
id_before               number(10),
passenger_name_before   varchar2(50),
latest_timestamp_before timestamp,
flight_no_before        number(10),
seat_no_before          varchar2(19),
flight_time_before      date,
id_after                number(10),
passenger_name_after    varchar2(50),
latest_timestamp_after  timestamp,
flight_no_after         number(10),
seat_no_after           varchar2(19),
flight_time_after       date);

create table test5.account_ex (
id_pk                   number(10) primary key,
res_date                date,
optype                  varchar2(100),
dberrnum                varchar2(100),
dberrmsge               varchar2(400),
tablename               varchar2(20),
account_id_curr         number(10),
account_name_curr       varchar2(50),
account_tel_curr        varchar2(12),
account_address_curr    varchar2(200),
balance_curr            number(10),
account_id_before       number(10),
account_name_before     varchar2(50),
account_tel_before      varchar2(12),
account_address_before  varchar2(200),
balance_before          number(10),
account_id              number(10),
account_name            varchar2(50),
account_tel             varchar2(12),
account_address         varchar2(200),
balance                 number(10));

The application accesses test5.seat_assignment and test5.account on both sites. The remaining components are distributed as follows:

                 Site A          Site B
Database         RACD            RACDB
Extract          EXTBI1          EXTBI2
Replicat         REPBI2          REPBI1
Trail file       ./dirdat/3y     ./dirdat/3z

An Oracle sequence generates the key values used in the exception table maps (see the replicat parameter files). Site A generates odd values and site B even values, so the exception table keys can never collide:

-- Site A
CREATE SEQUENCE test4.exception START WITH 1 INCREMENT BY 2 CACHE 30000;

-- Site B
CREATE SEQUENCE test4.exception START WITH 2 INCREMENT BY 2 CACHE 30000;

The data flow is as follows:

              Source database   Extract   Trail         Replicat   Target database
From A to B   RACD              EXTBI1    ./dirdat/3z   REPBI1     RACDB
From B to A   RACDB             EXTBI2    ./dirdat/3y   REPBI2     RACD

The column test5.seat_assignment.latest_timestamp is instrumental for the USEMIN mode CDR. If there is no conflict, the data from the trail is applied as usual. In case of a conflict, the after image of this column from the trail file is compared to the current value on the target; the earliest timestamp wins, and the change record with the earliest after image of test5.seat_assignment.latest_timestamp is the one that persists on the target. Note that, as DEFAULT is used, the resolution applies to all columns.
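Under the hood, the replicat enforces USEMIN by qualifying the UPDATE it issues with a timestamp predicate; the generated SQL captured in the exception table later in this article looks roughly like this:

UPDATE "TEST5"."SEAT_ASSIGNMENT"
   SET "PASSENGER_NAME" = :a1, "LATEST_TIMESTAMP" = :a2
 WHERE "ID" = :b0
   AND "LATEST_TIMESTAMP" > :b1;

If the row on the target already holds an earlier timestamp, the predicate matches no rows, the incoming change loses the conflict, and the exceptions mapping records the event.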

The column test5.account.balance is instrumental for the USEDELTA mode CDR. If there is no conflict, the data from the trail is applied as usual. In case of a conflict, the difference between the before and after values of the test5.account.balance column in the trail record is added to the current value of the test5.account.balance column in the target database. The remaining columns are overwritten.
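As a worked example (matching the test performed below): both sites start with a balance of 1000; site A commits 1200 (a delta of +200) while site B concurrently commits 900 (a delta of -100). Each replicat adds the incoming delta to its local current value: on A, 1200 + (900 - 1000) = 1100; on B, 900 + (1200 - 1000) = 1100. Both sites converge to 1100.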

Implementing bidirectional active-active OGG replication with CDR includes the following steps, performed in order:

  1. Prepare Oracle to log the before and after images of all columns (not only keys and changed data) in the redo log for the tables in the replication (test5.seat_assignment, test5.account)
  2. Configure replication from RACD to RACDB
  3. Configure replication from RACDB to RACD
  4. Test the USEMIN and USEDELTA CDR methods

Prepare Oracle to log data for all table columns

For each site perform the following

dblogin userid ogg_extract@<racd|racdb>, password ogg_extract

add trandata test5.seat_assignment, cols(passenger_name, latest_timestamp, flight_no, seat_no, flight_time)

add trandata test5.account, cols(account_name,account_tel,account_address,balance)

Use info trandata to verify the result. Note that DDLOPTIONS ADDTRANDATA, used with its defaults and without a column specification, does not add all columns.

GGSCI (raclinux1.gj.com) 15> dblogin userid ogg_extract@racd, password ogg_extract

Successfully logged into database.

GGSCI (raclinux1.gj.com) 16> info trandata TEST5.ACCOUNT

Logging of supplemental redo log data is enabled for table TEST5.ACCOUNT.

Columns supplementally logged for table TEST5.ACCOUNT: ACCOUNT_ID, ACCOUNT_NAME, ACCOUNT_TEL, ACCOUNT_ADDRESS, BALANCE.

GGSCI (raclinux1.gj.com) 17> info trandata test5.seat_assignment

Logging of supplemental redo log data is enabled for table TEST5.SEAT_ASSIGNMENT.

Columns supplementally logged for table TEST5.SEAT_ASSIGNMENT: ID, PASSENGER_NAME, LATEST_TIMESTAMP, FLIGHT_NO, SEAT_NO, FLIGHT_TIME.

GGSCI (raclinux1.gj.com) 18>

Failure to add supplemental logging for all columns will result in:

2012-11-07 15:41:30 ERROR OGG-01920 Missing COMPARECOLS column ACCOUNT_NAME in before image, while mapping to target table TEST5.ACCOUNT. Add the column to GETBEFORECOLS.

Configure replication from RACD to RACDB

Create an extract, trail and replicat

GGSCI (raclinux1.gj.com) 126> add extract extbi1, tranlog, begin now, threads 2

EXTRACT added.

GGSCI (raclinux1.gj.com) 127>

GGSCI (raclinux1.gj.com) 130> add rmttrail ./dirdat/3z, extract extbi1, megabytes 30

RMTTRAIL added.

GGSCI (raclinux1.gj.com) 131>

GGSCI (raclinux1.gj.com) 133> add replicat repbi1, exttrail ./dirdat/3z

REPLICAT added.

Pay attention to the parameter files

GGSCI (raclinux1.gj.com) 3> view params extbi1

extract extbi1
SETENV (ORACLE_SID = "RACDB1")
tranlogoptions asmuser sys@ASM, asmpassword sys1
userid ogg_extract1@racd, password ogg_extract1
report at 10:00
reportcount every 10 minutes, rate
reportrollover on friday
nocompressupdates -- Include the full record
nocompressdeletes -- Include the full record
tranlogoptions excludeuser ogg_replicat -- Exclude the replicat user
rmthost raclinux1, mgrport 7809
rmttrail ./dirdat/3z

table test5.seat_booking;
table test5.seat_assignment, getbeforecols(on update all, on delete all);
table test5.account, getbeforecols(on update all, on delete all);

GGSCI (raclinux1.gj.com) 4>

GGSCI (raclinux1.gj.com) 4> view params repbi1

replicat repbi1
--maxtransops 1
--TRANSACTIONTIMEOUT 5 S
reperror(default,exception) -- For exception mapping
reperror(default2,discard) -- For test5.seat_booking, as is, without exceptions map
SETENV (ORACLE_SID = "RACDB1")
userid ogg_replicat@racdb, password ogg_replicat
assumetargetdefs
report at 10:00
reportcount every 10 minutes, rate
reportrollover on friday
discardfile ./dirrpt/repbi1.dsc, purge

map test5.seat_booking, target test5.seat_booking;

map test5.seat_assignment, target test5.seat_assignment,
comparecols( on update all, on delete all),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMIN (latest_timestamp))),
RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMIN (latest_timestamp))),
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));

map test5.seat_assignment, target test5.seat_assignment_ex,
exceptionsonly,
insertallrecords,
sqlexec(id seq1, query " select test4.exception.nextval from dual ", NOPARAMS),
sqlexec(id query_val1, query " select ID, PASSENGER_NAME, LATEST_TIMESTAMP, FLIGHT_NO, SEAT_NO, FLIGHT_TIME from test5.seat_assignment where id = :var_id ", PARAMS(var_id=id)),
colmap (
usedefaults,
id_pk = seq1.test4.exception.nextval,
res_date = @DATENOW(),
-- captures and maps the DML operation type
optype = @GETENV("LASTERR", "OPTYPE"),
-- captures and maps the database error number that was returned
dberrnum = @GETENV("LASTERR", "DBERRNUM"),
-- captures and maps the database error message that was returned
dberrmsge = @GETENV("LASTERR", "DBERRMSG"),
-- captures and maps the name of the target table
tablename = @GETENV("GGHEADER", "TABLENAME"),
id_curr = @GETVAL(query_val1.id),
passenger_name_curr = @GETVAL(query_val1.passenger_name),
latest_timestamp_curr = @GETVAL(query_val1.latest_timestamp),
flight_no_curr = @GETVAL(query_val1.flight_no),
seat_no_curr = @GETVAL(query_val1.seat_no),
flight_time_curr = @GETVAL(query_val1.flight_time),
id_before = before.id,
passenger_name_before = before.passenger_name,
latest_timestamp_before = before.latest_timestamp,
flight_no_before = before.flight_no,
seat_no_before = before.seat_no,
flight_time_before = before.flight_time,
id_after = id,
passenger_name_after = passenger_name,
latest_timestamp_after = latest_timestamp,
flight_no_after = flight_no,
seat_no_after = seat_no,
flight_time_after = flight_time);

map test5.account, target test5.account,
comparecols( on update all, on delete all),
RESOLVECONFLICT (UPDATEROWEXISTS, (delta_calc, USEDELTA, cols(balance)), (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));

map test5.account, target test5.account_ex,
exceptionsonly,
insertallrecords,
sqlexec(id seq2, query " select test4.exception.nextval from dual ", NOPARAMS),
sqlexec(id current_val2, query " select ACCOUNT_ID, ACCOUNT_NAME, ACCOUNT_TEL, ACCOUNT_ADDRESS, BALANCE from test5.account where account_id = :var_id ", PARAMS(var_id=account_id)),
colmap (
usedefaults,
id_pk = seq2.test4.exception.nextval,
res_date = @DATENOW(),
-- captures and maps the DML operation type
optype = @GETENV("LASTERR", "OPTYPE"),
-- captures and maps the database error number that was returned
dberrnum = @GETENV("LASTERR", "DBERRNUM"),
-- captures and maps the database error message that was returned
dberrmsge = @GETENV("LASTERR", "DBERRMSG"),
-- captures and maps the name of the target table
tablename = @GETENV("GGHEADER", "TABLENAME"),
account_id_curr = current_val2.account_id,
account_name_curr = current_val2.account_name,
account_tel_curr = current_val2.account_tel,
account_address_curr = current_val2.account_address,
balance_curr = current_val2.balance,
account_id_before = before.account_id,
account_name_before = before.account_name,
account_tel_before = before.account_tel,
account_address_before = before.account_address,
balance_before = before.balance
--account_id = account_id
--account_name = account_name
--account_tel = account_tel
--account_address = account_address
--balance = balance
);

GGSCI (raclinux1.gj.com) 5>

Configure replication from RACDB to RACD

Create an extract, trail and replicat

GGSCI (raclinux1.gj.com) 16>

GGSCI (raclinux1.gj.com) 18> add extract extbi2, tranlog, begin now, threads 2

EXTRACT added.

GGSCI (raclinux1.gj.com) 19> add rmttrail ./dirdat/3y, extract extbi2, megabytes 30

RMTTRAIL added.

GGSCI (raclinux1.gj.com) 20>

GGSCI (raclinux1.gj.com) 21> add replicat repbi2, exttrail ./dirdat/3y

REPLICAT added.

GGSCI (raclinux1.gj.com) 22>

Pay attention to the parameter files

GGSCI (raclinux1.gj.com) 5> view params extbi2

extract extbi2
SETENV (ORACLE_SID = "RACDB1")
tranlogoptions asmuser sys@ASM, asmpassword sys1
userid ogg_extract1@racdb, password ogg_extract1
report at 10:00
reportcount every 10 minutes, rate
reportrollover on friday
nocompressupdates -- Include the full record
nocompressdeletes -- Include the full record
tranlogoptions excludeuser ogg_replicat -- Exclude the replicat user
rmthost raclinux1, mgrport 7809
rmttrail ./dirdat/3y

table test5.seat_booking;
table test5.seat_assignment, getbeforecols(on update all, on delete all);
table test5.account, getbeforecols(on update all, on delete all);

GGSCI (raclinux1.gj.com) 6>

GGSCI (raclinux1.gj.com) 6> view params repbi2

replicat repbi2
--maxtransops 1
--TRANSACTIONTIMEOUT 5 S
reperror(default,exception) -- For exception mapping
reperror(default2,discard) -- For test5.seat_booking, as is, without exceptions map
SETENV (ORACLE_SID = "RACDB1")
userid ogg_replicat@racd, password ogg_replicat
assumetargetdefs
report at 10:00
reportcount every 10 minutes, rate
reportrollover on friday
discardfile ./dirrpt/repdown.dsc, purge

map test5.seat_booking, target test5.seat_booking;

map test5.seat_assignment, target test5.seat_assignment,
comparecols( on update all, on delete all),
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMIN (latest_timestamp))),
RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMIN (latest_timestamp))),
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));

map test5.seat_assignment, target test5.seat_assignment_ex,
exceptionsonly,
insertallrecords,
sqlexec(id seq1, query " select test4.exception.nextval from dual ", NOPARAMS),
sqlexec(id query_val1, query " select ID, PASSENGER_NAME, LATEST_TIMESTAMP, FLIGHT_NO, SEAT_NO, FLIGHT_TIME from test5.seat_assignment where id = :var_id ", PARAMS(var_id=id)),
colmap (
usedefaults,
id_pk = seq1.test4.exception.nextval,
res_date = @DATENOW(),
-- captures and maps the DML operation type
optype = @GETENV("LASTERR", "OPTYPE"),
-- captures and maps the database error number that was returned
dberrnum = @GETENV("LASTERR", "DBERRNUM"),
-- captures and maps the database error message that was returned
dberrmsge = @GETENV("LASTERR", "DBERRMSG"),
-- captures and maps the name of the target table
tablename = @GETENV("GGHEADER", "TABLENAME"),
id_curr = @GETVAL(query_val1.id),
passenger_name_curr = @GETVAL(query_val1.passenger_name),
latest_timestamp_curr = @GETVAL(query_val1.latest_timestamp),
flight_no_curr = @GETVAL(query_val1.flight_no),
seat_no_curr = @GETVAL(query_val1.seat_no),
flight_time_curr = @GETVAL(query_val1.flight_time),
id_before = before.id,
passenger_name_before = before.passenger_name,
latest_timestamp_before = before.latest_timestamp,
flight_no_before = before.flight_no,
seat_no_before = before.seat_no,
flight_time_before = before.flight_time,
id_after = id,
passenger_name_after = passenger_name,
latest_timestamp_after = latest_timestamp,
flight_no_after = flight_no,
seat_no_after = seat_no,
flight_time_after = flight_time);

map test5.account, target test5.account,
comparecols( on update all, on delete all),
RESOLVECONFLICT (UPDATEROWEXISTS, (delta_calc, USEDELTA, cols(balance)), (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));

map test5.account, target test5.account_ex,
exceptionsonly,
insertallrecords,
sqlexec(id seq2, query " select test4.exception.nextval from dual ", NOPARAMS),
sqlexec(id current_val2, query " select ACCOUNT_ID, ACCOUNT_NAME, ACCOUNT_TEL, ACCOUNT_ADDRESS, BALANCE from test5.account where account_id = :var_id ", PARAMS(var_id=account_id)),
colmap (
usedefaults,
id_pk = seq2.test4.exception.nextval,
res_date = @DATENOW(),
-- captures and maps the DML operation type
optype = @GETENV("LASTERR", "OPTYPE"),
-- captures and maps the database error number that was returned
dberrnum = @GETENV("LASTERR", "DBERRNUM"),
-- captures and maps the database error message that was returned
dberrmsge = @GETENV("LASTERR", "DBERRMSG"),
-- captures and maps the name of the target table
tablename = @GETENV("GGHEADER", "TABLENAME"),
account_id_curr = current_val2.account_id,
account_name_curr = current_val2.account_name,
account_tel_curr = current_val2.account_tel,
account_address_curr = current_val2.account_address,
balance_curr = current_val2.balance,
account_id_before = before.account_id,
account_name_before = before.account_name,
account_tel_before = before.account_tel,
account_address_before = before.account_address,
balance_before = before.balance
--account_id = account_id
--account_name = account_name
--account_tel = account_tel
--account_address = account_address
--balance = balance
);

GGSCI (raclinux1.gj.com) 7>

Testing CDR

Make sure that the extracts and the replicats are running.

Testing BALANCE (USEDELTA)

The test consists of a series of SQL statements performed in the following order on RACD and RACDB.

Step 1 – on RACD:

16:27:04 SQL> insert into test5.account values(1,'Smith','555-555-5555','1234 Some street name',1000);

commit;

1 row created.

Elapsed: 00:00:00.01

16:27:10 SQL>

Commit complete.

Elapsed: 00:00:00.02

16:27:12 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1000

street nam

e

Elapsed: 00:00:00.00

16:27:21 SQL>

Step 2 – on RACDB:

16:27:08 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1000

street nam

e

Elapsed: 00:00:00.00

16:27:27 SQL>

16:28:20 SQL> insert into test5.account values(2,'Jones','666-666-6666','5678 Some street name',2000);

commit;

1 row created.

Elapsed: 00:00:00.00

16:28:20 SQL>

Commit complete.

Elapsed: 00:00:00.01

16:28:20 SQL>

16:28:21 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1000

street nam

e

2 Jones 666-666-6666 5678 Some 2000

street nam

e

Elapsed: 00:00:00.00

16:28:27 SQL>

Step 3 – on RACD:

16:28:58 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1000

street nam

e

2 Jones 666-666-6666 5678 Some 2000

street nam

e

Elapsed: 00:00:00.01

16:29:00 SQL>

Step 4 – on RACD:

16:29:00 SQL> update test5.account set balance=1200 where account_id=1;

1 row updated.

Elapsed: 00:00:00.00

16:29:36 SQL> commit;

Commit complete.

Elapsed: 00:00:00.00

16:30:04 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1100

street nam

e

2 Jones 666-666-6666 5678 Some 2000

street nam

e

Elapsed: 00:00:00.00

16:30:10 SQL>

Meanwhile, on RACDB (concurrent with step 4):

16:28:27 SQL> update test5.account set balance=900 where account_id=1;

1 row updated.

Elapsed: 00:00:00.00

16:29:51 SQL> commit;

Commit complete.

Elapsed: 00:00:00.00

16:30:03 SQL> select * from account;

ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——————– ———— ———- ———-

1 Smith 555-555-5555 1234 Some 1100

street nam

e

2 Jones 666-666-6666 5678 Some 2000

street nam

e

Elapsed: 00:00:00.01

16:30:14 SQL>

16:30:48 SQL>

Starting from an initial balance of 1000, decreasing it by 100 to 900 on RACDB and increasing it by 200 to 1200 on RACD results in a balance of 1100 on both sites. This is correct, as 1100 = 1000 + 200 - 100.

You can use GGSCI stats to monitor what has happened; see the Appendix for the detailed output.
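The statistics shown here and in the Appendix come from the GGSCI stats command with the reportcdr option, for example:

GGSCI> stats repbi1 reportcdr

A snippet of its output: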

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

The exception table gets populated as well. On RACD I have

16:56:43 SQL> select * from account_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ACCOUNT_ID_CURR ACCOUNT_NAME_CURR ACCOUNT_TEL_ ACCOUNT_AD BALANCE_CURR ACCOUNT_ID_BEFORE ACCOUNT_NAME_BEFORE ACCOUNT_TEL_ ACCOUNT_AD BALANCE_BEFORE ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——— ———— ———— ———— ——————– ————— ——————– ———— ———- ———— —————– ——————– ———— ———- ————– ———- ——————– ———— ———- ———-

180007 07-NOV-12 SQL COMPUPDA 1403 TEST5.ACCOUNT 1 Smith 555-555-5555 1234 Some 1100 1 Smith 555-555-5555 1234 Some 1000 1 900

TE street nam street nam

e e

Elapsed: 00:00:00.01

16:57:08 SQL>

On RACDB I have

16:57:59 SQL> select * from account_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ACCOUNT_ID_CURR ACCOUNT_NAME_CURR ACCOUNT_TEL_ ACCOUNT_AD BALANCE_CURR ACCOUNT_ID_BEFORE ACCOUNT_NAME_BEFORE ACCOUNT_TEL_ ACCOUNT_AD BALANCE_BEFORE ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——— ———— ———— ———— ——————– ————— ——————– ———— ———- ———— —————– ——————– ———— ———- ————– ———- ——————– ———— ———- ———-

180001 07-NOV-12 SQL COMPUPDA 1403 TEST5.ACCOUNT 1 Smith 555-555-5555 1234 Some 1100 1 Smith 555-555-5555 1234 Some 1000 1 1200

TE street nam street nam

e e

Elapsed: 00:00:00.01

16:58:11 SQL>

Testing USEMIN

The test consists of a series of SQL statements performed in the following order on RACD and RACDB.

Step 1 – on RACD:

16:37:25 SQL>

16:39:05 SQL> insert into test5.seat_assignment values(1,'',current_timestamp,120,'1A',sysdate);

insert into test5.seat_assignment values(2,'',current_timestamp,120,'1B',sysdate);

insert into test5.seat_assignment values(3,'',current_timestamp,120,'1C',sysdate);

insert into test5.seat_assignment values(4,'',current_timestamp,120,'1D',sysdate);

insert into test5.seat_assignment values(5,'',current_timestamp,120,'1E',sysdate);

insert into test5.seat_assignment values(6,'',current_timestamp,120,'1F',sysdate);

commit;

1 row created.

Elapsed: 00:00:00.02

16:39:13 SQL>

1 row created.

Elapsed: 00:00:00.01

16:39:13 SQL>

1 row created.

Elapsed: 00:00:00.00

16:39:13 SQL>

1 row created.

Elapsed: 00:00:00.01

16:39:13 SQL>

1 row created.

Elapsed: 00:00:00.00

16:39:13 SQL>

1 row created.

Elapsed: 00:00:00.01

16:39:13 SQL>

Commit complete.

Elapsed: 00:00:00.00

16:39:13 SQL>

16:39:19 SQL> select count(*) from seat_assignment;

COUNT(*)

———-

6

Elapsed: 00:00:00.01

16:39:34 SQL>

Step 2 – on RACDB:

16:40:11 SQL> select count(*) from seat_assignment;

COUNT(*)

———-

6

Elapsed: 00:00:00.04

16:40:13 SQL> insert into test5.seat_assignment values(7,'',current_timestamp,120,'2A',sysdate);

insert into test5.seat_assignment values(8,'',current_timestamp,120,'2B',sysdate);

insert into test5.seat_assignment values(9,'',current_timestamp,120,'2C',sysdate);

insert into test5.seat_assignment values(10,'',current_timestamp,120,'2D',sysdate);

insert into test5.seat_assignment values(11,'',current_timestamp,120,'2E',sysdate);

insert into test5.seat_assignment values(12,'',current_timestamp,120,'2F',sysdate);

commit;

1 row created.

Elapsed: 00:00:00.00

16:40:25 SQL>

1 row created.

Elapsed: 00:00:00.00

16:40:25 SQL>

1 row created.

Elapsed: 00:00:00.02

16:40:25 SQL>

1 row created.

Elapsed: 00:00:00.00

16:40:25 SQL>

1 row created.

Elapsed: 00:00:00.01

16:40:25 SQL>

1 row created.

Elapsed: 00:00:00.02

16:40:25 SQL>

Commit complete.

Elapsed: 00:00:00.00

16:40:26 SQL>

16:40:27 SQL> select count(*) from seat_assignment

16:40:32 2 ;

COUNT(*)

———-

12

Elapsed: 00:00:00.01

16:40:34 SQL>
Step 3 – on RACD:

16:39:34 SQL> select count(*) from seat_assignment;

COUNT(*)

———-

12

Elapsed: 00:00:00.00

16:41:14 SQL>

Step 4 – on RACD:

16:51:57 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 07-NOV-12 04.40.25.481803 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.00

16:51:59 SQL> update test5.seat_assignment set PASSENGER_NAME='John Smith', LATEST_TIMESTAMP=current_timestamp where seat_no='2A' and FLIGHT_NO=120;

1 row updated.

Elapsed: 00:00:00.00

16:52:31 SQL> commit;

Commit complete.

Elapsed: 00:00:00.03

16:53:00 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.00

16:53:12 SQL>

Meanwhile, on RACDB (concurrent with step 4):

16:52:09 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 07-NOV-12 04.40.25.481803 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.01

16:52:12 SQL> update test5.seat_assignment set PASSENGER_NAME='Pier Cardin', LATEST_TIMESTAMP=current_timestamp where seat_no='2A' and FLIGHT_NO=120;

1 row updated.

Elapsed: 00:00:00.01

16:52:46 SQL> commit;

Commit complete.

Elapsed: 00:00:00.05

16:53:03 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.01

16:53:16 SQL>

Here the first committed change, for 'John Smith', wins against the later change for 'Pier Cardin'. Therefore the first record persists across both sites and the 'John Smith' change is visible on both sites.

Again, GGSCI stats reportcdr shows what has happened. See the Appendix for the detailed output; a snippet looks like this:

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT_EX:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

The exception table gets populated as well. On RACD I have

16:56:25 SQL> select * from seat_assignment_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

180009 07-NOV-12 SQL COMPUPDA 1403 OCI Error OR TEST5.SEAT_ASSIGNMEN 7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12 7 07-NOV-12 04.40.25.4 120 2A 07-NOV-12 7 Pier Cardin 07-NOV-12 04.52.46.481506 PM

TE A-01403: no 81803 PM

data found,

SQL <UPDATE

“TEST5″.”SEA

T_ASSIGNMENT

” SET “PASSE

NGER_NAME” =

:a1,”LATEST

_TIMESTAMP”

= :a2 WHERE

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

“ID” = :b0 A

ND “LATEST_T

IMESTAMP” >

:b1>

Elapsed: 00:00:00.01

16:56:43 SQL>

On RACDB I have

16:57:41 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.00

16:57:48 SQL> select * from seat_assignment_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

180003 07-NOV-12 SQL COMPUPDA 1403 TEST5.SEAT_ASSIGNMEN 7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12 7 07-NOV-12 04.40.25.4 120 2A 07-NOV-12 7 John Smith 07-NOV-12 04.52.31.618956 PM

TE 81803 PM

Elapsed: 00:00:00.00

16:57:59 SQL> select * from account_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ACCOUNT_ID_CURR ACCOUNT_NAME_CURR ACCOUNT_TEL_ ACCOUNT_AD BALANCE_CURR ACCOUNT_ID_BEFORE ACCOUNT_NAME_BEFORE ACCOUNT_TEL_ ACCOUNT_AD BALANCE_BEFORE ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——— ———— ———— ———— ——————– ————— ——————– ———— ———- ———— —————– ——————– ———— ———- ————– ———- ——————– ———— ———- ———-

180001 07-NOV-12 SQL COMPUPDA 1403 TEST5.ACCOUNT 1 Smith 555-555-5555 1234 Some 1100 1 Smith 555-555-5555 1234 Some 1000 1 1200

TE street nam street nam

e e

Elapsed: 00:00:00.01

16:58:11 SQL>

OGG replicat report files also provide CDR-related information, which looks like this:

From Table TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

# inserts: 29

# updates: 2

# deletes: 23

# discards: 0

# CDR conflicts : 1

# CDR resolutions succeeded : 1

# CDR UPDATEROWEXISTS conflicts : 1

From Table TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT_EX:

# inserts: 0

# updates: 1

# deletes: 0

# discards: 0

Stored procedure seq1:

attempts: 1

successful: 1

Stored procedure query_val1:

attempts: 1

successful: 1

From Table TEST5.ACCOUNT to TEST5.ACCOUNT:

# inserts: 1

# updates: 3

# deletes: 0

# discards: 0

# CDR conflicts : 2

# CDR resolutions succeeded : 2

# CDR UPDATEROWEXISTS conflicts : 2

From Table TEST5.ACCOUNT to TEST5.ACCOUNT_EX:

# inserts: 0

# updates: 2

# deletes: 0

# discards: 0

Stored procedure seq2:

attempts: 2

successful: 2

Stored procedure current_val2:

attempts: 2

successful: 2

Summary

This article showed a way to set up bidirectional active-active OGG replication implementing CDR with the timestamp-based USEMIN method and the USEDELTA (balance) method.

Appendix

After DELTA test

GGSCI (raclinux1.gj.com) 52> stats extbi1 reportcdr

Sending STATS request to EXTRACT EXTBI1 …

Start of Statistics at 2012-11-07 16:31:30.

Output to ./dirdat/3z:

Extracting from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Daily statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Hourly statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Latest statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

End of Statistics.

GGSCI (raclinux1.gj.com) 53>

GGSCI (raclinux1.gj.com) 75> stats repbi1 reportcdr

Sending STATS request to REPLICAT REPBI1 …

Start of Statistics at 2012-11-07 16:32:36.

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT_EX:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

End of Statistics.

GGSCI (raclinux1.gj.com) 76>

GGSCI (raclinux1.gj.com) 76> stats extbi2 reportcdr

Sending STATS request to EXTRACT EXTBI2 …

Start of Statistics at 2012-11-07 16:33:27.

Output to ./dirdat/3y:

Extracting from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Hourly statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

End of Statistics.

GGSCI (raclinux1.gj.com) 77>

GGSCI (raclinux1.gj.com) 53> stats repbi2 reportcdr

Sending STATS request to REPLICAT REPBI2 …

Start of Statistics at 2012-11-07 16:34:05.

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT_EX:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

End of Statistics.

GGSCI (raclinux1.gj.com) 54>

After USEMIN test

GGSCI (raclinux1.gj.com) 57> stats extbi1 reportcdr

Sending STATS request to EXTRACT EXTBI1 …

Start of Statistics at 2012-11-07 16:59:53.

Output to ./dirdat/3z:

Extracting from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Daily statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Hourly statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Latest statistics since 2012-11-07 16:27:16 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

Extracting from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

*** Total statistics since 2012-11-07 16:27:16 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

*** Daily statistics since 2012-11-07 16:27:16 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

*** Hourly statistics since 2012-11-07 16:27:16 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

*** Latest statistics since 2012-11-07 16:27:16 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

End of Statistics.

GGSCI (raclinux1.gj.com) 58>

GGSCI (raclinux1.gj.com) 80> stats repbi1 reportcdr

Sending STATS request to REPLICAT REPBI1 …

Start of Statistics at 2012-11-07 17:00:36.

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT_EX:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT_EX:

*** Total statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:27:17 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

End of Statistics.

GGSCI (raclinux1.gj.com) 81>

GGSCI (raclinux1.gj.com) 81> stats extbi2 reportcdr

Sending STATS request to EXTRACT EXTBI2 …

Start of Statistics at 2012-11-07 17:01:26.

Output to ./dirdat/3y:

Extracting from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 3.00

Extracting from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total befores 1.00

Total deletes 0.00

Total discards 0.00

Total operations 8.00

End of Statistics.

GGSCI (raclinux1.gj.com) 82>

GGSCI (raclinux1.gj.com) 58> stats repbi2 reportcdr

Sending STATS request to REPLICAT REPBI2 …

Start of Statistics at 2012-11-07 17:02:03.

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 1.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 2.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.ACCOUNT to TEST5.ACCOUNT_EX:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 6.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 7.00

Total CDR conflicts 1.00

CDR resolutions succeeded 1.00

CDR UPDATEROWEXISTS conflicts 1.00

Replicating from TEST5.SEAT_ASSIGNMENT to TEST5.SEAT_ASSIGNMENT_EX:

*** Total statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Daily statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

*** Hourly statistics since 2012-11-07 17:00:00 ***

No database operations have been performed.

*** Latest statistics since 2012-11-07 16:28:24 ***

Total inserts 0.00

Total updates 1.00

Total deletes 0.00

Total discards 0.00

Total operations 1.00

End of Statistics.

GGSCI (raclinux1.gj.com) 59>

Output from the exception tables.

16:56:25 SQL> select * from seat_assignment_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

180009 07-NOV-12 SQL COMPUPDA 1403 OCI Error OR TEST5.SEAT_ASSIGNMEN 7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12 7 07-NOV-12 04.40.25.4 120 2A 07-NOV-12 7 Pier Cardin 07-NOV-12 04.52.46.481506 PM

TE A-01403: no 81803 PM

data found,

SQL <UPDATE

“TEST5″.”SEA

T_ASSIGNMENT

” SET “PASSE

NGER_NAME” =

:a1,”LATEST

_TIMESTAMP”

= :a2 WHERE

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

“ID” = :b0 A

ND “LATEST_T

IMESTAMP” >

:b1>

Elapsed: 00:00:00.01

16:56:43 SQL> select * from account_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ACCOUNT_ID_CURR ACCOUNT_NAME_CURR ACCOUNT_TEL_ ACCOUNT_AD BALANCE_CURR ACCOUNT_ID_BEFORE ACCOUNT_NAME_BEFORE ACCOUNT_TEL_ ACCOUNT_AD BALANCE_BEFORE ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——— ———— ———— ———— ——————– ————— ——————– ———— ———- ———— —————– ——————– ———— ———- ————– ———- ——————– ———— ———- ———-

180007 07-NOV-12 SQL COMPUPDA 1403 TEST5.ACCOUNT 1 Smith 555-555-5555 1234 Some 1100 1 Smith 555-555-5555 1234 Some 1000 1 900

TE street nam street nam

e e

Elapsed: 00:00:00.01

16:57:08 SQL> spool off

16:57:41 SQL> select * from seat_assignment where seat_no='2A';

ID PASSENGER_NAME LATEST_TIMESTAMP FLIGHT_NO SEAT_NO FLIGHT_TI

———- ————————————————– ————————————————————————— ———- ——————- ———

7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12

Elapsed: 00:00:00.00

16:57:48 SQL> select * from seat_assignment_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ID_CURR PASSENGER_NAME_CURR LATEST_TIMESTAMP_CURR FLIGHT_NO_CURR SEAT_NO_CURR FLIGHT_TI ID_BEFORE PASSENGER_NAME_BEFOR LATEST_TIMESTAMP_BEF FLIGHT_NO_BEFORE SEAT_NO_BEFORE FLIGHT_TI ID_AFTER PASSENGER_NAME_AFTER LATEST_TIMESTAMP_AFTER FLIGHT_NO_AFTER SEAT_NO_AFTER FLIGHT_TI

———- ——— ———— ———— ———— ——————– ———- ——————– ————————————————————————— ————– ——————- ——— ———- ——————– ——————– —————- ——————- ——— ———- ————————————————– ————————————————————————— ————— ——————- ———

180003 07-NOV-12 SQL COMPUPDA 1403 TEST5.SEAT_ASSIGNMEN 7 John Smith 07-NOV-12 04.52.31.618956 PM 120 2A 07-NOV-12 7 07-NOV-12 04.40.25.4 120 2A 07-NOV-12 7 John Smith 07-NOV-12 04.52.31.618956 PM

TE 81803 PM

Elapsed: 00:00:00.00

16:57:59 SQL> select * from account_ex;

ID_PK RES_DATE OPTYPE DBERRNUM DBERRMSGE TABLENAME ACCOUNT_ID_CURR ACCOUNT_NAME_CURR ACCOUNT_TEL_ ACCOUNT_AD BALANCE_CURR ACCOUNT_ID_BEFORE ACCOUNT_NAME_BEFORE ACCOUNT_TEL_ ACCOUNT_AD BALANCE_BEFORE ACCOUNT_ID ACCOUNT_NAME ACCOUNT_TEL ACCOUNT_AD BALANCE

———- ——— ———— ———— ———— ——————– ————— ——————– ———— ———- ———— —————– ——————– ———— ———- ————– ———- ——————– ———— ———- ———-

180001 07-NOV-12 SQL COMPUPDA 1403 TEST5.ACCOUNT 1 Smith 555-555-5555 1234 Some 1100 1 Smith 555-555-5555 1234 Some 1000 1 1200

TE street nam street nam

e e

Elapsed: 00:00:00.01

16:58:11 SQL> spool off

Reference

  1. OGG Oracle Installation and Setup Guide
  2. OGG Reference Guide
  3. Best Practices for Conflict Detection and Resolution in Active-Active Database Configurations Using Oracle GoldenGate


Oracle Recycle Bin

Oracle 10g introduced the recycle bin. You can recover a table that you have dropped from the Oracle recycle bin by using the flashback table command as seen here:

SQL> DROP TABLE books;

SQL> FLASHBACK TABLE books TO BEFORE DROP;

Restoring a dropped table in this way relies on the recycle bin: the FLASHBACK TABLE ... TO BEFORE DROP command renames the BIN$ object back to its original name.
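If a new table with the same name has been created since the drop, the restored table can be given a different name on the way back (a sketch; RENAME TO is part of the standard FLASHBACK TABLE syntax):

SQL> FLASHBACK TABLE books TO BEFORE DROP RENAME TO books_old;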

However, the more time that has passed since the table was dropped, the less likely it will be in the Oracle recycle bin (The Oracle recycle bin is purged periodically based on a number of different criteria).

The contents of the Oracle recycle bin can be viewed from SQL*Plus by using the show recyclebin command:

SQL> show recyclebin;
 
ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
—————- —————————— ———— ————
BOOKS            BIN$D3XWKKUCQVq2EG8/vkjNDw==$0 TABLE        2005-05-30

Managing the Recycle Bin

The recycle bin is a feature introduced in Oracle 10g which keeps dropped objects. When you drop an object, no space is released; the object is moved into a logical container called the recycle bin. If you want the table back, issue the flashback drop command as explained above. Each user has a view, USER_RECYCLEBIN (with the shorter synonym RECYCLEBIN), listing that user's dropped objects.

You can also query a dropped object without restoring it from the recycle bin. This is done using the special name Oracle gave the dropped object, i.e. the object name starting with BIN$. To list all dropped objects, use the show recyclebin command; more detailed information is available from the user_recyclebin view. To understand the concept, see the following example:

SQL> create table tbl_rc_bin (id number);

Table created.

SQL> drop table tbl_rc_bin;

Table dropped.

SQL> show recyclebin;

ORIGINAL NAME  RECYCLEBIN NAME                 OBJECT TYPE  DROP TIME
-------------  ------------------------------  -----------  -------------------
TBL_RC_BIN     BIN$fzdTKcxkrMDgQAB/AQAUbA==$0  TABLE        2010-02-09:22:06:47

SQL> select object_name, original_name, operation, type, droptime
  2  from user_recyclebin;

OBJECT_NAME                     ORIGINAL_NAME  OPERATION  TYPE   DROPTIME
------------------------------  -------------  ---------  -----  -------------------
BIN$fzdTKcxkrMDgQAB/AQAUbA==$0  TBL_RC_BIN     DROP       TABLE  2010-02-09:22:06:47

SQL>
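A dropped table can also be queried in place through its recycle bin name; the BIN$ name must be enclosed in double quotes because of its special characters. A sketch reusing the bin name from the example above (the table was created empty, so no rows come back):

SQL> select * from "BIN$fzdTKcxkrMDgQAB/AQAUbA==$0";

no rows selected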

Note: when running queries for used and free space in a tablespace, segments that have moved to the recycle bin are not listed as normal table/index segments consuming used space, but they do reduce the free space. So be aware of the size of the recycle bin when generating space usage reports for the database.
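To see how much space the recycle bin is currently holding, the SPACE column of USER_RECYCLEBIN (expressed in database blocks) can be summed; a minimal sketch:

SQL> select sum(space) as blocks_used from user_recyclebin;

Multiply the result by the database block size to get bytes.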

Recycle bin objects, i.e. dropped objects, are not included in an Oracle export; only real objects are exported. So after an import there is no need to panic if the total object count differs between source and target.

Purging Objects From the Recycle Bin

To remove tables and indexes from the recycle bin and free the space they consume, use the PURGE clause.

To purge a specific table or index, use:

SQL> purge table tbl_rc_bin;

Table purged.

SQL> purge user_recyclebin;

Recyclebin purged.

To purge all objects in the user's recycle bin, use:

SQL> purge recyclebin;

Recyclebin purged.

To purge all objects from the recycle bin as SYSDBA, use:

SQL> conn / as sysdba

Connected.

SQL> purge dba_recyclebin;

DBA Recyclebin purged.

To purge all objects of a specific tablespace, use:

SQL> purge tablespace users;

Tablespace purged.

Disabling the Recycle Bin Functionality

There is a recyclebin parametern the parameter file whose default is ON.

SQL> show parameter recyclebin;

NAME                       TYPE        VALUE
-------------------------- ----------- ------------------------------
recyclebin                 string      ON
SQL>

To disable it at the instance level, use:

alter system set recyclebin=off;

To disable it at the session level, use:

alter session set recyclebin=off;

To drop a table without putting it in the recycle bin, add the PURGE keyword at the end of the DROP TABLE statement, as follows:

SQL> create table tbl_rc (id number);
Table created.

SQL> drop table tbl_rc purge;
Table dropped.

SQL> show recyclebin;
SQL>

Although the recycle bin is enabled by default, dropped tables are not retained in it if the recyclebin parameter is set to OFF at the instance level. Similarly, if the tablespace is low on free space, older dropped tables are silently purged from the recycle bin. It is therefore advisable to query the recycle bin as soon as the problem is identified, and to make sure the recycle bin is enabled before running your tests of flashback query on a dropped table or FLASHBACK TABLE ... TO BEFORE DROP.
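
Before relying on flashback drop, it is worth a quick pre-flight check that the feature is enabled and the object is still in the bin; a sketch using the BOOKS table from the first example:

SQL> show parameter recyclebin
SQL> select original_name, droptime
  2  from user_recyclebin
  3  where original_name = 'BOOKS';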

Purging Objects in the Oracle Recycle Bin

If you decide that you are never going to restore an item from the recycle bin, you can use the PURGE statement to remove the item and its associated objects from the recycle bin and release their storage space. You need the same privileges as if you were dropping the item. Note that the system-generated name must be enclosed in double quotes:

PURGE TABLE "BIN$jsleilx392mk2=293$0";

You can achieve the same result by purging the table under its original name; if the recycle bin holds more than one dropped object of that name, the one that has been in the recycle bin the longest is purged:

PURGE TABLE int_admin_emp;

For more information on Oracle recycle bin, see my notes below:

RECYCLE BIN view tips

Oracle Concepts – Dropping Tables – Oracle Recycle Bin

Purge Recyclebin


Using the Recycle Bin

Using Oracle’s recycle bin

http://www.orafaq.com/node/968

Submitted by Natalka Roshak on Sat, 2006-09-02 00:17


One of the many new features that Oracle 10g introduced is the recyclebin. When enabled, this feature works a little bit like the familiar Windows recycle bin or Mac Trash. Dropped tables go “into” the recyclebin, and can be restored from the recyclebin. OraFAQ has already published an article covering the basics; in this article, I’ll cover some of the more subtle aspects of the recyclebin.

THE BASICS

First, a quick review of the basics. There are two recyclebin views: USER_RECYCLEBIN and DBA_RECYCLEBIN. For convenience, the synonym RECYCLEBIN points to your USER_RECYCLEBIN. The recyclebin is enabled by default in 10g, but you can turn it on or off with the RECYCLEBIN initialization parameter, at the system or session level.

When the recyclebin is enabled, any tables that you drop do not actually get deleted. Instead, when you drop a table, Oracle just renames the table and all its associated objects (indexes, triggers, LOB segments, etc) to a system-generated name that begins with BIN$.

For example, consider this simple table:

SQL> create table tst (col varchar2(10), row_chng_dt date);

Table created.

SQL> insert into tst values ('Version1', sysdate);

1 row created.

SQL> select * from tst ;

COL        ROW_CHNG
---------- --------
Version1   16:10:03

If the RECYCLEBIN initialization parameter is set to ON (the default in 10g), then dropping this table will place it in the recyclebin:

SQL> drop table tst;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin 
SQL> /

OBJECT_NAME                    ORIGINAL_NAME TYPE  UND PUR DROPTIME
------------------------------ ------------- ----- --- -------------------
BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE YES YES 2006-09-01:16:10:12

All that happened to the table when we dropped it was that it got renamed. The table data is still there and can be queried just like a normal table:

SQL> alter session set nls_date_format='HH24:MI:SS' ;

Session altered.

SQL> select * from "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;

COL        ROW_CHNG
---------- --------
Version1   16:10:03

Since the table data is still there, it’s very easy to “undrop” the table. This operation is known as a “flashback drop”. The command is FLASHBACK TABLE… TO BEFORE DROP, and it simply renames the BIN$… table to its original name:

SQL> flashback table tst to before drop;

Flashback complete.

SQL> select * from tst ;

COL        ROW_CHNG
---------- --------
Version1   16:10:03

SQL> select * from recyclebin ;

no rows selected

It’s important to know that after you’ve dropped a table, it has only been renamed; the table segments are still sitting there in your tablespace, unchanged, taking up space. This space still counts against your user tablespace quotas, as well as filling up the tablespace. It will not be reclaimed until you get the table out of the recyclebin. You can remove an object from the recyclebin by restoring it, or by purging it from the recyclebin.
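
You can see this effect in your quota figures; a minimal sketch (the BYTES column in USER_TS_QUOTAS includes segments sitting in the recyclebin):

SQL> select tablespace_name, bytes, max_bytes
  2  from user_ts_quotas;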

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin
SQL> /

OBJECT_NAME                    ORIGINAL_NAME TYPE                      UND PUR DROPTIME
------------------------------ ------------- ------------------------- --- --- -------------------
BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE                     YES YES 2006-09-01:16:10:12

SQL> purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" ;

Table purged.

SQL> select * from recyclebin ;

no rows selected

You have several purge options. You can purge everything from your USER_RECYCLEBIN using PURGE RECYCLEBIN; a user with DBA privileges can purge everything from all recyclebins using PURGE DBA_RECYCLEBIN; and finally, you can purge recyclebin objects by tablespace and schema with PURGE TABLESPACE ... USER ....
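
Putting the syntax together (a sketch; the tablespace USERS and user SCOTT are illustrative names):

-- your own recyclebin
purge recyclebin;

-- all users' recyclebins (requires DBA privileges)
purge dba_recyclebin;

-- all recyclebin objects in tablespace USERS
purge tablespace users;

-- only SCOTT's recyclebin objects in tablespace USERS
purge tablespace users user scott;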

Unless you purge them, Oracle will leave objects in the recyclebin until the tablespace runs out of space, or until you hit your user quota on the tablespace. At that point, Oracle purges the objects one at a time, starting with the ones dropped the longest time ago, until there is enough space for the current operation. If the tablespace data files are AUTOEXTEND ON, Oracle will purge recyclebin objects before it autoextends a datafile.

DROPPED TABLE VERSIONS

Just as you can wind up with several versions of a file with the same name in the Windows recycle bin, you can wind up with several versions of a table in the Oracle recyclebin. For example, if we create and drop the TST table twice, we’ll have two versions in the recyclebin:

SQL> create table tst (col varchar2(10), row_chng_dt date);

Table created.

SQL> insert into tst values ('Version1', sysdate);

1 row created.

SQL> drop table tst;

Table dropped.

SQL> create table tst (col varchar2(10), row_chng_dt date);

Table created.

SQL> insert into tst values ('Version2', sysdate);

1 row created.

SQL> drop table tst;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin;

OBJECT_NAME                    ORIGINAL_NAME TYPE  UND PUR DROPTIME
------------------------------ ------------- ----- --- --- -------------------
BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE YES YES 2006-09-01:16:10:12
BIN$HGnc55/8rRPgQPeM/qQoRw==$0 TST           TABLE YES YES 2006-09-01:16:19:53

Query the two dropped tables to verify that they are different:

SQL> select * from "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";

COL        ROW_CHNG
---------- --------
Version1   16:10:03

SQL> select * from "BIN$HGnc55/8rRPgQPeM/qQoRw==$0" ;

COL        ROW_CHNG
---------- --------
Version2   16:19:45

If we issue a FLASHBACK DROP command for TST, which version will Oracle restore?

SQL> flashback table tst to before drop;

Flashback complete.

SQL> select * from tst;

COL        ROW_CHNG
---------- --------
Version2   16:19:45

Oracle always restores the most recent version of the dropped object. To restore the earlier version of the table, instead of the later one, we can either keep flashing back until we hit the version we want, or we can simply refer to the correct version of the table by using its new BIN$… name. For example, dropping TST once more gives us two versions in the recyclebin again:

SQL> drop table tst;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin;

OBJECT_NAME                    ORIGINAL_NAME TYPE   UND PUR DROPTIME
------------------------------ ------------- ------ --- --- -------------------
BIN$HGnc55/7rRPgQPeM/qQoRw==$0 TST           TABLE  YES YES 2006-09-01:16:10:12
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST           TABLE  YES YES 2006-09-01:16:21:00

To flashback to the first version, refer to the BIN$… name of the first version of TST:

SQL> flashback table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0" to before drop;

Flashback complete.

SQL> select * from tst;

COL        ROW_CHNG
---------- --------
Version1   16:10:03

The second version is still hanging out in the recyclebin:

SQL> select object_name, original_name, operation, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin;

OBJECT_NAME                    ORIGINAL_NAME  OPERATION UND PUR DROPTIME
------------------------------ -------------- --------- --- --- -------------------
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST            DROP      YES YES 2006-09-01:16:21:00
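
As an aside, instead of flashing back over the current TST, you could restore this remaining copy under a different name with the RENAME TO clause (shown here as a sketch only, not executed, so the version stays in the recyclebin for the next section):

flashback table "BIN$HGnc55/9rRPgQPeM/qQoRw==$0" to before drop rename to tst_old;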

DEPENDENT OBJECTS

In a modern relational database, few tables stand alone. Most will have indexes, constraints, and/or triggers. Dropping a table also drops these dependent objects. When you drop a table with the recyclebin enabled, the table and its dependent objects get renamed, but still have the same structure as before. The triggers and indexes get modified to point to the new BIN$ table name. (Any stored procedures that referenced the original object, though, are invalidated.) For example:

SQL> truncate table tst;

Table truncated.

SQL> insert into tst values ('Version3', sysdate);

1 row created.

SQL> create index ind_tst_col on tst(col);

Index created.

SQL> select * from tst;

COL        ROW_CHNG
---------- --------
Version3   16:26:10

SQL> drop table tst ;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
  2  from recyclebin
  3  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME  TYPE   UND PUR DROPTIME
------------------------------ -------------- ------ --- --- -------------------
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST            TABLE  YES YES 2006-09-01:16:21:00
BIN$HGnc55//rRPgQPeM/qQoRw==$0 TST            TABLE  YES YES 2006-09-01:16:27:36
BIN$HGnc55/+rRPgQPeM/qQoRw==$0 IND_TST_COL    INDEX  NO  YES 2006-09-01:16:27:36

The RECYCLEBIN views have a few other columns that make the relationship between TST and IND_TST_COL clear:

SQL> select object_name, original_name, type, can_undrop as "UND", 
  2  can_purge as "PUR", droptime, base_object, purge_object
  3  from recyclebin
  4  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME   TYPE  UND PUR DROPTIME            BASE_OBJECT   PURGE_OBJECT
------------------------------ --------------- ----- --- --- ------------------- -----------   ------------
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST             TABLE  YES YES 2006-09-01:16:21:00 233032        233032
BIN$HGnc55//rRPgQPeM/qQoRw==$0 TST             TABLE  YES YES 2006-09-01:16:27:36 233031        233031
BIN$HGnc55/+rRPgQPeM/qQoRw==$0 IND_TST_COL     INDEX  NO  YES 2006-09-01:16:27:36 233031        233434

The PURGE_OBJECT column is the object number of the object itself; eg. the object number of IND_TST_COL is 233434. Note the value of the BASE_OBJECT column for IND_TST_COL: 233031, the object number of the associated version of the TST table.

If we FLASHBACK DROP the TST table, its index will be restored – but Oracle will not rename it to its original name. It will retain its BIN$... name:

SQL> flashback table tst to before drop;

Flashback complete.

SQL> select * from tst ;

COL        ROW_CHNG
---------- --------
Version3   16:26:10

SQL> select index_name from user_indexes where table_name='TST' ;

INDEX_NAME
------------------------------
BIN$HGnc55/+rRPgQPeM/qQoRw==$0
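
If you want the original name back, you can simply rename the index yourself; a sketch (not executed in this walkthrough):

alter index "BIN$HGnc55/+rRPgQPeM/qQoRw==$0" rename to ind_tst_col;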

I’m not sure why Oracle bothers storing the index’s original name, since it doesn’t seem to be used for anything. If we now drop this copy of the TST table, Oracle doesn’t “remember” that the original name of the index “BIN$HGnc55/+rRPgQPeM/qQoRw==$0” was IND_TST_COL – the ORIGINAL_NAME column in RECYCLEBIN holds the ugly string “BIN$HGnc55/+rRPgQPeM/qQoRw==$0”:

SQL> drop table tst;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", 
  2  droptime, base_object, purge_object
  3  from recyclebin
  4  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME   TYPE  UND PUR DROPTIME            BASE_OBJECT PURGE_OBJECT
------------------------------ --------------- ----- --- --- ------------------- ----------- ------------
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST             TABLE YES YES 2006-09-01:16:21:00      233032       233032
BIN$HGnc56ABrRPgQPeM/qQoRw==$0 TST             TABLE YES YES 2006-09-01:16:31:43      233031       233031
BIN$HGnc56AArRPgQPeM/qQoRw==$1 BIN$HGnc55/+rRP INDEX NO  YES 2006-09-01:16:31:43      233031       233434
                               gQPeM/qQoRw==$0

Note the values in the CAN_UNDROP and CAN_PURGE columns for the index (displayed as “UND” and “PUR” above). An index cannot be undropped without the table – so CAN_UNDROP is set to NO. It can, however, be purged without purging the table:

SQL> purge index "BIN$HGnc56AArRPgQPeM/qQoRw==$1" ;

Index purged.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", 
  2  droptime, base_object, purge_object
  3  from recyclebin
  4  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME  TYPE  UND PUR DROPTIME            BASE_OBJECT PURGE_OBJECT
------------------------------ -------------- ----- --- --- ------------------- ----------- ------------
BIN$HGnc55/9rRPgQPeM/qQoRw==$0 TST            TABLE YES YES 2006-09-01:16:21:00      233032       233032
BIN$HGnc56ABrRPgQPeM/qQoRw==$0 TST            TABLE YES YES 2006-09-01:16:31:43      233031       233031

Now, if we restore the table, it will be restored without the index:

SQL> flashback table tst to before drop;

Flashback complete.

SQL> select * from tst ;

COL        ROW_CHNG
---------- --------
Version3   16:26:10

SQL> select index_name from user_indexes where table_name='TST' ;

no rows selected

If you drop a table with associated LOB segments, they are handled in a similar way, except that they cannot be independently purged: CAN_UNDROP and CAN_PURGE are set to NO, and they are purged if you purge the table from the recyclebin, restored with the table if you restore it.

LIMITATIONS

A few types of dependent objects are not handled like the simple index above.

  • Bitmap join indexes are not put in the recyclebin when their base table is DROPped, and not retrieved when the table is restored with FLASHBACK DROP.
  • The same goes for materialized view logs; when you drop a table, all mview logs defined on that table are permanently dropped, not put in the recyclebin.
  • Referential integrity constraints that reference another table are lost when the table is put in the recyclebin and then restored.

If space limitations force Oracle to start purging objects from the recyclebin, it purges indexes first. If you FLASHBACK DROP a table whose associated indexes have already been purged, it will be restored without the indexes.

DISABLING THE RECYCLEBIN

In Windows, you can choose to permanently delete a file instead of sending it to the recycle bin. Similarly, you can choose to drop a table permanently, bypassing the Oracle recyclebin, by using the PURGE clause in your DROP TABLE statement.

SQL> purge recyclebin;

Recyclebin purged.

SQL> select * from recyclebin;

no rows selected

SQL> create table my_new_table (dummy varchar2(1));

Table created.

SQL> drop table my_new_table purge;

Table dropped.

SQL> select * from recyclebin;

no rows selected

If you disable the recyclebin at the session level, with ALTER SESSION SET RECYCLEBIN=OFF, it has the same effect as putting PURGE at the end of all your drop statements. Note, however, that you can still use FLASHBACK DROP to restore objects that were put in the recyclebin before you set RECYCLEBIN=OFF. For example:

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
  2   droptime, base_object, purge_object
  3  from recyclebin
  4  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME TYPE  UND PUR DROPTIME            BASE_OBJECT PURGE_OBJECT
------------------------------ ------------- ----- --- --- ------------------- ----------- ------------
BIN$HGnc56ACrRPgQPeM/qQoRw==$0 TST           TABLE YES YES 2006-09-01:16:34:12      233031       233031

SQL> alter session set recyclebin=off ;

Session altered.

SQL> create table tst (col varchar2(10), row_chng_dt date);

Table created.

SQL> insert into tst values ('Version5', sysdate);

1 row created.

SQL> drop table tst ;

Table dropped.

SQL> select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR",
  2   droptime, base_object, purge_object
  3  from recyclebin
  4  order by droptime;

OBJECT_NAME                    ORIGINAL_NAME   TYPE UND PUR DROPTIME            BASE_OBJECT PURGE_OBJECT
------------------------------ -------------- ----- --- --- ------------------- ----------- ------------
BIN$HGnc56ACrRPgQPeM/qQoRw==$0 TST            TABLE YES YES 2006-09-01:16:34:12      233031       233031

SQL> flashback table tst to before drop;

Flashback complete.

SQL> select * from tst ;

COL        ROW_CHNG
---------- --------
Version3   16:26:10

CONCLUSION

This article has explored some of the subtler ramifications of the recyclebin. To sum up:

- The recyclebin may contain several versions of a dropped object. Oracle restores them in LIFO order; you can restore older versions by repeatedly restoring until you get the version you want, or by using the correct version’s BIN$... name directly.
- Oracle drops most dependent objects along with the table, and restores them when the table is restored with FLASHBACK DROP, but does not restore their names. You can purge dependent objects separately to restore the table without them.
- Even after turning RECYCLEBIN OFF, you can FLASHBACK DROP objects that were already in the RECYCLEBIN.

About the author

Natalka Roshak is a senior Oracle and Sybase database administrator, analyst, and architect. She is based in Kingston, Ontario, and consults across North America. More of her scripts and tips can be found in her online DBA toolkit at http://toolkit.rdbms-insight.com/.
