Is there a way to improve REPLICAT performance/throughput?
In its normal mode, Replicat applies one SQL statement at a time, in the same order as the changes occurred on the source system.
Despite optimizations and statement caching, the overhead of preparing statements, binding column values, and executing statements for each change can cause REPLICAT to be slower than necessary to keep up with the changes arriving in the trail.
The HANDLECOLLISIONS parameter can also degrade performance when collisions occur, because each colliding record requires additional processing. This parameter should be removed once you have completed the initial load of your tables.
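For example, a Replicat parameter file that used HANDLECOLLISIONS during the initial load can have it removed or explicitly disabled with NOHANDLECOLLISIONS once source and target are in sync. A minimal sketch (the process name, credential alias, and schema/table names are placeholders):

```
REPLICAT rep1
USERIDALIAS ggadmin
-- HANDLECOLLISIONS was needed only during the initial load;
-- disable it now that the load is complete
NOHANDLECOLLISIONS
MAP src.orders, TARGET tgt.orders;
```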
Indexing on your target tables will also have an impact on performance. Each target table should have a primary key (PK) or unique index (UI) so that updates and deletes can be applied quickly. You should also investigate whether there are any foreign key (FK) relationships and whether the referenced columns should also be indexed appropriately; with no index, a full table scan will be performed on the referenced table, even on insert.
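As a sketch of the indexing advice above, assuming a child table named orders that references a customers table (all table, column, and index names here are illustrative):

```sql
-- A primary key gives Replicat a fast path to locate the target row
-- for each UPDATE or DELETE
ALTER TABLE orders ADD CONSTRAINT orders_pk PRIMARY KEY (order_id);

-- Index the FK column so constraint checks do not force
-- a full table scan
CREATE INDEX orders_cust_fk_ix ON orders (customer_id);
```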
Three common approaches for increasing throughput are:
1) Run multiple Replicat processes, each applying changes to different tables and/or ranges.
2) Use Replicat’s BATCHSQL mode.
3) Review indexing on your target tables and add indexes where appropriate, including on columns involved in FK relationships.
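Approach 1 is commonly implemented with the @RANGE filter, which hashes each row into one of N partitions so several Replicats can share the workload for the same table without overlapping. A sketch for splitting one table across two Replicat processes (process, schema, and table names are placeholders):

```
-- In the parameter file for the first Replicat (partition 1 of 2):
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (1, 2));

-- In the parameter file for the second Replicat (partition 2 of 2):
MAP src.orders, TARGET tgt.orders, FILTER (@RANGE (2, 2));
```

Because @RANGE hashes on the key columns, all changes for a given row always go to the same Replicat, which preserves per-row ordering.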
Replicat’s BATCHSQL feature causes Replicat to organize similar SQL statements into arrays and apply them at an accelerated rate.
When Replicat is in BATCHSQL mode, smaller row changes will show a higher gain in performance than larger row changes. At 100 bytes of data per row change, BATCHSQL has been known to improve Replicat’s performance by up to 300 percent, but actual performance benefits will vary, depending on the mix of operations. At around 5,000 bytes of data per row change, the benefits of using BATCHSQL diminish.
To activate this feature using default tuning subparameters, add the following parameter to the REPLICAT parameter file:
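With the default tuning subparameters, adding the BATCHSQL keyword on its own line is sufficient. A minimal parameter file might look like this (the process name, credential alias, and schema names are placeholders):

```
REPLICAT rep1
USERIDALIAS ggadmin
-- Enable array-based apply with default tuning subparameters
BATCHSQL
MAP src.*, TARGET tgt.*;
```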