
impdp tuning parameters

DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITIONS    (12c and later: trust that the pre-created partitions match the dump file, so the data can be loaded in parallel)

DATA_OPTIONS=DISABLE_APPEND_HINT    (do not use the APPEND hint when loading data into existing tables or partitions)

TABLES=<SCHEMA_NAME>.<TABLE_NAME>:<PARTITION_NAME>
PARTITION_OPTIONS=DEPARTITION
REMAP_TABLE=<SCHEMA_NAME>.<TABLE_NAME>:<PARTITION_NAME>:<TABLE_NAME>
CONTENT=ALL
SKIP_UNUSABLE_INDEXES=YES
DIRECTORY=EXPDP
DUMPFILE=<DUMP_NAME>_%U.dmp
LOGFILE=<LOG_NAME>.log
PARALLEL=8
DATA_OPTIONS=skip_constraint_errors

%expdp system/<password> ... ACCESS_METHOD=DIRECT_PATH 
%expdp system/<password> ... ACCESS_METHOD=EXTERNAL_TABLE 

or:

%impdp system/<password> ... ACCESS_METHOD=DIRECT_PATH 
%impdp system/<password> ... ACCESS_METHOD=EXTERNAL_TABLE 

In both 11g and 12c the recommendation is to drop the pre-created table and let impdp create it, but 12c adds the extra parameter TRUST_EXISTING_TABLE_PARTITIONS.
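
A minimal sketch of how the 12c parameter might be combined with a pre-created partitioned table; the schema, table, directory, and dump file names are placeholders rather than values from any of the notes below:

impdp system/<password> DIRECTORY=EXPDP DUMPFILE=<DUMP_NAME>_%U.dmp TABLES=<SCHEMA_NAME>.<TABLE_NAME> TABLE_EXISTS_ACTION=APPEND DATA_OPTIONS=TRUST_EXISTING_TABLE_PARTITIONS PARALLEL=8 LOGFILE=<LOG_NAME>.log

With this flag, Data Pump trusts that the existing partitions match the dump file and can load several partitions at the same time instead of serializing on the table.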

SYMPTOMS

Partitioned tables were created prior to the import with IMPDP, and the PARALLEL option is being used, but only one of the workers is active while the others are idle, resulting in an import that is slow compared to the export time.

The tables involved will show in the trace:

INSERT /*+ APPEND PARALLEL */


Using "data_options=disable_append_hint" does not change the "Append" hint being used.
 

CAUSE

IMPDP is working as designed: In order to use Parallel, let IMPDP create the Partitioned tables itself.
 
According to the Oracle documentation for PARTITION_OPTIONS:

"Parallel processing during import of partitioned tables is subject to the following:

  If a partitioned table is imported into an existing partitioned table, then Data Pump only processes one partition or subpartition at a time, regardless of any value that might be specified with the PARALLEL parameter.

  If the table into which you are importing does not already exist and Data Pump has to create it, then the import runs in parallel up to the parallelism specified on the PARALLEL parameter when the import is started."

SOLUTION

Let IMPDP create the Partitioned Tables (rather than creating them before the import) if you wish to use the Parallel option with IMPDP to speed up the Import process.
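
As a hedged illustration of this approach (the object names are placeholders, not taken from the note): drop the pre-created table first and let impdp create it from the dump file metadata.

SQL> DROP TABLE <SCHEMA_NAME>.<TABLE_NAME> PURGE;

impdp system/<password> DIRECTORY=EXPDP DUMPFILE=<DUMP_NAME>_%U.dmp TABLES=<SCHEMA_NAME>.<TABLE_NAME> PARALLEL=8 LOGFILE=<LOG_NAME>.log

Because the table does not exist at import time, the data load can run up to the degree specified by PARALLEL.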
 

Or

Use the following parameter:

         DATA_OPTIONS = TRUST_EXISTING_TABLE_PARTITIONS

This flag allows the data to be loaded in parallel.

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.4 to 11.2.0.1 [Release 10.2 to 11.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Information in this document applies to any platform.

SYMPTOMS

When running IMPDP against an existing partitioned table with the TABLE_EXISTS_ACTION command line parameter set to TRUNCATE or APPEND and also using parallelism, the worker processes seem to be idle, waiting for the following event:

enq: TM - contention


The problem does not occur when the table does not yet exist in the database, or when the IMPDP operation is run in serial mode.

CHANGES

CAUSE

The cause of this problem has been identified in Bug 7197735, closed as a duplicate of unpublished Bug 8448327. It is caused by multiple Data Pump worker processes being assigned to a single partition: one worker process can write to the partition while the other worker processes wait for an exclusive lock on the same partition.

SOLUTION

This issue has been fixed in the Oracle 12c release and in the 11.2.0.2 patch set.

Workarounds for the other Oracle versions are:

  1. apply Patch 8448327 (for Generic platform) if available for your Oracle version

    - OR -
     
  2. run IMPDP in serial mode (so not using the PARALLEL command line parameter)

    - OR -
     
  3. drop the table prior to running IMPDP

SYMPTOMS

When importing partitioned tables into existing partitioned tables with PARALLEL > 1 there will be Worker processes waiting for an exclusive table lock which is indicated by event "enq: TM - contention".

To confirm the issue, execute the following queries while the Impdp client session reports:

Processing object type TABLE_EXPORT/TABLE/TABLE_DATA

-- DW process which is holding the exclusive lock
select a.sid, a.type, a.id1, b.program
from   v$lock a, v$session b
where  a.sid = b.sid and
       a.type = 'TM' and
       a.lmode = 6 and
       b.program like '%DW%';

-- DW process(es) requesting an exclusive lock
select a.sid, a.type, a.id1, b.program, b.event
from   v$lock a, v$session b
where  a.sid = b.sid and
       a.type = 'TM' and
       a.request = 6 and
       b.program like '%DW%';

-- Affected table
select c.owner, c.object_name, b.program, b.event
from   v$lock a, v$session b, dba_objects c
where  a.sid = b.sid and
       a.type = 'TM' and
       a.id1 = c.object_id and
       a.request = 6 and
       b.program like '%DW%';

NOTE: In the testcase content below, the user information and metadata used represents fictitious data. Any similarity to actual persons, living or dead, is purely coincidental and not intended in any manner.

The following example illustrates the scenario:

impdp tc1/****** DIRECTORY=test_dp DUMPFILE=test%U.dmp tables=tc1.t1 TABLE_EXISTS_ACTION=APPEND PARALLEL=10

Or:

impdp tc1/****** DIRECTORY=test_dp DUMPFILE=test%U.dmp tables=tc1.t1 TABLE_EXISTS_ACTION=TRUNCATE PARALLEL=10

-- There will be a single DW process which is holding an exclusive lock on the table

-- DW process which is holding the exclusive lock
select a.sid, a.type, a.id1, b.program
from   v$lock a, v$session b
where  a.sid = b.sid and
       a.type = 'TM' and
       a.lmode = 6 and
       b.program like '%DW%';

  SID TY ID1        PROGRAM
----- -- ---------- -------------------------
  115 TM     282361 ORACLE.EXE (DW02)

-- Several other worker processes are waiting until the exclusive lock is released.

-- DW process(es) requesting an exclusive lock
select a.sid, a.type, a.id1, b.program, b.event
from   v$lock a, v$session b
where  a.sid = b.sid and
       a.type = 'TM' and
       a.request = 6 and
       b.program like '%DW%';

  SID TY ID1        PROGRAM                EVENT
----- -- ---------- ---------------------- ------------------------
  116 TM     282361 ORACLE.EXE (DW07)      enq: TM - contention
  136 TM     282361 ORACLE.EXE (DW03)      enq: TM - contention
  138 TM     282361 ORACLE.EXE (DW09)      enq: TM - contention
  152 TM     282361 ORACLE.EXE (DW04)      enq: TM - contention

-- The query will report the affected table
select c.owner, c.object_name, b.program, b.event
from   v$lock a, v$session b, dba_objects c
where  a.sid = b.sid and
       a.type = 'TM' and
       a.id1 = c.object_id and
       a.request = 6 and
       b.program like '%DW%';

OWNER OBJECT_NAME          PROGRAM                 EVENT
----- -------------------- ----------------------- ---------------------------
TC1   T1                   ORACLE.EXE (DW04)       enq: TM - contention
TC1   T1                   ORACLE.EXE (DW09)       enq: TM - contention
TC1   T1                   ORACLE.EXE (DW03)       enq: TM - contention
TC1   T1                   ORACLE.EXE (DW07)       enq: TM - contention

Note: A TM table lock is usually acquired during the execution of a transaction when referencing a table with a DML statement to prevent the object from being dropped or altered during the execution of the transaction.

CHANGES

This issue occurs if the following is true:

- Imported table is a partitioned table
- Data Pump Import (Impdp) uses parameter PARALLEL > 1 and TABLE_EXISTS_ACTION=APPEND or TRUNCATE

CAUSE

This is produced by unpublished Bug 8448327.

SOLUTION

The issue is fixed in releases 11.2.0.2 and 12.1. There is no workaround.

Please check whether patch 8448327 is available for your RDBMS version and platform.

Important

Please read the README file delivered with the patch. There are post-install steps to be completed; otherwise the fix will not work.
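
As a quick, hedged check of whether the fix is already present in a given ORACLE_HOME (assuming OPatch is available there), the patch inventory can be searched for the bug number:

opatch lsinventory | grep 8448327

If the patch is listed, also confirm that the post-install steps from its README have been completed.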

----------------

SYMPTOMS

  Extremely slow performance is observed while running an Import DataPump (IMPDP) job against a highly partitioned table with 143 partitions. The import parameter file looks like:

TABLES=<SCHEMA_NAME>.<TABLE_NAME>:<PARTITION_NAME>
PARTITION_OPTIONS=DEPARTITION
REMAP_TABLE=<SCHEMA_NAME>.<TABLE_NAME>:<PARTITION_NAME>:<TABLE_NAME>
CONTENT=ALL
SKIP_UNUSABLE_INDEXES=YES
DIRECTORY=EXPDP
DUMPFILE=<DUMP_NAME>_%U.dmp
LOGFILE=<LOG_NAME>.log
PARALLEL=8
DATA_OPTIONS=skip_constraint_errors

 
  The SQL found in the DW00-generated trace file has the following form:

INSERT /*+ APPEND PARALLEL("<table name>",<parallel degree>)+*/ INTO RELATIONAL("<schema name>"."<table name>") ...PARTITION <partition_name>
FROM "ET$...." KU$


  The DataPump job was loading only approx. 5% (approx. 5 GB) per hour on a 700 GB table. The performance issue is introduced by the fact that the DML worker processes are not running in parallel even after setting up PARALLEL DML and PARALLEL=8.
Even though the parallel slaves are being launched:

PROGRAM                       COMMAND  Username  QC/Slave  Slave Set  SID   SERIAL#  QC SID  REQ_DOP  ACT_DOP
==============================================================================================================
oracle <SERVICE_NAME> (P000)  2        - p000    (Slave)   1          787   10351    1044    8        8
oracle <SERVICE_NAME> (P001)  2        - p001    (Slave)   1          853   1203     1044    8        8
oracle <SERVICE_NAME> (P002)  2        - p002    (Slave)   1          915   50799    1044    8        8
oracle <SERVICE_NAME> (P003)  2        - p003    (Slave)   1          981   11441    1044    8        8
oracle <SERVICE_NAME> (DW00)  2        SYS       QC                   1044  56901    1044
oracle <SERVICE_NAME> (P000)  2        - p000    (Slave)   1          788   12009    1044    8        8
oracle <SERVICE_NAME> (P001)  2        - p001    (Slave)   1          850   50447    1044    8        8
oracle <SERVICE_NAME> (P002)  2        - p002    (Slave)   1          917   9857     1044    8        8
oracle <SERVICE_NAME> (P003)  2        - p003    (Slave)   1          980   39721    1044    8        8


the problem is that PARALLEL DML is disabled, so all of those parallel slaves do nothing on the INSERT. The output below clearly shows that PDML_STATUS is DISABLED:

-- Check the PDML_STATUS column in gv$session
select * from gv$session;

INST_ID LOGON_TIME USERNAME STATUS PDML_STATUS PROGRAM SQL_ID
===================================================================================================
1 11/4/2014 16:17:03 A11P11 SYS ACTIVE ENABLED udi@<SERVICE_NAME> (TNS V1-V3) 7wn3wubg7gjds
2 11/4/2014 16:17:01 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P001) asmy11x7ya7ws
2 11/4/2014 16:17:01 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P000) asmy11x7ya7ws
2 11/4/2014 16:17:01 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P003) asmy11x7ya7ws
2 11/4/2014 16:17:01 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P002) asmy11x7ya7ws
1 11/4/2014 16:17:04 A11P11 SYS ACTIVE ENABLED oracle@<SERVICE_NAME> (DW00) asmy11x7ya7ws
1 11/4/2014 16:17:07 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P001) asmy11x7ya7ws
1 11/4/2014 16:17:07 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P000) asmy11x7ya7ws
1 11/4/2014 16:17:07 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P003) asmy11x7ya7ws
1 11/4/2014 16:17:07 A11P11 SYS ACTIVE DISABLED oracle@<SERVICE_NAME> (P002) asmy11x7ya7ws
1 11/4/2014 16:17:04 A11P11 SYS ACTIVE ENABLED oracle@<SERVICE_NAME> (DM00) bjf05cwcj5s6p
 

 
  As per the worker trace, it is confirmed that the insert operation is loading data serially:

The serialization is due to these two steps (LOAD AS SELECT and ERROR LOGGING), while the SELECT itself runs in parallel:

---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |                 |  8168 |  4817K|     4   (0)| 00:00:01 |        |      |            |
|   1 |  LOAD AS SELECT                  | <NAME>          |       |       |            |          |        |      |            |
|   2 |   ERROR LOGGING                  | <NAME>          |       |       |            |          |        |      |            |
|   3 |    PX COORDINATOR                |                 |       |       |            |          |        |      |            |
|   4 |     PX SEND QC (RANDOM)          | :TQ10000        |  8168 |  4817K|     4   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   5 |      ERROR LOGGING QC (RANDOM)   | :TQ10000        |       |       |     4   (0)| 00:00:01 |  Q1,00 | PCWP | QC (RAND)  |
|   6 |       PX BLOCK ITERATOR          |                 |  8168 |  4817K|     4   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   7 |        EXTERNAL TABLE ACCESS FULL| ET$03D4837F0001 |  8168 |  4817K|     4   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

 
and the query coordinator (QC) is running ALTER SESSION to enable parallel DML:

KUPW:10:47:01.266: 0: ALTER SESSION ENABLE PARALLEL DML called.
KUPW:10:47:01.266: 0: ALTER SESSION ENABLE PARALLEL DML returned.


but as per the PQ trace, parallel DML is disabled in the session:

.....
  kkfdtParallel: parallel is possible (no statement type restrictions)
  kkfdPaPrm: use table level hint DOP(8) on table
  kkfdPaPrm:- The table : 154053
  kkfdPaPrm:DOP = 8 (computed from hint/dictionary/autodop)
  kkfdiPaPrm: dop:8 parallel(?)
          PDML disabled in session
  kkfdtpddldml: isParallel ? No

 
This information shows that the insert is being serialized because PARALLEL DML is disabled, and this behavior introduces the performance issue.

CHANGES

CAUSE

 This is an impdp performance issue where an import using the external table access method does not run the job in parallel even though the PQ slaves are started correctly.
The performance issue is only hit when running the job as a SYSDBA user. The root cause of this issue has been determined in Bug 19520061 - IMPDP: EXTREMELY SLOW IMPORT FOR A PARTITIONED TABLE.

SOLUTION

1. Upgrade the database to 12.2, when available, where the issue is fixed.

- OR -

2. For earlier database releases, check whether Patch 19520061 is available for your platform and RDBMS version.

- OR -

3. Run the DataPump import job with a user other than SYS (see the sketch below).
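
A minimal sketch of this workaround; the account name and the Data Pump object names are placeholders (assuming a user granted the DATAPUMP_IMP_FULL_DATABASE role), not values from the note:

SQL> CREATE USER dp_admin IDENTIFIED BY <password>;
SQL> GRANT CONNECT, RESOURCE, DATAPUMP_IMP_FULL_DATABASE TO dp_admin;

impdp dp_admin/<password> DIRECTORY=EXPDP DUMPFILE=<DUMP_NAME>_%U.dmp TABLES=<SCHEMA_NAME>.<TABLE_NAME> TABLE_EXISTS_ACTION=APPEND PARALLEL=8 LOGFILE=<LOG_NAME>.log

Because the job no longer runs as SYS, parallel DML is not disabled in the worker session and the INSERT can use the requested degree of parallelism.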


Original article: https://blog.csdn.net/jnrjian/article/details/142309749
