
Oracle dbms_datapump Package




1 dbms_datapump Package

  • When running Data Pump from the command line, the expdp/impdp utilities must be executed from a terminal session connected directly to the database server.
  • The dbms_datapump package lets you perform the same pump operations without connecting to the server directly, using the Data Pump package that Oracle provides.
  • This is the method commonly used in Amazon AWS environments.
  • See the Oracle reference:

https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_DATAPUMP.html#GUID-62324358-2F26-4A94-B69F-1075D53FA96D
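The idea above can be sketched as a minimal PL/SQL block that starts a schema-mode export entirely through the package, with no server terminal. The directory object DATA_PUMP_DIR and the schema SCOTT are placeholder assumptions, not values from this article:

```sql
DECLARE
  l_job NUMBER;
BEGIN
  -- Create an export job; no expdp binary or OS access is needed
  l_job := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file are written through a directory object
  DBMS_DATAPUMP.ADD_FILE(
    handle => l_job, filename => 'scott.dmp', directory => 'DATA_PUMP_DIR',
    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(
    handle => l_job, filename => 'scott.log', directory => 'DATA_PUMP_DIR',
    filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the job to one schema (assumed: SCOTT)
  DBMS_DATAPUMP.METADATA_FILTER(
    handle => l_job, name => 'SCHEMA_EXPR', value => 'IN (''SCOTT'')');

  DBMS_DATAPUMP.START_JOB(l_job);
  DBMS_DATAPUMP.DETACH(l_job);  -- the job keeps running in the background
END;
/
```

Because the job runs server-side, this works on managed platforms such as AWS RDS where terminal access to the host is not available.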

1.1 Procedures

1.1.1 ADD_FILE Procedure

Parameter Description

handle: The handle of a job. The current session must have previously attached to the handle through a call to either the OPEN or ATTACH function.

filename: The name of the file being added. filename must be a simple filename without any directory path information. For dump files, the filename can include a substitution variable. See the following table for a description of available substitution variables.

directory: The name of a directory object within the database that is used to locate filename. A directory must be specified.

filesize: The size of the dump file that is being added. It may be specified as a number of bytes, kilobytes (if followed by K), megabytes (if followed by M), gigabytes (if followed by G), or terabytes (if followed by T). An Export operation will write no more than the specified number of bytes to the file. Once the file is full, it will be closed. If there is insufficient space on the device to write the specified number of bytes, the Export operation will fail, but it can be restarted. If not specified, filesize defaults to an unlimited size. For Import and SQL_FILE operations, filesize is ignored. The minimum value for filesize is ten times the default Data Pump block size, which is 4 kilobytes. A filesize can only be specified for dump files.

filetype: The type of the file to be added. The legal values are as follows and must be preceded by DBMS_DATAPUMP.:

KU$_FILE_TYPE_DUMP_FILE (dump file for a job)
KU$_FILE_TYPE_LOG_FILE (log file for a job)
KU$_FILE_TYPE_SQL_FILE (output for a SQL_FILE job)

reusefile: If 0, a preexisting file will cause an error. If 1, a preexisting file will be overwritten. If NULL, the default action for the file type will be applied (that is, dump files will not be overwritten). This parameter should only be non-NULL for dump files. The reusefile parameter is restricted to export jobs.
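As a hedged illustration of the filesize, substitution-variable, and reusefile behavior described above (the handle l_job and the directory object DATA_PUMP_DIR are assumed to come from an earlier OPEN call):

```sql
-- %U expands to a two-digit sequence number, so once a 500MB file
-- fills up and is closed, the job rolls over to the next file.
DBMS_DATAPUMP.ADD_FILE(
  handle    => l_job,
  filename  => 'exp_%U.dmp',
  directory => 'DATA_PUMP_DIR',
  filesize  => '500M',
  filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE,
  reusefile => 1);  -- 1: overwrite a preexisting file instead of raising an error
```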

1.1.2 SET_PARAMETER Procedure

Parameter Name Datatype Supported Operations Meaning

CLIENT_COMMAND

TEXT

All

An opaque string used to describe the current operation from the client's perspective. The command-line procedures will use this string to store the original command used to invoke the job.

COMPRESSION

TEXT

Export

Allows you to trade off the size of the dump file set versus the time it takes to perform export and import operations.

The DATA_ONLY option compresses only user data in the dump file set.

The METADATA_ONLY option compresses only metadata in the dump file set.

The ALL option compresses both user data and metadata.

The NONE option stores the dump file set in an uncompressed format.

The METADATA_ONLY and NONE options require a job version of 10.2 or later. All other options require a job version of 11.1 or later.

Default=METADATA_ONLY

COMPRESSION_ALGORITHM

TEXT

Export

Indicates which compression algorithm is to be used when compressing dump file data. The choices are as follows:

BASIC--Offers a good combination of compression ratios and speed; the algorithm used is the same as in previous versions of Oracle Data Pump.

LOW--Least impact on backup throughput and suited for environments where CPU resources are the limiting factor.

MEDIUM--Recommended for most environments. This option, like the BASIC option, provides a good combination of compression ratios and speed, but it uses a different algorithm than BASIC.

HIGH--Best suited for exports over slower networks where the limiting factor is network speed.

To use this feature, the COMPATIBLE initialization parameter must be set to at least 12.0.0.

This feature requires that the Oracle Advanced Compression option be enabled.

DATA_ACCESS_METHOD

TEXT

Export and Import

Allows you to specify an alternative method to unload data if the default method does not work for some reason. The choices are AUTOMATIC, DIRECT_PATH, or EXTERNAL_TABLE. Oracle recommends that you use the default option (AUTOMATIC) whenever possible because it allows Data Pump to automatically select the most efficient method.

DATA_OPTIONS

NUMBER

Export and Import

A bitmask to supply special options for processing the job. The possible values are as follows:

KU$_DATAOPT_SKIP_CONST_ERR

KU$_DATAOPT_XMLTYPE_CLOB

KU$_DATAOPT_DISABL_APPEND_HINT

KU$_DATAOPT_GRP_PART_TAB

KU$_DATAOPT_TRUST_EXIST_TB_PAR

KU$_DATAOPT_CONT_LOAD_ON_FMT_ERR

Export supports the value KU$_DATAOPT_XMLTYPE_CLOB. This option stores compressed XMLType columns in the dump file in CLOB format rather than as XML documents.

Note: XMLType stored as CLOB is deprecated as of Oracle Database 12c Release 1 (12.1).

Import supports the value KU$_DATAOPT_SKIP_CONST_ERR. This option specifies that if constraint violations occur while data is being imported into user tables, the rows that cause the violations will be rejected and the load will continue. If this option is not set, a constraint error will abort the loading of the entire partition (or table for unpartitioned tables). Setting this option may affect performance, especially for pre-existing tables with unique indexes or constraints.

Import also supports the value KU$_DATAOPT_DISABL_APPEND_HINT. This option prevents the append hint from being applied to the data load. Disabling the APPEND hint can be useful if there is a small set of data objects to load that already exist in the database and some other application may be concurrently accessing one or more of the data objects.

Data Pump Export supports the value KU$_DATAOPT_GRP_PART_TAB. This option tells Data Pump to unload all table data in one operation rather than unload each table partition as a separate operation. As a result, the definition of the table will not matter at import time because Import will see one partition of data that will be loaded into the entire table.

Data Pump Import supports the value KU$_DATAOPT_TRUST_EXIST_TB_PAR. This option tells Data Pump to load partition data

in parallel into existing tables.

Use of the DATA_OPTIONS parameter requires that the version on the OPEN function be set to 11.1 or later.

Default=0

Data Pump Import supports the value KU$_DATAOPT_CONT_LOAD_ON_FMT_ERR. This option tells Data Pump to skip forward to the start of the next granule if a stream format error is encountered while loading table data. Most stream format errors are caused by corrupt dump files. This value can be used if Data Pump encounters a stream format error and the original export database is not available to export the table data again. If Data Pump skips over data, not all data from the source database is imported, potentially skipping hundreds or thousands of rows.

ENCRYPTION

TEXT

Export

Specifies what to encrypt in the dump file set, as follows:

ALL enables encryption for all data and metadata in the export operation.

DATA_ONLY specifies that only data is written to the dump file set in encrypted format.

ENCRYPTED_COLUMNS_ONLY specifies that only encrypted columns are written to the dump file set in encrypted format.

METADATA_ONLY specifies that only metadata is written to the dump file set in encrypted format.

NONE specifies that no data is written to the dump file set in encrypted format.

This parameter requires a job version of 11.1 or later.

The default value depends upon the combination of encryption-related parameters that are used. To enable encryption, either ENCRYPTION or ENCRYPTION_PASSWORD or both, must be specified. If only ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE.

To specify ALL, DATA_ONLY, or METADATA_ONLY, the COMPATIBLE initialization parameter must be set to at least 11.1.

NOTE: If the data being exported includes SecureFiles that you want to be encrypted, then you must specify ENCRYPTION=ALL to encrypt the entire dump file set. Encryption of the entire dump file set is the only way to achieve encryption security for SecureFiles during a Data Pump export operation.

ENCRYPTION_ALGORITHM

TEXT

Export

Identifies which cryptographic algorithm should be used to perform encryption. Possible values are AES128, AES192, and AES256.

The ENCRYPTION_ALGORITHM parameter requires that you also specify either ENCRYPTION or ENCRYPTION_PASSWORD; otherwise an error is returned. See Oracle Database Advanced Security Guide for information about encryption algorithms.

This parameter requires a job version of 11.1 or later.

Default=AES128

ENCRYPTION_MODE

TEXT

Export

Identifies the types of security used for encryption and decryption. The values are as follows:

PASSWORD requires that you provide a password when creating encrypted dump file sets. You will need to provide the same password when you import the dump file set. PASSWORD mode requires that you also specify the ENCRYPTION_PASSWORD parameter. The PASSWORD mode is best suited for cases in which the dump file set will be imported into a different or remote database, but which must remain secure in transit.

TRANSPARENT allows an encrypted dump file set to be created without any intervention from a database administrator (DBA), provided the required Oracle Encryption Wallet is available. Therefore, the ENCRYPTION_PASSWORD parameter is not required, and will in fact, cause an error if it is used in TRANSPARENT mode. This encryption mode is best suited for cases in which the dump file set will be imported into the same database from which it was exported.

DUAL creates a dump file set that can later be imported using either the Oracle Encryption Wallet or the password that was specified with the ENCRYPTION_PASSWORD parameter. DUAL mode is best suited for cases in which the dump file set will be imported on-site using the Oracle Encryption Wallet, but which may also need to be imported offsite where the Oracle Encryption Wallet is not available.

When you use the ENCRYPTION_MODE parameter, you must also use either the ENCRYPTION or ENCRYPTION_PASSWORD parameter. Otherwise, an error is returned.

To use DUAL or TRANSPARENT mode, the COMPATIBLE initialization parameter must be set to at least 11.1.

The default mode depends on which other encryption-related parameters are used. If only ENCRYPTION is specified, then the default mode is TRANSPARENT. If ENCRYPTION_PASSWORD is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If ENCRYPTION_PASSWORD is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.

ENCRYPTION_PASSWORD

TEXT

Export and Import

For export operations, this parameter is required if ENCRYPTION_MODE is set to either PASSWORD or DUAL. It is also required for transportable export/import operations (job mode=FULL and TRANSPORTABLE=ALWAYS) when the database includes either encrypted tablespaces or tables with encrypted columns.

ESTIMATE

TEXT

Export and Import

Specifies the method to be used to estimate the size of the tables before starting the job.

If BLOCKS, a size estimate for the user tables is calculated using the count of blocks allocated to the user tables.

If STATISTICS, a size estimate for the user tables is calculated using the statistics associated with each table. If no statistics are available for a table, the size of the table is estimated using BLOCKS.

The ESTIMATE parameter cannot be used in Transportable Tablespace mode.

Default=BLOCKS

ESTIMATE_ONLY

NUMBER

Export

Specifies that only the estimation portion of an export job should be performed. This option is useful for estimating the size of dump files when the size of the export is unknown.

FLASHBACK_SCN

NUMBER

Export and network Import

System change number (SCN) to serve as a transactionally consistent point for reading user data. If neither FLASHBACK_SCN nor FLASHBACK_TIME is specified, there will be no transactional consistency between partitions, except for logical standby databases and Streams targets. FLASHBACK_SCN is not supported in Transportable mode.

For FLASHBACK_SCN, Data Pump supports the new 8-byte big SCNs used in Oracle Database 12c release 2 (12.2).

FLASHBACK_TIME

TEXT

Export and network Import

Either the date and time used to determine a consistent point for reading user data or a string of the form TO_TIMESTAMP(...).

If neither FLASHBACK_SCN nor FLASHBACK_TIME is specified, there will be no transactional consistency between partitions.

FLASHBACK_SCN and FLASHBACK_TIME cannot both be specified for the same job. FLASHBACK_TIME is not supported in Transportable mode.

INCLUDE_METADATA

NUMBER

Export and Import

If nonzero, metadata for objects will be moved in addition to user table data.

If zero, metadata for objects will not be moved. This parameter converts an Export operation into an unload of user data and an Import operation into a load of user data.

INCLUDE_METADATA is not supported in Transportable mode.

Default=1

KEEP_MASTER

NUMBER

Export and Import

Specifies whether the master table should be deleted or retained at the end of a Data Pump job that completes successfully. The master table is automatically retained for jobs that do not complete successfully.

Default=0.

LOGTIME

TEXT

Export and Import

Specifies that messages displayed during export and import operations be timestamped. Valid options are as follows:

NONE--No timestamps on status or log file messages (this is the default)

STATUS--Timestamps on status messages only

LOGFILE--Timestamps on log file messages only

ALL--Timestamps on both status and log file messages

MASTER_ONLY

NUMBER

Import

Indicates whether to import just the master table and then stop the job so that the contents of the master table can be examined.

Default=0.

METRICS

NUMBER

Export and Import

Indicates whether additional information about the job should be reported to the Data Pump log file.

Default=0.

PARTITION_OPTIONS

TEXT

Import

Specifies how partitioned tables should be handled during an import operation. The options are as follows:

NONE means that partitioning is reproduced on the target database as it existed in the source database.

DEPARTITION means that each partition or subpartition that contains storage in the job is reproduced as a separate unpartitioned table. Intermediate partitions that are subpartitioned are not re-created (although their subpartitions are converted into tables). The names of the resulting tables are system-generated from the original table names and partition names unless the name is overridden by the REMAP_TABLE metadata transform.

MERGE means that each partitioned table is re-created in the target database as an unpartitioned table. The data from all of the source partitions is merged into a single storage segment. This option is not supported for transportable jobs or when the TRANSPORTABLE parameter is set to ALWAYS.

This parameter requires a job version of 11.1 or later.

Default=NONE

REUSE_DATAFILES

NUMBER

Import

Specifies whether the import job should reuse existing data files for tablespace creation.

Default=0.

SKIP_UNUSABLE_INDEXES

NUMBER

Import

If nonzero, rows will be inserted into tables having unusable indexes. SKIP_UNUSABLE_INDEXES is not supported in Transportable mode.

Default=1

SOURCE_EDITION

TEXT

Export and network Import

The application edition that will be used for determining the objects that will be unloaded for export and for network import.

STREAMS_CONFIGURATION

NUMBER

Import

Specifies whether to import any Streams metadata that may be present in the export dump file.

Default=1.

TABLE_EXISTS_ACTION

TEXT

Import

Specifies the action to be performed when data is loaded into a preexisting table. The possible actions are: TRUNCATE, REPLACE, APPEND, and SKIP.

If INCLUDE_METADATA=0, only TRUNCATE and APPEND are supported.

If TRUNCATE, rows are removed from a preexisting table before inserting rows from the Import.

Note that if TRUNCATE is specified on tables referenced by foreign key constraints, the TRUNCATE will be modified into a REPLACE.

If REPLACE, preexisting tables are replaced with new definitions. Before creating the new table, the old table is dropped.

If APPEND, new rows are added to the existing rows in the table.

If SKIP, the preexisting table is left unchanged.

TABLE_EXISTS_ACTION is not supported in Transportable mode.

The default is SKIP if metadata is included in the import. The default is APPEND if INCLUDE_METADATA is set to 0.

TABLESPACE_DATAFILE

TEXT

Import

Specifies the full file specification for a data file in the transportable tablespace set. TABLESPACE_DATAFILE is only valid for transportable mode imports.

TABLESPACE_DATAFILE can be specified multiple times, but the value specified for each occurrence must be different.

TARGET_EDITION

TEXT

Import

The application edition that will be used for determining where the objects will be loaded for import and for network import.

TRANSPORTABLE

TEXT

Export (and network import or full-mode import)

This option is for export operations done in table mode, and also for full-mode imports and network imports. It allows the data to be moved using transportable tablespaces.

In table mode, storage segments in the moved tablespaces that are not associated with the parent schemas (tables) will be reclaimed at import time. If individual partitions are selected in a table-mode job, only the tablespaces referenced by those partitions will be moved. During import, the moved partitions can only be reconstituted as tables by using the PARTITION_OPTIONS=DEPARTITION parameter.

Use of the TRANSPORTABLE parameter prohibits the subsequent import of the dump file into a database at a lower version or using different character sets. Additionally, the data files may need to be converted if the target database is on a different platform.

In table-mode, the TRANSPORTABLE parameter is not allowed if a network link is supplied on the OPEN call.

The possible values for this parameter are as follows:

ALWAYS - data is always moved by moving data files. This option is valid only for table mode and full mode.

NEVER - data files are never used for copying user data.

This parameter requires a job version of 11.1 or later.

This parameter requires a job version of 12.1 or later when the job mode is FULL.

Default=NEVER

TTS_FULL_CHECK

NUMBER

Export

If nonzero, verifies that a transportable tablespace set has no dependencies (specifically, IN pointers) on objects outside the set, and vice versa. Only valid for Transportable mode Exports.

Default=0

USER_METADATA

NUMBER

Export and network Import

For schema-mode operations, specifies that the metadata to re-create the users' schemas (for example, privilege grants to the exported schemas) should also be part of the operation if set to nonzero. Users must be privileged to explicitly set this parameter.

The USER_METADATA parameter cannot be used in Table, Tablespace, or Transportable Tablespace mode.

Default=1 if user has DATAPUMP_EXP_FULL_DATABASE role; 0 otherwise.
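A few of the parameters above, set through SET_PARAMETER as a hedged sketch; l_exp and l_imp are assumed export and import job handles still in the defining state:

```sql
-- Export: compress both data and metadata, and timestamp all log messages
DBMS_DATAPUMP.SET_PARAMETER(l_exp, 'COMPRESSION', 'ALL');
DBMS_DATAPUMP.SET_PARAMETER(l_exp, 'LOGTIME', 'ALL');

-- Export: read all data as of a single consistent point in time
DBMS_DATAPUMP.SET_PARAMETER(l_exp, 'FLASHBACK_TIME',
  'TO_TIMESTAMP(TO_CHAR(SYSDATE, ''YYYY-MM-DD HH24:MI:SS''), ''YYYY-MM-DD HH24:MI:SS'')');

-- Import: truncate preexisting tables before loading rows
DBMS_DATAPUMP.SET_PARAMETER(l_imp, 'TABLE_EXISTS_ACTION', 'TRUNCATE');

-- Import: keep loading past constraint violations (DATA_OPTIONS bitmask)
DBMS_DATAPUMP.SET_PARAMETER(l_imp, 'DATA_OPTIONS',
  DBMS_DATAPUMP.KU$_DATAOPT_SKIP_CONST_ERR);
```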

1.1.3 DATA_REMAP Procedure

This procedure specifies transformations to be applied to column data as it is exported from, or imported into, a database.

  • Syntax


DBMS_DATAPUMP.DATA_REMAP(
   handle          IN NUMBER,
   name            IN VARCHAR2,
   table_name      IN VARCHAR2,
   column          IN VARCHAR2,
   remap_function  IN VARCHAR2,
   schema          IN VARCHAR2 DEFAULT NULL);
  • Parameters


Parameter Description

handle

The handle of the current job. The current session must have previously attached to the handle through a call to an OPEN function.

name

The name of the remap

table_name

The table containing the column to be remapped

column

The name of the column to be remapped

remap_function

The meaning of remap_function is dependent upon the value of name. See Table 50-8 for a list of possible names.

schema

The schema containing the column to be remapped. If NULL, the remapping applies to all schemas moved in the job that contain the specified table.


  • Exceptions

INVALID_ARGVAL. The mode is transportable (which does not support data modifications), the job specified that no data be included, or an invalid remap name was supplied.

INVALID_OPERATION. Data remaps are only supported for Export and Import operations.

INVALID_STATE. The DATA_REMAP procedure was called after the job started (that is, it was not in the defining state).

NO_SUCH_JOB. The job handle is no longer valid.

  • Usage Notes

The DATA_REMAP procedure is only supported for Export and Import operations. It allows you to manipulate user data being exported or imported. The name of the remap determines the remap operation to be performed.

For export operations, you might wish to define a data remap to obscure sensitive data such as credit card numbers from a dump file, but leave the remainder of the data so that it can be read. To accomplish this, the remapping should convert each unique source number into a distinct generated number. So that the mapping is consistent across the dump file set, the same function should be called for every column that contains the credit card number.

For import operations, you might wish to define a data remap to reset the primary key when data is being merged into an existing table that contains colliding primary keys. A single remapping function should be provided for all columns defining or referencing the primary key to ensure that remapping is consistent.
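The credit-card scenario above can be sketched as follows; the function, table, column, and schema names are all hypothetical. The remap function must be deterministic so that the same card number maps to the same masked value everywhere in the dump file set:

```sql
-- Hypothetical masking function (names are illustrative, not from this article)
CREATE OR REPLACE FUNCTION mask_card(p_card VARCHAR2) RETURN VARCHAR2 IS
BEGIN
  -- Deterministic: identical inputs always yield identical outputs
  RETURN 'XXXX-XXXX-XXXX-' || SUBSTR(p_card, -4);
END;
/

-- While the job is still in the defining state (before START_JOB):
DBMS_DATAPUMP.DATA_REMAP(
  handle         => l_job,
  name           => 'COLUMN_FUNCTION',
  table_name     => 'ORDERS',
  column         => 'CARD_NO',
  remap_function => 'SCOTT.MASK_CARD',
  schema         => 'SCOTT');
```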

1.1.4 METADATA_REMAP Procedure

DBMS_DATAPUMP.METADATA_REMAP (
   handle      IN NUMBER,
   name        IN VARCHAR2,
   old_value   IN VARCHAR2,
   value       IN VARCHAR2,
   object_type IN VARCHAR2 DEFAULT NULL);
Name Datatype Object Type Meaning

REMAP_SCHEMA (TEXT; schema objects): Any schema object in the job that matches the object_type parameter and was located in the old_value schema will be moved to the value schema.

Privileged users can perform unrestricted schema remaps. Nonprivileged users can perform schema remaps only if their schema is the target schema of the remap. For example, SCOTT can remap BLAKE's objects to SCOTT, but SCOTT cannot remap SCOTT's objects to BLAKE.

REMAP_TABLESPACE (TEXT; TABLE, INDEX, ROLLBACK_SEGMENT, MATERIALIZED_VIEW, MATERIALIZED_VIEW_LOG, TABLE_SPACE): Any storage segment in the job that matches the object_type parameter and was located in the old_value tablespace will be relocated to the value tablespace.

REMAP_DATAFILE (TEXT; LIBRARY, TABLESPACE, DIRECTORY): If old_value and value are both full file specifications, then any data file reference in the job that matches the object_type parameter and that referenced the old_value data file will be redefined to use the value data file. If old_value and value are both directory paths, then any data file reference whose object path matches old_value will have its path substituted with value.

REMAP_TABLE (TEXT; TABLE): Any reference to a table in the job that matches the old_value table name will be replaced with the value table name. The old_value parameter may refer to a partition such as employees.low. This allows the user to specify names for tables constructed by the PARTITION_OPTIONS=DEPARTITION parameter.
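A hedged sketch of the two most common remaps; l_imp is assumed to be an import job handle still in the defining state, and the schema and tablespace names are illustrative:

```sql
-- Move every object exported from SCOTT into BLAKE ...
DBMS_DATAPUMP.METADATA_REMAP(l_imp, 'REMAP_SCHEMA', 'SCOTT', 'BLAKE');

-- ... and relocate its storage segments from tablespace USERS to USERS2
DBMS_DATAPUMP.METADATA_REMAP(l_imp, 'REMAP_TABLESPACE', 'USERS', 'USERS2');
```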

1.2 Usage Examples
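A minimal end-to-end sketch, assuming a directory object DATA_PUMP_DIR and a table SCOTT.EMP (both placeholder names): export a single table, then block until the job finishes.

```sql
SET SERVEROUTPUT ON

DECLARE
  l_job   NUMBER;
  l_state VARCHAR2(30);
BEGIN
  l_job := DBMS_DATAPUMP.OPEN('EXPORT', 'TABLE');

  DBMS_DATAPUMP.ADD_FILE(l_job, 'emp.dmp', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(l_job, 'emp.log', 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Narrow the table-mode job to SCOTT.EMP
  DBMS_DATAPUMP.METADATA_FILTER(l_job, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
  DBMS_DATAPUMP.METADATA_FILTER(l_job, 'NAME_EXPR',   '= ''EMP''');

  DBMS_DATAPUMP.START_JOB(l_job);

  -- Block until the job completes, then report its final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(l_job, l_state);
  DBMS_OUTPUT.PUT_LINE('Job finished: ' || l_state);
END;
/
```

On failure, the job can be inspected with GET_STATUS or re-attached with ATTACH.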