The gains and pains of dbms_datapump. Part III
Imran Khan, a visitor of my blog, asked me how to export a schema using dbms_datapump. While preparing a small example for him, I once again came across an ugly detail of dbms_datapump which I didn’t mention in my earlier posts.
You might receive the following error message when you execute your code the second time.
DECLARE
*
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at line 42
If the error only shows up the second time you execute your code, then it is most likely the DBMS_DATAPUMP.ADD_FILE call that fails.
But what’s the reason? dbms_datapump fails to add the file because it already exists. Unfortunately, the error message “ORA-39001: invalid argument value” doesn’t help much in finding the problem. It makes sense that Oracle doesn’t simply overwrite an existing file, but they could at least raise an error message that points towards the actual problem.
So in the following code snippet it would be line 3 causing the problem. To solve it, simply delete the file which is used as the destination for the export. In my case it’s the file hr.dmp in the DATA_PUMP_DIR directory.
...
handle := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'export');
DBMS_DATAPUMP.ADD_FILE(handle,'hr.dmp','DATA_PUMP_DIR');
...
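Alternatively, if you are on Oracle 11g or later, you don’t have to delete the file at all: ADD_FILE accepts a reusefile parameter, and passing 1 tells Data Pump to overwrite an existing dump file. A minimal sketch (not part of my original example, so treat it as untested):

```sql
-- Overwrite an existing dump file instead of failing with ORA-39001.
-- reusefile is available from Oracle 11g onwards.
DBMS_DATAPUMP.ADD_FILE(
  handle    => handle,
  filename  => 'hr.dmp',
  directory => 'DATA_PUMP_DIR',
  filetype  => dbms_datapump.ku$_file_type_dump_file,
  reusefile => 1);
```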
If you write a logfile, and add it before the destination file, the problem is easy to find. The logfile will contain the following lines.
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/u00/app/oracle/admin/DAR112/dmp/hr.dmp"
ORA-27038: created file already exists
Additional information: 1
To write a logfile, you can add the following lines to your script, directly after the job creation.
SELECT 'datapump_'||to_char(SYSDATE,'YYYYMMDD')||'.log'
  INTO logfileName
  FROM dual;
DBMS_DATAPUMP.ADD_FILE (
  handle    => handle,
  filename  => logfileName,
  directory => 'DATA_PUMP_DIR',
  filetype  => dbms_datapump.ku$_file_type_log_file);
If you’re interested in more details about dbms_datapump, have a look at my other posts on the topic.
Hey, thanks for the great tutorials! It’s really hard to find constructive and well-structured information on this topic, and these posts are exactly that. It would be great to see a tutorial on how to import from an existing dump file into a specified schema, filtering by schema and table names (my case).
Mykolas
March 13, 2014 at 13:46
When I have a dump file, I would usually just use impdp instead of dbms_datapump.
If you have a dump file, you most probably copy it around with a shell script or trigger the import manually. In this case it is easier to use impdp.
I think dbms_datapump is most interesting if you use it over a database link and don’t involve anything outside the two databases.
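A minimal sketch of such an over-the-link import could look like this (the database link name SOURCE_DB and the schema name HR are assumptions for illustration):

```sql
DECLARE
  handle   NUMBER;
  jobState VARCHAR2(30);
BEGIN
  -- Open an IMPORT job that pulls the data directly over the database
  -- link SOURCE_DB, so no dump file is written on either side.
  handle := DBMS_DATAPUMP.OPEN('IMPORT', 'SCHEMA', 'SOURCE_DB');
  -- Restrict the job to the HR schema on the remote database.
  DBMS_DATAPUMP.METADATA_FILTER(handle, 'SCHEMA_EXPR', 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle, jobState);
  DBMS_DATAPUMP.DETACH(handle);
END;
/
```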
danirey
March 13, 2014 at 14:02
Hello Dani, thanks very much for your great tutorial! I also agree that it’s really hard to find effective and well-structured information on this topic, and more posts should be written exactly the way you did it. I used it a few days ago and finally (after trying another approach some years ago) was able to refresh some schemas through a database link between two Oracle instances (test from production), without dealing with physical datapump files.

I was able to add some enhancements, like support for parallelism and more detail in the exception handling, but it was not possible to update the statistics right after the data was imported (although it was possible to exclude statistics during the datapump import, of course). I also noted that the steps of this process only become visible at the end, even with “set serveroutput on” enabled. And I had some problems with a database trigger; after some research, I found I needed to disable it before using this procedure.

Is there any way to add a step like this (“exec dbms_stats.GATHER_SCHEMA_STATS(ownname => destinationSchema, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);”) to update the statistics of the destination schema after the datapump import? And how would you use this procedure to update more than one schema (which is possible through datapump configuration files)?
Sven Bleckwedel
June 16, 2020 at 02:17
Hi Sven,
Glad to hear that it was helpful.
You can combine multiple commands in one procedure (e.g. to gather statistics afterwards). You can make the source and target schema or database a parameter, then call the same code for multiple schemas. You might even put the parameters in a table and iterate over it.
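As a rough sketch of that idea (the table name schema_copy_config and the procedure name copy_schema are made up for illustration; any procedure wrapping the datapump logic would do):

```sql
-- Hypothetical driver block: read source/destination pairs from a
-- parameter table and run the same copy procedure for each of them.
BEGIN
  FOR cfg IN (SELECT source_schema, destination_schema
                FROM schema_copy_config)
  LOOP
    copy_schema(
      sourceSchema      => cfg.source_schema,
      destinationSchema => cfg.destination_schema);
  END LOOP;
END;
/
```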
Maybe my PL/SQL Basics article series is of help: https://danirey.wordpress.com/2009/06/06/plsql_basics_part1/
Cheers Dani
danirey
June 16, 2020 at 09:03
Hi Dani,
Thanks for your comments, but let me give an example of my problem, based on your own sample:
...
dbms_output.put_line('Define schema remapping');
dbms_datapump.metadata_remap(
  handle    => handle,
  name      => 'REMAP_SCHEMA',
  old_value => sourceSchema,
  value     => destinationSchema);
-- Showing what happens...
dbms_output.put_line('Excluding statistics during import.');
-- Filtering out the statistics of the source database...
dbms_datapump.metadata_filter(handle, 'EXCLUDE_PATH_EXPR', 'IN (''STATISTICS'')');
dbms_output.put_line('Setting parallelism to 4 threads.');
dbms_datapump.set_parallel(handle => handle, degree => 4);
dbms_output.put_line('Start datapump job');
dbms_datapump.start_job(handle);
dbms_output.put_line('Wait for job to finish');
dbms_datapump.wait_for_job(handle, jobState);
dbms_output.put_line('Job has completed successfully');
dbms_output.put_line('Final job state = ' || jobState);
dbms_datapump.detach(handle => handle);
--newPassword IN VARCHAR2,
--dbms_output.put_line('Set new password for destination schema '||destinationSchema);
--EXECUTE IMMEDIATE 'ALTER USER '||destinationSchema||' IDENTIFIED BY '||newPassword;
-- Updating the statistics, in the target database...
exec dbms_stats.GATHER_SCHEMA_STATS(ownname => destinationSchema, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
-- Showing what happens...
dbms_output.put_line('Statistics of schema '||destinationSchema||' updated, too.');
EXCEPTION
WHEN OTHERS THEN
...
When I tried to gather the statistics after the datapump import using the commands above, I got a lot of compilation errors, like these:
Warning: Procedure created with compilation errors.
SQL> show errors
Errors for PROCEDURE COPYSCHEMABLOG:
LINE/COL ERROR
-------- -----------------------------------------------------------------
82/8     PLS-00103: Encountered the symbol "DBMS_STATS" when expecting one
         of the following:
         := . ( @ % ;
         The symbol ":=" was substituted for "DBMS_STATS" to continue.
93/3     PLS-00103: Encountered the symbol "DECLARE" when expecting one of
         the following:
         ) , * & = - + at in is mod remainder not rem =>
         or != or ~= >= <= and or like like2
         like4 likec between || multiset member submultiset
112/4    PLS-00103: Encountered the symbol "end-of-file" when expecting
         one of the following:
         end not pragma final instantiable order overriding static
         member constructor map
So far I have not been able to fix this. I keep this procedure in the SYS schema, due to the inherent privileges, and granted execute on it to SYSTEM; that part worked well, as expected.
Please advise, when possible.
Thanks in advance,
Sven
Sven Bleckwedel
June 16, 2020 at 15:33
Hi Sven
I believe the “exec” in front of the “dbms_stats.GATHER_SCHEMA_STATS” has to be omitted.
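“exec” is SQL*Plus shorthand, not PL/SQL syntax; inside a PL/SQL block the statistics call is just a regular procedure call:

```sql
-- Inside PL/SQL: call the procedure directly, no "exec" prefix.
dbms_stats.gather_schema_stats(
  ownname          => destinationSchema,
  estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
```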
Cheers Dani
danirey
July 10, 2020 at 08:51