Cleaning Up Failed Data Pump Jobs in DBA_DATAPUMP_JOBS

2023-10-29 15:48

This article describes how to clean up failed (orphaned) Data Pump jobs that show up in DBA_DATAPUMP_JOBS.

SYS_EXPORT_FULL_01 and SYS_EXPORT_FULL_02 are full database export jobs whose state is NOT RUNNING. NOT RUNNING normally means the job has only been stopped temporarily, but a job that has failed also shows up as NOT RUNNING.
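A quick way to tell a deliberately stopped job from a failed one is to try re-attaching to it (the job name and credentials below are only illustrative). A stopped job attaches normally and can then be resumed with START_JOB or stopped again with STOP_JOB; a failed or orphaned job typically refuses the attach with ORA-31626/ORA-31632, as shown in Remark 5 of the MOS note quoted below.

expdp system/<password> attach=SYSTEM.SYS_EXPORT_FULL_01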

SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;

Check the master table of each job:

SELECT o.status, o.object_id, o.object_type,
       o.owner || '.' || object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

These entries belong to jobs that were stopped some time ago (possibly long ago). Since such jobs will never be restarted, their master tables can safely be dropped:

drop table SYSTEM.SYS_EXPORT_SCHEMA_01;
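If the recycle bin is enabled, a plain DROP leaves a BIN$ copy of the master table behind (which is why the queries above filter on job_name NOT LIKE 'BIN$%'). A small follow-up sketch, using the same table name as above: either drop with the PURGE clause, or empty the recycle bin afterwards, as Step 6 of the MOS note below also does.

drop table SYSTEM.SYS_EXPORT_SCHEMA_01 purge;
-- or, after a plain DROP:
purge dba_recyclebin;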

The MOS (metalink) note "How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? (Doc ID 336014.1)" covers the procedure in detail:

APPLIES TO:
Oracle Database - Standard Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Enterprise Manager for Oracle Database - Version 10.1.0.2 to 12.1.0.7.0 [Release 10.1 to 12.1]
Oracle Database - Personal Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Information in this document applies to any platform.
***Checked for relevance on 29-Apr-2014***

GOAL
How to cleanup orphaned Data Pump jobs in DBA_DATAPUMP_JOBS?

SOLUTION
The jobs used in this example:
- Export job SCOTT.EXPDP_20051121 is a schema level export that is running
- Export job SCOTT.SYS_EXPORT_TABLE_01 is an orphaned table level export job
- Export job SCOTT.SYS_EXPORT_TABLE_02 is a table level export job that was stopped
- Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that is temporarily stopped

Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:

%sqlplus /nolog
CONNECT / as sysdba
SET lines 200 
COL owner_name FORMAT a10; 
COL job_name FORMAT a20 
COL state FORMAT a12
COL operation LIKE state 
COL job_mode LIKE state 
COL owner.object for a50

-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE     NOT RUNNING        0 
SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE     NOT RUNNING        0 
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0
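As an additional cross-check (a sketch that is not part of the note's own steps; it assumes SYSDBA access and uses the standard DBA_DATAPUMP_SESSIONS view), the sessions attached to each job can be listed explicitly; a stopped or orphaned job normally has none:

SELECT j.owner_name, j.job_name, j.state, s.saddr
  FROM dba_datapump_jobs j, dba_datapump_sessions s
 WHERE s.owner_name (+) = j.owner_name
   AND s.job_name (+) = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;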
Step 2. Ensure that the listed jobs in dba_datapump_jobs are not export/import Data Pump jobs that are active: the status should be 'NOT RUNNING'.

Step 3. Check with the job owner that a job with status 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job by SYSTEM is not a job that failed, but was deliberately paused with STOP_JOB.)

Step 4. Identify orphaned DataPump external tables. To do this, run the following as SYSDBA in SQL*Plus:

set linesize 200 trimspool on
set pagesize 2000
col owner form a30
col created form a25
col last_ddl_time form a25
col object_name form a30
col object_type form a25
select OWNER,OBJECT_NAME,OBJECT_TYPE, status,
to_char(CREATED,'dd-mon-yyyy hh24:mi:ss') created ,to_char(LAST_DDL_TIME , 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects
where object_name like 'ET$%'
/

select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE
from dba_external_tables
order by 1,2
/

Correlate the information from DBA_OBJECTS and DBA_EXTERNAL_TABLES above to identify the temporary external tables that belong to the orphaned DataPump jobs.
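Instead of correlating the two result sets by hand, a single query along these lines can be used (a sketch, not part of the note's own steps; run as SYSDBA):

select o.owner, o.object_name, o.created, x.default_directory_name
from dba_objects o, dba_external_tables x
where x.owner = o.owner
and x.table_name = o.object_name
and o.object_name like 'ET$%'
order by 1, 2;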
Drop the temporary external tables that belong to the orphaned DataPump job, e.g.:

SQL> drop table system.&1 purge;
Enter value for 1: ET$00654E1E0001
old 1: drop table system.&1 purge
new 1: drop table system.ET$00654E1E0001 purge

Step 5. Determine in SQL*Plus the related master tables:

-- locate Data Pump master tables:
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT 
------- ---------- ------------ ------------------------- 
VALID        85283 TABLE        SCOTT.EXPDP_20051121 
VALID        85215 TABLE        SCOTT.SYS_EXPORT_TABLE_02 
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

select table_name, owner from dba_external_tables;

Step 6. For jobs that were stopped in the past and won't be restarted anymore, delete the master table. E.g.:

DROP TABLE scott.sys_export_table_02;

-- For systems with recycle bin additionally run:
purge dba_recyclebin;

NOTE:
In case the table name is mixed case, you can get errors on the drop, e.g.:
SQL> drop table SYSTEM.impdp_schema_STGMDM_10202014_0;
drop table SYSTEM.impdp_schema_STGMDM_10202014_0*
ERROR at line 1:
ORA-00942: table or view does not exist

Because the table name has mixed case, try using these statements with double quotes around the table name, for instance:
drop table SYSTEM."impdp_SCHEMA_STGMDM_04102015_1";
drop table SYSTEM."impdp_schema_STGMDM_10202014_0";Step 7. Re-run the query on dba_datapump_jobs and dba_objects (step 1 and 4). If there are still jobs listed in dba_datapump_jobs, and these jobs do not have a master table anymore, cleanup the job while connected as the job owner. E.g.:CONNECT scott/tiger SET serveroutput on 
SET lines 100 
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01','SCOTT');
  DBMS_DATAPUMP.STOP_JOB(h1);
END;
/

Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:

CONNECT scott/tiger
SELECT * FROM user_datapump_jobs;

Step 8. Confirm that the job has been removed:

CONNECT / as sysdba 
SET lines 200  
COL owner_name FORMAT a10;  
COL job_name FORMAT a20  
COL state FORMAT a12  
COL operation LIKE state  
COL job_mode LIKE state  
COL owner.object for a50

-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED 
---------- ------------------- --------- --------- ----------- -------- 
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1 
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0

-- locate Data Pump master tables:
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT 
------- ---------- ------------ ------------------------- 
VALID        85283 TABLE        SCOTT.EXPDP_20051121 
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

Remarks:

1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry will be created, which has no relation to the old Data Pump jobs.

2. When starting a new Data Pump job with a system-generated name, the names of the existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

3. A Data Pump job is not the same as a job defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes. Data Pump jobs use a master process and worker process(es). If a Data Pump job is still temporarily stopped (STOP_JOB while in interactive command mode), the Data Pump job still exists in the database (status: NOT RUNNING), while the master and worker process(es) are stopped and do not exist anymore. The client can attach to the job at a later time and continue the job execution (START_JOB).

4. The possibility of corruption when the master table of an active Data Pump job is deleted depends on the type of Data Pump job.

4.a. If the job is an export job, corruption is unlikely, as the drop of the master table will only cause the Data Pump master and worker processes to abort. This situation is similar to aborting an export of the original export client.

4.b. If the job is an import job, the situation is different. When the master table is dropped, the Data Pump worker and master processes will abort. This will probably lead to an incomplete import: e.g. not all table data was imported, a table was imported incompletely, and indexes, views, etc. are missing. This situation is similar to aborting an import of the original import client.

The drop of the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), then dropping the master table afterwards will not cause any corruption.

5. Instead of the status 'NOT RUNNING', the status of a failed job could also be 'DEFINING'. When trying to attach to such a job, the attach fails with:

$ expdp system/manager attach=system.sys_export_schema_01

Export: Release 11.2.0.4.0 - Production on Tue Jan 27 10:14:27 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM
ORA-31632: master table "SYSTEM.SYS_EXPORT_SCHEMA_01" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
The steps to clean up these failed/orphaned jobs are the same as those mentioned above.
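To spot such leftovers up front, the Step 1 query can be extended with a filter along these lines (a sketch, not part of the note's own steps; the attached_sessions = 0 condition assumes no client is currently attached):

SELECT owner_name, job_name, state, attached_sessions
  FROM dba_datapump_jobs
 WHERE state IN ('DEFINING', 'NOT RUNNING')
   AND attached_sessions = 0
   AND job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;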

 

References

How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS ? (Doc ID 336014.1)

This concludes the walkthrough of cleaning up failed Data Pump jobs in DBA_DATAPUMP_JOBS; I hope it proves useful.



