Cleaning Up Failed Data Pump Jobs in DBA_DATAPUMP_JOBS

2023-10-29 15:48

This article describes how to clean up failed Data Pump jobs listed in DBA_DATAPUMP_JOBS, and is intended as a practical reference for anyone who runs into the problem.

SYS_EXPORT_FULL_01 and SYS_EXPORT_FULL_02 are full database export jobs whose state is NOT RUNNING. NOT RUNNING normally means the job is only temporarily stopped, but a job that has failed is also shown as NOT RUNNING.

SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;
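To narrow the list down to likely orphans, a minimal sketch of the same query restricted to jobs that are NOT RUNNING and have no sessions attached (a deliberately paused job can also match, so review the result before acting on it):

-- likely orphaned jobs: not running and nobody attached (sketch)
SELECT owner_name, job_name, state, attached_sessions
  FROM dba_datapump_jobs
 WHERE state = 'NOT RUNNING'
   AND attached_sessions = 0
   AND job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;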

Check the master tables of these jobs:

SELECT o.status, o.object_id, o.object_type,
       o.owner || '.' || object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

These are jobs that were stopped in the past (possibly long ago). Since they will never be restarted, their master tables can simply be dropped.

drop table SYSTEM.SYS_EXPORT_SCHEMA_01;
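If there are many such jobs, a sketch that generates the DROP statements from dba_datapump_jobs (the filter on attached_sessions is an assumption to avoid touching jobs that are merely paused; review the generated statements before running any of them):

-- generate DROP statements for likely orphaned master tables (sketch)
SELECT 'DROP TABLE ' || owner_name || '.' || job_name || ';' AS drop_stmt
  FROM dba_datapump_jobs
 WHERE state = 'NOT RUNNING'
   AND attached_sessions = 0
   AND job_name NOT LIKE 'BIN$%';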

MOS (Metalink) article: How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? (Doc ID 336014.1)

APPLIES TO:
Oracle Database - Standard Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Enterprise Manager for Oracle Database - Version 10.1.0.2 to 12.1.0.7.0 [Release 10.1 to 12.1]
Oracle Database - Personal Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Information in this document applies to any platform.
***Checked for relevance on 29-Apr-2014***

GOAL
How to cleanup orphaned Data Pump jobs in DBA_DATAPUMP_JOBS?

SOLUTION
The jobs used in this example:
- Export job SCOTT.EXPDP_20051121 is a schema level export that is running
- Export job SCOTT.SYS_EXPORT_TABLE_01 is an orphaned table level export job
- Export job SCOTT.SYS_EXPORT_TABLE_02 is a table level export job that was stopped
- Export job SYSTEM.SYS_EXPORT_FULL_01 is a full database export job that is temporarily stopped

Step 1. Determine in SQL*Plus which Data Pump jobs exist in the database:

%sqlplus /nolog
CONNECT / as sysdba
SET lines 200 
COL owner_name FORMAT a10; 
COL job_name FORMAT a20 
COL state FORMAT a12
COL operation LIKE state 
COL job_mode LIKE state 
COL owner.object for a50

-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- --------
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1
SCOTT      SYS_EXPORT_TABLE_01 EXPORT    TABLE     NOT RUNNING        0 
SCOTT      SYS_EXPORT_TABLE_02 EXPORT    TABLE     NOT RUNNING        0 
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0
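Before deciding in Steps 2 and 3 below whether a NOT RUNNING job is really orphaned, it can help to see which jobs still have sessions attached. A minimal sketch joining DBA_DATAPUMP_SESSIONS to V$SESSION, run as SYSDBA:

-- show sessions currently attached to Data Pump jobs (sketch)
SELECT s.owner_name, s.job_name, v.sid, v.serial#, v.status
  FROM dba_datapump_sessions s, v$session v
 WHERE s.saddr = v.saddr
 ORDER BY 1, 2;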
Step 2. Ensure that the jobs listed in dba_datapump_jobs are not active export/import Data Pump jobs: their state should be 'NOT RUNNING'.

Step 3. Check with the job owner that a job with state 'NOT RUNNING' in dba_datapump_jobs is not an export/import Data Pump job that has been temporarily stopped, but is actually a job that failed. (E.g. the full database export job owned by SYSTEM is not a failed job; it was deliberately paused with STOP_JOB.)

Step 4. Identify orphaned Data Pump external tables. To do this, run the following as SYSDBA in SQL*Plus:

set linesize 200 trimspool on
set pagesize 2000
col owner form a30
col created form a25
col last_ddl_time form a25
col object_name form a30
col object_type form a25
select OWNER,OBJECT_NAME,OBJECT_TYPE, status,
to_char(CREATED,'dd-mon-yyyy hh24:mi:ss') created ,to_char(LAST_DDL_TIME , 'dd-mon-yyyy hh24:mi:ss') last_ddl_time
from dba_objects
where object_name like 'ET$%'
/

select owner, TABLE_NAME, DEFAULT_DIRECTORY_NAME, ACCESS_TYPE
from dba_external_tables
order by 1,2
/

Correlate the information from DBA_OBJECTS and DBA_EXTERNAL_TABLES above to identify the temporary external tables that belong to the orphaned Data Pump jobs.
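One possible way to do that correlation in a single query, sketched here as a join on owner and table name (it assumes the orphaned external tables follow the ET$ naming shown above):

-- correlate ET$ objects with their external table definitions (sketch)
select o.owner, o.object_name, x.default_directory_name, x.access_type
  from dba_objects o, dba_external_tables x
 where o.owner = x.owner
   and o.object_name = x.table_name
   and o.object_name like 'ET$%'
 order by 1, 2;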
Drop the temporary external tables that belong to the orphaned Data Pump job, e.g.:

SQL> drop table system.&1 purge;
Enter value for 1: ET$00654E1E0001
old 1: drop table system.&1 purge
new 1: drop table system.ET$00654E1E0001 purge

Step 5. Determine in SQL*Plus the related master tables:

-- locate Data Pump master tables:
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ ------------------------- 
VALID        85283 TABLE        SCOTT.EXPDP_20051121 
VALID        85215 TABLE        SCOTT.SYS_EXPORT_TABLE_02 
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

select table_name, owner from dba_external_tables;

Step 6. For jobs that were stopped in the past and won't be restarted anymore, delete the master table. E.g.:

DROP TABLE scott.sys_export_table_02;

-- For systems with recycle bin additionally run:
purge dba_recyclebin;

NOTE:
In case the table name is mixed case, you can get errors on the drop, e.g.:
SQL> drop table SYSTEM.impdp_schema_STGMDM_10202014_0;
drop table SYSTEM.impdp_schema_STGMDM_10202014_0*
ERROR at line 1:
ORA-00942: table or view does not exist

Because the table name has mixed case, try the statements with double quotes around the table name, for instance:
drop table SYSTEM."impdp_SCHEMA_STGMDM_04102015_1";
drop table SYSTEM."impdp_schema_STGMDM_10202014_0";Step 7. Re-run the query on dba_datapump_jobs and dba_objects (step 1 and 4). If there are still jobs listed in dba_datapump_jobs, and these jobs do not have a master table anymore, cleanup the job while connected as the job owner. E.g.:CONNECT scott/tiger SET serveroutput on 
SET lines 100 
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_TABLE_01', 'SCOTT');
  DBMS_DATAPUMP.STOP_JOB(h1);
END;
/

Note that after the call to the STOP_JOB procedure, it may take some time for the job to be removed. Query the view user_datapump_jobs to check whether the job has been removed:

CONNECT scott/tiger
SELECT * FROM user_datapump_jobs;

Step 8. Confirm that the job has been removed:

CONNECT / as sysdba
SET lines 200  
COL owner_name FORMAT a10;  
COL job_name FORMAT a20  
COL state FORMAT a12  
COL operation LIKE state  
COL job_mode LIKE state  
COL owner.object for a50

-- locate Data Pump jobs:
SELECT owner_name, job_name, rtrim(operation) "OPERATION",
       rtrim(job_mode) "JOB_MODE", state, attached_sessions
  FROM dba_datapump_jobs
 WHERE job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;

OWNER_NAME JOB_NAME            OPERATION JOB_MODE  STATE       ATTACHED
---------- ------------------- --------- --------- ----------- -------- 
SCOTT      EXPDP_20051121      EXPORT    SCHEMA    EXECUTING          1 
SYSTEM     SYS_EXPORT_FULL_01  EXPORT    FULL      NOT RUNNING        0

-- locate Data Pump master tables:
SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||object_name "OWNER.OBJECT"
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND j.job_name NOT LIKE 'BIN$%'
 ORDER BY 4, 2;

STATUS   OBJECT_ID OBJECT_TYPE  OWNER.OBJECT
------- ---------- ------------ ------------------------- 
VALID        85283 TABLE        SCOTT.EXPDP_20051121 
VALID        85162 TABLE        SYSTEM.SYS_EXPORT_FULL_01

Remarks:

1. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs. The view dba_datapump_jobs is based on gv$datapump_job, obj$, com$, and user$. It shows the Data Pump jobs that are still running, jobs for which the master table was kept in the database, and jobs that ended abnormally (the orphaned jobs). If a new Data Pump job is started, a new entry is created which has no relation to the old Data Pump jobs.

2. When a new Data Pump job is started with a system-generated name, the names of the existing Data Pump jobs in dba_datapump_jobs are checked in order to obtain a unique new system-generated job name. Naturally, there needs to be enough free space for the new master table to be created in the schema that started the new Data Pump job.

3. A Data Pump job is not the same as a job defined with DBMS_JOBS. Jobs created with DBMS_JOBS use their own processes; Data Pump jobs use a master process and worker process(es). If a Data Pump job is temporarily stopped (STOP_JOB while in interactive command mode), the job still exists in the database (state: NOT RUNNING), while the master and worker process(es) are stopped and no longer exist. The client can attach to the job at a later time and continue the job execution (START_JOB).

4. The possibility of corruption when the master table of an active Data Pump job is deleted depends on the kind of job.

4.a. If the job is an export job, corruption is unlikely, as dropping the master table only causes the Data Pump master and worker processes to abort. This is similar to aborting an export with the original export client.

4.b. If the job is an import job, the situation is different. When the master table is dropped, the Data Pump worker and master processes abort, which will probably leave an incomplete import: e.g. not all table data was imported, a table was imported incompletely, or indexes, views, etc. are missing. This is similar to aborting an import with the original import client.

Dropping the master table itself does not lead to any data dictionary corruption. If you keep the master table after the job completes (using the undocumented parameter KEEP_MASTER=Y), dropping the master table afterwards will not cause any corruption either.

5. Instead of the state 'NOT RUNNING', the state of a failed job could also be 'DEFINING'. Trying to attach to such a job fails with:

$ expdp system/manager attach=system.sys_export_schema_01

Export: Release 11.2.0.4.0 - Production on Tue Jan 27 10:14:27 2015
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job SYS_EXPORT_SCHEMA_01 for user SYSTEM
ORA-31632: master table "SYSTEM.SYS_EXPORT_SCHEMA_01" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
The steps to cleanup these failed/orphaned jobs are the same as mentioned above.
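To spot such failed jobs up front, a sketch of the Step 1 query extended to include the DEFINING state as well (the attached_sessions filter is an assumption; paused jobs can also match, so verify with the job owner as in Step 3):

-- jobs left in NOT RUNNING or DEFINING state with nobody attached (sketch)
SELECT owner_name, job_name, state, attached_sessions
  FROM dba_datapump_jobs
 WHERE state IN ('NOT RUNNING', 'DEFINING')
   AND attached_sessions = 0
   AND job_name NOT LIKE 'BIN$%'
 ORDER BY 1, 2;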

 

References

How To Cleanup Orphaned DataPump Jobs In DBA_DATAPUMP_JOBS? (Doc ID 336014.1)

That concludes this article on cleaning up failed Data Pump jobs in DBA_DATAPUMP_JOBS; hopefully it is useful.



http://www.chinasem.cn/article/301667
