Impact of INSERT /*+ APPEND */ NOLOGGING on Data Guard, and repairing standby (DG) synchronization

2024-03-09 17:36

This article looks at how INSERT /*+ APPEND */ operations on NOLOGGING objects affect Data Guard, and how to repair the resulting loss of synchronization on the standby.

Note: force logging itself does no harm to Data Guard, and it has little impact on redo log volume either.
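The current setting can be checked directly on the primary (a minimal check; FORCE_LOGGING in V$DATABASE reports YES/NO in 11g and a more detailed value in later releases):

SQL> select force_logging from v$database;

And to turn the protection back on once testing is finished:

SQL> alter database force logging;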

 


If the database has a physical standby, a NOLOGGING insert on the primary generates no redo, so nothing is shipped to or applied on the standby, and the standby's copy of that data is left corrupted.

On a primary that is part of a primary/standby configuration, switch off force logging, then create a table and set it to NOLOGGING:

SQL> alter database no force logging;

SQL> create table DEMO tablespace users pctfree 99 as select rownum n from xmltable('1 to 100');

SQL> alter table DEMO nologging;

Then insert data with the /*+ append */ hint and commit:

SQL> insert /*+ append */ into DEMO select rownum n from xmltable('1 to 1000');

SQL> commit;

At this point, querying the table on the standby returns the following error:

SQL> select count(1) from demo;

select count(1) from demo

                 *

ERROR at line 1:

ORA-01578: ORACLE data block corrupted (file # 4, block # 819)

ORA-01110: data file 4: '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf'

ORA-26040: Data block was loaded using the NOLOGGING option

1.1   11g
In Oracle 11g, this kind of problem is fixed by restoring the affected datafiles on the standby: the datafiles containing the missing data have to be copied from the primary to the physical standby.

1. Query the primary database

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                        UNRECOVERABLE_CHANGE#

-------------------------------------------- ---------------------

+DATADG/orcl/datafile/system.270.972381717                            0

+DATADG/orcl/datafile/sysaux.265.972381717                            0

+DATADG/orcl/datafile/undotbs1.261.972381717                            0

+DATADG/orcl/datafile/users.259.972381717                          6252054

+DATADG/orcl/datafile/example.264.972381807                            0

+DATADG/orcl/datafile/undotbs2.258.972381927                            0

+DATADG/orcl/datafile/example.266.972400297                            0

+DATADG/orcl/datafile/ax.268.973612569                                0

2. Query the standby database

sys@ORCL>SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;

NAME                                        UNRECOVERABLE_CHANGE#

--------------------------------------------- ---------------------

/data/data1/ORCL2/datafile/o1_mf_system_3dt1e9op_.dbf                       0

/data/data1/ORCL2/datafile/o1_mf_sysaux_3ct1e9nb_.dbf                       0

/data/data1/ORCL2/datafile/o1_mf_undotbs1_3gt1e9qq_.dbf                     0

/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf                     5383754

/data/data1/ORCL2/datafile/o1_mf_example_3et1e9ps_.dbf                      0

/data/data1/ORCL2/datafile/o1_mf_undotbs2_3ht1e9r1_.dbf                     0

/data/data1/ORCL2/datafile/o1_mf_example_3at1e9nb_.dbf                      0

/data/data1/ORCL2/datafile/o1_mf_ax_3bt1e9nb_.dbf                       0

3. Compare the query results from the primary and the standby

Compare the values of the UNRECOVERABLE_CHANGE# column in the two result sets. If the value on the primary is greater than the value for the same file on the standby, that datafile needs to be restored on the standby.
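To list only the affected files, the same query can be filtered on non-zero values (run it on both the primary and the standby):

SQL> SELECT FILE#, NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE WHERE UNRECOVERABLE_CHANGE# > 0;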

Copy the corresponding datafile from the primary to the standby:

SQL> alter tablespace users begin backup ;

SQL> exit

ASMCMD>cp +DATADG/orcl/datafile/users.259.972381717 /tmp

$ scp /tmp/users.259.972381717 10.10.60.123:/data/data1/ORCL2/datafile/

SQL> alter tablespace users end backup ;

On the standby, RENAME the old datafile to the newly copied one:

SQL> startup mount force

SQL> alter database recover managed standby database cancel;

SQL> alter system set standby_file_management=manual; # the parameter must be MANUAL while the RENAME is run on the standby

SQL> alter database  rename file '/data/data1/ORCL2/datafile/o1_mf_users_3ft1e9qb_.dbf' to '/data/data1/ORCL2/datafile/users.259.972381717';

SQL> alter system set standby_file_management=auto;

SQL> alter database recover managed standby database using current logfile disconnect from session;

After that, the example table DEMO can be queried on the standby:

SQL> select count(1) from demo;

  COUNT(1)

----------

      1100

 
1.2   12.1
For this situation, 12.1 adds a much more convenient option: instead of backing up the datafile on the primary and shipping it across, RMAN can run restore database (or datafile) from service on the standby and pull the restore directly from the primary over the network.

Oracle's RMAN is also smart about this: datafiles that are in a healthy state are skipped based on their datafile headers, while datafiles whose blocks were marked corrupt by the NOLOGGING operation are the ones that actually get restored. So what about the FORCE option the restore command offers? We usually do not need it: sometimes the datafiles are still fully in sync because real-time apply is still running, and in that case the first thing to do is simply to stop the apply.

Once apply has been stopped there is no need for RESTORE DATABASE FORCE, because the standby datafiles quickly become stale and RMAN will no longer skip them even without the FORCE option.

On the standby, stop real-time redo apply and restart the database into MOUNT state:

SQL> alter database recover managed standby database cancel;

SQL> shutdown immediate

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount

ORACLE instance started

Log in to RMAN on the standby and restore with restore database (or datafile) from service:

RMAN> restore database from service 'primary_db'; # primary_db is the TNS alias on the standby that points to the primary (even at the DATABASE level, in effect only the problem datafiles end up being restored?)
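If only a few files are affected, the restore can also be limited to those files (a sketch, reusing the same 'primary_db' alias and file 4 from the earlier ORA-01578 error):

RMAN> restore datafile 4 from service 'primary_db';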


Of course, remember to bring the database back up and restart real-time redo apply afterwards.
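A minimal sketch of that last step (the OPEN READ ONLY is only needed if Active Data Guard is in use; otherwise the standby can simply stay mounted):

SQL> alter database open read only;

SQL> alter database recover managed standby database using current logfile disconnect from session;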

1.3   12.2 
In 12.2 Oracle provides an even more convenient way to recover: the primary sends the list of nonlogged blocks to the standby, where it is recorded in the standby control file and exposed through the v$nonlogged_block view. There is no need to ship whole datafiles from the primary; a single RMAN command recovers the affected blocks:

RECOVER DATABASE NONLOGGED BLOCK
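Before running it, the affected block ranges can be inspected on the standby (a minimal query against the V$NONLOGGED_BLOCK view mentioned above):

SQL> select file#, block#, blocks from v$nonlogged_block;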

Stop real-time redo apply on the standby:

SQL> alter database recover managed standby database cancel;

Log in to RMAN on the standby and run:

RECOVER DATABASE NONLOGGED BLOCK

Note: before this step, make sure the log_archive_config parameter has been set on both the primary and the standby.
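A quick way to verify and, if necessary, set it (the DG_CONFIG value below is only an illustration; use your own db_unique_name list):

SQL> show parameter log_archive_config

SQL> alter system set log_archive_config='DG_CONFIG=(orcl,orcl_stby)' scope=both;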

RMAN> recover database nonlogged block;


Finally, do not forget to restart real-time redo apply.

In summary, the 12.2 feature is worth trying in scenarios such as data warehouses. In the past, enabling force logging generated large volumes of redo and slowed down some DML. With 12.2 we can use NOLOGGING operations to save time on large data loads and then repair the standby during off-peak hours. Be aware of the downside, though: in the meantime the availability of the standby is greatly reduced. Everything is a trade-off.

https://docs.oracle.com/cd/B28359_01/server.111/b28294/scenarios.htm#i1015738

13.4.2  Recovery Steps for Physical Standby Databases
When the archived redo log file is copied to the standby site and applied to the physical standby database, a portion of the datafile is unusable and is marked as being unrecoverable. When you either fail over to the physical standby database, or open the standby database for read-only access, and attempt to read the range of blocks that are marked as  UNRECOVERABLE , you will see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521)
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option
To recover after the  NOLOGGING  clause is specified, you need to copy the datafile that contains the missing redo data from the primary site to the physical standby site. Perform the following steps:

Step 1   Determine which datafiles should be copied.

Follow these steps:

Query the primary database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/tbs_1.dbf                                       5216
/oracle/dbs/tbs_2.dbf                                          0
/oracle/dbs/tbs_3.dbf                                          0
/oracle/dbs/tbs_4.dbf                                          0
4 rows selected.
Query the standby database:

SQL> SELECT NAME, UNRECOVERABLE_CHANGE# FROM V$DATAFILE;
NAME                                                  UNRECOVERABLE
----------------------------------------------------- -------------
/oracle/dbs/stdby/tbs_1.dbf                                 5186
/oracle/dbs/stdby/tbs_2.dbf                                    0
/oracle/dbs/stdby/tbs_3.dbf                                    0
/oracle/dbs/stdby/tbs_4.dbf                                    0
4 rows selected.
Compare the query results of the primary and standby databases.

Compare the value of the  UNRECOVERABLE_CHANGE#  column in both query results. If the value of the UNRECOVERABLE_CHANGE#  column in the primary database is greater than the same column in the standby database, then the datafile needs to be copied from the primary site to the standby site.

In this example, the value of the  UNRECOVERABLE_CHANGE#  in the primary database for the  tbs_1.dbf  datafile is greater, so you need to copy the  tbs_1.dbf  datafile to the standby site.

Step 2   On the primary site, back up the datafile you need to copy to the standby site.

Issue the following SQL statements:

SQL> ALTER TABLESPACE system BEGIN BACKUP;
SQL> EXIT;
% cp tbs_1.dbf /backup
SQL> ALTER TABLESPACE system END BACKUP;
Step 3   Copy the datafile to the standby database.

Copy the datafile that contains the missing redo data from the primary site to the location on the physical standby site where files related to recovery are stored.
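For example (host name and target path below are placeholders, reusing the datafile backed up in Step 2):

% scp /backup/tbs_1.dbf standby_host:/oracle/dbs/stdby/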

Step 4   On the standby database, restart Redo Apply.

Issue the following SQL statement:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
You might get the following error messages (possibly in the alert log) when you try to restart Redo Apply:

ORA-00308: cannot open archived log 'standby1'
ORA-27037: unable to obtain file status
SVR4 Error: 2: No such file or directory
Additional information: 3
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/oracle/dbs/stdby/tbs_1.dbf'
If you get the  ORA-00308  error and Redo Apply does not terminate automatically, you can cancel recovery by issuing the following statement from another terminal window:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
These error messages are returned when one or more log files in the archive gap have not been successfully applied. If you receive these errors, manually resolve the gaps, and repeat Step  4 . See  Section 6.3.3.1  for information about manually resolving an archive gap.

13.4.3  Determining If a Backup Is Required After Unrecoverable Operations
If you performed unrecoverable operations on your primary database, determine if a new backup operation is required by following these steps:

Query the  V$DATAFILE  view on the primary database to determine the  system change number (SCN)  or the time at which the Oracle database generated the most recent invalidated redo data.

Issue the following SQL statement on the primary database to determine if you need to perform another backup:

SELECT UNRECOVERABLE_CHANGE#, 
       TO_CHAR(UNRECOVERABLE_TIME, 'mm-dd-yyyy hh:mi:ss') 
FROM   V$DATAFILE;
If the query in the previous step reports an unrecoverable time for a datafile that is more recent than the time when the datafile was last backed up, then make another backup of the datafile in question.

See  Oracle Database Reference  for more information about the  V$DATAFILE  view.

https://docs.oracle.com/cd/B28359_01/server.111/b28294/manage_ls.htm#i1016645
 

10.5.5  Adding  or Re-Creating Tables On a Logical Standby Database
Typically, you use the  DBMS_LOGSTDBY.INSTANTIATE_TABLE  procedure to  re-create a table after an unrecoverable operation. You can also use this procedure to enable SQL Apply on a table that was formerly skipped.

Before you can create a table, it must meet the requirements described in  Section 4.1.2, "Ensure Table Rows in the Primary Database Can Be Uniquely Identified" . Then, you can use the following steps to re-create a table named  HR.EMPLOYEES  and resume SQL Apply. The directions assume that there is already a database link  BOSTON  defined to access the primary database.

The following list shows how to re-create a table and restart SQL Apply on that table:

Stop SQL Apply:

SQL> ALTER DATABASE STOP LOGICAL STANDBY APPLY;
Ensure no operations are being skipped for the table in question by querying the  DBA_LOGSTDBY_SKIP  view:

SQL> SELECT * FROM DBA_LOGSTDBY_SKIP;
ERROR  STATEMENT_OPT        OWNER          NAME                PROC
-----  -------------------  -------------  ----------------    -----
N      SCHEMA_DDL           HR             EMPLOYEES
N      DML                  HR             EMPLOYEES
N      SCHEMA_DDL           OE             TEST_ORDER
N      DML                  OE             TEST_ORDER
Because you already have skip rules associated with the table that you want to re-create on the logical standby database, you must first delete those rules. You can accomplish that by calling the  DBMS_LOGSTDBY.UNSKIP procedure. For example:

SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'DML', -
     schema_name => 'HR', -
     object_name => 'EMPLOYEES');
SQL> EXECUTE DBMS_LOGSTDBY.UNSKIP(stmt => 'SCHEMA_DDL', -
     schema_name => 'HR', -
     object_name => 'EMPLOYEES');
Re-create the table  HR.EMPLOYEES  with all its data in the logical standby database by using the DBMS_LOGSTDBY.INSTANTIATE_TABLE  procedure. For example:

SQL> EXECUTE DBMS_LOGSTDBY.INSTANTIATE_TABLE(schema_name => 'HR', -
     object_name => 'EMPLOYEES', -
     dblink => 'BOSTON');
Start SQL Apply:

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;
See Also:

Oracle Database PL/SQL Packages and Types Reference  for information about the DBMS_LOGSTDBY.UNSKIP  and the  DBMS_LOGSTDBY.INSTANTIATE_TABLE  procedures

To ensure a consistent view across the newly instantiated table and the rest of the database, wait for SQL Apply to catch up with the primary database before querying this table. You can do this by performing the following steps:

On the primary database, determine the current SCN by querying the  V$DATABASE  view:

SQL> SELECT CURRENT_SCN FROM V$DATABASE@BOSTON;
CURRENT_SCN
---------------------
345162788
Make sure SQL Apply has applied all transactions committed before the  CURRENT_SCN  returned in the previous query:

SQL> SELECT APPLIED_SCN FROM V$LOGSTDBY_PROGRESS;
APPLIED_SCN
--------------------------
345161345
When the  APPLIED_SCN  returned in this query is greater than the  CURRENT_SCN  returned in the first query, it is safe to query the newly re-created table.

Rolling a Standby Forward using an RMAN Incremental Backup To Fix The Nologging Changes (Doc ID 958181.1)
In this Document 

  Purpose
  Scope
  Details
  STEPS
  1. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been  applied to a small subset of the database:
   
   
  2. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a large portion of the database:
  References
 

APPLIES TO:
Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.2 [Release 10.2 to 12.1] 
Information in this document applies to any platform. 
***Checked for relevance on 16-July-2015***  
***Checked for relevance on 27-Oct-2016***

PURPOSE
This document describes a method of rolling forward a standby database using incremental backups to fix the ORA-1578 and ORA-26040 errors that were caused by a NOLOGGING/Unrecoverable operation.

SCOPE
When a segment is defined with the NOLOGGING attribute and if a NOLOGGING/UNRECOVERABLE operation updates the segment, the online redo log file is updated with minimal information to invalidate the affected blocks when a RECOVERY is later performed.

This kind of NOLOGGING/UNRECOVERABLE operation will mark the affected blocks as corrupt during media recovery on the standby database. Now, when you either activate the standby database, or open the standby database with the read-only option, and attempt to read the range of blocks that are marked as "UNRECOVERABLE," you see error messages similar to the following:

ORA-01578: ORACLE data block corrupted (file # 1, block # 2521) 
ORA-01110: data file 1: '/vobs/oracle/dbs/stdby/tbs_1.f' 
ORA-26040: Data block was loaded using the NOLOGGING option

In this article we will look at the steps to fix nologging changes that have been applied to a small subset of the database, and nologging changes that have been applied to a large portion of the database:

A look-alike procedure is documented in:

   Oracle® Data Guard Concepts and Administration 11g Release 1 (11.1) Part Number B28294-03

   Section 13.4 Recovering After the NOLOGGING Clause Is Specified

   http://download.oracle.com/docs/cd/B28359_01/server.111/b28294/scenarios.htm#i1015738

DETAILS
 

STEPS
 

1. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been  applied to a small subset of the database:
1. List the files that have had nologging changes applied by querying the V$DATAFILE view on the standby database. For example:

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

FILE#      FIRST_NONLOGGED_SCN 
---------- ------------------- 
         4              225979 
         5              230184


2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

3. On the standby database, offline the datafiles (recorded in step 1) that have had nologging changes. Taking these datafiles offline ensures redo data is not skipped for the corrupt blocks while the incremental backups are performed.

SQL> ALTER DATABASE DATAFILE 4 OFFLINE FOR DROP; 
SQL> ALTER DATABASE DATAFILE 5 OFFLINE FOR DROP;

4. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

5. While connected to the primary database as the RMAN target, create an incremental backup for each datafile listed in the FIRST_NONLOGGED_SCN column (recorded in step 1). For example:

RMAN> BACKUP INCREMENTAL FROM SCN 225979 DATAFILE 4 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY'; 
RMAN> BACKUP INCREMENTAL FROM SCN 230184 DATAFILE 5 FORMAT '/tmp/ForStandby_%U' TAG 'FOR STANDBY';

6. Transfer all backup sets created on the primary system to the standby system. (Note that there may be more than one backup file created.)

% scp /tmp/ForStandby_* standby:/tmp

7. While connected to the physical standby database as the RMAN target, catalog all incremental backup pieces. For example:

RMAN> CATALOG START WITH '/tmp/ForStandby_';

8. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

9. Online the datafiles on the standby database

SQL> ALTER DATABASE DATAFILE 4 ONLINE; 
SQL> ALTER DATABASE DATAFILE 5 ONLINE;

10. While connected to the physical standby database as the RMAN target, apply the incremental backup sets:

RMAN> RECOVER DATAFILE 4, 5 NOREDO;

11. Query the V$DATAFILE view on the standby database to verify there are no datafiles with nologged changes. The following query should return zero rows

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

12. Recreate the Standby Controlfile following:

Note 459411.1  Steps to recreate a Physical Standby Controlfile
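In outline, that note comes down to the following (a hedged sketch only; paths are placeholders and the note itself has the full, version-specific steps): create a fresh standby control file on the primary, copy it over every control file location configured on the standby while the standby is shut down, then mount the standby again.

SQL> alter database create standby controlfile as '/tmp/standby.ctl';

% scp /tmp/standby.ctl standby_host:/tmp/

With the standby shut down, copy /tmp/standby.ctl over each file listed in the standby's control_files parameter, then:

SQL> startup mount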

13. Remove the incremental backups from the standby system:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

14. Manually remove the incremental backups from the primary system. For example, the following example uses the Linux rm command:

% rm /tmp/ForStandby_*

15. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

 
Note: Starting from 12c we can use the RECOVER DATABASE ... FROM SERVICE clause in RMAN to generate, transfer and apply the incremental backup in a single step. Please refer to the document below for examples:

Note 1987763.1  ROLLING FORWARD A PHYSICAL STANDBY USING RECOVER FROM SERVICE COMMAND IN 12C 

 
 

2. Follow this step-by-step procedure to roll forward a physical standby database for which nologging changes have been applied to a large portion of the database:
1. Query the V$DATAFILE view on the standby database to record the lowest FIRST_NONLOGGED_SCN:

SQL> SELECT MIN(FIRST_NONLOGGED_SCN) FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN>0;

MIN(FIRST_NONLOGGED_SCN) 
------------------------ 
                  223948

2. Stop Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

3. While connected to the primary database as the RMAN target, create an incremental backup from the lowest FIRST_NONLOGGED_SCN (recorded in step 1):

RMAN> BACKUP INCREMENTAL FROM SCN 223948 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FOR STANDBY';

4. Transfer all backup sets created on the primary system to the standby system. (Note that more than one backup file may have been created.) The following example uses the scp command to copy the files:

% scp /tmp/ForStandby_* standby:/tmp

5. While connected to the standby database as the RMAN target, catalog all incremental backup pieces:

RMAN> CATALOG START WITH '/tmp/ForStandby_';

6. While connected to the standby database as the RMAN target, apply the incremental backups:

RMAN> RECOVER DATABASE NOREDO;

7. Query the V$DATAFILE view to verify there are no datafiles with nologged changes. The following query on the standby database should return zero rows:

SQL> SELECT FILE#, FIRST_NONLOGGED_SCN FROM V$DATAFILE WHERE FIRST_NONLOGGED_SCN > 0;

8. Recreate the Standby Controlfile following:

Note 459411.1  Steps to recreate a Physical Standby Controlfile

9. Remove the incremental backups from the standby system:

RMAN> DELETE BACKUP TAG 'FOR STANDBY';

10. Manually remove the incremental backups from the primary system. For example, the following removes the backups using the Linux rm command:

% rm /tmp/ForStandby_*

11. Start Redo Apply on the standby database:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

Note:

If the affected files belong to a READ ONLY tablespace, they will be ignored during the backup. To work around this, switch the tablespace on the primary from read only to read write and back to read only again:
 

SQL> alter tablespace <tablespace_name> read write ; 
SQL> alter tablespace <tablespace_name> read only ;

REFERENCES
NOTE:794505.1  - ORA-1578 / ORA-26040 Corrupt blocks by NOLOGGING - Error explanation and solution

Rolling Forward a Physical Standby Using Recover From Service Command in 12c (Doc ID 1987763.1)


 

APPLIES TO:
Oracle Database - Enterprise Edition - Version 12.1.0.1 and later 
Information in this document applies to any platform. 

GOAL
  Rolling Forward a Physical Standby Database Using the RECOVER FROM SERVICE Command

A standby database is a transactionally-consistent copy of the production database. It enables production Oracle database to survive disasters and data corruption. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch a standby database to the production role, minimizing the downtime associated with the outage. Moreover, performance of production database can be improved by offloading resource-intensive backup and reporting operations to standby systems. As you can see, it’s always desirable to have standby database synchronized with the primary database.

Prior to 12c, in order to roll forward the standby database using incremental backups you would need to:

Create a control file for the standby database on the primary database.

Take an incremental backup on the primary starting from the SCN# of the standby database.

Copy the incremental backup to the standby host and catalog it with RMAN.

Mount the standby database with newly created standby control file.

Cancel managed recovery of the standby database and apply incremental backup to the standby database.

Start managed recovery of standby database.

In 12c, this procedure has been dramatically simplified. Now you can use the RECOVER … FROM SERVICE command to synchronize the physical standby database with the primary database.  This command does the following:

Creates an incremental backup containing the changes to the primary database. All changes to data files on the primary database, beginning with the SCN in the standby data file header, are included in the incremental backup.

Transfers the incremental backup over the network to the physical standby database.

Applies the incremental backup to the physical standby database.

This results in rolling forward the standby data files to the same point-in-time as the primary. However, the standby control file still contains old SCN values which are lower than the SCN values in the standby data files. Therefore, to complete the synchronization of the physical standby database, the standby control file needs to be refreshed to update the SCN#.

SOLUTION
Steps to Refresh a Physical Standby Database with Changes Made to the Primary Database 

Environment: 

Primary Database: 
DB_UNIQUE_NAME: prim ( net service name 'PRIM') 

Standby Database: 
DB_UNIQUE_NAME:clone( net service name 'CLONE') 

Use the following steps to refresh the physical standby database with changes made to the primary database: 

Prerequisites : 
 

Oracle Net connectivity is established between the physical standby database and the primary database.

You can do this by adding an entry corresponding to the primary database in the tnsnames.ora file of the physical standby database.

The password files on the primary database and the physical standby database are the same.

The COMPATIBLE parameter in the initialization parameter file of the primary database and physical standby database is set to 12.0.

Start RMAN and connect as target to the physical standby database.

Check the current size of the primary database and compare it with the size of the standby: since the standby is behind, you need at least the difference in size available as free space on the standby side. If a datafile on the primary has autoextended, the incremental roll-forward will apply the changed blocks and grow the standby datafile so that it matches the primary.
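A simple way to compare is to total the datafile sizes on each side (run the same query on the primary and on the standby; the result is in GB):

SQL> select round(sum(bytes)/1024/1024/1024,1) as db_size_gb from v$datafile;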

1. Place the physical standby database in MOUNT mode.


    SHUTDOWN IMMEDIATE; 
    STARTUP MOUNT;

 2. Stop the managed recovery processes on the physical standby database.

 ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

NOTE:  If using broker, you will need to stop MRP through DGMGRL.  I.e.: 

DGMGRL> edit database '<Standby db_unique_name>' set STATE='APPLY-OFF' ;

3. Identify the datafiles on the standby database that are out of sync with respect to the primary.


Primary: 

SQL> select HXFIL File_num,substr(HXFNM,1,40),fhscn from x$kcvfh; 

  FILE_NUM SUBSTR(HXFNM,1,40)                       FHSCN 
---------- ---------------------------------------- ---------------- 
         1 /u01/app/oracle/oradata/prim/system01.db 1984501 
         3 /u01/app/oracle/oradata/prim/sysaux01.db 1984501 
         4 /u01/app/oracle/oradata/prim/undotbs01.d 1984501 
         5 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076 
         6 /u01/app/oracle/oradata/prim/users01.dbf 1984501 
         7 /u01/app/oracle/oradata/prim/pdbseed/sys  1733076 
         8 /u01/app/oracle/oradata/prim/pdb1/system 1984501 
         9 /u01/app/oracle/oradata/prim/pdb1/sysaux 1984501 
        10 /u01/app/oracle/oradata/prim/pdb1/pdb1_u 1984501 
        16 /u01/app/oracle/oradata/prim/pdb3/system 1984501 
        17 /u01/app/oracle/oradata/prim/pdb3/sysaux 1984501 
        18 /u01/app/oracle/oradata/prim/pdb3/pdb1_u 1984501 
        19 /u01/app/oracle/oradata/prim/pdb3/test.d 1984501 

13 rows selected. 

STANDBy: 

SQL>  select HXFIL File_num,substr(HXFNM,1,40),fhscn from x$kcvfh; 

  FILE_NUM SUBSTR(HXFNM,1,40)                       FHSCN 
---------- ---------------------------------------- ---------------- 
         1 /u01/app/oracle/oradata/clone/system01.d 1980995 
         3 /u01/app/oracle/oradata/clone/sysaux01.d 1980998 
         4 /u01/app/oracle/oradata/clone/undotbs01. 1981008 
         5 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076 
         6 /u01/app/oracle/oradata/clone/users01.db 1981012 
         7 /u01/app/oracle/oradata/clone/pdbseed/sy  1733076 
         8 /u01/app/oracle/oradata/clone/pdb1/syste 1981015 
         9 /u01/app/oracle/oradata/clone/pdb1/sysau 1981021 
        10 /u01/app/oracle/oradata/clone/pdb1/pdb1_ 1981028 
        16 /u01/app/oracle/oradata/clone/pdb3/syste 1981030 
        17 /u01/app/oracle/oradata/clone/pdb3/sysau 1981036 
        18 /u01/app/oracle/oradata/clone/pdb3/pdb1_ 1981043 
        19 /u01/app/oracle/oradata/clone/pdb3/test. 1981044 

13 rows selected. 

Checking the SCN in the datafile headers on the primary (prim) and the standby (clone), we note that while the SCNs of datafiles 5 and 7 match on primary and standby, for the rest of the datafiles (1, 3, 4, 6, 8, 9, 10, 16, 17, 18, 19) the standby is lagging behind the primary.

4. Note the current SCN of the physical standby database. This is required to determine, in a later step, if new data files were added to the primary database.

Query the V$DATABASE view to obtain the current SCN using the following command:

SELECT CURRENT_SCN FROM V$DATABASE;

5.  The RECOVER … FROM SERVICE command refreshes the standby data files and rolls them forward to the same point-in-time as the primary. 

[oracle@localhost ~]$ rman target/ 

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Mar 9 18:22:52 2015 

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved. 

connected to target database: PRIM (DBID=4165840403, not open)

RMAN> recover database from service prim noredo using compressed backupset;

  Log:


Starting recover at 09-MAR-15 
using target database control file instead of recovery catalog 
allocated channel: ORA_DISK_1 
channel ORA_DISK_1: SID=32 device type=DISK 
skipping datafile 5; already restored to SCN 1733076 
skipping datafile 7; already restored to SCN 1733076 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00001: /u01/app/oracle/oradata/clone/system01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00003: /u01/app/oracle/oradata/clone/sysaux01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00004: /u01/app/oracle/oradata/clone/undotbs01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00006: /u01/app/oracle/oradata/clone/users01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00008: /u01/app/oracle/oradata/clone/pdb1/system01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00009: /u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00010: /u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00016: /u01/app/oracle/oradata/clone/pdb3/system01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00017: /u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00018: /u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 
channel ORA_DISK_1: starting incremental datafile backup set restore 
channel ORA_DISK_1: using compressed network backup set from service prim 
destination for restore of datafile 00019: /u01/app/oracle/oradata/clone/pdb3/test.dbf 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02 

Finished recover at 09-MAR-15


6. Let's check the SCNs of the datafiles on the primary and the standby now.

Primary 
-------- 
SQL> select HXFIL File_num,substr(HXFNM,1,40),fhscn from x$kcvfh; 

  FILE_NUM SUBSTR(HXFNM,1,40)                       FHSCN 
---------- ---------------------------------------- ---------------- 
         1 /u01/app/oracle/oradata/prim/system01.db 1985174 
         3 /u01/app/oracle/oradata/prim/sysaux01.db 1985183 
         4 /u01/app/oracle/oradata/prim/undotbs01.d 1985194 
         5 /u01/app/oracle/oradata/prim/pdbseed/sys 1733076 
         6 /u01/app/oracle/oradata/prim/users01.dbf 1985203 
         7 /u01/app/oracle/oradata/prim/pdbseed/sys 1733076 
         8 /u01/app/oracle/oradata/prim/pdb1/system 1985206 
         9 /u01/app/oracle/oradata/prim/pdb1/sysaux 1985212 
        10 /u01/app/oracle/oradata/prim/pdb1/pdb1_u 1985218 
        16 /u01/app/oracle/oradata/prim/pdb3/system 1985221 
        17 /u01/app/oracle/oradata/prim/pdb3/sysaux 1985343 
        18 /u01/app/oracle/oradata/prim/pdb3/pdb1_u 1985350 
        19 /u01/app/oracle/oradata/prim/pdb3/test.d 1985354 

Standby 
-------- 
RMAN> select HXFIL File_num,substr(HXFNM,1,40),fhscn from x$kcvfh; 

  FILE_NUM SUBSTR(HXFNM,1,40)                       FHSCN 
---------- ---------------------------------------- ---------------- 
         1 /u01/app/oracle/oradata/clone/system01.d 1985174 
         3 /u01/app/oracle/oradata/clone/sysaux01.d 1985183 
         4 /u01/app/oracle/oradata/clone/undotbs01. 1985194 
         5 /u01/app/oracle/oradata/clone/pdbseed/sy 1733076 
         6 /u01/app/oracle/oradata/clone/users01.db 1985203 
         7 /u01/app/oracle/oradata/clone/pdbseed/sy 1733076 
         8 /u01/app/oracle/oradata/clone/pdb1/syste 1985206 
         9 /u01/app/oracle/oradata/clone/pdb1/sysau 1985212 
        10 /u01/app/oracle/oradata/clone/pdb1/pdb1_ 1985218 
        16 /u01/app/oracle/oradata/clone/pdb3/syste 1985221 
        17 /u01/app/oracle/oradata/clone/pdb3/sysau 1985343 
        18 /u01/app/oracle/oradata/clone/pdb3/pdb1_ 1985350 
        19 /u01/app/oracle/oradata/clone/pdb3/test. 1985354 

13 rows selected

From the above, we can see that the primary and standby SCNs now match.


However, the standby control file still contains old SCN values which are lower than the SCN values in the standby data files.  
Therefore, to complete the synchronization of the physical standby database, we must refresh the standby control file to update the SCN#. 

7. Use the following commands to shut down the standby database and then start it in NOMOUNT mode.


    SHUTDOWN IMMEDIATE; 
    STARTUP NOMOUNT;


8. Restore the standby control file by using the control file on the primary database using service prim.

The following command restores the control file on the physical standby database by using the primary database control file. 

RESTORE STANDBY CONTROLFILE FROM SERVICE <primary_tns_service>; 

RMAN> restore standby controlfile from service prim; 

Starting restore at 09-MAR-15 
allocated channel: ORA_DISK_1 
channel ORA_DISK_1: SID=20 device type=DISK 

channel ORA_DISK_1: starting datafile backup set restore 
channel ORA_DISK_1: using network backup set from service prim 
channel ORA_DISK_1: restoring control file 
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03 
output file name=/u01/app/oracle/oradata/clone/control01.ctl 
output file name=/u01/app/oracle/fast_recovery_area/clone/control02.ctl 
Finished restore at 09-MAR-15

After this step, the names of files in the standby control file are the names that were used in the primary database.

NOTE:  Depending on the configuration, the path and/or names of the standby datafiles after the standby controlfile refresh may be correct and thus steps #9 and #10 can be skipped.  

Mount the standby database using the following command: 

RMAN> alter database mount; 

Statement processed 
released channel: ORA_DISK_1 


RMAN> report schema; 

Starting implicit crosscheck backup at 09-MAR-15 
allocated channel: ORA_DISK_1 
channel ORA_DISK_1: SID=21 device type=DISK 
Crosschecked 9 objects 
Finished implicit crosscheck backup at 09-MAR-15 

Starting implicit crosscheck copy at 09-MAR-15 
using channel ORA_DISK_1 
Crosschecked 2 objects 
Finished implicit crosscheck copy at 09-MAR-15 

searching for all files in the recovery area 
cataloging files... 
cataloging done 

List of Cataloged Files 
======================= 
File Name: /u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_17_11q13dm8_.arc 
File Name: /u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_16_10q13dm8_.arc 
File Name: /u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_2_bhk1ctcz_.arc 
File Name: /u01/app/oracle/fast_recovery_area/CLONE/archivelog/2015_03_05/o1_mf_1_1_bhk17cw8_.arc 

RMAN-06139: WARNING: control file is not current for REPORT SCHEMA 
Report of database schema for database with db_unique_name CLONE 

List of Permanent Datafiles 
=========================== 
File Size(MB) Tablespace           RB segs Datafile Name 
---- -------- -------------------- ------- ------------------------ 
1    780      SYSTEM               ***     /u01/app/oracle/oradata/prim/system01.dbf 
3    730      SYSAUX               ***     /u01/app/oracle/oradata/prim/sysaux01.dbf 
4    90       UNDOTBS1             ***     /u01/app/oracle/oradata/prim/undotbs01.dbf 
5    250      PDB$SEED:SYSTEM      ***     /u01/app/oracle/oradata/prim/pdbseed/system01.dbf
6    5        USERS                ***     /u01/app/oracle/oradata/prim/users01.dbf 
7    590      PDB$SEED:SYSAUX      ***     /u01/app/oracle/oradata/prim/pdbseed/sysaux01.dbf
8    260      PDB1:SYSTEM          ***     /u01/app/oracle/oradata/prim/pdb1/system01.dbf 
9    620      PDB1:SYSAUX          ***     /u01/app/oracle/oradata/prim/pdb1/sysaux01.dbf 
10   5        PDB1:USERS           ***     /u01/app/oracle/oradata/prim/pdb1/pdb1_users01.dbf 
16   260      PDB3:SYSTEM          ***     /u01/app/oracle/oradata/prim/pdb3/system01.dbf 
17   620      PDB3:SYSAUX          ***     /u01/app/oracle/oradata/prim/pdb3/sysaux01.dbf 
18   5        PDB3:USERS           ***     /u01/app/oracle/oradata/prim/pdb3/pdb1_users01.dbf 
19   50       PDB3:TEST            ***     /u01/app/oracle/oradata/prim/pdb3/test.dbf 

List of Temporary Files 
======================= 
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name 
---- -------- -------------------- ----------- -------------------- 
1    60       TEMP                 32767       /u01/app/oracle/oradata/prim/temp01.dbf 
2    20       PDB$SEED:TEMP        32767       /u01/app/oracle/oradata/prim/pdbseed/pdbseed_temp01.dbf 
3    373      PDB1:TEMP            32767       /u01/app/oracle/oradata/prim/pdb1/temp01.dbf 
4    71       PDB3:TEMP            32767       /u01/app/oracle/oradata/prim/pdb3/temp01.dbf


9. Update the names of the data files and the temp files in the standby control file. 

   Use the CATALOG command and the SWITCH command to update all the data file names. 

        RMAN> catalog start with '<path where the actual standby datafile existed>'; 

        In this case 

        RMAN> Catalog start with '/u01/app/oracle/oradata/clone/'; 

searching for all files that match the pattern /u01/app/oracle/oradata/clone 

List of Files Unknown to the Database 
===================================== 
File Name: /u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb1/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdbseed/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/undotbs01.dbf 
File Name: /u01/app/oracle/oradata/clone/users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/test.dbf 

Do you really want to catalog the above files (enter YES or NO)? yes 
cataloging files... 
cataloging done 

List of Cataloged Files 
======================= 
File Name: /u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb1/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdbseed/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/undotbs01.dbf 
File Name: /u01/app/oracle/oradata/clone/users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/system01.dbf 
File Name: /u01/app/oracle/oradata/clone/pdb3/test.dbf

10. Switch to cataloged copy.

RMAN> SWITCH DATABASE TO COPY; 

datafile 1 switched to datafile copy "/u01/app/oracle/oradata/clone/system01.dbf" 
datafile 3 switched to datafile copy "/u01/app/oracle/oradata/clone/sysaux01.dbf" 
datafile 4 switched to datafile copy "/u01/app/oracle/oradata/clone/undotbs01.dbf" 
datafile 5 switched to datafile copy "/u01/app/oracle/oradata/clone/pdbseed/system01.dbf" 
datafile 6 switched to datafile copy "/u01/app/oracle/oradata/clone/users01.dbf" 
datafile 7 switched to datafile copy "/u01/app/oracle/oradata/clone/pdbseed/sysaux01.dbf" 
datafile 8 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb1/system01.dbf" 
datafile 9 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb1/sysaux01.dbf" 
datafile 10 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb1/pdb1_users01.dbf" 
datafile 16 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb3/system01.dbf" 
datafile 17 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb3/sysaux01.dbf" 
datafile 18 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb3/pdb1_users01.dbf" 
datafile 19 switched to datafile copy "/u01/app/oracle/oradata/clone/pdb3/test.dbf"


Here, /u01/app/oracle/oradata/clone is the location of the data files on the physical standby database. 
All data files must be stored in this location.

11. Use the current SCN returned in Step 4 to determine if new data files were added to the primary database since the standby database was last refreshed. If yes, these data files need to be restored on the standby from the primary database.

The following example assumes that the CURRENT_SCN returned in Step 4 is 1984232 and lists the data files that were created on the primary after the timestamp represented by this SCN:

SELECT file# FROM V$DATAFILE WHERE creation_change# >= 1984232;

If no files are returned in Step 11, then go to Step 13. If one or more files are returned in Step 11, then restore these data files from the primary database as in step 12.

12. If you are not connected to a recovery catalog, then use the following commands to restore data files that were added to the primary after the standby was last refreshed (assuming datafile 21 was added to the primary):

RUN
{
  SET NEWNAME FOR DATABASE TO '/u01/app/oracle/oradata/clone/%b';
  RESTORE DATAFILE 21 FROM SERVICE prim;
}

If you are connected to a recovery catalog, then use the following command to restore data files that were added to the primary after the standby was last refreshed (assuming data file 21 added to the primary):

RESTORE DATAFILE 21 FROM SERVICE prim;

13. Update the names of the online redo logs and standby redo logs in the standby control file using one of the following methods (a short sketch of both follows this list):

 - Use the ALTER DATABASE CLEAR command to clear the log files in all redo log groups of the standby database. RMAN then recreates all the standby redo logs and the online redo log files.

 Note:

Clearing log files is recommended only if the standby database does not have access to the online redo log files and standby redo log files of the primary database (for example, when the standby and primary are on the same server or use the same ASM disk group). If the standby database has access to the redo log files of the primary database and the redo log file names of the primary database are OMF names, then the ALTER DATABASE command will delete log files on the primary database.

 - Use the ALTER DATABASE RENAME FILE command to rename the redo log files. 
   Use a separate command to rename each log file. 

   To rename log files, the STANDBY_FILE_MANAGEMENT initialization parameter must be set to MANUAL.  
   Renaming log files is recommended when the number of online redo log files and standby redo log files is the same in the primary database and the physical standby database.
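A hedged sketch of both options (group numbers and file names below are only examples for this prim/clone environment):

SQL> alter database clear logfile group 1;
SQL> alter database clear logfile group 2;

or, with STANDBY_FILE_MANAGEMENT set to MANUAL, rename each file individually:

SQL> alter system set standby_file_management=manual;
SQL> alter database rename file '/u01/app/oracle/oradata/prim/redo01.log' to '/u01/app/oracle/oradata/clone/redo01.log';
SQL> alter system set standby_file_management=auto;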

   (Oracle Active Data Guard only) Perform the following steps to open the physical standby database:

On the primary database, switch the archived redo log files using the following command: 

       

ALTER SYSTEM ARCHIVE LOG CURRENT;

  
On the physical standby database, run the following commands:

        RECOVER DATABASE; 
        ALTER DATABASE OPEN READ ONLY;

     Start the managed recovery processes on the physical standby database by using the following command:


     ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

REFERENCES
NOTE:1646232.1  - ORA-19573 when trying to restore to standby with incremental backup From Primary or During any RMAN restore operation




http://www.chinasem.cn/article/791417
