This was first published on https://blog.dbi-services.com/oracle-12cr2-pluggable-database-relocation (2016-11-11)
Republishing here for new followers. The content relates to the versions available at the publication date.
Here is, in my opinion, the most beautiful feature of the multitenant architecture. You know how I love Transportable Tablespaces. But here:
I am using Standard Edition on both source and target; no option is required for this:
SQL*Plus: Release 12.2.0.1.0 Production on Thu Nov 10 13:40:05 2016

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
On server opc1 I have a container database CDB1 with one pluggable database PDB1 where I create a new table:
23:40:20 (opc1)CDB1 SQL>alter session set container=PDB1;
Session altered.
23:40:20 (opc1)CDB1 SQL>create table DEMO as select current_timestamp insert_timestamp,instance_name from v$instance;
Table created.
23:40:21 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:21 (opc1)CDB1 SQL>select * from DEMO;

INSERT_TIMESTAMP                    INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
I’m in Oracle Public Cloud where tablespaces are encrypted. To ship a pluggable database I must export the keys. Here is the query to get them:
23:40:23 (opc1)CDB1 SQL>select key_id from v$encryption_keys where creator_pdbname='PDB1';

KEY_ID
------------------------------------------------------------------------------
AWlnBaUXG0/gv4evS9Ywu8EAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

And I can filter with this query to export it:
23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1')
*
ERROR at line 1:
ORA-28417: password-based keystore is not open

I can't do that with an auto-login wallet:
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER                          WALLET_TY
-------- -------------------------------------- ---------
FILE     /u01/app/oracle/admin/CDB1/tde_wallet/ AUTOLOGIN

Let's open the wallet with the password:
23:40:23 (opc1)CDB1 SQL>administer key management set keystore close;
keystore altered.
23:40:23 (opc1)CDB1 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:23 (opc1)CDB1 SQL>select wrl_type,wrl_parameter,wallet_type from v$encryption_wallet;

WRL_TYPE WRL_PARAMETER                          WALLET_TY
-------- -------------------------------------- ---------
FILE     /u01/app/oracle/admin/CDB1/tde_wallet/ PASSWORD

and re-try my export:
23:40:23 (opc1)CDB1 SQL>administer key management export encryption keys with secret "oracle" to '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d" with identifier in (select key_id from v$encryption_keys where creator_pdbname='PDB1');
keystore altered.

This file must be copied to the destination server. I did it with scp. You can also use dbms_file_transfer, as you will need a database link anyway for the remote clone.
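As a sketch of the dbms_file_transfer alternative: from the destination, the exported keystore can be pulled over a database link to the source. The TMP_DIR directory object is my assumption here (it must exist on both sides and map to /tmp), and keep in mind that DBMS_FILE_TRANSFER requires the file size to be a multiple of 512 bytes:

```sql
-- Run on the destination. TMP_DIR is a hypothetical directory object
-- pointing to /tmp on each server; CDB1 is the database link to the source.
create directory TMP_DIR as '/tmp';

begin
  dbms_file_transfer.get_file(
    source_directory_object      => 'TMP_DIR',
    source_file_name             => 'cdb2pdb1.p12',
    source_database              => 'CDB1',        -- DB link to the source
    destination_directory_object => 'TMP_DIR',
    destination_file_name        => 'cdb2pdb1.p12'
  );
end;
/
```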
On the destination server, I have a container database CDB2 with no user PDB yet (I'm limited to one PDB here without the multitenant option):
23:40:31 (opc2)CDB2 SQL>show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

I have to import the encryption key:
23:40:31 (opc2)CDB2 SQL>administer key management set keystore open identified by "Ach1z0#d";
keystore altered.
23:40:31 (opc2)CDB2 SQL>administer key management import encryption keys with secret "oracle" from '/tmp/cdb2pdb1.p12' identified by "Ach1z0#d";
keystore altered.

I'm now ready to relocate my PDB, as I'm sure I'll be able to open it on the destination.
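As a quick sanity check (a sketch, assuming the keystore is open on the destination), the imported key should now be visible in v$encryption_keys there, with the same KEY_ID as on the source:

```sql
-- On the destination, after the import: the key exported from PDB1
-- should be listed alongside any keys created locally.
select key_id, creator_pdbname from v$encryption_keys;
```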
The remote clone is done through a DB link. I have a TNS entry named CDB1:
23:40:31 (opc2)CDB2 SQL>select dbms_tns.resolve_tnsname('CDB1') from dual;

DBMS_TNS.RESOLVE_TNSNAME('CDB1')
--------------------------------------------------------------------------------
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=opc1)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE222.compute-opcoct.oraclecloud.internal)(USER=oracle))))

23:40:31 (opc2)CDB2 SQL>create database link CDB1 connect to C##DBA identified by oracle using 'CDB1';
Database link created.
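The link connects as a common user on the source. As a minimal sketch of how such a user could be created there (the exact privilege set is my assumption for this demo; at minimum it needs to connect and create a pluggable database):

```sql
-- On the source CDB root: a common user for the remote clone.
-- The granted privileges are an assumption, kept minimal for the demo.
create user C##DBA identified by oracle container=all;
grant create session, create pluggable database to C##DBA container=all;
```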
In order to show that the source doesn't have to be read-only, as it did in the previous release, I'm running the following inserts every 5 seconds:
23:40:44 (opc1)CDB1 SQL>commit;
Commit complete.
23:40:44 (opc1)CDB1 SQL>insert into DEMO select current_timestamp,instance_name from v$instance;
1 row created.
23:40:44 (opc1)CDB1 SQL>select * from DEMO;

INSERT_TIMESTAMP                    INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1

6 rows selected.
Here is the syntax. I need to provide the password of the source keystore. RELOCATE is the new clause: the source PDB will be relocated to the destination when the clone is opened.
23:40:48 (opc2)CDB2 SQL>create pluggable database PDB1 from PDB1@CDB1 keystore identified by "Ach1z0#d" relocate;
Pluggable database created.
23:41:08 (opc2)CDB2 SQL>

It took some time, shipping the datafiles through the DB link, but this is online. I was still inserting during this time:
23:41:04 (opc1)CDB1 SQL>select * from DEMO;

INSERT_TIMESTAMP                    INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1

10 rows selected.

Note that the source must be in ARCHIVELOG mode and use LOCAL UNDO to be able to do this, because synchronisation is done by media recovery when we open the clone.
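These two prerequisites can be checked quickly on the source before attempting the relocation (a sketch; LOCAL_UNDO_ENABLED is the 12.2 database property):

```sql
-- On the source: both prerequisites for an online relocation.
select log_mode from v$database;                  -- must be ARCHIVELOG

select property_value from database_properties
 where property_name = 'LOCAL_UNDO_ENABLED';      -- must be TRUE
```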
Now, the theory is that when we open the clone, DML is quiesced on the source during the recovery of the target, and sessions can continue on the target once it is opened.
23:41:09 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
alter pluggable database PDB1 open
*
ERROR at line 1:
ORA-00060: deadlock detected while waiting for resource
23:41:26 (opc2)CDB2 SQL>

Bad luck. Every time I tested this scenario, the first open after the relocate failed with a deadlock, and the session on the source crashed:
23:41:09 (opc1)CDB1 SQL>select * from DEMO;

INSERT_TIMESTAMP                    INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1
10-NOV-16 11.41.09.773300 PM +00:00 CDB1

11 rows selected.

23:41:14 (opc1)CDB1 SQL> commit;
ERROR:
ORA-03114: not connected to ORACLE
It’s a good occasion to look at the traces. We can see some messages about the recovery:
*** 2016-11-10T23:41:12.660402+00:00 (PDB1(3))
Media Recovery Log /u03/app/oracle/fast_recovery_area/CDB1/foreign_archivelog/PDB1/2016_11_10/o1_mf_1_24_2025109931_.arc
Log read is SYNCHRONOUS though disk_asynch_io is enabled!

These foreign archived logs are a new type of file that you will see in the FRA in 12.2.
So I lost my session on the source, but now if I try again it works:
23:42:20 (opc2)CDB2 SQL>alter pluggable database PDB1 open;
Pluggable database altered.
23:42:24 (opc2)CDB2 SQL>select * from DEMO;

INSERT_TIMESTAMP                    INSTANCE_NAME
----------------------------------- ----------------
10-NOV-16 11.40.20.902761 PM +00:00 CDB1
10-NOV-16 11.40.21.966815 PM +00:00 CDB1
10-NOV-16 11.40.29.136529 PM +00:00 CDB1
10-NOV-16 11.40.34.214467 PM +00:00 CDB1
10-NOV-16 11.40.39.304515 PM +00:00 CDB1
10-NOV-16 11.40.44.376796 PM +00:00 CDB1
10-NOV-16 11.40.49.454661 PM +00:00 CDB1
10-NOV-16 11.40.54.532699 PM +00:00 CDB1
10-NOV-16 11.40.59.614745 PM +00:00 CDB1
10-NOV-16 11.41.04.692784 PM +00:00 CDB1

10 rows selected.

All the inserts that were committed on the source are there. Even with this deadlock bug (SR 3-13618219421), it's the easiest and fastest way to migrate a database, with minimal downtime. Especially in Standard Edition, where transportable tablespace import is not enabled. Without the deadlock bug, the sessions on the source are supposed to keep running, only paused during the recovery, and then continue on the destination.
Hi Franck. Thank you for sharing this. Did you get this foreign_archivelog in the traces of the destination site? As I understand it, it is something that lets the destination PDB stay in sync with the source without read-only mode. Right?
Regards
Hi Maciej. Yes, the foreign archivelogs are those needed to recover the PDB, which was copied in a fuzzy state. They are on the destination, shipped from the source. Regards, Franck.