This was first published on https://blog.dbi-services.com/pdb-snapshot-copy-for-continuous-integration-testing (2016-01-01)
Republishing here for new followers. The content relates to the versions available at the publication date.
How do you manage your continuous integration tests on the database? You need to restart from the same state for probably a hundred tests. Recreating the schema and test data takes too long: Oracle is not optimized for DDL. Restoring the database takes too long: even if your data is small, there are those SYSTEM and SYSAUX tablespaces. Flashback database can help, but it still takes time because it requires restarting the instance. Let's see how multitenant can help here.
Flashback database is a good solution, and I've implemented it for continuous integration tests. It seems that having thousands of incarnations is not a problem (at least for a test database), and in current versions I've not seen many bugs where flashback logs or archived logs fail to become obsolete. But there is a problem: you need to close the database, which means you need to restart the instance to open it again. This takes time. There are a lot of processes in current versions, and Linux is not optimized for process creation.
But now multitenant is there. 12c Multitenant is a major change in Oracle architecture. If you don't share my opinion about it, just look at the first paragraph of the architecture overview in Tom Kyte's 'Expert Oracle': an instance can mount and open at most one database in its life. This statement is not true anymore with multitenant: you can re-open a closed pluggable database very quickly, without having to restart the instance. This makes flashback database a very nice solution for our problem, as it can be done in a few seconds. But for the moment, in 12.1, the problem is that you cannot do flashback database at the PDB level. So we need a workaround.
Here is the simplest workaround: once you have your reference PDB with your test data in the state you want at the beginning of each test, you can make it read-only and then create new PDBs from it for the tests. However, I don't want to copy the SYSTEM and SYSAUX tablespaces each time, so I'll use thin provisioning.
Here is the simple command to do that:
SQL> create pluggable database PDBSNAP from PDB snapshot copy file_name_convert=('/PDB/','/PDBSNAP/');

but you need storage that allows thin provisioning, or you get the following error:
create pluggable database PDBSNAP from PDB snapshot copy file_name_convert=('/PDB/','/PDBSNAP/')
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file /u02/app/oracle/oradata/CDB/PDB/system01.dbf
ORA-17525: Database clone using storage snapshot not supported on file /u02/app/oracle/oradata/CDB/PDB/system01.dbf

The solution is:
$ oerr ora 17525
17525, 00000, "Database clone using storage snapshot not supported on file %s"
// *Cause: Cloning a database using storage level snapshot is not supported on the underlying storage.
// *Action: Use storage product that has snapshot based cloning support for Oracle.
But you can also do it without any special storage, as long as your filesystem supports sparse files. The original files are not touched, and only the modified blocks are written to the new files (it's copy-on-write).
In order to use that, you need to set the instance parameter clonedb=true (which requires an instance restart):
SQL> alter system set clonedb=true scope=spfile;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
SQL> startup
ORACLE instance started
...
Database mounted.
Database opened.
SQL> show parameter clonedb

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
clonedb                              boolean     TRUE

and then I can thin clone my reference PDB in a few seconds:
17:16:21 SQL> create pluggable database PDBSNAP from PDB snapshot copy file_name_convert=('/PDB/','/PDBSNAP/');

Pluggable database created.

17:16:31 SQL>

and if you look at the files, you see that the new ones are very small:
17:16:31 SQL> host du -ka /u02/app/oracle/oradata/CDB/PDB*/*
276488  /u02/app/oracle/oradata/CDB/PDB/system01.dbf
614412  /u02/app/oracle/oradata/CDB/PDB/sysaux01.dbf
1273612 /u02/app/oracle/oradata/CDB/PDB/example01.dbf
15368   /u02/app/oracle/oradata/CDB/PDB/SAMPLE_SCHEMA_users01.dbf
172     /u02/app/oracle/oradata/CDB/PDBSNAP/system01.dbf
16      /u02/app/oracle/oradata/CDB/PDBSNAP/sysaux01.dbf
16      /u02/app/oracle/oradata/CDB/PDBSNAP/example01.dbf
16      /u02/app/oracle/oradata/CDB/PDBSNAP/SAMPLE_SCHEMA_users01.dbf

Only 16k (two blocks) are different for the application datafiles. SYSTEM had a few more changes, but it's still only a few blocks. It took 10 seconds here on 1GB of datafiles, and it would still take 10 seconds on 100GB. That makes thin provisioning very fast.
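Those small du figures come from sparse files, and you can see the same copy-on-write economics outside of Oracle with a quick standalone test. This is just an illustration (the file path is arbitrary); it requires GNU coreutils and a filesystem with sparse-file support, such as ext4 or xfs:

```shell
# Create a file that is one big 100MB "hole": large apparent size,
# almost no disk blocks allocated.
f=$(mktemp /tmp/sparse_demo.XXXXXX)
truncate -s 100M "$f"

echo "apparent size: $(du -k --apparent-size "$f" | cut -f1) KB"
echo "allocated:     $(du -k "$f" | cut -f1) KB"

# Write two 8KB blocks somewhere in the middle: only those blocks get
# allocated, which is what happens to the snapshot-copy datafiles when
# a database block is modified in the clone.
dd if=/dev/urandom of="$f" bs=8192 seek=1000 count=2 conv=notrunc 2>/dev/null

echo "allocated after writing 16KB: $(du -k "$f" | cut -f1) KB"
rm -f "$f"
```

The apparent size stays at 100MB throughout; the allocation grows only by the blocks actually written, just like the PDBSNAP datafiles above.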
Then you open it and can use it, all within a few seconds:
17:16:31 SQL> alter pluggable database PDBSNAP open;

Pluggable database altered.

17:16:34 SQL> connect hr/hr@//localhost/PDBSNAP
Connected.

Do everything you want here, and then drop it, which again takes only a few seconds.
17:16:35 SQL> connect / as sysdba
Connected.
17:16:35 SQL> alter pluggable database PDBSNAP close;

Pluggable database altered.

17:16:36 SQL> drop pluggable database PDBSNAP including datafiles;

Pluggable database dropped.

17:16:37 SQL>

You can see that I dropped it 'including datafiles', and it drops only the sparse files:
17:16:37 SQL> host du -ka /u02/app/oracle/oradata/CDB/PDB*/*
276488  /u02/app/oracle/oradata/CDB/PDB/system01.dbf
614412  /u02/app/oracle/oradata/CDB/PDB/sysaux01.dbf
1273612 /u02/app/oracle/oradata/CDB/PDB/example01.dbf
15368   /u02/app/oracle/oradata/CDB/PDB/SAMPLE_SCHEMA_users01.dbf

The files from my reference PDB are still there, ready for another provisioning.
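For a CI harness, the whole create/open/test/drop cycle can be scripted. Here is a minimal sketch that only templates the SQL*Plus commands, using the PDB names and paths from this demo (adapt them to your environment, and pipe the output into sqlplus / as sysdba in a real run):

```shell
#!/bin/sh
# Sketch of one CI test cycle against a thin clone of a read-only
# reference PDB. SRC_PDB, CLONE and the file_name_convert paths are
# illustrative, taken from the demo above.
SRC_PDB=${SRC_PDB:-PDB}
CLONE=${CLONE:-PDBSNAP}

emit_cycle_sql() {
cat <<EOF
create pluggable database $CLONE from $SRC_PDB snapshot copy
  file_name_convert=('/$SRC_PDB/','/$CLONE/');
alter pluggable database $CLONE open;
-- run the test suite against //localhost/$CLONE here
alter pluggable database $CLONE close;
drop pluggable database $CLONE including datafiles;
EOF
}

# Print the generated script; in a real harness you would run:
#   emit_cycle_sql | sqlplus -s / as sysdba
emit_cycle_sql
```

Since every cycle starts from the same read-only reference PDB, each test run sees exactly the same initial state.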
CREATE PLUGGABLE DATABASE … SNAPSHOT COPY is a good alternative to flashback database here. However, keep in mind that it requires more than one PDB in your CDB, which means you need the multitenant option. You may find that expensive, but think about what it brings you: you can provision multiple test environments and run your tests in parallel, you save a lot of storage, you can provision one database for each developer without a big overhead,…
The Oracle cloud service is a good solution for those environments. You probably don’t need the same number of databases for the whole application lifecycle. In the cloud you can provision those environments when a project needs it. And the cost of additional options is not very high there: the ‘high performance’ service price is 30% higher than the ‘enterprise edition’ one and brings nearly all options (except In-Memory and Active Data Guard). But take care: if you give that environment to your developers, they may use a lot of options that you don’t want to buy for production. Keep an eye on DBA_FEATURE_USAGE_STATISTICS.
If you don't want to pay for the multitenant option yet, you can have only one pluggable database (in addition to the seed). But you can still do a snapshot copy from a remote CDB through a database link, so you keep one PDB per CDB. It seems to take a few more seconds (spent in 'remote operation' and 'external table read', reading the opatch inventory from the remote database).
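For that remote variant, the statement is the same, with the source PDB qualified by a database link. A hedged sketch of the SQL involved (CDB1_LINK, c##clone_user, the host and the paths are all hypothetical names for illustration; the link must connect to the source CDB with a user privileged to clone the PDB):

```shell
#!/bin/sh
# Emit example SQL for a remote snapshot copy through a db link.
# All identifiers below (CDB1_LINK, c##clone_user, hosts, paths) are
# assumptions for the sketch, not taken from the article's demo.
emit_remote_sql() {
cat <<'EOF'
-- on the target CDB, with a link pointing at the source CDB:
create database link CDB1_LINK
  connect to c##clone_user identified by "secret"
  using '//source-host/CDB1';

create pluggable database PDBSNAP from PDB@CDB1_LINK snapshot copy
  file_name_convert=('/CDB1/PDB/','/CDB2/PDBSNAP/');
EOF
}

emit_remote_sql
```

With this, each CDB holds a single user PDB, which is the point when you want to stay out of the multitenant option.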
Even without the multitenant option, not having to restart the instance, and the possibility to easily use thin provisioning, are very good reasons to start looking at pluggable databases.
In a new blog post I’ve detailed another solution in single-tenant with only one instance, using dbms_dnfs.clonedb_renamefile: https://blog.dbi-services.com/pdb-snapshot-using-dbms_dnfs-clonedb_renamefile/
Hi Franck,
I’m aware it’s a little late to post a comment, but hopefully it might be useful to somebody. According to the documentation http://docs.oracle.com/database/121/SQLRF/statements_6010.htm#SQLRF56536 , one of the limitations of cloning PDBs on local file systems with CLONEDB initialization parameter set to TRUE, is that “the source PDB must remain in open read-only mode as long as any clones exist”.
While testing on Oracle 12.1.0.2 on RedHat 6.7, filesystem ext4, it seems that this restriction is enforced only when creating a clone PDB – if the source PDB is open read-write while creating the clone, the error “ORA-65081: database or pluggable database is not open in read only mode” is returned as expected. However, opening the source PDB in read-write mode after that doesn’t return any error. That is not to say that it should be used that way, only that it’s probably better to be careful not to open the source PDB in read-write mode “by mistake”. I don’t know what the consequences might be – from a short test I did, both the source PDB and the clone worked fine and there were no errors in the alert log, but I guess that’s not the way it’s meant to be used.
Short test case:
The documentation additionally says: “When you use the SNAPSHOT COPY clause to create a clone of a source PDB, the following restrictions apply to the source PDB as long as any clones exist: * It cannot be unplugged. * It cannot be dropped.” However:
Regards, Jure Bratina
Hi Jure, Thanks for your comment. So, from your test, nothing prevents doing bad things on the source, and that can lead to very nice corruptions. But I can understand that there is no check after the creation, as the files may come from another database. I’ll test it and maybe open an SR. Regards, Franck.
Interesting. I’m assuming this can only be done locally … meaning when people talk about a remote snapshot clone, remote is just a secondary node on the same underlying FS, in order to leverage the snapshot capability.
Also … I know Oracle has a plugin to OEM I think, or an independent app, to monitor subordinates of snapshot clones … wondering how hard it is to manage when you stand up many many clones … and then need to rebuild the master (and wipe away all the clones) after a certain amount of data COW changes.
Hi John, Here when I say ‘remote’ I mean from another CDB. Just because the whole purpose of this is for Standard Edition, where you can have only one PDB in a CDB. Regards, Franck.