This was first published on https://blog.dbi-services.com/single-tenant-vs-non-cdb-no-reason-to-refuse-it (2016-04-09)
Republishing here for new followers. The content relates to the versions available at the publication date.
When the non-CDB architecture was declared deprecated, I was a bit upset, because multitenant with a lone PDB just looks like the overhead of three containers instead of one. But with experience I changed my mind. First, because the multitenant architecture brings some features that are available even without the option. And second, because this overhead is not a big problem.

Let's put numbers on that last point. A CDB has three containers: CDB$ROOT, PDB$SEED, and your pluggable database. Each one has SYSTEM and SYSAUX tablespaces. Even if the pluggable database's system tablespaces are smaller thanks to metadata links, it's still more datafiles and more space. There are also more processes.
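You can see the three containers from the root; a minimal example (the PDB name and output are illustrative):

SQL> select con_id, name from v$containers;

    CON_ID NAME
---------- ------------------------------
         1 CDB$ROOT
         2 PDB$SEED
         3 PDB1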
I've created a non-CDB and a CDB with the same configuration and identical virtual machines. Oracle Cloud Services is very nice for that.
Here is the storage after the database creation. First on the multitenant one:

[oracle@vicdb ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvdb3             25G   17G  7.7G  68% /
tmpfs                 7.3G     0  7.3G   0% /dev/shm
/dev/xvdb1            477M  148M  300M  34% /boot
[oracle@vicdb ~]$ du -xh /u01/app/oracle/oradata | sort -h
18M     /u01/app/oracle/oradata/CDB/controlfile
151M    /u01/app/oracle/oradata/CDB/onlinelog
741M    /u01/app/oracle/oradata/CDB/2F81FDFC05495272E053CE46C40ABDCF
741M    /u01/app/oracle/oradata/CDB/2F81FDFC05495272E053CE46C40ABDCF/datafile
2.4G    /u01/app/oracle/oradata/CDB/datafile
3.3G    /u01/app/oracle/oradata
3.3G    /u01/app/oracle/oradata/CDB

And on the non-CDB one:

[oracle@vinon ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvdb3             25G   15G  9.1G  62% /
tmpfs                 7.3G     0  7.3G   0% /dev/shm
/dev/xvdb1            477M  148M  300M  34% /boot
[oracle@vinon ~]$ du -xh /u01/app/oracle/oradata | sort -h
9.6M    /u01/app/oracle/oradata/NON/controlfile
151M    /u01/app/oracle/oradata/NON/onlinelog
1.6G    /u01/app/oracle/oradata/NON/datafile
1.8G    /u01/app/oracle/oradata
1.8G    /u01/app/oracle/oradata/NON

The CDB system needs an additional 1.4GB for the extra system tablespaces. If you think about it, this overhead is minimal when you compare it with the size of your database.
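You can get the same picture from inside the CDB rather than from the filesystem; a sketch, run as SYSDBA in CDB$ROOT:

SQL> select con_id, round(sum(bytes)/1024/1024) MB
  2  from cdb_data_files
  3  group by con_id
  4  order by con_id;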
That’s for storage. Let’s have a look at memory.
Here is the multitenant system:

top - 10:20:00 up 2:26, 2 users, load average: 0.00, 0.04, 0.13
Tasks: 164 total, 1 running, 163 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:  15138468k total, 14783080k used, 355388k free, 125580k buffers
Swap:  4194300k total,        0k used, 4194300k free, 13845468k cached

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
20854 oracle -2   0 4790m  19m  17m S  2.4  0.1  0:19.75 ora_vktm_cdb
20890 oracle 20   0 4810m 129m 112m S  0.1  0.9  0:01.70 ora_mmon_cdb
20870 oracle 20   0 4793m  25m  21m S  0.1  0.2  0:00.71 ora_dia0_cdb
20892 oracle 20   0 4791m  40m  38m S  0.1  0.3  0:00.69 ora_mmnl_cdb
21306 oracle 20   0 4791m  26m  24m S  0.0  0.2  0:00.03 ora_j000_cdb
20852 oracle 20   0 4790m  19m  17m S  0.0  0.1  0:00.20 ora_psp0_cdb
20876 oracle 20   0 4791m  51m  48m S  0.0  0.3  0:00.23 ora_ckpt_cdb
20957 oracle 20   0 4801m  69m  57m S  0.0  0.5  0:00.49 ora_cjq0_cdb
   12 root   RT   0     0    0    0 S  0.0  0.0  0:00.05 watchdog/1
20866 oracle 20   0 4793m  44m  40m S  0.0  0.3  0:00.27 ora_dbrm_cdb
20872 oracle 20   0 4800m  76m  67m S  0.0  0.5  0:00.21 ora_dbw0_cdb
20911 oracle 20   0 4790m  19m  17m S  0.0  0.1  0:00.04 ora_tt00_cdb
and the non-CDB one:

top - 10:21:04 up 2:10, 2 users, load average: 0.00, 0.01, 0.10
Tasks: 154 total, 1 running, 153 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.8%us, 0.6%sy, 0.2%ni, 96.5%id, 0.6%wa, 0.0%hi, 0.0%si, 0.3%st
Mem:  15138468k total, 12102812k used, 3035656k free, 167644k buffers
Swap:  4194300k total,        0k used, 4194300k free, 11260644k cached

  PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
21539 oracle -2   0 4790m  19m  17m S  2.0  0.1  0:32.69 ora_vktm_non
22095 oracle 20   0 15088 1164  856 R  2.0  0.0  0:00.01 top
    1 root   20   0 19408 1540 1232 S  0.0  0.0  0:00.69 init
    2 root   20   0     0    0    0 S  0.0  0.0  0:00.00 kthreadd
    3 root   20   0     0    0    0 S  0.0  0.0  0:00.05 ksoftirqd/0
    5 root    0 -20     0    0    0 S  0.0  0.0  0:00.00 kworker/0:0H
    6 root   20   0     0    0    0 S  0.0  0.0  0:00.00 kworker/u:0
    7 root    0 -20     0    0    0 S  0.0  0.0  0:00.00 kworker/u:0H
    8 root   RT   0     0    0    0 S  0.0  0.0  0:00.25 migration/0
    9 root   20   0     0    0    0 S  0.0  0.0  0:00.00 rcu_bh
   10 root   20   0     0    0    0 S  0.0  0.0  0:01.02 rcu_sched
With the same SGA sizing, the memory footprint is similar. And you don't need to set up a larger SGA for single-tenant: the buffer cache is the same (it depends on your data), the library cache is the same (it depends on your code), and the dictionary cache may be a bit larger, but it's still small. Basically, it runs exactly the same, except that objects have a container id (which is only one byte in 12.1).
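To convince yourself, you can compare V$SGAINFO on both instances; a minimal sketch:

SQL> select name, round(bytes/1024/1024) MB from v$sgainfo order by name;

With the same SGA_TARGET, the components come out nearly identical on the CDB and on the non-CDB.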
I think this is enough to clear up the myth that single-tenant has a big overhead over non-CDB.
For sure, it's a bit strange to have to store a PDB$SEED, which is used only to create new pluggable databases, when you cannot create additional pluggable databases. In single-tenant you will probably have one CDB with your own seed in read-only, and you can remote clone from it. And it's true that the multitenant architecture was implemented for the multitenant option. But there is no reason to refuse it. With minimal overhead, you can benefit from a lot of features made possible by the dictionary separation. Let's take a single example: in Standard Edition you can move a whole database physically by unplug/plug or remote clone. When you realize that transportable tablespaces have never been available in Standard Edition, you can see unplug/plug as a great enhancement for Standard Edition: easier than duplicate, and cross-version, cross-platform. Perfect for migrations, as the sketch below shows.
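Here is what such a migration looks like; a minimal sketch, assuming a PDB named PDB1 and illustrative paths (first on the source CDB, then on the target):

SQL> alter pluggable database PDB1 close immediate;
SQL> alter pluggable database PDB1 unplug into '/u01/app/oracle/PDB1.xml';
SQL> drop pluggable database PDB1 keep datafiles;
-- then, on the target CDB, once the XML file and the datafiles are accessible to it:
SQL> create pluggable database PDB1 using '/u01/app/oracle/PDB1.xml' nocopy;
SQL> alter pluggable database PDB1 open;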
Hi Franck,
Since you've done so many blog posts and presentations about the multitenant architecture (also at the upcoming OOW), and I've learned quite a lot from them, I'd like to share two test cases with PDBs which I think might be interesting and potentially problematic in production. I hope I can post this as a comment here (although the blog post is not directly related to multitenant and Data Guard), since I don't know how else I can let you know about it. If that's not OK, feel free to delete my comment. Maybe it's a known issue, although I haven't found any notes/bugs about it on MOS, and I'm planning to open an SR.
I actually came across these anomalies when testing the Data Guard configuration on a soon-to-be production system: one host with a primary and another with a physical standby database. However, the commands below were performed on a VM on my laptop, just to prove the concept, where both the primary and the physical standby run from the same Oracle Home, which is 12.1.0.2.160719 on x86_64 Linux. Since I want to keep the comment short, I'll try to write as concisely as possible. If you think it would be interesting to post more detailed output, please let me know.
Anomaly #1:
– create a PDB from seed in container "orcl2", unplug it and drop it with the "keep datafiles" option
– copy the unplugged PDB's datafiles to the directory where the standby instance "dg01orcl" will find them when they are plugged into the primary
– plug the PDB into container "orcl", which has a physical standby "dg01orcl", using:
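something along these lines (a sketch; the exact clauses may have differed):

SQL> create pluggable database testplug using '/tmp/test_plug.xml' nocopy;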
– the PDB gets successfully plugged into the primary. The standby is also OK: the PDB is in mount state (no Active Data Guard). Also, the standby's alert log shows that the datafiles are recognized.
– perform a switchover (using the Broker) to "dg01orcl" and then back to "orcl". So far everything is OK.
– unplug the PDB the same way as before, using "alter pluggable database testplug unplug into '/tmp/test_plug.xml';", and drop it with "drop pluggable database testplug keep datafiles"
– and here's the problem: on the primary the PDB is dropped, however on the standby it's not (checked as shown below). There's also no sign of any drop operation in the standby's alert log. If I try to plug the PDB back into the primary "orcl", I get the following error on the standby:

ORA-01537: cannot add file '/u01/app/oracle/oradata/orcl/testplug/system01.dbf' - file already part of database

What's interesting is that this problem (the PDB not getting deleted on the standby) doesn't occur if:
– I don't perform the switchover
– I perform the switchover, but the PDB in the "orcl" container is created from seed, and not plugged from another CDB.
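To see where the PDB still exists, I simply query V$PDBS on each site; a sketch:

SQL> select name, open_mode from v$pdbs;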
Anomaly #2:
– create a PDB from seed in the container "orcl". The container has a physical standby named "dg01orcl", where the PDB also gets automatically created.
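The creation itself is nothing special; a sketch (the admin user credentials are illustrative):

SQL> create pluggable database testplug admin user pdbadmin identified by oracle;
SQL> alter pluggable database testplug open;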
– unplug and drop the PDB with the "keep datafiles" option. The PDB is removed from the standby as well, which is also confirmed in its alert log.
– plug the PDB back into the same primary "orcl" container. The standby also recognizes the PDB. Everything is OK till now, and the datafiles of the primary PDB are on disk (listed with the query shown after this list).
– and now the anomaly: using the Broker, I perform a "switchover to dg01orcl" and the PDB's system and sysaux datafiles get physically deleted. The alert log from "orcl" reports the file deletion.
– if I try to perform a switchover back to "orcl", the instance of course reports the missing datafiles when starting.
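For reference, this is how I list the PDB's datafiles from the primary; a sketch:

SQL> select name from v$datafile
  2  where con_id = (select con_id from v$pdbs where name = 'TESTPLUG');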
And sorry for not formatting the output with fixed width fonts, not sure how to do it on this blog.
Regards, Jure Bratina
Hi Jure, that's very interesting, thanks. I observed some issues on test databases after a lot of testing involving PDB operations and switchovers, but I didn't take the time to reproduce them. Did you open an SR for that? I'll try to reproduce it, but I don't know when I'll have time for it. No problem for the formatting: I've added the <code> and </code> tags and it's fine. Regards, Franck.
Hi Franck,
> Did you open an SR for that?
Not yet, I'm planning to do that in the upcoming week. I'll report back with the response.
Regards, Jure