This was first published on https://blog.dbi-services.com/12c-online-datafile-move-and-ongoing-changes (2016-02-12)
Republishing here for new followers. The content relates to the versions available at the publication date.
Here is a long answer to a question on our forum about online move: what is the internal mechanism used to keep track of ongoing changes in a datafile while the copy is in progress? The short answer is that there is absolutely no lock, just a little synchronization and messaging between sessions, and a very small overhead while the move is running: sessions that write to a part of the file that has already been copied must write twice until the end of the operation. Let’s take an example.
I create a tablespace:
```
create tablespace TESTMOVE datafile '/tmp/TESTMOVE1.dbf' size 2000M;

Tablespace created.
```
Then I move it online:
```sql
alter database move datafile '/tmp/TESTMOVE1.dbf' to '/tmp/TESTMOVE2.dbf';
```
While it is running, I can see the progress from V$SESSION_LONGOPS:
```sql
select * from v$session_longops
 where (sid,serial#) in ( select sid,serial# from v$session where sid=sys_context('userenv','sid') )
 order by last_update_time desc;
```

Here is the result (sorry for the format: the HTML table from this ancient testcase has been converted to simple monospaced text):
```
SID                 : 17
SERIAL#             : 45
OPNAME              : Online data file move
TARGET              : 2
TARGET_DESC         : data file
SOFAR               : 2097152000
TOTALWORK           : 2097152000
UNITS               : bytes
START_TIME          : 05-jul-13 14:24:26
LAST_UPDATE_TIME    : 05-jul-13 14:26:55
TIME_REMAINING      : 0
ELAPSED_SECONDS     : 138
CONTEXT             : 0
MESSAGE             : Online data file move: data file 2: 2097152000 out of 2097152000 bytes done
USERNAME            : FRANCK
SQL_ADDRESS         : 000000007A68C930
SQL_HASH_VALUE      : 2874426962
SQL_ID              : 5qgz3pqpp8jkk
SQL_PLAN_HASH_VALUE : 0
SQL_EXEC_START      : 05-jul-13 14:24:26
SQL_EXEC_ID         : 16777216
```
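If you prefer a compact progress indicator, the SOFAR and TOTALWORK columns above can be combined into a percentage. This is just a convenience sketch built on the same V$SESSION_LONGOPS columns shown above:

```sql
-- progress of any running online datafile move, as a percentage
select opname, target,
       round(100*sofar/totalwork,1) as pct_done,
       time_remaining as seconds_left
  from v$session_longops
 where opname like 'Online data file move%'
   and totalwork > 0
   and sofar < totalwork;
```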
Now I’m creating a table in that tablespace and a procedure to generate some DML on it:
```
create table TEST (num,txt) tablespace TESTMOVE
 as select rownum,to_char( date'-4712-01-01'+rownum-1,'Jsp')
    from (select * from dual connect by level <=1000),(select * from dual connect by level <=1000)
    order by 2;

Table created.

create table TEST2 tablespace TESTMOVE as select * from TEST;

Table created.

create or replace procedure TESTACTIVITY as
begin
 commit;
 execute immediate 'truncate table TEST2 reuse storage';
 for i in 1..1 loop
  lock table TEST in exclusive mode;
  insert /*+ APPEND */ into TEST2 select -num,txt from TEST;
  commit;
  delete from TEST2;
  commit;
 end loop;
end;
/

Procedure created.
```

I’m doing an APPEND insert so that my session performs the writes itself (direct-path writes to the datafile).
I run my procedure and query my session statistics:
```
exec TESTACTIVITY;

PL/SQL procedure successfully completed.

select name,value from v$mystat join v$statname using(statistic#)
 where value>0 and name like '%physical%bytes%' order by 1;

NAME                                      VALUE
cell physical IO interconnect bytes   220946432
physical read bytes                   148291584
physical read total bytes             148291584
physical write bytes                   72654848
physical write total bytes             72654848
```
The procedure writes about 70 MB and reads about 140 MB.
Let’s run the datafile move in background:
```
variable job number
exec dbms_job.submit(:job,q'[ begin execute immediate 'set role all'; execute immediate q'(alter database move datafile '/tmp/TESTMOVE2.dbf' to '/tmp/TESTMOVE1.dbf' )'; end; ]');
commit;
exec dbms_lock.sleep(3);

PL/SQL procedure successfully completed.
```

If I query DBA_DATA_FILES, the file is not yet moved:
```
select * from dba_data_files where tablespace_name='TESTMOVE';

FILE_NAME          FILE_ID TABLESPACE_NAME      BYTES BLOCKS STATUS    RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
/tmp/TESTMOVE2.dbf      20 TESTMOVE        2097152000 256000 AVAILABLE           20 NO         0         0            0 2096103424      255872 ONLINE
```

which means that my queries will still read from the original location.
However, when I run my procedure again:
```
exec TESTACTIVITY;

PL/SQL procedure successfully completed.

select name,value from v$mystat join v$statname using(statistic#)
 where value>0 and name like '%physical%bytes%' order by 1;

NAME                                      VALUE
cell physical IO interconnect bytes   292413440
physical read bytes                   147103744
physical read total bytes             147103744
physical write bytes                   72654848
physical write total bytes            145309696
```
You see the difference: ‘physical write total bytes’ is double the 70 MB. The writes actually go to both files. This is how it works: during the move, all sessions read from the original file and write to the original file, which is consistent. In addition, when a write is done on a block that has already been copied to the destination, the session also writes that block to the destination. This is the double write. At the end, the destination file is consistent: all blocks have been copied and kept up to date.
Actually, I’ve run my procedure several times and seen the same statistics, and after several executions I get:
```
NAME                                      VALUE
cell physical IO interconnect bytes   249946112
physical read bytes                   146915328
physical read total bytes             147423232
physical write bytes                   72654848
physical write total bytes            102522880
```

and my online move finished during that time. Once the move is done, all sessions read from the new file and the double write is no longer needed.
At the end, the dictionary is updated to reference the new file, and the old one can be deleted:
```
select * from dba_data_files where tablespace_name='TESTMOVE';

FILE_NAME          FILE_ID TABLESPACE_NAME      BYTES BLOCKS STATUS    RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
/tmp/TESTMOVE1.dbf      20 TESTMOVE        2097152000 256000 AVAILABLE           20 NO         0         0            0 2096103424      255872 ONLINE
```
Note that when you trace, you see the write wait events on both files, but they have the same FILE_ID.
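To see those wait events yourself, you can enable SQL trace in the session that runs the DML while the move is in progress. This is just a generic sketch using the standard 10046 event; the trace file location and the exact wait lines depend on your environment:

```sql
-- enable extended SQL trace (level 8 includes wait events)
alter session set events '10046 trace name context forever, level 8';
exec TESTACTIVITY;
-- the trace file (in the diagnostic trace directory) then shows the
-- write wait events on both copies of the datafile, with the same FILE_ID
alter session set events '10046 trace name context off';
```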
In the alert.log you can see the different phases:
```
Moving datafile /tmp/TESTMOVE1.dbf (2) to /tmp/TESTMOVE2.dbf
```

From there, the move has created the second file and has signaled to the other sessions (including DBWR) that they must write to both files, and then the copy starts (‘db file sequential read’ and ‘db file single write’ in 1MB chunks).
Then, when all blocks are copied:
```
Move operation committed for file /tmp/TESTMOVE2.dbf
```

and the sessions can read and write on the new file:
```
Completed: alter database move datafile '/tmp/TESTMOVE1.dbf' to '/tmp/TESTMOVE2.dbf'
```
This is a very nice feature. Oracle introduced it for the ILM option, in order to move cold data to cheaper disks automatically (and automatic means that it must be online), but we can also use it to migrate to new storage, to ASM, or to balance datafiles across multiple filesystems.
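For example, a migration to ASM is just a matter of naming an ASM destination in the same statement. This is a hedged sketch: the ‘+DATA’ disk group is an assumption, not something from the test case above:

```sql
-- move the datafile into an ASM disk group (assumes a disk group named +DATA exists)
alter database move datafile '/tmp/TESTMOVE1.dbf' to '+DATA';
```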
It’s possible in Enterprise Edition without any option, and it’s my preferred 12c new feature for database administration. We start with it when teaching our 12c new features training. This year, because everybody is waiting for 12.2, we have not planned fixed dates for that workshop but deliver it on demand: http://www.dbi-services.com/trainings/oracle-12c-new-features-workshop/
Hello Franck,
It’s really a nice feature from Oracle and you demonstrated it in a very informative way. I have some queries on the same:
1. As you said, during the move the writes happen to both datafiles (old and new). Suppose I am moving the datafile to a different mount point and the old datafile’s mount point gets 100% full. What will happen to the move?
2. What will happen to the new file on the new mount point?
3. What information will be contained in the controlfile for this datafile move?
Hi, to follow up on your questions, I’ve tested the resize of a datafile while it is moving and it works without any problem: http://blog.dbi-services.com/12c-online-datafile-move-and-resize/ About the controlfile: the info is stored there, which makes the cleanup automatic in case of crash (except if you had to restore a controlfile backup). Regards, Franck.
Hi Saurabh, thanks for your feedback. I don’t think there is anything special if the source mount point is 100% full: the writes modify blocks that are already allocated, so no problem there. If we try to resize the file while it is being moved, we get an error (ORA-63000), but automatic resize may happen concurrently for autoextensible datafiles. In the controlfile, you have both files, but you see only the old one in ‘backup controlfile to trace’ until the move is completed. Regards, Franck.
Hi Franck,
Please clear up my confusion: when we do this online in Oracle 12c, without taking the tablespace or datafile offline, ‘alter database move’ works at the database level, but how does the OS-level renaming or relocation of the datafiles happen? Will the change happen at both the database and the OS level?
Thanks Anjan
Hi Anjan, it takes care of the OS-level move, and you also have the possibility to KEEP the old file.
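The KEEP clause mentioned here can be sketched as follows; it copies the datafile to the new location but leaves the old file on disk (the paths reuse the test case above):

```sql
-- move the datafile but keep the original file on disk
alter database move datafile '/tmp/TESTMOVE1.dbf' to '/tmp/TESTMOVE2.dbf' keep;
```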
Thanks a lot, Sir, for the quick response. I am very eager to learn every minute detail of DBA work. If I have any queries, I will post them.
Thanks Again.
Hi Franck ,
I need to know what the advantage of ‘sqlplus /nolog’ is and how it works. Can you please briefly explain this command? I tried to search for this and didn’t get proper information.
Thanks Anjan
Hi, it’s very simple: with /nolog, sqlplus starts without connecting immediately. You connect later with the “connect” command.
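For example (a generic sketch; a common motivation is to avoid putting credentials on the command line, which is not stated above):

```
$ sqlplus /nolog

SQL> connect / as sysdba
```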