So, shall we dig back in? The last few weeks at work I have had the need to use the Data Dictionary in new ways. As we have touched on before, you can use the Data Dictionary to manage large tasks and let the database do some of the work for you. And you can use SQL to write SQL. We will cover SQL Writing SQL first. A basic (and silly) example would be:
SELECT 'select dummy from dual;' FROM dual;
Your output here would be:
'SELECTDUMMYFROMDUAL;'
-----------------------
select dummy from dual;
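If you want to poke at the idea outside of Oracle, the same trick works in most SQL engines. Here is a minimal sketch using Python's sqlite3 module (SQLite has no DUAL table; a bare SELECT of a literal is allowed):

```python
import sqlite3

# Select a string literal whose value is itself a SQL statement.
conn = sqlite3.connect(":memory:")
generated = conn.execute("SELECT 'select 1;'").fetchone()[0]
print(generated)  # select 1;
```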
A more practical example would be this: let's say that we have a key/value table with some data in one schema and we want to put it in a different schema, but those schemas reside on different hosts. Here is our statement:
SELECT 'insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, '''
|| KEY
|| ''', '''
|| value
|| ''' from target_schema.my_source where description = ''My New Bad Ass my_source'';' mySqlWritingSql
FROM my_table
WHERE my_source_id = 1521;
What the above select statement returns is this:
MYSQLWRITINGSQL
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, 'key1', 'value1' from target_schema.my_source where description = 'My New Bad Ass my_source';
insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, 'key2', 'value2' from target_schema.my_source where description = 'My New Bad Ass my_source';
insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, 'key3', 'value3' from target_schema.my_source where description = 'My New Bad Ass my_source';
insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, 'key4', 'value4' from target_schema.my_source where description = 'My New Bad Ass my_source';
insert into my_table select seq_my_table_id.nextval, my_source.my_source_id, 'key5', 'value5' from target_schema.my_source where description = 'My New Bad Ass my_source';
As you can probably tell, the double pipe (||) is the concatenation operator in SQL. Each of the returned rows is a syntactically correct insert statement that we can run on the target host. When writing these for yourself, you can test an insert and then roll it back; just be sure your session is not autocommitting. Keep in mind that DDL statements (CREATE, ALTER, DROP, etc.) commit implicitly, while DML statements (INSERT, UPDATE, DELETE, etc.) do not.
Another recent scenario where this type of statement can help is this: I have been developing unit tests for some of our products, but one of them lacks the stored procedures to handle lookups and to insert and remove data, since we use a tool for this. At the database level, where my tests will be running, I need those procedures, so I have been developing a package that contains the add and delete procedures plus the functions to look up necessary values. I have also had to modify my schema to match this product's model, since I do the unit testing in my own schema. Over the years I have acquired some of the structures that I need, but not all. I could have dropped everything and pulled in the whole schema, but I thought it would be a fun exercise to pull just the delta. This touches on the Data Dictionary, so let me give you some background. You may remember some of this from an earlier post.
The Oracle Data Dictionary is a set of views owned by SYS that is automatically installed with Oracle. These are views into the objects that store all of the database users and all of the objects owned by those users. There are three types of views, and generally each view has a counterpart in the other two types. The types are:
USER_%
ALL_%
DBA_%
The distinction between these is as follows. The USER_% views deal only with objects owned by the logged-on user: if you are logged in as schema1, every object in the USER_% views is owned by schema1. The ALL_% views deal with all of the objects the logged-on user has access to see: if schema1 has granted SELECT to schema2 (a read-only user), then when you are logged in as schema2 you should be able to see some details about schema1's objects in the ALL_% views. Finally, the DBA_% views are for use by DBAs and those with higher levels of access; they show system information that most users do not require.
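SQLite has nothing as rich as the USER_%/ALL_%/DBA_% families, but its sqlite_master catalog gives the flavor of querying metadata about your own objects. A small sketch (table names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (a INTEGER)")
conn.execute("CREATE TABLE t2 (b TEXT)")

# sqlite_master is the catalog: one row per object, a (much smaller)
# cousin of the Oracle dictionary views.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
print(names)  # ['t1', 't2']
```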
So, let's take a look at a real world example. My schema and the project schema are co-located on the same host, which makes this a bit easier. I had a portion of the data model for a project (let's call it XYZ) in my schema and needed to isolate the objects I was missing. I wrote the following:
SELECT 'create table '
|| table_name
|| ' as select * from xyzstage.'
|| table_name
|| ' where 1 = 2;'
it
FROM all_tables
WHERE owner = 'XYZSTAGE'
AND table_name IN (SELECT table_name
FROM all_tables
WHERE owner = 'XYZSTAGE'
MINUS
SELECT table_name FROM user_tables);
This statement uses ALL_TABLES and USER_TABLES. I select the table name from ALL_TABLES where the owner is XYZSTAGE and subtract the tables that I already own. Then I use SQL writing SQL to generate the statements to create all of the table objects I am missing. The results look like this:
create table table1 as select * from xyzstage.table1 where 1 = 2;
create table table2 as select * from xyzstage.table2 where 1 = 2;
create table table3 as select * from xyzstage.table3 where 1 = 2;
create table table4 as select * from xyzstage.table4 where 1 = 2;
create table table5 as select * from xyzstage.table5 where 1 = 2;
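The same delta trick can be sketched portably: SQLite spells MINUS as EXCEPT, and an attached database can stand in for the second schema. All of the names below are invented for the illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS xyzstage")  # stand-in for the XYZSTAGE schema
conn.execute("CREATE TABLE xyzstage.table1 (a INTEGER)")
conn.execute("CREATE TABLE xyzstage.table2 (b TEXT)")
conn.execute("CREATE TABLE main.table1 (a INTEGER)")  # we already own table1

# Tables the stage schema has that we lack (MINUS is spelled EXCEPT here).
missing = [r[0] for r in conn.execute(
    "SELECT name FROM xyzstage.sqlite_master WHERE type = 'table' "
    "EXCEPT "
    "SELECT name FROM main.sqlite_master WHERE type = 'table'")]

# SQL writing SQL: generate the CREATE TABLE statements for the delta.
stmts = ["create table %s as select * from xyzstage.%s where 1 = 2;" % (n, n)
         for n in missing]
print(stmts)
```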
I added the where 1 = 2 clause so that I don't migrate any data; I really only need the seed data for the lookup tables, and I will acquire that later. This is just about grabbing the structures that are missing. Now my schema looks almost exactly like the XYZSTAGE schema. The one holdout I have found so far is a table we will call CONFIG. This table in XYZ has a counterpart in ABC, also called CONFIG. However, these were developed in different silos and diverged on column names. The CONFIG table in my schema is the one for ABC, so my solution is to append the columns from XYZ onto it. I use different Data Dictionary views for this:
SELECT 'alter table '
|| table_name
|| ' add '
|| column_name
|| ' '
|| data_type
|| DECODE (data_type,
'NUMBER', ';',
'DATE', ';',
'(' || data_length || ');')
it
FROM all_tab_cols
WHERE owner = 'XYZSTAGE'
AND table_name = 'CONFIG'
AND column_name IN
(SELECT column_name
FROM all_tab_cols
WHERE owner = 'XYZSTAGE' AND table_name = 'CONFIG'
MINUS
SELECT column_name
FROM user_tab_cols
WHERE table_name = 'CONFIG');
As we have discussed before, the || operator concatenates strings and column values to build a SQL statement. The DECODE function is a native SQL function that we have covered before but will revisit briefly: in this case, we look at the value of the DATA_TYPE column for each row and include the DATA_LENGTH information only for the types that need it. The dictionary views here are ALL_TAB_COLS and USER_TAB_COLS, which list column information for the tables the schema owner owns and for all of the other tables the schema owner has access to see. The results look like this:
alter table CONFIG add CREATE_DATE DATE;
alter table CONFIG add LAST_UPDATE DATE;
alter table CONFIG add LAST_UPDATE_ID NUMBER;
alter table CONFIG add CONFIG_TYPE VARCHAR2(64);
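DECODE is Oracle-specific, but its shape (match a value, fall through to a default) is easy to mimic. Here is a hedged Python sketch with invented column metadata standing in for ALL_TAB_COLS rows:

```python
# Invented (name, type, length) metadata for the illustration.
cols = [("CREATE_DATE", "DATE", None),
        ("LAST_UPDATE_ID", "NUMBER", None),
        ("CONFIG_TYPE", "VARCHAR2", 64)]

def add_column(table, name, data_type, length):
    # DECODE(data_type, 'NUMBER', ';', 'DATE', ';', '(' || data_length || ');')
    # rendered as a dict lookup with a default.
    suffix = {"NUMBER": ";", "DATE": ";"}.get(data_type, "(%s);" % length)
    return "alter table %s add %s %s%s" % (table, name, data_type, suffix)

for name, dtype, length in cols:
    print(add_column("CONFIG", name, dtype, length))
```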
You might be thinking: if the output of the statement is a set of executable statements, can you execute those executable statements from within an executable statement, in an executable sort of executable way? Executably. Here is an anonymous block I wrote to do that very thing, but for a different table. I had already run the above to alter the table, so I created CONFIG2 based off of the XYZSTAGE schema and ran the block below:
DECLARE
CURSOR col_cur
IS
SELECT 'alter table '
|| table_name
|| '2 add '
|| column_name
|| ' '
|| data_type
|| DECODE (data_type,
'NUMBER', ' ',
'DATE', ' ',
'(' || data_length || ')')
it
FROM all_tab_cols
WHERE owner = 'XYZSTAGE'
AND table_name = 'CONFIG'
AND column_name IN
(SELECT column_name
FROM all_tab_cols
WHERE owner = 'XYZSTAGE' AND table_name = 'CONFIG'
MINUS
SELECT column_name
FROM user_tab_cols
WHERE table_name = 'CONFIG2');
BEGIN
FOR fix_config2 IN col_cur
LOOP
EXECUTE IMMEDIATE fix_config2.it;
END LOOP;
END;
The purpose of this particular solution is to write and execute the fix at the same time. The real lesson here is how to wrap the SQL writing SQL statement in a cursor in an anonymous block and execute the results in a loop. The one caveat is that the semicolon cannot be part of the generated statement, which is why the DECODE above appends a space or a closing parenthesis instead. This involves a few things we haven't covered yet, or at least not recently.
First, an anonymous block is like an unstored, unnamed procedure. If you are just calling another procedure, you can write an anonymous block like BEGIN myproc(); END; and if you have to declare variables it looks like DECLARE var1 INTEGER := 1; BEGIN myproc(var1); END; or something along those lines.
Next, we have the cursor. Most of us know the cursor as the little blinky thing we see when typing, but in Oracle a cursor is a pointer to a private SQL area that stores information about the processing of a DML statement, or in this case a DDL statement. Here we access that area via a FOR loop, which goes like this: FOR <row_name> IN <cursor_name> LOOP <do your stuff> END LOOP; and in this case the stuff we are doing is running EXECUTE IMMEDIATE <row_name>.<cursor column alias>; for each returned row, which uses native dynamic SQL to execute the generated statement.
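The cursor-loop-plus-EXECUTE IMMEDIATE pattern can be sketched in Python too: fetch the generated statements, then feed each one straight back to the engine. This uses sqlite3 and invented names again, with the table_info pragma playing the part of the column dictionary views:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS xyzstage")
conn.execute("CREATE TABLE xyzstage.config (id INTEGER, note TEXT)")
conn.execute("CREATE TABLE main.config2 (id INTEGER)")

# Columns the stage table has that config2 lacks.
stage = {(r[1], r[2]) for r in conn.execute("PRAGMA xyzstage.table_info(config)")}
mine = {(r[1], r[2]) for r in conn.execute("PRAGMA main.table_info(config2)")}

# Generate and immediately execute each ALTER, like the cursor FOR loop
# wrapping EXECUTE IMMEDIATE (no trailing semicolon on the generated text).
for name, ctype in sorted(stage - mine):
    conn.execute("ALTER TABLE config2 ADD %s %s" % (name, ctype))

cols = [r[1] for r in conn.execute("PRAGMA main.table_info(config2)")]
print(cols)  # ['id', 'note']
```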
All of this combines to let us do some really powerful stuff. I am not saying you can take over the world with this, but I am not saying you can't either. With great power comes great responsibility, though: you can do some serious damage with this stuff. Practice locally and copy tables as backups so you can revert to a working database if things go south. Cover your ass.
Alright, I think that is more than enough information for now. Take a stab at this stuff and see how it works for you.