01 July, 2015
It depends.
Craig Shallahamer is wondering whether a commit always has a blank SQL_ID and comes to the conclusion that it sometimes does and sometimes doesn't. Unsurprisingly, it turns out that Tanel Põder has already explained the mystery in great detail here.
01 December, 2014
A nasty gotcha in DBCA templates 12c
I want to create four databases on a Linux host. The databases are of two kinds ("big DB" and "small DB"), but otherwise very similar to one another, and I will want to create a new set of four twice a year - a perfect opportunity to use DBCA templates! So, I click through the wizard, starting with the "Custom Database" template, adjusting paths and sizes and character sets and I am soon ready to create the first database. I choose to create an appropriately named template and let DBCA do its magic.
As soon as DBCA has finished creating my "big DB" database, I proceed to create a template for the "small DB" databases, this time starting with my saved template. I then use the newly created "small DB" template to create the smallish databases. So far so good.
I switch the instances in /etc/oratab to autostart and go about doing other important stuff which requires me to restart the machine.
Bang!
Of the four databases, only one is open - the rest are merely mounted. I run a search for the alert.log files (12c appears to have moved them outside of background_dump_dest) and my head nearly explodes: I find five alert logs; moreover, the paths to some(!) of them include directories named after different databases, as in $ORACLE_BASE/diag/rdbms/dbnameX/sidY/trace/alert_sidY.log!
$ORACLE_HOME/dbs contains one SPFILE and three initSID.ora files!
...
It turns out that when you create a template from a template (with or without creating a database at the same time), the new template references the original's SPFILE literally. Something like this:
Built-in template "Custom Database" ($ORACLE_BASE/product/12.1.0/dbhome_1/assistants/dbca/templates/New_Database.dbt):
{ORACLE_HOME}/dbs/spfile{SID}.ora
Template BIG derived from "Custom Database" ($ORACLE_BASE/product/12.1.0/dbhome_1/assistants/dbca/templates/BIG.dbt):
{ORACLE_HOME}/dbs/spfileBIG.ora
Template SMALL derived from "BIG" ($ORACLE_BASE/product/12.1.0/dbhome_1/assistants/dbca/templates/SMALL.dbt):
{ORACLE_HOME}/dbs/spfileBIG.ora
See the problem? The creation procedure of each subsequent database has overwritten the first instance's SPFILE with its own parameters (including - critically - CONTROL_FILES). Even deleting the databases using DBCA is no longer an option - I had to do it manually.
The solution? Copy the line about the SPFILE from New_Database.dbt to BIG.dbt manually - afterwards all templates based on BIG.dbt will behave correctly.
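A quick sanity check after each creation would have caught this early. Just a sketch: run this in every instance and make sure each one reports its own file (an empty VALUE means the instance came up on a pfile):

  -- every instance should point at its own SPFILE
  select name, value from v$parameter where name = 'spfile';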
The moral of the story: always let DBCA generate scripts, but do not let it create the database for you until you have checked the generated scripts, SID.sh (SID.bat on Windows) and SID.sql.
27 November, 2014
ORA-27107: Farewell, automatic huge pages...
I've just run into this problem head first and without warning while creating a new database:
ORA-27107: AUTO value for USE_LARGE_PAGES parameter is no longer supported
Introduced in 11.2.0.3, the AUTO value was a quick and somewhat dirty way to use huge pages on Linux machines without the hassle of having to reboot after each change in SGA usage (creating or deleting instances, changing sga_target, and so on).
While my 11.2.0.3 and 12.1.0.1 instances are happily running with use_large_pages=auto, an attempt to create a fully patched (PSU 12.1.0.2.1) instance with this value failed.
MetaLink shows a relevant internal enhancement request (Bug 14500387 : PROVIDE A MECHANISM TO DISABLE USE_LARGE_PAGES=AUTO), but there is no evidence it has been implemented. It is not in the list of bugs fixed by the October 2014 PSU.
The only reference I found was in the documentation - who would have thought the docs could get ahead of MetaLink!
Now, if you will excuse me, I need to do some calculating and preallocating of huge pages. What was that root password again?
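For the record, the arithmetic is simple enough. A rough sketch, assuming the usual 2 MB huge page size on x86-64 Linux: estimate the pages needed per instance, sum over all instances on the host, add a little headroom, and put the total into vm.nr_hugepages before switching use_large_pages to ONLY.

  -- rough number of 2 MB huge pages needed to back this instance's SGA
  select ceil(sum(value) / (2 * 1024 * 1024)) hugepages_needed
  from v$sga;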
15 November, 2012
Building applications that require users to log in
Just posted this as a comment on a LinkedIn thread:
Let me try to clear things a bit by giving an example:
I can make an application that requires users to log in. I will use hashing for checking passwords, that is, I will only store the last character of whatever password the user gave me during registration and then compare the last character of the password to the stored "hash". So, if Alice chooses the password "love" and Bob chooses "hate" I will store the same "hash value" - "e" - for both users. This is called a hash collision and may be the weak spot of this design - all Eve needs to do to log in as either Alice or Bob is to try at most 256 one-character passwords (assuming single-byte charset here for simplicity). That is, Eve can find collisions with a complexity of 2^8.
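To make the toy scheme concrete, the entire "hash function" fits in one line of SQL (purely as an illustration):

  -- the toy "hash": keep only the last character of the password
  select substr('love', -1) stored_hash from dual;  -- 'e'
  select substr('hate', -1) stored_hash from dual;  -- 'e' again: a collision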
Mind you, finding a hash collision is not the same as decrypting an encrypted password - Eve still doesn't know the actual passwords, so she cannot log in as Alice or Bob into other applications that use other hash functions, even if Alice and Bob used the same passwords there.
Now, my application obviously has a security problem - it is too easy to log in as a different user. But the only thing I need to do to secure it is to use a better hash function - SHA-1 for example - and tell Alice and Bob to set their passwords again, so I can store the new hash values. Now Eve's task of finding collisions is much more complex - 2^61. She probably has a better chance of guessing the passwords using a dictionary instead of a brute-force approach - that is, unless Alice and Bob choose sufficiently strong passwords (random strings of sufficient length).
As you see, there is no encryption involved, there is no key or other secret to store securely. The security of the application hinges on the complexity of the passwords and the quality of the hash function. And a hash function that has withstood the attacks of the world's best cryptographers (to a degree - SHA-1's 2^61 complexity of collision finding is considered too low for critical applications - see the Wikipedia article) is most likely better than any function I can come up with.
Just a final note: for my application, I might find it inconvenient to store the result of SHA-1 directly in the database, because it is a large number and I'd have to use a column of type RAW. So I might decide to convert it to a string and use VARCHAR2. But now I have to worry about character set conversions between client and server, so I cannot just cast the raw result to VARCHAR2. I have to encode it somehow. One way would be to convert the raw (base256) number to a string of hexadecimal digits (base16), but that is quite wasteful: I'd need two bytes of storage for every byte of the original. So I might decide to use the base64 encoding instead, since it is widely used on the Internet and is supported by just about any server or client programming environment, and it has 64 "digits" instead of hex's 16.
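Put together, it would look something like this - a minimal sketch, assuming EXECUTE on DBMS_CRYPTO, and leaving out things a real system would add (a per-user salt, a deliberately slow iterated scheme):

  declare
    l_hash varchar2(64);
  begin
    -- SHA-1 of the password, stored as base64 text
    -- (28 characters of base64 for the 20-byte hash)
    l_hash := utl_raw.cast_to_varchar2(
                utl_encode.base64_encode(
                  dbms_crypto.hash(utl_raw.cast_to_raw('love'),
                                   dbms_crypto.hash_sh1)));
    dbms_output.put_line(l_hash);
  end;
  /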
I hope this clears the situation a bit and there will be no more offers to decrypt a hash value.
28 September, 2010
Use Statspack for snapping stats in logon/logoff triggers
Just posted a comment on Arup Nanda's blog
29 April, 2010
Useless, but fun
- Integer base conversion in Oracle SQL
select decode(sign(:what), -1, '-') || reverse(replace(res, ' ', '')) to_base
from (select sys_connect_by_path(
               substr(:alphabet,
                      1 + mod(trunc(abs(:what)/power(:base, level-1)), :base),
                      1),
               ' ') res,
             trunc(abs(:what)/power(:base, level-1)) rem
      from dual
      start with :base > 1 and trunc(:base) = :base and length(:alphabet) >= :base
      connect by trunc(abs(:what)/power(:base, level-1)) > 0)
where rem < :base;
The code handles negative numbers correctly, checks entry conditions (base must be integer and at least two, alphabet must have at least base characters), and looks reasonably cute. Unfortunately, I can't think of a use for it other than as an exhibit.
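For example, to print 255 in base 16 (this should return FF), the binds can be set up in SQL*Plus like so:

  variable what     number
  variable base     number
  variable alphabet varchar2(64)

  begin
    :what := 255;
    :base := 16;
    :alphabet := '0123456789ABCDEF';
  end;
  /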
Incidentally, I ended up using base64-encoding for my unique IDs:
substr(lpad(rtrim(utl_raw.cast_to_varchar2(
         utl_encode.base64_encode(
           hextoraw(to_char(myID.Nextval, 'fmXXXXXXXXXXXXXXXXXXX')))),
       '='), 11, '-'), -11)
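A quick walk-through, assuming the sequence is at 255: the hex string 'FF' becomes one raw byte, its base64 form '/w==' loses the '=' padding, and the rest is left-padded with '-' to a fixed 11 characters:

  -- to_char(255, 'fm...')            -> 'FF'
  -- hextoraw('FF')                   -> FF (one byte)
  -- base64_encode, cast_to_varchar2  -> '/w=='
  -- rtrim(..., '=')                  -> '/w'
  -- lpad/substr                      -> '---------/w'
  select substr(lpad(rtrim(utl_raw.cast_to_varchar2(
           utl_encode.base64_encode(hextoraw(to_char(255, 'fmXXXXXXXXXXXXXXXXXXX')))),
         '='), 11, '-'), -11) id
  from dual;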
- Parsing extended ROWIDs
And I just had to test it.
with t1 as (
  select rowid rid,
         substr(rowid, 1, 6)  obj,
         substr(rowid, 7, 3)  fil,
         substr(rowid, 10, 6) blk,
         substr(rowid, 16, 3) rw
  from t
)
select to_number(utl_encode.base64_decode(utl_raw.cast_to_raw('AA'||obj)), 'XXXXXXXXXXXX') obj#,
       to_number(utl_encode.base64_decode(utl_raw.cast_to_raw('A'||fil)), 'XXXXXXXXXXXX') fil#,
       to_number(utl_encode.base64_decode(utl_raw.cast_to_raw('AA'||blk)), 'XXXXXXXXXXXX') blk#,
       to_number(utl_encode.base64_decode(utl_raw.cast_to_raw('A'||rw)), 'XXXXXXXXXXXX') rw#,
       rid
from t1
order by rid;
Why anyone would want to use this instead of the handy programs from the DBMS_ROWID package is beyond me. But it was fun to write :-)
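For comparison, here is the sane version with DBMS_ROWID - handy for checking that the base64 arithmetic above agrees with it:

  select dbms_rowid.rowid_object(rowid)       obj#,
         dbms_rowid.rowid_relative_fno(rowid) fil#,
         dbms_rowid.rowid_block_number(rowid) blk#,
         dbms_rowid.rowid_row_number(rowid)   rw#,
         rowid rid
  from t
  order by rowid;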
And, marginally less useless, here's a script to find a row from a 6-byte restricted rowid (gotten e.g. from an index leaf block dump), assuming you know the data object ID of the table, of course:
with s as (
  select replace('04 c0 00 0a 01 c0', ' ') rid from dual
),
rrid as (
  select dbms_utility.data_block_address_file(to_number(substr(rid, 1, 8), 'XXXXXXXX'))  file#,
         dbms_utility.data_block_address_block(to_number(substr(rid, 1, 8), 'XXXXXXXX')) block#,
         to_number(substr(rid, -4), 'XXXX') row#
  from s
)
select t.*, t.rowid,
       dbms_rowid.rowid_create(0, 0, file#, block#, row#) restricted_rid
from t, rrid
where t.rowid = dbms_rowid.rowid_create(1, &data_object_id., file#, block#, row#);
For extra meaningless workout, try getting the file# and block# from the first four bytes (Data Block Address) without the help of DBMS_UTILITY. I did.
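Spoiler, in case you want to check your work: for smallfile tablespaces the 32-bit DBA packs the relative file number into the top 10 bits and the block number into the low 22, so plain arithmetic will do:

  -- decode the DBA 04c0000a by hand: file# from the top 10 bits, block# from the low 22
  select trunc(to_number('04c0000a', 'XXXXXXXX') / power(2, 22)) file#,
         mod(to_number('04c0000a', 'XXXXXXXX'), power(2, 22))    block#
  from dual;  -- file# = 19, block# = 10, same as DBMS_UTILITY says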
Update (April 30): Richard Foote finds this nice :-)