One day my Oracle database server gave me the error below while the database was running in NOARCHIVELOG mode.
ERROR at line 1:
ORA-01113: file 12 needs media recovery
ORA-01110: data file 12: 'D:\ORACLE\ORADATA\URMI\XDB01.DBF'
I searched for a solution on Tom Kyte's site (AskTom) and finally found the thread below.
ERROR at line 1:
ORA-01113: file 11 needs media recovery
ORA-01110: data file 11: 'C:\ORACLE\ORADATA\TEST\TEST01.DBF'
SQL> archive log list
Database log mode No Archive Mode <------------- NOARCHIVELOG MODE!
Automatic archival Disabled
Archive destination C:\oracle\ora92\RDBMS
Oldest online log sequence 2
Current log sequence 4
SQL> recover tablespace test; <---- How is this possible in NOARCHIVELOG mode?
Media recovery complete.
SQL> select * from scott.t;
X
----------
10
Followup:
It just used the online redo logs. It's always been "possible".
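A quick way to check whether the online redo logs still cover the changes a file needs is to compare V$RECOVER_FILE with V$LOG; a minimal sketch (run as SYSDBA, column names as in 9i):
SQL> select file#, online_status, change#, time from v$recover_file;
SQL> select group#, sequence#, status, first_change# from v$log order by first_change#;
If the CHANGE# reported for the file still falls inside the range covered by the online logs, RECOVER DATAFILE or RECOVER TABLESPACE can complete even in NOARCHIVELOG mode; once the logs have been overwritten, it cannot.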
Saturday, October 01, 2005
LogMiner
I am testing LogMiner on Oracle EE 9.2.0.1 installed on Windows XP, with the database in NOARCHIVELOG mode.
Connected as the SCOTT user:
scott@NIKUNJ> select count(*) from test;
COUNT(*)
----------
0
scott@NIKUNJ> insert into test select * from tab;
5 rows created.
scott@NIKUNJ> insert into test select * from tab;
5 rows created.
scott@NIKUNJ> insert into test select * from tab;
5 rows created.
scott@NIKUNJ> commit;
Commit complete.
scott@NIKUNJ> select count(*) from test;
COUNT(*)
----------
15
scott@NIKUNJ> delete from test;
15 rows deleted.
scott@NIKUNJ> commit;
Commit complete.
scott@NIKUNJ> select count(*) from test;
COUNT(*)
----------
0
scott@NIKUNJ>
Now I would like to get the rows back.
Connected as the SYS user and ran the command below:
sys@NIKUNJ> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => -
> 'C:\ORACLE\ORADATA\NIKUNJ\REDO03.LOG',OPTIONS => DBMS_LOGMNR.NEW);
PL/SQL procedure successfully completed.
Here I supplied my current online redo log file; you can get the list of online log files with:
sys@NIKUNJ> select * from v$logfile;
GROUP# STATUS TYPE
---------- ------- -------
MEMBER
--------------------------------------------------------------------------------
3 ONLINE
C:\ORACLE\ORADATA\NIKUNJ\REDO03.LOG
2 STALE ONLINE
C:\ORACLE\ORADATA\NIKUNJ\REDO02.LOG
1 STALE ONLINE
C:\ORACLE\ORADATA\NIKUNJ\REDO01.LOG
sys@NIKUNJ>
sys@NIKUNJ> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
The dictionary option above is needed for the following reason:
To fully translate the contents of redo logs, LogMiner requires access to a database
dictionary.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to
object names and external data formats. Without a dictionary, LogMiner returns
internal object IDs and presents data as hex bytes.
For example, instead of the SQL statement:
INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));
LogMiner gives you three choices for your source dictionary:
1. Extracting the dictionary to a flat file
2. Extracting the dictionary to the redo logs
3. Using the online catalog
sys@NIKUNJ> execute dbms_logmnr.start_logmnr;
PL/SQL procedure successfully completed.
sys@NIKUNJ> select sql_redo,sql_undo from v$logmnr_contents where username='SCOTT';
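To actually get the rows back, the SQL_UNDO generated for the DELETE can be spooled to a script and replayed. A rough sketch, assuming the 9.2 column names (SEG_NAME, OPERATION) and that no other session touched TEST in that window:
set heading off feedback off pagesize 0 long 4000
spool undo_delete.sql
select sql_undo
  from v$logmnr_contents
 where username = 'SCOTT'
   and seg_name = 'TEST'
   and operation = 'DELETE';
spool off
Review undo_delete.sql, then run it as SCOTT and commit, and the 15 rows come back.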
To mine a specific time range, start LogMiner as below:
execute dbms_logmnr.start_logmnr(STARTTIME => TO_DATE('01-OCT-2005 10:00:00', 'DD-MON-YYYY HH:MI:SS'), -
ENDTIME => TO_DATE('01-OCT-2005 12:00:00', 'DD-MON-YYYY HH:MI:SS'));
You might get an error message like the one I got:
ERROR at line 1:
ORA-01291: missing logfile
ORA-06512: at "SYS.DBMS_LOGMNR", line 53
ORA-06512: at line 1
Then I added another log file:
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
LOGFILENAME => 'C:\ORACLE\ORADATA\NIKUNJ\REDO02.LOG', -
OPTIONS => DBMS_LOGMNR.ADDFILE);
and executed the time-based start once again, this time successfully:
execute dbms_logmnr.start_logmnr(STARTTIME => TO_DATE('01-OCT-2005 10:00:00', 'DD-MON-YYYY HH:MI:SS'), -
ENDTIME => TO_DATE('01-OCT-2005 12:00:00', 'DD-MON-YYYY HH:MI:SS'));
EXECUTE DBMS_LOGMNR.END_LOGMNR;
LogMiner information is contained in the following views. You can use SQL to query them as you would any other view.
V$LOGMNR_CONTENTS
Shows changes made to user and table information.
V$LOGMNR_DICTIONARY
Shows information about the LogMiner dictionary file, provided the dictionary was created using the STORE_IN_FLAT_FILE option. The information shown includes the database name and status information.
V$LOGMNR_LOGS
Shows information about specified redo logs. There is one row for each redo log.
V$LOGMNR_PARAMETERS
Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
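For example, after START_LOGMNR you can sanity-check what LogMiner is working with (a quick sketch; the exact column list differs slightly between releases):
sys@NIKUNJ> select filename, low_time, high_time from v$logmnr_logs;
sys@NIKUNJ> select * from v$logmnr_parameters;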
Tuesday, September 13, 2005
I asked Tom about DB_2K_CACHE_SIZE.
Hi Tom,
If my database server has the settings below:
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_8K_CACHE_SIZE=4M
and another database server instead has:
DB_BLOCK_SIZE=4096
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=0
DB_8K_CACHE_SIZE=0
what are the pros and cons between these two?
Regards,
Nikunj
Followup:
if you don't have any 2k or 8k tablespaces - you would not configure the 2k/8k
caches - they only get used if you have them.
If you "unconfigure" them and have tablespaces with those blocksizes - you'll
not be able to use them.
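To see the point in practice, here is a small sketch (the tablespace name and datafile path are invented for illustration): a 2K-block tablespace can only be created and used while a matching cache is configured.
alter system set db_2k_cache_size = 8M;
create tablespace small_blk
  datafile 'C:\oracle\oradata\NIKUNJ\small_blk01.dbf' size 10M
  blocksize 2K;
With DB_2K_CACHE_SIZE left at 0, the same CREATE TABLESPACE (or any later attempt to use that tablespace) fails, because no buffer cache of that block size exists.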
Monday, September 12, 2005
How to get the OS block size
For Windows on Intel:
1. On Windows on Intel the default sector size is 512 bytes, but the allocation unit (cluster) size can be chosen when you format the drive.
2. Use fsutil fsinfo ntfsinfo c: to see the current values:
c:\>fsutil fsinfo ntfsinfo c:
NTFS Volume Serial Number : 0x1e345b64345b3e49
Version : 3.1
Number Sectors : 0x0000000004a852c0
Total Clusters : 0x0000000000950a58
Free Clusters : 0x00000000003ae433
Total Reserved : 0x0000000000000000
Bytes Per Sector : 512
Bytes Per Cluster : 4096 -- this is the answer
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x000000000d72a000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x00000000004a852c
Mft Zone Start : 0x0000000000800020
Mft Zone End : 0x000000000083b520
3. To see all possible OS block (allocation unit) sizes, use:
c:\> format /?
4. In Explorer, right-click your hard drive, choose "Format..." from the context menu, and read the OS block size from "Allocation unit size".
Red Hat Linux on Intel:
dumpe2fs /dev/hda
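A slightly fuller sketch for Linux, assuming an ext2/ext3 filesystem on /dev/hda1 (adjust the partition to your layout):
# block size as recorded in the filesystem superblock
dumpe2fs -h /dev/hda1 | grep -i 'block size'
# tune2fs -l /dev/hda1 reports the same field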
Saturday, September 10, 2005
Set your SQL prompt
Create a login.sql with the contents below and put it into ORACLE_HOME/bin.
define gname=idle
connect &connect_to
column global_name new_value gname
select lower(user) || '@' || substr( global_name,1,
decode( dot, 0, length(global_name), dot-1) ) global_name
from (select global_name, instr(global_name,'.') dot from global_name );
set sqlprompt '&gname> '
Now, whenever you connect to a database, use @login instead of connect, so you always know where you are. For example:
varakh@NIKUNJ> @login
Enter value for connect_to: ex2005/test@test
Connected.
GLOBAL_NAME
--------------------------------------------------------------------------------
ex2005@TEST
Input truncated to 24 characters
ex2005@TEST>
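As a side note, SQL*Plus 10g and later has predefined substitution variables that give a similar prompt without any query (shown here only as an alternative to the script above); this one line can go in glogin.sql:
set sqlprompt "_user'@'_connect_identifier> "
From SQL*Plus 10.1 onward, login.sql and glogin.sql are also re-run after every CONNECT, so the prompt stays current without typing @login.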
Friday, September 09, 2005
Worked example for using Excel through ODBC
I read this on AskTom and think it is a good article. I have not tested it myself, but it looks good, so I am posting the link here.
Click Here
Tuesday, September 06, 2005
SPFILE
I have Thomas Kyte's book Effective Oracle by Design. He wrote some good things about the SPFILE that I want to share with you.
The Problem with PFILEs
The PFILE resided on the database server. To start and shut down the database, everyone needed access to the database server itself to be sure the correct init.ora file was being used.
Also, if you used system management tools such as OEM or OMS to start and stop the database, the PFILE needed to be physically located wherever the management tool ran, and that generally was not the machine where your database ran. In short, you got great confusion and "missing parameters" from time to time: if you simply started the database from a different machine, an entirely different set of parameters could be used, because a different PFILE would be read.
Another problem was that when changes were made to the instance via the ALTER SYSTEM command, you had to remember to update each and every copy of the PFILE out there in order for the new setting to take effect after the next restart of the database. If you forgot, the system would run fine until someone restarted the database, and then you would discover that the settings you had made were no longer there.
How SPFILEs Work
An SPFILE is stored in binary format on the server itself. You no longer need a local copy of the parameter file to start the database remotely. The binary format serves two purposes:
• Changes made via the ALTER SYSTEM command can now persist across server restarts. It is no longer necessary to update the init.ora text file with every little change; you can use SQL to maintain your parameter settings.
• It stops us from being tempted to edit the file with a text editor. Oracle provides SQL to manipulate it and prohibits editing it directly. However, if you do want to edit the file, there are techniques you can employ to accomplish that.
How to force the database to use an SPFILE
If you are using Oracle 9i or later, the server uses an SPFILE by default, but if you migrated from an older release you can create one with:
CREATE SPFILE FROM PFILE='location of your pfile';
The next time you start your database it will use the SPFILE.
You can create a textual version of this binary SPFILE with:
CREATE PFILE='path where you want it created' FROM SPFILE='path of spfile';
If you want to force the database to use a PFILE instead of the SPFILE, start your database with the command below instead of a plain STARTUP:
STARTUP PFILE='location of your pfile';
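Putting it together, a minimal sketch (the paths here are placeholders, not taken from this machine):
create spfile from pfile='C:\oracle\admin\NIKUNJ\pfile\init.ora';
shutdown immediate
startup
-- a non-empty VALUE for spfile below means the instance came up from the SPFILE
show parameter spfile
-- change a parameter now and persist it in the SPFILE in one statement
alter system set db_cache_size = 16M scope=both;
-- keep a readable text copy for your records
create pfile='C:\temp\init_backup.ora' from spfile;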
Isn't the PFILE obsolete by now?
Using a PFILE meant that if you started a database on Unix from your Windows PC, the parameter file for the database needed to be accessible on the PC. For some this was not a big deal: they would simply telnet into the database server itself and start it up or shut it down. The drawback was that everyone needed access to the server in order to ensure the correct init.ora file was being used to start the database (so everyone needed the ability to physically access the server).