Tuesday, December 10, 2013

How to export tables from multiple schemas with Oracle Data Pump in Oracle 10g and 11g databases

Let's now try to export tables from different schemas in an Oracle 10g database on a Linux server.
[oracle@localhost ~]$ sqlplus / as sysdba

Assume that A1 and A2 are two schemas that share a common table, T1.
[oracle@sap1]> expdp '"/ as sysdba"' directory=DATAPUMPDEMO_exp
dumpfile=a1-a2_tables logfile=a1-a2_tables tables=A1.T1,A2.T1

Export: Release 10.2.0.4.0 - 64bit Production on Thursday, 23 June, 2013 15:35:01

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
UDE-00012: table mode exports only allow objects from one schema

As you can see, you cannot export tables from different schemas using the TABLES parameter of expdp in Oracle 10g.
The workaround is to create a table holding the names of the tables to be exported, and then use the SCHEMAS and INCLUDE parameters of expdp instead.
A1@ORCL > create table expdp_tables (table_name varchar2(30));

Table created.

A1@ORCL > insert into expdp_tables values ('T1');

1 row created.

A1@ORCL > commit;

Commit complete.
Insert the distinct names of all the tables to be exported into the expdp_tables table, and list all the distinct schemas whose tables will be exported in the SCHEMAS parameter.
Note that this method exports every table named in expdp_tables from both schemas. If a table name exists in both schemas and you only want to export one of them, this method cannot do that.
[oracle@sap1]> expdp '"/ as sysdba"' directory=UURAL_DATAPUMPDEMO
dumpfile=a1-a2_tables logfile=a1-a2_tables schemas=A1,A2
INCLUDE=TABLE:\"IN \(SELECT table_name FROM a1.expdp_tables\)\"
Export: Release 10.2.0.4.0 - 64bit Production
Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_01":  "/******** AS SYSDBA" directory=UURAL_DATAPUMPDEMO dumpfile=a1-a2_tables logfile=a1-a2_tables schemas=A1,A2 INCLUDE=TABLE:"IN (SELECT table_name FROM a1.expdp_tables)"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
. . exported "A1"."T1"                                 0 KB       0 rows
. . exported "A2"."T1"                                 0 KB       0 rows
Master table "SYS"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
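If you really do need just one schema's copy of a shared table name in 10g, a simple workaround is to run one table-mode export per schema: each job then stays within a single schema, so the UDE-00012 restriction no longer applies. A minimal sketch (the dump file and log file names are illustrative):

[oracle@sap1]> expdp '"/ as sysdba"' directory=DATAPUMPDEMO_exp dumpfile=a1_t1 logfile=a1_t1 tables=A1.T1
[oracle@sap1]> expdp '"/ as sysdba"' directory=DATAPUMPDEMO_exp dumpfile=a2_t1 logfile=a2_t1 tables=A2.T1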

Let's now try to export tables from different schemas in an Oracle 11g database on a Linux server.

[oracle@localhost ~]$ expdp '"/ as sysdba"' directory=DATA_PUMP_DIR dumpfile=a1-a2_tables logfile=a1-a2_tables tables=A1.T1,A2.T1

Export: Release 11.2.0.2.0 - Production
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  "/******** AS SYSDBA" directory=DATA_PUMP_DIR dumpfile=a1-a2_tables logfile=a1-a2_tables tables=A1.T1,A2.T1
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "A1"."T1"                                 0 KB       0 rows
. . exported "A2"."T1"                                 0 KB       0 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/home/oracle/app/oracle/admin/orcl/dpdump/a1-a2_tables.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed
As you can see, in Oracle 11g you can export tables from different schemas using the TABLES parameter of the Data Pump Export utility, which is not possible in Oracle 10g.

Tuesday, November 12, 2013

Oracle Tablespace High water mark

The query below reports, per datafile, the tablespace, the file size, the high water mark (HWM), and how much space could be saved by resizing the file down to the HWM (all sizes in MB):



col tablespace_name format a15
col file_size format 99999
col file_name format a50
col hwm format 99999
col can_save format 99999

SELECT tablespace_name, file_name, file_size, hwm, file_size-hwm can_save
FROM (SELECT /*+ RULE */ ddf.tablespace_name, ddf.file_name file_name,
ddf.bytes/1048576 file_size,(ebf.maximum + de.blocks-1)*dbs.db_block_size/1048576 hwm
FROM dba_data_files ddf,(SELECT file_id, MAX(block_id) maximum FROM dba_extents GROUP BY file_id) ebf,dba_extents de,
(SELECT value db_block_size FROM v$parameter WHERE name='db_block_size') dbs
WHERE ddf.file_id = ebf.file_id
AND de.file_id = ebf.file_id
AND de.block_id = ebf.maximum
ORDER BY 1,2);



TABLESPACE_NAME FILE_NAME                                          FILE_SIZE    HWM CAN_SAVE
--------------- -------------------------------------------------- --------- ------ --------
ANUJTEST        /opt/app/oracle/oradata/orcl/anujtest.dbf                 10      8        2
EXAMPLE         /opt/app/oracle/oradata/orcl/example01.dbf               100     81       19
PERFSTAT        /opt/app/oracle/oradata/orcl/anuj_perfstat.dbf          1000    115      885
RMAN            /opt/app/oracle/oradata/orcl/rman.dbf                     50      7       43
SYSAUX          /opt/app/oracle/oradata/orcl/sysaux01.dbf                830    777       53
SYSTEM          /opt/app/oracle/oradata/orcl/system01.dbf                840    823       17
TSAPEXU         /opt/app/oracle/oradata/orcl/tsapexu01.dbf               110     97       13
UNDOTBR         /opt/app/oracle/oradata/orcl/undotbR.dbf                 500     46      454
USERS           /opt/app/oracle/oradata/orcl/users01.dbf                 609    576       33

9 rows selected.
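Once you know the HWM, you can reclaim the CAN_SAVE space by resizing the datafile down towards it; a hedged example using the ANUJTEST file from the output above (the target size must not be below the HWM, or the resize fails with ORA-03297):

ALTER DATABASE DATAFILE '/opt/app/oracle/oradata/orcl/anujtest.dbf' RESIZE 8M;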

Monday, October 14, 2013

RMAN Backup Optimization



From 11g R1 onwards, only undo blocks containing undo data for transactions that have not yet been committed or rolled back are backed up. Undo that is still kept in the undo tablespace to satisfy the undo retention, but that belongs to already committed transactions, is no longer backed up. Nice if you have undo tablespaces of multiple GB!

If backup optimization is configured (see below), then:
1. Datafiles belonging to read-only tablespaces are not backed up every time. It is no longer necessary to use the SKIP READONLY clause. (Note that SKIP READONLY could not be used for RMAN backups of standby databases.)
2. Datafiles belonging to tablespaces that were taken offline with NORMAL are not backed up every time. It is no longer necessary to use the SKIP OFFLINE clause.
3. Archived log sequences that have already been backed up are not backed up again. This is nice if you keep a large number of archived redo logs on disk for whatever reason (LogMiner, Streams, Data Guard). Optimization can be overridden per backup, as shown below.
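If you occasionally want a full backup of everything regardless of optimization, the FORCE option overrides it for that one run:

RMAN> backup database force;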

How do you configure backup optimization?

RMAN> configure backup optimization on;

new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored
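You can check the current setting at any time:

RMAN> show backup optimization;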

What does the RMAN output look like when files are skipped during a backup?

RMAN> backup as compressed backupset database;

Starting backup at 23-JUN-09
using channel ORA_DISK_1
using channel ORA_DISK_2
skipping datafile 4; already backed up 2 time(s)
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=C:\ORACLE\PRODUCT\11.1.0\ORADATA\PLATINUM\DATAFILE\O1_MF_SYSAUX_0MK30K1T_.DBF
input datafile file number=00003 name=C:\ORACLE\PRODUCT\11.1.0\ORADATA\PLATINUM\DATAFILE\O1_MF_UNDOTBS1_0NK30K84_.DBF
input datafile file number=00006 name=C:\ORACLE\PRODUCT\11.1.0\ORADATA\PLATINUM\DATAFILE\O1_MF_USERS_ARCHIVE_02_0PK30K9S_.DBF
input datafile file number=00009 name=C:\ORACLE\PRODUCT\11.1.0\ORADATA\PLATINUM\DATAFILE\O1_MF_PM2_4ZM8LTBW_.DBF
input datafile file number=00011 name=C:\ORACLE\PRODUCT\11.1.0\ORADATA\PLATINUM\DATAFILE\O1_MF_CATALOG_53ZL1X82_.DBF

Tuesday, September 17, 2013

TRIGGER TO GET SERVERERROR ON DATABASE



Let's create a trigger that will log all errors raised in the database.

It is better to create this trigger under SYS or another DBA user, so that you don't have to grant all the required permissions separately.

--------------------------------------------------------
--  DDL for Trigger AFTER_ERROR
--------------------------------------------------------

  CREATE OR REPLACE TRIGGER "AFTER_ERROR"
    AFTER SERVERERROR ON DATABASE
DECLARE
    pragma autonomous_transaction;
    id NUMBER;
    sql_text ORA_NAME_LIST_T;
    v_stmt CLOB;
    n NUMBER;
BEGIN
    SELECT oraerror_seq.nextval INTO id FROM dual; -- next log ID from the sequence
    --
    n := ora_sql_txt(sql_text);
    --
    IF n >= 1
    THEN
        FOR i IN 1..n LOOP
            v_stmt := v_stmt || sql_text(i);
        END LOOP;
    END IF;
    --
    FOR n IN 1..ora_server_error_depth
    LOOP
       IF ( ora_server_error(n) IN (6512, 25228, 25254)
          AND ora_login_user IN ('SYSMAN','DBSNMP') )
       THEN
           -- ignore this error
           NULL;
       ELSE
           INSERT INTO oraerror VALUES (id, sysdate, ora_login_user, ora_client_ip_address, ora_server_error(n), ora_server_error_msg(n),v_stmt);
           COMMIT;
       END IF;
    END LOOP;

/*  You can uncomment this part to make the table self-cleaning on every insert. It automatically removes rows older than 30 days (change the retention to suit your requirements).
    DELETE FROM oraerror WHERE log_date <= SYSDATE - 30;
    COMMIT; */
 --
END after_error;
/
ALTER TRIGGER "AFTER_ERROR" ENABLE;





Now create the sequence needed for ID generation:

--------------------------------------------------------
--  DDL for Sequence ORAERROR_SEQ
--------------------------------------------------------

   CREATE SEQUENCE  "ORAERROR_SEQ"  MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1040 CACHE 20 NOORDER  NOCYCLE ;


Table to capture the errors:


 CREATE TABLE "ORAERROR"
   (    "ID" NUMBER,
    "LOG_DATE" DATE,
    "LOG_USR" VARCHAR2(30 BYTE),
    "TERMINAL" VARCHAR2(50 BYTE),
    "ERR_NR" NUMBER(10,0),
    "ERR_MSG" VARCHAR2(4000 BYTE),
    "STMT" CLOB
   ); -- add a TABLESPACE clause here if you want the table in a specific tablespace
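With the trigger, sequence and table in place, a quick smoke test: raise an error from an ordinary session, then check that it was logged.

-- From a test session, force an error (this raises ORA-00904):
SELECT no_such_column FROM dual;

-- Then confirm it was captured:
SELECT id, log_date, log_usr, err_nr, err_msg
FROM oraerror
ORDER BY id DESC;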


Tuesday, August 27, 2013

Find index skewed, rebuild


Summary
It is important to periodically examine your indexes to determine if they have become skewed and might need to be rebuilt. When an index is skewed, parts of an index are accessed more frequently than others. As a result, disk contention may occur, creating a bottleneck in performance.

The key column for deciding whether an index is skewed is BLEVEL. You must gather statistics on the index (or run ANALYZE INDEX ... VALIDATE STRUCTURE) for this value to be current.

If the BLEVEL is 4 or more, it is recommended to rebuild the index.
SELECT OWNER, INDEX_NAME, TABLE_NAME, LAST_ANALYZED, BLEVEL
FROM DBA_INDEXES
WHERE OWNER NOT IN ('SYS', 'SYSTEM')
AND BLEVEL >= 4
ORDER BY BLEVEL DESC;
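For an index flagged by the query above, a rebuild sketch (the index name is hypothetical; ONLINE keeps the table available for DML during the rebuild):

ALTER INDEX scott.emp_name_ix REBUILD ONLINE;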

Which SQL are doing a lot of disk I/O

The following query lists the top 30 SQL statements performing heavy disk I/O:
SELECT * FROM 
(SELECT SUBSTR(sql_text,1,500) SQL,
ELAPSED_TIME, CPU_TIME,
disk_reads, executions,
disk_reads/executions "Reads/Exec",
hash_value,address
FROM V$SQLAREA
WHERE ( hash_value, address ) IN (
SELECT DISTINCT HASH_VALUE, address
FROM v$sql_plan
WHERE DISTRIBUTION IS NOT NULL )
AND disk_reads > 100
AND executions > 0
ORDER BY ELAPSED_TIME DESC)
WHERE ROWNUM <=30;

Disk I/O


Script

Datafiles Disk I/O
Tablespace Disk I/O
Which segments have top Logical I/O & Physical I/O
Which SQL are doing a lot of disk I/O

Which segments have top Logical I/O & Physical I/O


Summary
Do you know which segments in your Oracle Database have the largest amount of I/O, physical and logical? This SQL helps to find out which segments are heavily accessed and helps to target tuning efforts on these segments:
SELECT ROWNUM AS Rank, Seg_Lio.* FROM 
(SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'LIO' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'logical reads'
ORDER BY St.VALUE DESC) Seg_Lio
WHERE ROWNUM <= 10
UNION ALL
SELECT ROWNUM AS Rank, Seq_Pio_r.* FROM
(SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'PIO Reads' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'physical reads'
ORDER BY St.VALUE DESC) Seq_Pio_r
WHERE ROWNUM <= 10
UNION ALL
SELECT ROWNUM AS Rank, Seq_Pio_w.* FROM
(SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name, St.VALUE, 'PIO Writes' AS Unit
FROM V$segment_Statistics St
WHERE St.Statistic_Name = 'physical writes'
ORDER BY St.VALUE DESC) Seq_Pio_w
WHERE ROWNUM <= 10;

Tablespace Disk I/O


Summary
The physical design of the database underpins optimal disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention.

How I/O is spread per Tablespace
SELECT T.NAME, SUM(Physical_READS) Physical_READS, 
ROUND((RATIO_TO_REPORT(SUM(Physical_READS)) OVER ())*100, 2) || '%' PERC_READS,
SUM(Physical_WRITES) Physical_WRITES,
ROUND((RATIO_TO_REPORT(SUM(Physical_WRITES)) OVER ())*100, 2) || '%' PERC_WRITES,
SUM(total) total, ROUND((RATIO_TO_REPORT(SUM(total)) OVER ())*100, 2) || '%' PERC_TOTAL
FROM (SELECT ts#, NAME, phyrds Physical_READS, phywrts Physical_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC ) A, sys.ts$ T
WHERE A.ts# = T.ts#
GROUP BY T.NAME
ORDER BY Physical_READS DESC;

Datafiles Disk I/O


Summary
The physical design of the database underpins optimal disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention.

How I/O is spread per datafile
SELECT NAME, phyrds Physical_READS, ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS, 
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC;
Tip: ORDER BY phyrds DESC sorts by physical reads; change it to ORDER BY phywrts DESC to sort by physical writes instead.

How I/O is spread per filesystem
SELECT filesystem, ROUND((RATIO_TO_REPORT(READS) OVER ())*100, 2) || '%' PERC_READS, 
ROUND((RATIO_TO_REPORT(WRITES) OVER ())*100, 2) || '%' PERC_WRITES,
ROUND((RATIO_TO_REPORT(TOTAL) OVER ())*100, 2) || '%' PERC_TOTAL
FROM (SELECT filesystem, SUM(Physical_READS) READS, SUM(Physical_WRITES) WRITES, SUM(total) TOTAL
FROM (SELECT SUBSTR(NAME, 0, 25) filesystem, phyrds Physical_READS,
ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS,
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC) A
GROUP BY filesystem) B
ORDER BY ROUND((RATIO_TO_REPORT(total) OVER ())*100, 2) DESC;
Tip: To group the filesystems correctly, experiment with the length used in SUBSTR(NAME, 0, 25) so that it captures just the mount-point part of the path.

How I/O is spread for the datafiles of a specific tablespace
SELECT df.NAME, phyrds Physical_READS, ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2)|| '%' PERC_READS, 
phywrts Physical_WRITES, ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2)|| '%' PERC_WRITES,
phyrds + phywrts total
FROM v$datafile df, v$filestat fs, ts$ t
WHERE df.FILE# = fs.FILE#
AND df.ts# = t.ts#
AND t.NAME = 'TABLESPACE_NAME'
ORDER BY phyrds DESC;

Current waiting events Summary

The first and most important script for the Oracle Wait Interface (OWI) shows where current sessions are waiting:
SELECT 
a.SID, b.serial#, b.status, p.spid, b.logon_time, a.event, l.NAME latch_name, a.SECONDS_IN_WAIT SEC,
b.sql_hash_value, b.osuser, b.username, b.module, b.action, b.program,
a.p1,a.p1raw, a.p2, a.p3, --, b.row_wait_obj#, b.row_wait_file#, b.row_wait_block#, b.row_wait_row#,
'alter system kill session ' || '''' || a.SID || ', '|| b.serial# || '''' || ' immediate;' kill_session_sql
FROM v$session_wait a, v$session b, v$latchname l, v$process p
WHERE a.SID = b.SID
AND b.username IS NOT NULL
AND b.TYPE <> 'BACKGROUND'
AND a.event NOT IN (SELECT NAME FROM v$event_name WHERE wait_class = 'Idle')
AND (l.latch#(+) = a.p2)
AND b.paddr = p.addr
--AND a.sid = 559
--AND module IN ('JDBC Thin Client')
--AND p.spid = 13317
--AND b.sql_hash_value = '4119097924'
--AND event like 'library cache pin%'
--AND b.osuser = 'oracle'
--AND b.username = 'APPS'
ORDER BY a.SECONDS_IN_WAIT DESC;
Some useful explanations

sid, serial#, status, logon_time, osuser, username, module, action, program: come from v$session
kill_session_sql: the ready-made SQL statement for killing the session
spid: the Unix process ID, in case you want to kill -9 it at the OS level
sql_hash_value: identifies the SQL statement currently being executed; you'll need it for explain plans, etc.
event, latch_name, sec: the wait event and how long the session has been waiting
p1, p1raw, p2, p3: arguments for finding the object related to the wait

Db file sequential read


Summary
The db file sequential read wait event means that Oracle is waiting while doing a single-block I/O read. This is the case when reading an index.

As with all wait events, the columns P1, P2 and P3 give us the information needed to diagnose the wait.

Tip: A db file sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3, which is the number of blocks read).
This wait may also be seen for reads from datafile headers (P2=1 indicates a file header read).
Parameters:
P1 = file#
P2 = block#
P3 = blocks

file#: the file number of the file that Oracle is trying to read from. In Oracle8/9 it is the ABSOLUTE file number.

block#: the starting block number in the file from which Oracle starts reading. Typically only one block is being read.

To find the object on which Oracle is doing the I/O, use one of the two following queries:

SELECT owner, segment_type, segment_name, partition_name,
tablespace_name
FROM dba_extents
WHERE :P2 BETWEEN block_id AND (block_id + blocks - 1)
AND file_id = :P1;

Or, even better:
SELECT a.SID, c.obj, c.FILE#, c.dbablk
FROM v$session_wait a, x$bh c
WHERE a.p1 = c.FILE#(+)
AND a.p2 = c.dbablk(+)
AND a.event = 'db file sequential read'
AND a.SID = :sid_waiting;

blocks: the number of blocks that Oracle is trying to read from file# starting at block#. This is usually 1, but if P3 > 1 it is a multiblock read. Multiblock "db file sequential read"s may be seen in earlier Oracle versions when reading from SORT (TEMPORARY) segments.

Wait Time:
The I/O is generally issued as a single I/O request to the OS; the wait blocks until the I/O request completes.
Note that an Oracle read request to the OS may be satisfied from an OS file system cache, so the wait time may be very small.
Some advice
If you see this wait event, you are in general in a good position. All databases (especially very big systems) have some wait events, and doing I/O for an index scan is normal.

But SEC (seconds in wait) from this query should be 0 or close to zero (1-4). If you see the wait time increasing, then you must tune the index I/O.

How to improve performance for db file sequential read
The steps, starting first from that might have better results, are:

1. Rebuild the index (fast index rebuild).
To rule this step in or out, first examine the state of the index: if it is "compact" with no empty holes, there is no need to rebuild it.
See also: Find index skewed, rebuild; Find indexes browning, rebuild.

2. Tune SQL to use a better index
3. Distribute the index in different filesystems to reduce contention for I/O (Disk I/O)
4. Increase the db_cache_size (a sketch follows below)
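For step 4, a hedged example of raising the buffer cache (the size is illustrative; with automatic SGA management this sets a floor rather than a fixed size, and SCOPE=BOTH requires an spfile):

ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH;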

Embarcadero has a really nice explanation of this; see:

https://sites.google.com/site/embtdbo/wait-event-documentation/oracle-io-waits

Friday, August 23, 2013

DELETE or CLEANUP failed export Jobs


Failed jobs can be identified with the following SQL:

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions from dba_datapump_jobs;

Check the STATE column of the jobs: failed or orphaned jobs will show as NOT RUNNING.


DROP MASTER TABLE


Since the above jobs are orphaned (NOT RUNNING) and will not be restarted, drop their master tables.
In this example the master tables are SYS_EXPORT_FULL_01 and SYS_EXPORT_FULL_02.
SQL*Plus: Release 10.2.0.4.0 - Production on Fri Aug 23 10:14:16 2013

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table SYS_EXPORT_FULL_01;

Table dropped.

SQL> drop table SYS_EXPORT_FULL_02;

Table dropped.


Dropping these tables has no impact on new jobs or on any other database objects.
Now let's check whether the jobs are still listed, as below:
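Re-running the job query should no longer show the dropped jobs:

SQL> select owner_name, job_name, state from dba_datapump_jobs;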

Wednesday, August 21, 2013

Extract all tablespaces DDL

A nice and easy way to extract the DDL for all tablespaces.

Generate the DDL with the query below; you can then re-create the tablespaces in any environment, with any changes you want to make.

set heading off;
set echo off;
Set pages 999;
set long 90000;
spool ddl_list.sql

select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;

spool off

(sample) output will look something like this:

"
  CREATE TABLESPACE "SYSTEM" DATAFILE
  '/u02/oracle/oradata/system01.dbf' SIZE 314572800
  AUTOEXTEND ON NEXT 10485760 MAXSIZE 32767M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT MANUAL
   ALTER DATABASE DATAFILE
  '/u02/oracle/oradata/QPUAT/system01.dbf' RESIZE 4194304000
 "
"
  CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  '/u02/oracle/oradata/undotbs01.dbf' SIZE 209715200
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
   ALTER DATABASE DATAFILE
  '/u02/oracle/oradata/QPUAT/undotbs01.dbf' RESIZE 1782579200
 "

Upgrading Opatch from 10.2.0.4.2 to 10.2.0.5.1 Version

Upgrading OPatch to the latest version:

It is very simple to upgrade the OPatch version for your Oracle home.

First, check the current version of OPatch:
[oracle@linux5 OPatch]$ opatch version
Invoking OPatch 10.2.0.4.2

OPatch Version: 10.2.0.4.2

OPatch succeeded.

Now download the latest OPatch version (patch p6880880) from the Oracle Support site.
In this case I downloaded p6880880_101000 and moved it to the database server.
Move the patch file to your Oracle home:

[oracle@linux5 OPatch] mv p6880880_101000_SOLARIS64.zip $ORACLE_HOME
[oracle@linux5 OPatch] cd $ORACLE_HOME
(If a former OPatch directory already exists, rename it.)

Make sure you back up your current OPatch home:

[oracle@linux5 OPatch] mv OPatch OPatch_100057_OLD

Now unzip the patch file:
[oracle@linux5 OPatch] unzip p6880880_101000_SOLARIS64.zip
Archive:  p6880880_101000_SOLARIS64.zip
   creating: OPatch/
   creating: OPatch/docs/
  inflating: OPatch/docs/FAQ      
  inflating: OPatch/docs/README.txt
  inflating: OPatch/docs/Users_Guide.txt
  inflating: OPatch/docs/bt1.txt  
  inflating: OPatch/docs/bt2.txt  
  inflating: OPatch/docs/tmp      
  inflating: OPatch/emdpatch.pl   
   creating: OPatch/jlib/
  inflating: OPatch/jlib/opatch.jar
  inflating: OPatch/opatch        
  inflating: OPatch/opatch.bat    
  inflating: OPatch/opatch.pl     
   creating: OPatch/perl_modules/
  inflating: OPatch/perl_modules/Apply.pm
  inflating: OPatch/perl_modules/AttachHome.pm
  inflating: OPatch/perl_modules/Command.pm
  inflating: OPatch/perl_modules/LsInventory.pm
  inflating: OPatch/perl_modules/Query.pm
  inflating: OPatch/perl_modules/RollBack.pm
  inflating: OPatch/perl_modules/Version.pm
  inflating: OPatch/perl_modules/XML.pm
  inflating: OPatch/perl_modules/opatchIO.pm
  inflating: OPatch/README.txt    
   creating: OPatch/ocm/
 extracting: OPatch/ocm/ocm.zip   
   creating: OPatch/ocm/doc/
  inflating: OPatch/ocm/doc/license.txt
   creating: OPatch/ocm/lib/
  inflating: OPatch/ocm/lib/emocmutl.jar
   creating: OPatch/ocm/bin/
  inflating: OPatch/ocm/bin/emocmrsp




That's it; the OPatch upgrade is done.

Check the OPatch version:

[oracle@linux5 bin]$ cd $ORACLE_HOME/OPatch

Export the path for OPatch:
[oracle@linux5 OPatch]$ export PATH=$PATH:$ORACLE_HOME/OPatch

[oracle@linux5 OPatch]$ opatch version
Invoking OPatch 10.2.0.5.1

OPatch Version: 10.2.0.5.1

OPatch succeeded.

Monday, August 19, 2013

Why is Swapping Bad for Oracle?


Linux OS is a virtual memory system like any other modern operating system. The Virtual Memory Management system of Linux includes:
  • Paging
  • Swapping
  • HugePages
  • Slab allocator
  • Shared memory
When almost all of the available physical memory (RAM) is in use, the Linux kernel starts to swap pages out to swap space (disk), or worse, it may swap out entire processes. Another scenario is that it starts killing processes using the Out-of-Memory (OOM) Killer.

Swap Usage on Linux

To check swap usage on Linux you can use one of below:
  • free: Look for low (or zero) values for Swap / used:
# free -m
             total       used       free     shared    buffers     cached
Mem:          4018       3144        873          0         66       2335
-/+ buffers/cache:        742       3276
Swap:         4690          0       4690
  • meminfo: Look for SwapTotal = SwapFree
# grep Swap /proc/meminfo
SwapCached:            0 kB
SwapTotal:       4803392 kB
SwapFree:        4803380 kB
  • top: Look for low (preferably zero) values of Swap / used:
# top

...
Mem:   4115320k total,  3219408k used,   895912k free,    68260k buffers
Swap:  4803392k total,       12k used,  4803380k free,  2390804k cached
...
  • vmstat: Look for si / so values to be zero:
# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0     12 871592  69308 2405188    0    0   103    36  275  500 14 13 71  1

Why is Swapping Bad?

Especially on Linux we try to avoid swapping because:
  • Swapping potentially makes every memory page access a thousand (or more) times slower (and the Linux swapping mechanism is not especially fast).
  • As more memory is swapped, operations take longer.
  • As operations take longer, more requests arrive to be served.
  • The demand for resources increases exponentially.
Because of this spiral, if you have a memory-bound application running (like a database) and swapping starts, most of the time there is no recovering.

The Oracle Database SGA pages are pageable on Linux by default, and those pages can potentially be swapped out if the system runs out of memory. Using HugePages is one way to ensure the SGA is never swapped out, but you still need to be careful about the configuration; see the sizing query below. To learn all about HugePages, please read Document 361323.1 and its references.
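HugePages sizing starts from the SGA size; a minimal check from SQL*Plus (the HugePages pool must be large enough to cover the whole SGA):

-- Current SGA size in MB; size the HugePages pool to at least this
SELECT ROUND(SUM(value)/1024/1024) AS sga_mb FROM v$sga;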

Conclusions

  • Make sure your total SGA and PGA fit in your RAM, while also leaving decent memory for process space and system services. See the database installation guides for more information.
  • Consider using HugePages on Linux
  • Be very careful with your memory configuration (HugePages, Automatic Memory Management, Swap, VLM)
  • Monitor your OS continuously for memory usage and swapping

Monday, August 12, 2013

script to get all table and system privileges granted to a user

Description

This script lists all table and system privileges granted to a user.
It also takes into account privileges assigned via roles, including
roles granted via other roles.

Parameters

username - The username to display privileges for

SQL Source


REM Copyright (C) Think Forward.com 1998- 2005. All rights reserved. 

set echo off
set verify off
set pages 200
col granted_role form a20
col owner form a10
col table_name form a27
col privilege form a12

ACCEPT username prompt 'Enter Username : '

spool privs.lst

PROMPT Roles granted to user

SELECT granted_role,admin_option,default_role
FROM dba_role_privs
WHERE grantee=UPPER('&username');
PROMPT Table Privileges granted to a user through roles
SELECT granted_role, owner, table_name, privilege
FROM ( SELECT granted_role
FROM dba_role_privs WHERE grantee=UPPER('&username')
UNION
SELECT granted_role
FROM role_role_privs
WHERE role in (SELECT granted_role
FROM dba_role_privs WHERE grantee=UPPER('&username')
)
) roles, dba_tab_privs
WHERE granted_role=grantee;

PROMPT System Privileges assigned to a user through roles

SELECT granted_role, privilege
FROM ( SELECT granted_role
FROM dba_role_privs WHERE grantee=UPPER('&username')
UNION
SELECT granted_role
FROM role_role_privs
WHERE role in (SELECT granted_role
FROM dba_role_privs WHERE grantee=UPPER('&username')
)
) roles, dba_sys_privs
WHERE granted_role=grantee;

PROMPT Table privileges assigned directly to a user

SELECT owner, table_name, privilege
FROM dba_tab_privs
WHERE grantee=UPPER('&username');

PROMPT System privileges assigned directly to a user

SELECT privilege, admin_option
FROM dba_sys_privs
WHERE grantee=UPPER('&username');
spool off

Friday, August 9, 2013

Oracle Password Verification function

Password Verification : Enforcing Password Complexity

Password complexity is enforced using a verification function. Oracle provides an example password verification function in the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" file, but you can code a custom one if you prefer. The function must accept three parameters (username, password and old_password) and return a boolean value, where TRUE signifies the password is valid. The simple example below just forces the password to be at least 8 characters long.

CREATE OR REPLACE FUNCTION my_verification_function (
username VARCHAR2,
password VARCHAR2,
old_password VARCHAR2)
RETURN BOOLEAN AS
BEGIN
IF LENGTH(password) < 8 THEN
RETURN FALSE;
ELSE
RETURN TRUE;
END IF;
END my_verification_function;
/

The function above requires the password to be at least 8 characters long.
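A slightly stronger variant is easy to build on the same skeleton; the sketch below (the function name is illustrative) also requires at least one digit:

CREATE OR REPLACE FUNCTION my_verification_function_v2 (
username VARCHAR2,
password VARCHAR2,
old_password VARCHAR2)
RETURN BOOLEAN AS
BEGIN
-- reject passwords shorter than 8 characters
IF LENGTH(password) < 8 THEN
RETURN FALSE;
END IF;
-- reject passwords without at least one digit
IF NOT REGEXP_LIKE(password, '[0-9]') THEN
RETURN FALSE;
END IF;
RETURN TRUE;
END my_verification_function_v2;
/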

Once the function is compiled under the SYS schema it can be referenced by the PASSWORD_VERIFY_FUNCTION parameter of a profile.

You can enable Password verify Function as below:
 
ALTER PROFILE my_profile LIMIT
PASSWORD_VERIFY_FUNCTION my_verification_function;
The code below assigns the completed profile to a user and tests it.
SQL> ALTER USER scott PROFILE my_profile;

User altered.

SQL> ALTER USER scott IDENTIFIED BY small;
ALTER USER scott IDENTIFIED BY small
*
ERROR at line 1:
ORA-28003: password verification for the specified password failed
ORA-28003: password verification for the specified password failed


SQL> ALTER USER scott IDENTIFIED BY much_bigger;

User altered.

SQL>
A more complete example of a password verification function is provided by the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" script.

If you have trouble thinking of strong passwords, try using a password generator like RandPass.com. It creates random strong passwords with a phonetic sound, making them easier to remember.
In Oracle 11g the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" script has been updated to improve the default password verification function. It also applies the function to the DEFAULT profile, enabling it for all users.


FLASHBACK STANDBY DATABASE


SQL*Plus commands to turn off Data Guard temporarily in order to activate the physical standby for testing purposes. Commands are prefixed PRMY or STBY to indicate which database they need to be executed on. 11g reduces these steps with a feature called the "snapshot standby database".

PRMY> ALTER SYSTEM ARCHIVE LOG CURRENT;

PRMY> ALTER SYSTEM SET DG_BROKER_START=FALSE;

PRMY> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

Redo apply is no longer running, logs are not shipping, and the Broker (which would otherwise automatically re-enable them) is turned off. All of these things change in 11g!

STBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

STBY> CREATE RESTORE POINT HOTFIX1 GUARANTEE FLASHBACK DATABASE;

STBY> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Now you can safely use the physical standby for testing; it is a fully functional database whose data matches production as of a certain point in time. No new data will arrive while you test, so remember your Recovery Time Objective.

When testing is over, revert it back into a physical standby.

STBY> STARTUP MOUNT FORCE;

STBY> FLASHBACK DATABASE TO RESTORE POINT HOTFIX1;

STBY> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;

STBY> STARTUP MOUNT FORCE;

STBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
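To confirm the database is back in its standby role, a quick check (v$database carries both the role and the flashback status):

STBY> SELECT database_role, flashback_on FROM v$database;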

Some Very Useful Oracle Views


Oracle Views


dba_cons_columns
dba_constraints
dba_data_files
dba_ddl_locks
dba_errors
dba_extents
dba_free_space
dba_indexes
dba_jobs
dba_lock
dba_objects
dba_role_privs
dba_roles
dba_rollback_segs
dba_segments
dba_source
dba_tab_columns
dba_tables
dba_temp_files
dba_triggers
dba_ts_quotas
dba_undo_extents
dba_users
dba_mview_logs
dba_mview_refresh_times
dba_mviews
v$archived_log
v$asm_disk
v$backup_datafile
v$controlfile
v$database
v$datafile
v$db_cache_advice
v$filestat
v$instance
v$lock
v$locked_object
v$log
v$logfile
v$parameter
v$process
v$segment_statistics
v$session
v$session_longops
v$session_wait
v$sesstat
v$sort_usage
v$sqlarea
v$statname
v$sysstat
v$system_event
v$tempfile
v$transaction
dba_part_tables
dba_queues
dba_resumable
dba_tab_partitions