Posts

Showing posts from August, 2013

Find index skewed, rebuild

Summary: It is important to periodically examine your indexes to determine whether they have become skewed and need to be rebuilt. When an index is skewed, some parts of the index are accessed more frequently than others; the resulting disk contention can become a performance bottleneck. The key column for deciding whether an index is skewed is BLEVEL. First gather statistics for the index, or run ANALYZE INDEX ... VALIDATE STRUCTURE. If BLEVEL is 4 or higher, rebuilding the index is recommended.

SELECT OWNER, INDEX_NAME, TABLE_NAME, LAST_ANALYZED, BLEVEL
FROM DBA_INDEXES
WHERE OWNER NOT IN ('SYS', 'SYSTEM')
  AND BLEVEL >= 4
ORDER BY BLEVEL DESC;

Which SQL are doing a lot of disk I/O

The following query lists the top 30 SQL statements, ordered by elapsed time, that are doing a lot of disk I/O:

SELECT *
FROM (SELECT SUBSTR(sql_text, 1, 500) SQL,
             ELAPSED_TIME, CPU_TIME, disk_reads, executions,
             disk_reads/executions "Reads/Exec",
             hash_value, address
      FROM V$SQLAREA
      WHERE (hash_value, address) IN
            (SELECT DISTINCT hash_value, address
             FROM v$sql_plan
             WHERE DISTRIBUTION IS NOT NULL)
        AND disk_reads > 100
        AND executions > 0
      ORDER BY ELAPSED_TIME DESC)
WHERE ROWNUM <= 30;

Disk I/O

Disk I/O scripts in this series:
Datafiles Disk I/O
Tablespace Disk I/O
Which segments have top Logical I/O & Physical I/O
Which SQL are doing a lot of disk I/O

Which segments have top Logical I/O & Physical I/O

Summary: Do you know which segments in your Oracle database have the largest amount of I/O, physical and logical? This SQL finds the most heavily accessed segments and helps target tuning efforts on them:

SELECT ROWNUM AS Rank, Seg_Lio.*
FROM (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name,
             St.VALUE, 'LIO' AS Unit
      FROM V$segment_Statistics St
      WHERE St.Statistic_Name = 'logical reads'
      ORDER BY St.VALUE DESC) Seg_Lio
WHERE ROWNUM <= 10
UNION ALL
SELECT ROWNUM AS Rank, Seq_Pio_r.*
FROM (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name,
             St.VALUE, 'PIO Rea...

Tablespace Disk I/O

Summary: The physical design of the database is key to optimal disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention. How I/O is spread per tablespace:

SELECT T.NAME,
       SUM(Physical_READS) Physical_READS,
       ROUND((RATIO_TO_REPORT(SUM(Physical_READS)) OVER ())*100, 2) || '%' PERC_READS,
       SUM(Physical_WRITES) Physical_WRITES,
       ROUND((RATIO_TO_REPORT(SUM(Physical_WRITES)) OVER ())*100, 2) || '%' PERC_WRITES,
       SUM(total) total,
       ROUND((RATIO_TO_REPORT(SUM(total)) OVER ())*100, 2) || '%' PERC_TOTAL
FROM (SELECT ts#, NAME, phyrds Physical_READS, phywrts Physical_WRITES,
             phyrds + phywrts total
      FROM v$datafile df, v$file...
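RATIO_TO_REPORT simply expresses each row's value as a fraction of the grand total. The same percentage calculation can be sketched in shell with awk (the file names and read counts below are made up for illustration):

```shell
# Each line: datafile name, physical reads (sample values, not from a real system)
printf 'users01.dbf 120\nsystem01.dbf 60\nundotbs01.dbf 20\n' |
awk '{name[NR]=$1; reads[NR]=$2; total+=$2}
     END {for (i = 1; i <= NR; i++)
            printf "%s %.2f%%\n", name[i], reads[i]/total*100}'
# Prints:
# users01.dbf 60.00%
# system01.dbf 30.00%
# undotbs01.dbf 10.00%
```

The two-pass shape (accumulate the total, then print each share) is exactly what the analytic OVER () clause does internally for the whole result set.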

Datafiles Disk I/O

Summary: The physical design of the database is key to optimal disk I/O performance. Storing the datafiles on different filesystems (disks) is a good technique to minimize disk contention. How I/O is spread per datafile:

SELECT NAME,
       phyrds Physical_READS,
       ROUND((RATIO_TO_REPORT(phyrds) OVER ())*100, 2) || '%' PERC_READS,
       phywrts Physical_WRITES,
       ROUND((RATIO_TO_REPORT(phywrts) OVER ())*100, 2) || '%' PERC_WRITES,
       phyrds + phywrts total
FROM v$datafile df, v$filestat fs
WHERE df.FILE# = fs.FILE#
ORDER BY phyrds DESC;

Tip: ORDER BY phyrds DESC sorts by physical reads descending; use ORDER BY phywrts DESC to sort by physical writes instead. How I/O is spread per filesystem:

SELECT filesystem, ROUND((RATIO_TO_REP...

Current waiting events Summary

Summary: The first and most important Oracle Wait Interface (OWI) script shows which sessions are currently waiting:

SELECT a.SID, b.serial#, b.status, p.spid, b.logon_time, a.event,
       l.NAME latch_name, a.SECONDS_IN_WAIT SEC, b.sql_hash_value,
       b.osuser, b.username, b.module, b.action, b.program,
       a.p1, a.p1raw, a.p2, a.p3,
       --b.row_wait_obj#, b.row_wait_file#, b.row_wait_block#, b.row_wait_row#,
       'alter system kill session ' || '''' || a.SID || ', ' || b.serial# || '''' || ' immediate;' kill_session_sql
FROM v$session_wait a, v$session b, v$latchname l, v$process p
WHERE a.SID = b.SID
  AND b.username IS NOT NULL
  AND b.TYPE <> 'BACKGROUND'
  AND a.event NOT IN (SELECT NAME FROM v$event_name WHERE wait_class = 'Idle')
  AND l.latch#(+) = a.p2
  AND b.paddr = p.addr
--AND a.sid = 559
--AND module IN ('JDBC Thin Client')
--AND p.spid = 13317
--AND b.sql_hash_value = '4119097924'
--AND event like 'libra...

Db file sequential read

Summary: The db file sequential read wait event means that Oracle is waiting while doing a single-block I/O read, as is the case when reading an index. Like all wait events, the columns P1, P2 and P3 give us the information needed to diagnose the wait.

Tip: A db file sequential read is usually a single-block read, although it is possible to see sequential reads of more than one block (see P3, which is the number of blocks read). This wait may also be seen for reads from datafile headers (P2=1 indicates a file header read).

Parameters: P1 = file#, P2 = block#, P3 = blocks. file# is the number of the file that Oracle is trying to read from. In O...

DELETE or CLEANUP failed export Jobs

Failed Data Pump jobs can be identified with the following SQL:

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions
     from dba_datapump_jobs;

Check the STATE column: failed jobs will show as NOT RUNNING.

DROP MASTER TABLE
Since the jobs above are orphaned (NOT RUNNING and will not be restarted), drop their master tables. The master tables in this case are SYS_EXPORT_FULL_01, SYS_EXPORT_FULL_02, SYS_EXPORT_FULL_01.

SQL*Plus: Release 10.2.0.4.0 - Production on Fri Aug 23 10:14:16 2013
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> drop table SYS_EXPORT_FULL_01;
Table dropped.
SQL> drop table SYS_EXPORT_FULL_...

Extract all tablespaces DDL

A nice and easy way to extract the DDL for all tablespaces: generate the DDL using the query below, and you can re-create the tablespaces in any environment with whatever changes you want to make.

set heading off;
set echo off;
set pages 999;
set long 90000;
spool ddl_list.sql
select dbms_metadata.get_ddl('TABLESPACE', tb.tablespace_name)
from dba_tablespaces tb;
spool off

The (sample) output will look something like this:

CREATE TABLESPACE "SYSTEM" DATAFILE
  '/u02/oracle/oradata/system01.dbf' SIZE 314572800
  AUTOEXTEND ON NEXT 10485760 MAXSIZE 32767M
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT MANUAL
ALTER DATABASE DATAFILE
  '/u02/oracle/oradata/QPUAT/system01.dbf' RESIZE 4194304000

CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  '/u02/oracle/oradata/undotbs01.dbf' SIZE 209715200
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192 ...

Upgrading Opatch from 10.2.0.4.2 to 10.2.0.5.1 Version

Upgrading OPatch to the latest version is simple. First check the current OPatch version:

[oracle@linux5 OPatch]$ opatch version
Invoking OPatch 10.2.0.4.2
OPatch Version: 10.2.0.4.2
OPatch succeeded.

Now download the latest OPatch version from the Oracle support site. In this case I downloaded p6880880_101000 and moved it to the database server. Move the patch file to your Oracle home:

[oracle@linux5 OPatch] mv p6880880_101000_SOLARIS64.zip $ORACLE_HOME
[oracle@linux5 OPatch] cd $ORACLE_HOME

(If a former OPatch directory already exists, rename it.) Make sure you back up your current OPatch home ...

Why is Swapping Bad for Oracle?

Linux is a virtual memory operating system like any other modern OS. The virtual memory management system of Linux includes: paging, swapping, HugePages, the slab allocator, and shared memory. When almost all available physical memory (RAM) is in use, the kernel starts to swap pages out to swap space on disk, or worse, it may swap out entire processes. Another scenario is that it starts killing processes via the Out-of-Memory (OOM) killer.

Swap usage on Linux
To check swap usage on Linux you can use one of the following. free: look for low (or zero) values in the Swap used column:

# free -m
             total       used       free     shared    buffers     cached
Mem:          4018       3144  ...
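The Swap line of free -m can also be checked non-interactively. A small shell sketch (the sample output is hard-coded for illustration; on a real host you would pipe free -m straight into awk):

```shell
# Sample `free -m` output (values made up for illustration)
sample='             total       used       free
Mem:          4018       3144        874
Swap:         2047          0       2047'

# On the Swap: line, column 3 is "used" MB; non-zero means the box is swapping
echo "$sample" | awk '/^Swap:/ {
    print "swap used (MB):", $3
    if ($3 > 0) print "WARNING: system is swapping"
}'
# Prints: swap used (MB): 0
```

On a live system the equivalent one-liner would be free -m | awk '/^Swap:/ {print $3}'.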

ORACLE GOLDEN GATE INSTALLATION

Will start the post soon with all installation steps and screenshots.

script to get all table and system privileges granted to a user

Description: This script lists all table and system privileges granted to a user. It also takes into account privileges assigned via roles, including roles granted through other roles.

Parameters: username - the user to display privileges for.

SQL source:

REM Copyright (C) Think Forward.com 1998-2005. All rights reserved.
set echo off
set verify off
set pages 200
col granted_role form a20
col owner form a10
col table_name form a27
col privilege form a12
ACCEPT username prompt 'Enter Username : '
spool privs.lst
PROMPT Roles granted to user
SELECT granted_role, admin_option, default_role
FROM dba_role_privs
WHERE grantee = UPPER('&username');
PROMPT Table Privileges granted to a user through roles
SELECT granted_role, owner, table_name, privilege
FROM (SELECT granted_role FROM dba_role_privs WHERE grantee = UPPER('&username')
      UNION
      SELECT granted_role FROM role_role_privs
      WHERE role IN (SELECT granted_role FROM dba_role_privs WHERE grantee = UPPER(...

Oracle Password Verification function

Password Verification: Enforcing Password Complexity
Password complexity is enforced using a verification function. Oracle provides an example password verification function in the "$ORACLE_HOME/rdbms/admin/utlpwdmg.sql" file, but you can code a custom one if you prefer. The function must accept three parameters (username, password and old_password) and return a boolean value, where TRUE signifies that the password is valid. The simple example below just forces the password to be at least 8 characters long.

CREATE OR REPLACE FUNCTION my_verification_function (
  username     VARCHAR2,
  password     VARCHAR2,
  old_password VARCHAR2)
  RETURN BOOLEAN AS
BEGIN
  IF LENGTH(password) < 8 THEN
    RETURN FALSE;
  ELSE
    RETURN TRUE;
  END IF;
END my_verification_function;
/

The function above requires the password to contain at least 8 characters. Once the function is compiled under the SYS schema it can be referenced by the PASSWORD_VERIFY_FUNCTI...

FLASHBACK STANDBY DATABASE

FLASHBACK STANDBY: SQL*Plus commands to turn off Data Guard temporarily in order to activate the physical standby for testing purposes. Commands are prefixed PRMY> or STBY> to indicate which database they must be executed on. 11g reduces these steps with a feature called the snapshot standby database.

PRMY> ALTER SYSTEM ARCHIVE LOG CURRENT;
PRMY> ALTER SYSTEM SET DG_BROKER_START=FALSE;
PRMY> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=DEFER;

Redo apply is no longer running, the broker is automatically re-enabled if not turned off, and logs are not shipping. All of these things change in 11g!

STBY> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
STBY> CREATE RESTORE POINT HOTFIX1 GUARANTEE FLASHBACK DATABASE;
STBY> ALTER DATABASE ACTIVATE STANDBY DATABASE;

Now you can use the physical standby safely for testing purposes. It is a fully functional database with the same data as production as of a certain point in time; no new data will be added... rem...

Some Very Useful Oracle Views

Oracle Views

DBA views: dba_cons_columns, dba_constraints, dba_data_files, dba_ddl_locks, dba_errors, dba_extents, dba_free_space, dba_indexes, dba_jobs, dba_lock, dba_objects, dba_part_tables, dba_queues, dba_resumable, dba_role_privs, dba_roles, dba_rollback_segs, dba_segments, dba_source, dba_tab_columns, dba_tab_partitions, dba_tables, dba_temp_files, dba_triggers, dba_ts_quotas, dba_undo_extents, dba_users, dba_mview_logs, dba_mview_refresh_times, dba_mviews

Dynamic performance (V$) views: v$archived_log, v$asm_disk, v$backup_datafile, v$controlfile, v$database, v$datafile, v$db_cache_advice, v$filestat, v$instance, v$lock, v$locked_object, v$log, v$logfile, v$parameter, v$process, v$segment_statistics, v$session, v$session_longops, v$session_wait, v$sesstat, v$sort_usage, v$sqlarea, v$statname, v$sysstat, v$system_event, v$tempfile, v$transaction

Rman backup status

-- Check if RMAN is still running
select sid, start_time, totalwork, sofar,
       (sofar/totalwork) * 100 pct_done
from v$session_longops
where totalwork > sofar
  and opname not like '%aggregate%'
  and opname like 'RMAN%';

select sid, spid, client_info, event, seconds_in_wait, p1, p2, p3
from v$process p, v$session s
where p.addr = s.paddr
  and client_info like 'rman channel=%';

Yousef Rifai has published this RMAN monitoring script, quite handy when you need to monitor the status of a long-running RMAN backup job:

-- Gives percentage of job completed
REM RMAN Progress
alter session set nls_date_format='dd/mm/yy hh24:mi:ss'
/...
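The pct_done column above is just sofar divided by totalwork, scaled to a percentage. As a sanity check, the same arithmetic in shell (the block counts are made-up sample values; on a real system they come from v$session_longops):

```shell
# sofar / totalwork * 100, as in the v$session_longops query
sofar=61440       # blocks processed so far (sample value)
totalwork=204800  # total blocks to process  (sample value)

awk -v s="$sofar" -v t="$totalwork" 'BEGIN {printf "%.1f%% done\n", s/t*100}'
# Prints: 30.0% done
```

Note that totalwork can legitimately be 0 while RMAN initialises a step, which is why the query filters on totalwork > sofar before dividing.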

using sqlloader to insert data in database table

Oracle SQL*Loader: importing CSV files into a table
This post shows how to easily insert data from a CSV file into a database table using the Oracle SQL*Loader tool. We will only cover importing CSV-formatted data into an Oracle table with a 1-to-1 column mapping; otherwise I would need to write a book. You can easily reuse my code and just customise the columns to your needs.

Source files: import.ctl, import.csv

Step 1
Using the command line/prompt, use SQL*Plus to log in to your Oracle database. Make sure the user has permission to create tables.

$ sqlplus user/password@DBINSTANCE;

Step 2
Under this user, create the table where you want to import the data (skip this step if the table already exists).

SQL> create table T_IMPORTED_DATA
  2  (
  3    SERIAL       NUMBER(16),
  4    TITLE ...
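The excerpt is cut off before the contents of import.ctl appear. A minimal control file for a comma-separated load into a table like T_IMPORTED_DATA might look as follows; this is a sketch, with the column list shortened to the two columns visible above (adjust it to your real table definition):

```
LOAD DATA
INFILE 'import.csv'
APPEND
INTO TABLE T_IMPORTED_DATA
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(SERIAL, TITLE)
```

It would then be run with something like sqlldr user/password@DBINSTANCE control=import.ctl. APPEND adds rows to a non-empty table; the default INSERT mode requires the table to be empty.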

Finding Last DDL and DML Activity on a Table

Finding out the last DDL and DML activity on a table. Here is a small piece of SQL to get this info. First create a test table:

create table t (t1 number, t2 varchar2(20), t3 date);
Table created.

Here is how you can find the last DDL and DML times:

SQL> select
  2    (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
  3    decode(maxscn, 0, 'N/A', scn_to_timestamp(maxscn)) "DML Time"
  4  from
  5    (select nvl(max(ora_rowscn), 0) maxscn from t);

DDL Time            DML Time
------------------- -------------------------------
2012-01-25 15:58:35 N/A

Now add some data to the table:

SQL> insert into t values (1, 'A', sysdate);
1 row created.

SQL> select
       (select last_ddl_time from dba_objects where object_name='T' and owner='SYS') "DDL Time",
       decode(...

Configuration of HugePages for Oracle

Configuration of HugePages for an Oracle database on Linux.
Environment: Oracle Database 11.2.0.3 Enterprise Edition, Oracle Linux 6.4 64-bit.

HugePages can give a performance boost on Linux systems with large SGA sizes. However, this is not set up by default, and the configuration can sometimes be a bit tricky. This is how I did it on a system with a 4GB SGA:

1) Disable Automatic Memory Management (AMM)
AMM is not compatible with HugePages. I disabled it by setting the following memory-related initialization parameters:

ALTER SYSTEM SET memory_max_target=0 SCOPE=SPFILE;
ALTER SYSTEM SET memory_target=0 SCOPE=BOTH;
ALTER SYSTEM SET sga_max_size=4016M SCOPE=SPFILE;
ALTER SYSTEM SET sga_target=4016M SCOPE=BOTH;

Then restart the instance.

2) Calculate the number of HugePages needed
The size of one HugePage can be found as follows:

$ cat /proc/meminfo | grep Hugepagesize
Hugepagesize: 2048 kB

The number of HugePages that you need can be found with the fol...
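The number of HugePages is essentially the SGA size divided by the HugePage size, rounded up. A shell sketch of that arithmetic for the 4016M SGA above, assuming a 2048 kB HugePage size as shown in /proc/meminfo:

```shell
sga_mb=4016          # sga_max_size from the init parameters above
hugepage_kb=2048     # Hugepagesize from /proc/meminfo

# Round up: total SGA in kB divided by the size of one HugePage in kB
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $pages"
# Prints: vm.nr_hugepages = 2008
```

This is only the back-of-envelope version; in practice you would size against the actual shared memory segments (ipcs -m) and usually add a small safety margin before putting the value in /etc/sysctl.conf.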

Export/Zip and Unzip/Import using UNIX Pipes

Export/Zip and Unzip/Import using UNIX Pipes
If you are creating and using large dump (.dmp) files with Oracle's export (exp) and import (imp) utilities, you can compress and uncompress the dump file as needed using gzip and gunzip (from GNU) or the Unix compress and uncompress utilities. One problem with this is that you still have to hold the large dump file on disk at some point, which you may or may not have room for, and you may run up against the 2 GB file-size limit on some machines and Oracle versions. Instead of doing the export and zip separately, creating an interim dump file (or doing unzip and import), Unix can pipe the output of one program (such as exp) into another program (such as gzip) while both run in parallel, without running the programs sequentially and without creating interim files. To do this, you create a named pipe (using the Unix mknod command), then run the programs (in either order), with the fir...
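The named-pipe technique can be sketched as below. Here printf stands in for the exp session writing to the pipe; with a real export you would instead run something like exp user/pass file=exp_pipe full=y (and the mirror-image setup with gunzip and imp for the import side):

```shell
workdir=$(mktemp -d) && cd "$workdir"

# Create the named pipe (mkfifo is the modern equivalent of `mknod exp_pipe p`)
mkfifo exp_pipe

# Reader: compress whatever arrives on the pipe, running in the background
gzip -c < exp_pipe > full_export.dmp.gz &

# Writer: stands in for `exp user/pass file=exp_pipe full=y`
printf 'fake dump data\n' > exp_pipe
wait

# Verify the round trip
gunzip -c full_export.dmp.gz
# Prints: fake dump data
```

Opening a FIFO for writing blocks until a reader has it open, so the two processes synchronise themselves and only compressed data ever touches the disk.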