Monday, March 30, 2026
Thursday, March 26, 2026
Oracle Audit
Procedure (Unified Auditing; 12c and later)
1) Verify/enable Unified Auditing view availability
- Test access:
select count(*) from unified_audit_trail;
- If it errors, you are likely on a pre-12c database or Unified Auditing is not available; stop and confirm the version before proceeding.
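Whether the database runs pure Unified Auditing (as opposed to mixed mode) can also be confirmed up front; this is a standard check against V$OPTION:

```sql
-- TRUE  = pure Unified Auditing
-- FALSE = mixed mode (traditional auditing still active alongside unified policies)
select value from v$option where parameter = 'Unified Auditing';
```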
2) Create a DML auditing policy for target tables
- For a specific table:
create audit policy app_dml_pol actions
insert on APP_SCHEMA.T1,
update on APP_SCHEMA.T1,
delete on APP_SCHEMA.T1;
audit policy app_dml_pol;
- For many existing tables (generate statements):
select 'alter audit policy app_dml_pol add actions insert on ' || owner || '.' || table_name
    || ', update on ' || owner || '.' || table_name
    || ', delete on ' || owner || '.' || table_name || ';'
from dba_tables
where owner = 'APP_SCHEMA';
-- run the generated ALTER AUDIT POLICY statements (each action needs its own ON clause)
3) Automatically include future tables (workaround via DDL trigger; no schema-wide wildcard)
- One-time setup:
create or replace trigger app_schema_audit_ddl
after create on database
begin
  if ora_dict_obj_type = 'TABLE' and ora_dict_obj_owner = 'APP_SCHEMA' then
    -- DDL cannot run directly inside a system trigger (ORA-30511),
    -- so hand the ALTER AUDIT POLICY off to a one-time scheduler job.
    dbms_scheduler.create_job(
      job_name   => dbms_scheduler.generate_job_name('AUDPOL_'),
      job_type   => 'PLSQL_BLOCK',
      job_action => q'[begin execute immediate 'alter audit policy app_dml_pol add actions insert on ]'
                    || ora_dict_obj_owner || '.' || ora_dict_obj_name
                    || q'[, update on ]' || ora_dict_obj_owner || '.' || ora_dict_obj_name
                    || q'[, delete on ]' || ora_dict_obj_owner || '.' || ora_dict_obj_name
                    || q'['; end;]',
      enabled    => true);
  end if;
end;
/
-- Ensure the policy app_dml_pol exists before creating the trigger.
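To confirm what the policy currently covers (including actions added later by the trigger), the data dictionary can be queried; AUDIT_UNIFIED_POLICIES is the standard view:

```sql
select object_schema, object_name, audit_option
from   audit_unified_policies
where  policy_name = 'APP_DML_POL'
order  by object_schema, object_name, audit_option;
```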
4) (Optional) Limit auditing to application users to reduce volume
- Example:
noaudit policy app_dml_pol;
audit policy app_dml_pol by users APP_USER1, APP_USER2;
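To verify which policies are actually enabled and for which users, the enabled-policies view can be checked (its column names differ slightly between 12.1 and 12.2+, hence the select *):

```sql
select * from audit_unified_enabled_policies;
```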
5) Review/Report audited DML
- Recent DML counts by object and action:
select to_char(event_timestamp, 'YYYY-MM-DD HH24:MI') ts_min,
object_name,
action_name,
count(*) cnt
from unified_audit_trail
where dbusername in ('APP_USER1','APP_USER2') -- or remove to see all
and action_name in ('INSERT','UPDATE','DELETE')
and event_timestamp >= systimestamp - interval '1' day
group by to_char(event_timestamp, 'YYYY-MM-DD HH24:MI'), object_name, action_name
order by 1,2,3;
- Who/where summary:
select object_name, action_name, dbusername, client_identifier, userhost, client_program_name,
count(*) cnt
from unified_audit_trail
where action_name in ('INSERT','UPDATE','DELETE')
and event_timestamp >= systimestamp - interval '1' day
group by object_name, action_name, dbusername, client_identifier, userhost, client_program_name
order by cnt desc;
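To drill into individual statements, the captured SQL text can be pulled as well; the UNIFIED_AUDIT_POLICIES column (available from 12.2) ties each record back to the policy that produced it:

```sql
select event_timestamp, dbusername, action_name, sql_text
from   unified_audit_trail
where  unified_audit_policies like '%APP_DML_POL%'  -- one record can list several policies
and    event_timestamp >= systimestamp - interval '1' day
order  by event_timestamp;
```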
6) Purge/retention (prevent trail growth and performance impact)
- Create a purge job retaining 30 days (adjust as needed):
begin
dbms_audit_mgmt.init_cleanup(audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
default_cleanup_interval => 24);
dbms_audit_mgmt.create_purge_job(
audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
audit_trail_purge_interval => 24,
audit_trail_purge_name => 'PURGE_UNIFIED_AUDIT_30D',
use_last_arch_timestamp => true);
dbms_audit_mgmt.set_last_archive_timestamp(
audit_trail_type => dbms_audit_mgmt.audit_trail_unified,
last_archive_time => systimestamp - interval '30' day);
end;
/
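A one-off manual purge (independent of the scheduled job) uses the same package and honors the last-archive timestamp set above:

```sql
begin
  dbms_audit_mgmt.clean_audit_trail(
    audit_trail_type        => dbms_audit_mgmt.audit_trail_unified,
    use_last_arch_timestamp => true);  -- false would purge the entire trail
end;
/
```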
7) Disable if needed
- Stop auditing:
noaudit policy app_dml_pol;
- Remove objects from policy or drop policy when finished:
drop audit policy app_dml_pol;
Notes
- Auditing every DML on a large schema can add overhead; start with the most-suspect tables/users, then expand as required.
- For SYS activity (if relevant), enabling audit_sys_operations captures privileged operations; use only if needed.
- If fixed object stats jobs slow down due to a very large unified audit trail, lock stats on SYS.X$UNIFIED_AUDIT_TRAIL:
exec dbms_stats.lock_table_stats('SYS','X$UNIFIED_AUDIT_TRAIL');
Summary: Yes. Use Unified Auditing to audit INSERT/UPDATE/DELETE on the application tables. Create a policy, attach target tables (generate statements for many tables), optionally auto-add future tables with a DDL trigger, restrict to app users if desired, report from UNIFIED_AUDIT_TRAIL, and manage retention with DBMS_AUDIT_MGMT.
Tuesday, September 23, 2025
URL
DB MONITORING SCRIPTS DBACLASS
GitHub - fatdba/Oracle-Database-Scripts: My Oracle DB Scripts
oracle-developer (oracle-developer.net) · GitHub
ORACLE RAC STARTUP SEQUENCE - Oracle Consulting Services | USA | 99% Customer Retention | Doyensys
download Oracle database virtual machines
Databases Are Fun – dohdatabase.com
Oracle Scratchpad | Just another Oracle weblog
Home - All Things Oracle (Oracle Patch)
https://database-heartbeat.com/
Oracle Exadata Command Reference
Understanding Exadata disk layout | Page On-Call DBA Exadata
Step By Step Patching an Exadata cell node Exadata Storage Cell Patching
OracleDBPro - Pini Dibask Blog: Data Protection Flashback
SQL Plan Baseline, SQL Patch, SQL Profile: Differences and Use Cases – Osman’s DBlog SQL PROFILE ,SQL PATCH ,SQL BASE LINE
In Oracle RAC (Real Application Clusters), the components CSSD, CRS, EVMD, and DIAG are core parts of the Clusterware stack. They work together to ensure cluster stability, node coordination, failover, and diagnostics.
1. CSSD (Cluster Synchronization Services Daemon)
🔹 What it is
CSSD (ocssd.bin) is the heartbeat and cluster membership manager.
🔹 Core Responsibilities
✅ Node Membership Management
- Tracks which nodes are alive or dead
- Maintains the cluster node list
✅ Heartbeat Mechanism
Uses two types:
- Network heartbeat → via private interconnect
- Disk heartbeat → via voting disks
👉 If a node misses heartbeat → it is considered dead.
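The heartbeat timeouts CSSD applies are visible (and tunable) with crsctl; the defaults noted below are the common Linux values:

```shell
# Network heartbeat timeout in seconds (default 30 on Linux)
crsctl get css misscount
# Disk (voting-disk) heartbeat timeout in seconds (default 200)
crsctl get css disktimeout
```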
🔹 Split-Brain Prevention
This is CSSD’s most critical job.
- Uses voting disks
- Requires majority quorum
- If a node loses quorum → it gets evicted (rebooted)
🔹 Failure Scenario
Example:
- Node loses network connectivity
- CSSD checks voting disks
- If quorum lost → node is killed to protect data integrity
🔹 Key Files / Logs
$GRID_HOME/log/<node>/cssd/ocssd.log (pre-12.1.0.2 layout; later releases write to the ADR under $ORACLE_BASE/diag/crs)
🔹 Key Process
ocssd.bin
⚙️ 2. CRS (Cluster Ready Services)
🔹 What it is
CRS (crsd.bin) is the resource manager of Oracle Clusterware.
🔹 Core Responsibilities
✅ Resource Management
Manages:
- Databases
- Instances
- Listeners
- VIPs
- ASM
✅ Start/Stop Resources
- Starts resources in correct order
- Handles dependencies
Example:
ASM → Database → Services
✅ Failover Management
- If a resource fails → CRS restarts it
- If node fails → relocates resources to another node
🔹 Resource Dependency Example
Database depends on ASM
Listener depends on network
🔹 Logs
$GRID_HOME/log/<node>/crsd/crsd.log
🔹 Key Process
crsd.bin
📡 3. EVMD (Event Manager Daemon)
🔹 What it is
EVMD (evmd.bin) is the event notification system.
🔹 Core Responsibilities
✅ Event Publishing
- Publishes cluster events:
- Node up/down
- Instance start/stop
- Failover events
✅ Event Subscription
- Applications/scripts can subscribe to events
✅ FAN (Fast Application Notification)
- Sends events to clients (e.g., JDBC, OCI)
- Helps apps react instantly to failures
🔹 Example Use Case
- Node crashes
- EVMD sends event
- Application connection pool drops dead connections immediately
🔹 Logs
$GRID_HOME/log/<node>/evmd/evmd.log
🔹 Key Process
evmd.bin
🩺 4. DIAG (Diagnostic Daemon)
🔹 What it is
DIAG (the diag daemon / integrated diagnostics framework) is responsible for diagnostics and health monitoring.
🔹 Core Responsibilities
✅ Log Collection
- Collects logs from all cluster components
✅ Health Monitoring
- Tracks component health
- Works with Cluster Health Monitor (CHM)
✅ Incident Detection
- Detects critical issues
- Generates trace files and dumps
🔹 Integration
- Works with:
- ADR (Automatic Diagnostic Repository)
- Trace infrastructure
🔹 Logs Location
$GRID_HOME/log/<node>/diag/
🔗 How They Work Together
🔄 Startup Flow
- OHASD starts
- CSSD starts → establishes cluster membership
- CRS starts → manages resources
- EVMD starts → enables event system
- DIAG runs in background
🔄 Failure Flow Example
Node Crash:
- CSSD detects heartbeat loss
- Node evicted
- CRS relocates resources
- EVMD sends notifications
- DIAG logs everything
🧩 Architecture Relationship
OHASD (Oracle High Availability Service)
│
├── CSSD → Cluster membership + heartbeat
├── CRS → Resource management
├── EVMD → Event system
└── DIAG → Diagnostics
🚨 Quick Comparison Table
| Component | Role | Critical Function |
| --- | --- | --- |
| CSSD | Cluster control | Node membership, heartbeat |
| CRS | Resource manager | Start/stop/failover |
| EVMD | Event system | FAN notifications |
| DIAG | Diagnostics | Logs, health, incidents |
🛠️ Important Commands
Check cluster status:
crsctl stat res -t
Check cluster health:
crsctl check cluster -all
Check CSS:
crsctl check css
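Two more standard crsctl commands round this out, covering the voting disks and per-daemon health discussed above:

```shell
# Voting disks backing the CSSD disk heartbeat and quorum
crsctl query css votedisk
# Health of the local stack daemons (CRS, CSS, EVM)
crsctl check crs
```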
Monday, July 14, 2025
Gather Statistics
1. Gather dictionary stats:
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
2. Gather table stats (example):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'HR',
    tabname          => 'EMPLOYEES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE SKEWONLY',
    cascade          => TRUE);
END;
/
The METHOD_OPT parameter in Oracle's DBMS_STATS procedures is a powerful and nuanced tool that controls how column statistics are gathered, especially histograms and extended statistics. Here's a breakdown to help you use it effectively.
What METHOD_OPT Does
It determines:
- Which columns get base statistics (min, max, NDV, nulls, etc.)
- Whether histograms are created, and what type
- Whether extended statistics (like column groups) are gathered
🧾 Syntax Overview
METHOD_OPT => 'FOR [column_spec] SIZE [histogram_spec]'
🔹 Column Spec Options
FOR ALL COLUMNS ==> Gathers stats for all columns (default)
FOR ALL INDEXED COLUMNS ==> Only indexed columns (not recommended)
FOR ALL HIDDEN COLUMNS ==> Only virtual/hidden columns (special use case)
FOR COLUMNS (col1, col2) ==> Specific columns or a column group
🔸 Histogram Spec Options
SIZE 1 ==> No histograms
SIZE AUTO ==> Oracle decides based on usage and data distribution (recommended)
SIZE SKEWONLY ==> Histograms only if data is skewed
SIZE REPEAT ==> Repeats previous histogram settings
SIZE n ==> Explicit number of histogram buckets (e.g. 254)
✅ Examples
1. No Histograms
METHOD_OPT => 'FOR ALL COLUMNS SIZE 1'
2. Auto Histograms
METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO'
3. Specific Columns with Histograms
METHOD_OPT => 'FOR COLUMNS (col1, col2) SIZE 254'
4. Column Group Statistics
METHOD_OPT => 'FOR COLUMNS (col1, col2) SIZE AUTO'
This gathers extended stats for the column group.
🧠 Best Practices
- Use SIZE AUTO for most environments; it lets Oracle decide intelligently.
- Avoid FOR ALL INDEXED COLUMNS unless you have a very narrow workload.
- Use column groups for correlated columns in WHERE or JOIN clauses.
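A column group must exist as an extension before the optimizer can use combined stats for it; DBMS_STATS.CREATE_EXTENDED_STATS creates one. The HR column pair here is illustrative:

```sql
-- Creates a virtual extension for the correlated pair; returns its system-generated name.
select dbms_stats.create_extended_stats(
         ownname   => 'HR',
         tabname   => 'EMPLOYEES',
         extension => '(department_id, job_id)') as extension_name
from dual;
-- A subsequent GATHER_TABLE_STATS with METHOD_OPT 'FOR ALL COLUMNS SIZE AUTO'
-- will then collect stats (and possibly a histogram) on the group.
```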
In Oracle's DBMS_STATS package, the ESTIMATE_PERCENT parameter controls how much of the data is sampled when gathering statistics. It directly affects the accuracy and performance of the stats collection process.
What Does ESTIMATE_PERCENT Do?
It specifies the percentage of rows Oracle should sample from a table (or index) to generate statistics. You can set it to:
- A specific number (e.g., 10, 50, 100)
- Or the automatic setting: DBMS_STATS.AUTO_SAMPLE_SIZE
Recommended Setting: DBMS_STATS.AUTO_SAMPLE_SIZE
Oracle recommends using:
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
🔍 Why?
- Oracle uses adaptive sampling algorithms to determine the best sample size.
- It avoids full table scans unless necessary.
- It ensures accurate NDV (Number of Distinct Values) without high overhead.
- Required for incremental statistics and hybrid histograms.
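The two styles side by side, reusing the HR table from the earlier example:

```sql
-- Recommended: Oracle picks the sample adaptively (approximate NDV, no fixed percentage)
exec dbms_stats.gather_table_stats('HR','EMPLOYEES', estimate_percent => dbms_stats.auto_sample_size)
-- Fixed 10% sample: faster on huge tables, but can mis-estimate NDV on skewed data
exec dbms_stats.gather_table_stats('HR','EMPLOYEES', estimate_percent => 10)
```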
Monday, December 5, 2022
IBM DB2 LUW
Table Function SNAP_GET_LOCKWAIT
To retrieve information about lock waits using SQL, you can use Db2’s table function SNAP_GET_LOCKWAIT and the administrative view SNAPLOCKWAIT.
The following statement, for example, returns information about lock waits of the currently connected database, its current partition, and the agent ID of the participating agents:
[db2inst1@xxxxxxxxxx]$ db2 "SELECT AGENT_ID, LOCK_MODE, LOCK_OBJECT_TYPE, AGENT_ID_HOLDING_LK, LOCK_MODE_REQUESTED FROM TABLE(SNAP_GET_LOCKWAIT('',-1)) AS T"
AGENT_ID LOCK_MODE LOCK_OBJECT_TYPE AGENT_ID_HOLDING_LK LOCK_MODE_REQUESTED
-------------------- ---------- ------------------ -------------------- -------------------
Monitoring Utility db2pd
To gather information about lock waits and deadlocks, you can also use the Db2 monitoring utility db2pd. This utility reads the information directly from the shared memory of the database and therefore is very fast.
A db2pd command looks, for example, as follows:
db2pd -db <dbsid> -locks showlocks wait
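Once the blocker's agent ID is known (AGENT_ID_HOLDING_LK in the SNAP_GET_LOCKWAIT query above), that connection can be forced off if necessary; 1234 below is a placeholder agent ID:

```shell
db2 "force application (1234)"
```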
The output lists each waiting application together with the holder of the lock and the requested lock mode.
db2 HADR monitoring checks
Check 1 > db2 instance / HADR connect status
Gives information about the db2 instance. Raise an alert if "ConnectStatus" shows "disconnected" (the alert is already configured in the monitoring tool by the tools team). If it shows disconnected, restart HADR on both Primary and Standby.
Check 2 > Check at either DC & DR (Primary & Standby)
Command: db2pd -d <DB_NAME> -hadr | grep -i LOG
Output: the log numbers on both sides should be the same; the difference should not be more than 5. If "ConnectStatus" shows "Connected" and the log difference is more than 5, send an alert (already configured in the monitoring tool by the tools team).
Monitor the log difference: if it is rapidly increasing instead of decreasing (more than 100), inform the application team.
Steps we need to follow to do the DC-DR switchover:
db2 "get db cfg for misprd1" > /dr_dc_backups/misprd1/misprd1_cfg.out
db2 "get dbm cfg" > /dr_dc_backups/misprd1/bodbins1_cfg.out
db2set -all > /dr_dc_backups/misprd1/db2set.out
db2 list database directory > /dr_dc_backups/misprd1/db_dir.out
db2 list node directory > /dr_dc_backups/misprd1/node_dir.out
crontab -l > /dr_dc_backups/misprd1/bodbins1_crontab.out
DR Switch Over Plan

| Sl. No. | Step Description | Owner | Required Team(s) | Dependencies / Roadblocks (if any) |
| --- | --- | --- | --- | --- |
| 1 | Stop all business applications that are connected to the database on the DC site. | Application Team | | All jobs and applications should be stopped |
| 2 | Check if any connection to the database remains; if so, stop it. | DBA | | |
| 3 | Verify the databases on DC/DR are connected, in PEER state, and the log position of DR (Standby) is identical with DC (Primary). | DBA | | |
| 4 | Verify that all tablespaces on DR are in normal state. | DBA | | |
| 5 | Check that all tables are in normal state on DR. | DBA | | |
| 6 | Switch the HADR role between DC / DR. | DBA | | |
DC Side:
1> db2 list applications
2> db2 force application all
3> db2 get db cfg for misprd1 | egrep "STATUS|ROLE|STATE|POS"
DR Side:
1> db2 get db cfg for misprd1 | egrep "STATUS|ROLE|STATE|POS"
2> db2 takeover hadr on db MISPRD1
3> db2 connect to MISPRD1
4> db2 list tablespaces | grep -i state
5> db2 "select tabschema,tabname,status from syscat.tables"
Daily Checklist of Database:
1. Database backup: set up daily incremental and weekly full backups, or weekly incremental and monthly full backups.
2. Mount point check: take corrective action if usage reaches 75%.
3. HADR status: in sync or not.
4. Tablespace state check: should be 0x0000 (db2 list tablespaces).
5. Table status check: should be Normal (db2 load query table <schemaname>.<tabname>).
6. RUNSTATS: can be done once a week (when data load is low).
7. Tablespace utilization (not for temporary tablespaces).
8. Daily backup of db2diag.log and the DDL script.
9. Archive log deletion after the last backup.
10. Log utilization of the database: should not cross 70%.