Friday, October 25, 2013

OraSASH visualization


Hello,

I started working on this project two years ago; at that stage I wrote JavaScript code to parse JSON and display real-time graphs. The backend providing JSON data from the Oracle database was based on PHP, and I didn't like that solution, as it required too much configuration, so the project was postponed.

A couple of weeks ago I discovered Node.js and decided to come back to my old project, this time using this lightweight server as the backend. I couldn't find a Node.js database driver for Oracle, so I decided to serve the performance data using DBMS_EPG and PL/SQL procedures. The data is taken from OraSASH (but ASH/AWR can be used as well) and encoded as JSON. The browser uses AJAX to call Node.js, and Node.js connects to the Oracle database to fetch the required data.
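As an illustration of the browser side, shaping those JSON rows into chart series might look like this (a sketch only - the function and field names are my own assumptions, not taken from the VISASH code):

```javascript
// Group ASH-style samples into per-wait-class series for a real-time chart.
// Input rows are assumed to look like the JSON a DBMS_EPG procedure could emit:
//   { sample_time: '...', wait_class: '...', sessions: N }
function toChartSeries(rows) {
  const labels = [];   // distinct sample times, in order of appearance
  const series = {};   // wait_class -> array of session counts per label
  for (const row of rows) {
    let idx = labels.indexOf(row.sample_time);
    if (idx === -1) { idx = labels.length; labels.push(row.sample_time); }
    if (!series[row.wait_class]) series[row.wait_class] = [];
    series[row.wait_class][idx] = (series[row.wait_class][idx] || 0) + row.sessions;
  }
  // Fill gaps with zeros so every series has a value for every label
  for (const name of Object.keys(series)) {
    for (let i = 0; i < labels.length; i++) series[name][i] = series[name][i] || 0;
  }
  return { labels, series };
}
```

The result maps directly onto the usual charting-library input: one label axis and one numeric array per wait class.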

This project is at a very early stage, so stay tuned - you can find the initial code here

Here are some screenshots from the VISASH project:


One instance view 



Two instances view

regards,
Marcin

 

 

New release candidate of OraSASH

Hi,

A new release candidate of OraSASH 2.4 is available on GitHub - https://github.com/pioro/orasash/tree/v2.4-rc2

This version provides many more views and makes it easier to run your existing AWR queries against OraSASH.

regards,
Marcin 



Tuesday, July 30, 2013

How to improve VMware performance on your PC

I found a very interesting post here - VMware Performance Enhancing Tweaks (Over-the-Counter Solutions). I applied it to my VMware Player and Workstation, and after I disabled the vmem file my laptop stopped blinking the HDD LED for 5-10 minutes every time it woke up from sleep mode.

regards,
Marcin

Sunday, July 28, 2013

DataGuard – Far Sync – part 2 - Data Guard Broker

Oracle introduced the Far Sync Data Guard configuration, which I described briefly in this post. Now it is time for part two: using Data Guard Broker to add a Far Sync instance.
Assuming that you have a basic Data Guard Broker configuration ready (as described in How to quickly build standby database and setup DataGuard configuration using Oracle 12c), adding a new Far Sync instance is quite an easy task.

First we have to set up a new instance that will serve as the Far Sync instance for our primary database.

Instance parameter file - here a copy of the primary database configuration (convenient, but not necessary)
*.audit_file_dest='/home/oracle/admin/test12c/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/test12c/control01.ctl','/u01/app/oracle/oradata/test12c/control02.ctl','/u01/app/oracle/oradata/test12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='test12c'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test12cXDB)'
*.log_archive_dest_1='location=/u01/app/oracle/oradata/test12c/arch'
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=450m
*.undo_tablespace='UNDOTBS1'
# for far sync
*.db_unique_name='test12c_far_sync'
*.LOG_FILE_NAME_CONVERT='test','test'
*.local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522)))'
There are three parameters which have to be changed:
  • db_unique_name 
  • local_listener - the new instance has to be registered with the proper listener
  • log_file_name_convert - useful to set even with dummy values, so Oracle can recreate the standby logs automatically
Listener configuration
SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_far_sync_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )


LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
   )
 )
tnsnames file - the test12c_far_sync entry has to be added on the primary and standby servers as well
test12c_prim =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)

test12c_std =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)


test12c_far_sync =
 (DESCRIPTION =
   (ADDRESS_LIST =
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.60)(PORT = 1522))
   )
 (CONNECT_DATA =
   (SID = test12c)
   (SERVER=dedicated)
 )
)
When the instance parameter file and network configuration are ready, we need to create a Far Sync control file on the primary server.
[oracle@ora12c dbs]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:21:59 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';

Database altered.

Copy it to the Far Sync server
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control01.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control02.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:01
[oracle@ora12c dbs]$ scp /tmp/farsync.ctl 192.168.1.60:/u01/app/oracle/oradata/test12c/control03.ctl
oracle@192.168.1.60's password:
farsync.ctl                                                                                                                  100%   10MB  10.3MB/s   00:00
[oracle@ora12c dbs]$
Now the Far Sync instance can be started and new standby logs can be created. Make sure that Data Guard Broker has been started as well.
[oracle@oraclese dbs]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 23:24:27 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> create spfile from pfile;
File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             293605352 bytes
Database Buffers          167772160 bytes
Redo Buffers                8163328 bytes
Database mounted.

SQL> alter system set dg_broker_start = true;
System altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby01.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby02.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby03.log') size 50M reuse;
Database altered.
As in the previous post about Data Guard configuration, the Broker will be used to do all the configuration work. You should connect to the Broker from the primary database. This is what the current configuration looks like:
[oracle@ora12c ~]$ rlwrap dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL>  show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Adding Far Sync
DGMGRL> add far_sync test12c_far_sync as connect identifier is 'test12c_far_sync';
far sync instance "test12c_far_sync" added
DGMGRL>  show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database
    test12c_far_sync  - Far Sync (disabled)

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Now the new redo transport rules have to be configured
DGMGRL> edit database 'test12c_prim' set property redoroutes='(local : test12c_far_sync sync)';
Property "redoroutes" updated
DGMGRL> edit far_sync 'test12c_far_sync' set property redoroutes='(test12c_prim : test12c_std)';
Property "redoroutes" updated
The above configuration sends redo from the primary database to the far sync instance in SYNC mode, and from the far sync instance to the standby database in ASYNC mode. As the primary and the far sync instance are close to each other, you get a no-data-loss mode without the additional network synchronization overhead.
Redo routing rules are described in the Oracle documentation.
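As an aside, the RedoRoutes values above follow a '(source : destination [mode])' pattern. A toy parser (my own illustration, not part of any Oracle tool; it handles only the single-destination form used in this post) shows how the two rules read:

```javascript
// Toy parser for simple RedoRoutes values like '(local : test12c_far_sync sync)'.
// Only the single-rule, single-destination form is handled; the real property
// grammar is richer (see the Data Guard Broker documentation).
function parseRedoRoute(value) {
  const m = value.trim().match(/^\(\s*(\S+)\s*:\s*(\S+)(?:\s+(sync|async))?\s*\)$/i);
  if (!m) throw new Error('unsupported RedoRoutes value: ' + value);
  return {
    source: m[1],                          // where the redo originates ('local' = this database)
    destination: m[2],                     // where the redo is forwarded to
    mode: (m[3] || 'async').toUpperCase()  // transport mode; ASYNC when not specified
  };
}
```

So the primary's rule forwards its own redo to test12c_far_sync in SYNC mode, and the far sync rule forwards redo received from test12c_prim on to test12c_std (no mode given, so ASYNC).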

Enabling and checking configuration
DGMGRL> enable far_sync test12c_far_sync;
Enabled.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_far_sync  - Far Sync
      test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
If everything is OK, we can switch the protection mode from Maximum Performance to Maximum Availability to take full advantage of the newly configured Far Sync instance.
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxAvailability
  Databases:
  test12c_prim - Primary database
    test12c_far_sync  - Far Sync
      test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Now all is done. Happy testing.
Marcin

Saturday, July 27, 2013

How to quickly build standby database and setup DataGuard configuration using Oracle 12c

With every release of the Oracle database there are new ways to build a required solution (e.g. a standby database) faster and with fewer steps. This is not an official guide, but rather a mix of Oracle 12c functionality used to build a DataGuard configuration using RMAN and DataGuard Broker.

Assumptions:

  • There is a database which will be used as the primary.
  • A new listener will be added to support standby traffic.
Steps to run on primary database

Listener configuration
LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
   )
 )

SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_prim_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )
tnsnames entries
test12c_std =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


test12c_prim =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


New parameters for database
[oracle@ora12c ~]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 21:02:14 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter system set local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522)))';
System altered.

SQL> alter system set db_unique_name='test12c_prim' scope=spfile;
System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             255856616 bytes
Database Buffers          209715200 bytes
Redo Buffers                3969024 bytes
Database mounted.
Database opened.
New control file for standby
SQL> alter database create standby controlfile as '/tmp/standby.ctl' reuse;
Database altered.
Copy the standby control file to the standby server
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control01.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control02.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24
[oracle@ora12c ~]$ scp -i /home/oracle/oracle_standby.key  /tmp/standby.ctl     oracle@10.0.1.79:/u01/app/oracle/oradata/test12c/control03.ctl
standby.ctl                                                                                                                       100%   10MB 438.0KB/s   00:24

Steps to run on new standby database

Database parameter file - it can be a copy of the primary database's file with two changes: db_unique_name and local_listener
*.audit_file_dest='/home/oracle/admin/test12c/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracle/oradata/test12c/control01.ctl','/u01/app/oracle/oradata/test12c/control02.ctl','/u01/app/oracle/oradata/test12c/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='test12c'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=test12cXDB)'
*.log_archive_dest_1='location=/u01/app/oracle/oradata/test12c/arch'
*.open_cursors=300
*.pga_aggregate_target=150m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=450m
*.undo_tablespace='UNDOTBS1'
# changes for standby
*.db_unique_name='test12c_std'
*.local_listener='(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522)))'
Listener configuration
LISTENER_DG =
 (DESCRIPTION_LIST =
   (DESCRIPTION =
     (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROCDG1))
     (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
   )
 )

SID_LIST_LISTENER_DG =
 (SID_LIST =
   (SID_DESC=
       (ORACLE_HOME= /u01/app/oracle/product/12.1.0/db1)
       (SID_NAME=test12c)
   )
   (SID_DESC =
      (SID_NAME = test12c)
      (GLOBAL_DBNAME = test12c_std_DGMGRL)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/db1)
   )
 )
tnsnames entries
test12c_std =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 10.0.1.79)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )


test12c_prim =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.45)(PORT = 1522))
    )
    (CONNECT_DATA =
      (SID = test12c)
      (SERVER = dedicated)
    )
  )
 
In the next step we have to check that all datafiles and configuration directories are in place on the standby machine, and start the instance in mount mode.

Starting up instance
 
[oracle@ip-10-0-1-79 dbs]$ rlwrap sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Jul 26 17:39:48 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.
Connected to an idle instance.

SQL> create spfile from pfile;
File created.

SQL> startup mount
ORACLE instance started.

Total System Global Area  471830528 bytes
Fixed Size                  2289688 bytes
Variable Size             255856616 bytes
Database Buffers          209715200 bytes
Redo Buffers                3969024 bytes
Database mounted.
SQL>

RMAN restore over the network is an Oracle 12c feature. Yes, there was duplicate from active database in 11g, but this one is much easier to run and requires fewer configuration steps. To restore the Oracle data files on the standby server, all we need is an instance in mount mode and a TNS alias pointing to the primary database.

Starting RMAN and restoring data files 
[oracle@ip-10-0-1-79 dbs]$ rlwrap rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Fri Jul 26 16:18:57 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST12C (DBID=1424547711, not open)

RMAN> restore database from service test12c_prim using compressed backupset;
Starting restore at 26-JUL-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/test12c/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:08:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/test12c/sysaux01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:08:55
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/test12c/undotbs01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:01:25
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/test12c/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_prim
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/test12c/soe.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:46:45
Finished restore at 26-JUL-13

RMAN>

At this stage the primary and standby databases are almost ready.

Step to run on primary and standby database

Starting DataGuard Broker and creating standby log files
 
SQL> alter system set dg_broker_start = true;
System altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby01.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby02.log') size 50M reuse;
Database altered.

SQL> alter database add standby logfile ('/u01/app/oracle/oradata/test12c/standby03.log') size 50M reuse;
Database altered.

Steps to run on primary database

Now it is time for DataGuard Broker to do the work for us.

Adding primary database
[oracle@ora12c ~]$ rlwrap dgmgrl /
DGMGRL for Linux: Version 12.1.0.1.0 - 64bit Production

Copyright (c) 2000, 2012, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> connect /
Connected as SYSDG.
DGMGRL> create configuration FSC as primary database is test12c_prim connect identifier is 'test12c_prim';
Configuration "fsc" created with primary database "test12c_prim"
 
Adding standby database and displaying configuration
DGMGRL> add database test12c_std as connect identifier is 'test12c_std' maintained as physical;
Database "test12c_std" added
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
DISABLED
 
Enabling configuration and checking DataGuard status
DGMGRL> enable configuration;
Enabled.
DGMGRL> show configuration verbose;

Configuration - fsc

  Protection Mode: MaxPerformance
  Databases:
  test12c_prim - Primary database
    test12c_std - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    TraceLevel                      = 'USER'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

And our DataGuard configuration is up and running. Now we can change the default configuration to one which meets our requirements. In the next post I will add a Far Sync instance and switch the configuration to Maximum Availability.

regards,
Marcin 

Tuesday, July 16, 2013

DataGuard – Far Sync – part 1 – simple configuration

Oracle introduced a new DataGuard feature called Far Sync. This configuration is designed to support synchronous redo transport between a primary and a standby database that are far apart. In a usual configuration, synchronous redo transport between data centres in Europe and the US is possible but inefficient from the primary database performance perspective. Every commit has to be written locally to the redo logs and sent to the standby with confirmation - a network round trip between Europe and the US adds a significant delay to every commit operation in synchronous mode. If the database had to run in no-data-loss mode, the DBA had no choice but to run the primary in SYNC mode and pay the network round trip on every transaction, or to set up both databases in data centres relatively close to each other.

To mitigate that problem, Oracle 12c introduced the DataGuard Far Sync instance. A Far Sync instance is a redo transport "proxy" and has to be located close to the primary database. As Far Sync is an instance only, there is no need to put a whole database there - it requires space only for standby and archive logs.
In this configuration the primary sends redo to the Far Sync instance in synchronous mode, so every transaction is written to the primary redo logs and the Far Sync standby logs - commit time is limited to the time required for both writes plus the network round trip between the primary and Far Sync. The Far Sync instance then ships the redo stream in asynchronous mode to the standby database in the other data centre.
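A back-of-the-envelope model shows why this helps; all latency numbers below are assumptions for illustration, not measurements from my setup:

```javascript
// Rough commit-time model for SYNC redo transport: the session waits for the
// local redo write plus the remote standby-log write and the network round trip.
function syncCommitMs(localWriteMs, remoteWriteMs, roundTripMs) {
  return localWriteMs + remoteWriteMs + roundTripMs;
}

const localWrite = 1;     // local redo write, ms (assumed)
const remoteWrite = 1;    // standby / far sync standby log write, ms (assumed)
const rttEuropeUs = 120;  // Europe <-> US round trip, ms (assumed)
const rttToFarSync = 1;   // primary <-> nearby Far Sync round trip, ms (assumed)

const directSync = syncCommitMs(localWrite, remoteWrite, rttEuropeUs);   // SYNC to distant standby
const viaFarSync = syncCommitMs(localWrite, remoteWrite, rttToFarSync);  // SYNC to nearby Far Sync
```

With these assumed numbers, every commit pays roughly 122 ms when shipping SYNC across the ocean, but only about 3 ms when the SYNC hop ends at a nearby Far Sync instance; the long-haul leg becomes ASYNC and no longer sits on the commit path.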

In my test configuration I set up an OpenVPN connection between a VM running on my home ESX server (located in Europe) and two EC2 instances running in the US. This configuration allowed me to measure commit time in synchronous mode with and without a Far Sync instance.

Configuration details:

Primary database – TEST_PRM
Far sync instance – FARSYNC
Standby database – TEST_STD


Primary database configuration – DataGuard related parameters only
DB_UNIQUE_NAME=TEST_PRM
FAL_SERVER=TEST_STD
LOG_ARCHIVE_CONFIG='DG_CONFIG=( TEST_PRM,FARSYNC, TEST_STD)' 
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=TEST_PRM'
Standby configuration:
LOG_ARCHIVE_CONFIG='DG_CONFIG=( TEST_PRM,FARSYNC, TEST_STD)' 
LOG_ARCHIVE_DEST_1='location=/u01/app/oracle/oradata/test12c/arch'
 
First test case – SYNC mode redo transport between primary and standby database 

Primary log transport configuration
log_archive_dest_2 = 'service="TEST_STD", SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10, valid_for=(online_logfile,all_roles)';
log_archive_config = 'dg_config=(TEST_PRM,TEST_STD,FARSYNC)'
In both cases I ran Swingbench against the primary database and monitored the wait-time histogram for the 'log file sync' event (the primary database was restarted before each test to have identical conditions and reset counters).
SQL> select * from  v$event_histogram where  EVENT = 'log file sync';
    EVENT# EVENT                          WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                                               CON_ID
---------- ------------------------------ --------------- ---------- ------------------------------------------------------------------------- ----------
       144 log file sync                                1          5 10-JUL-13 08.50.01.857358 PM +01:00                         0
       144 log file sync                                2         23 10-JUL-13 08.38.51.202819 PM +01:00                         0
       144 log file sync                                4         11 10-JUL-13 08.40.00.723312 PM +01:00                         0
       144 log file sync                                8          2 10-JUL-13 08.59.06.068904 PM +01:00                         0
       144 log file sync                               16          8 10-JUL-13 08.59.22.090580 PM +01:00                         0
       144 log file sync                               32         18 10-JUL-13 08.59.29.450597 PM +01:00                         0
       144 log file sync                               64         41 10-JUL-13 08.59.33.983966 PM +01:00                         0
       144 log file sync                              128         62 10-JUL-13 08.59.39.447536 PM +01:00                         0
       144 log file sync                              256        326 10-JUL-13 08.59.40.640604 PM +01:00                         0
       144 log file sync                              512       1423 10-JUL-13 08.59.40.640691 PM +01:00                         0
       144 log file sync                             1024         37 10-JUL-13 08.59.11.646609 PM +01:00                         0
       144 log file sync                             2048          1 10-JUL-13 08.53.53.715921 PM +01:00                         0
       144 log file sync                             4096          1 10-JUL-13 08.56.15.150343 PM +01:00                         0

13 rows selected.
As we can see in the histogram, a log file sync time between 256 and 512 ms is the most frequent, and the commit time for most transactions fell in that range. In addition to the Swingbench traffic I ran one test transaction with a simple row insert and traced it - here is the important line from the trace file:

WAIT #140043716402464: nam='log file sync' ela= 184627 buffer#=419 sync scn=987285
 

As you can see, 184 ms for a commit (ela is reported in microseconds) is not a great result.

Second test case – SYNC mode redo transport between primary and far sync instance 

Primary log transport configuration
LOG_ARCHIVE_DEST_2='service="FARSYNC"','SYNC AFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="FARSYNC" net_timeout=30','valid_for=(online_logfile,all_roles)' SCOPE=BOTH;
The Far Sync instance requires the Oracle 12c binaries installed and a special version of the control file, created on the primary database using the following command:
ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';
Instance parameters:
CONTROL_FILES=/u01/app/oracle/oradata/test12c/control01.ctl
DB_UNIQUE_NAME= FARSYNC
FAL_SERVER= TEST_PRM
LOG_FILE_NAME_CONVERT='test','test'
LOG_ARCHIVE_CONFIG='DG_CONFIG=( TEST_PRM,FARSYNC, TEST_STD)' 

LOG_ARCHIVE_DEST_1       = "location=/u01/app/oracle/oradata/test12c/arch"
LOG_ARCHIVE_DEST_2='service="TEST_STD"','ASYNC NOAFFIRM delay=0 optional compression=disable max_failure=0 max_connections=1 reopen=300 db_unique_name="TEST_STD" net_timeout=10','valid_for=(standby_logfile,all_roles)' SCOPE=BOTH;
As you can see, in the above configuration LOG_ARCHIVE_DEST_2 is configured as ASYNC and points to the real standby database.

Let's run the test again
SQL> select * from  v$event_histogram where  EVENT = 'log file sync';

    EVENT# EVENT                          WAIT_TIME_MILLI WAIT_COUNT LAST_UPDATE_TIME                                               CON_ID
---------- ------------------------------ --------------- ---------- ------------------------------------------------------------------------- ----------
       144 log file sync                                1        254 10-JUL-13 09.23.58.069192 PM +01:00                         0
       144 log file sync                                2        364 10-JUL-13 09.23.58.027215 PM +01:00                         0
       144 log file sync                                4        338 10-JUL-13 09.23.57.951897 PM +01:00                         0
       144 log file sync                                8        278 10-JUL-13 09.23.57.540682 PM +01:00                         0
       144 log file sync                               16         82 10-JUL-13 09.23.55.700236 PM +01:00                         0
       144 log file sync                               32         29 10-JUL-13 09.23.56.896548 PM +01:00                         0
       144 log file sync                               64         29 10-JUL-13 09.23.52.709345 PM +01:00                         0
       144 log file sync                              128         18 10-JUL-13 09.23.40.675756 PM +01:00                         0
       144 log file sync                              256          8 10-JUL-13 09.23.32.935084 PM +01:00                         0
       144 log file sync                              512         18 10-JUL-13 09.23.32.949511 PM +01:00                         0
       144 log file sync                             1024          7 10-JUL-13 09.22.50.183919 PM +01:00                         0

11 rows selected.
Now the results are totally different – most transactions have a commit time between 1 and 8 ms, so every transaction was committed only after writing data to the primary redo logs and the Far Sync standby logs. From a performance perspective this is much better, and the primary database is still protected in no data loss mode. Here is a similar line from the trace file:

WAIT #140132880983720: nam='log file sync' ela= 1003 buffer#=63 sync scn=1042355

This time the commit was almost 180 times faster and took only 1 ms. 


The above configuration is the simplest one and doesn't include an alternate destination for when the Far Sync instance is down. Also, when the standby database becomes primary there is no Far Sync instance close to the new primary, and this has to be addressed as well. 
I will continue my tests and I hope to blog the next part soon.


regards,
Marcin

Tuesday, July 9, 2013

Oracle 12c - incremental backup for DataGuard over network

If you have Data Guard or a standby database in your organization, you will probably love this new RMAN feature. Since 12c it is possible to catch up a standby database with an incremental backup using a single command. The additional space and time needed to run an incremental backup, copy it over to the standby and restore it can be reduced to just the time required to run the incremental backup over the network.

See short example:

Stopping recovery on standby 

[oracle@ip-10-0-1-79 ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Jul 7 12:56:24 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  ALTER DATABASE RECOVER MANAGED STANDBY DATABASE cancel;

Database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


Running incremental backup:
[oracle@ip-10-0-1-79 ~]$ rman target /
Recovery Manager: Release 12.1.0.1.0 - Production on Sun Jul 7 12:56:39 2013
Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
connected to target database: TEST12C (DBID=1424547711, not open)

RMAN>  recover database from service test12c_b using compressed backupset;

Starting recover at 07-JUL-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00001: /u01/app/oracle/oradata/test12c/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00002: /u01/app/oracle/oradata/test12c/sysaux01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00003: /u01/app/oracle/oradata/test12c/undotbs01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using compressed network backup set from service test12c_b
destination for restore of datafile 00004: /u01/app/oracle/oradata/test12c/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07

...

RMAN> exit
Recovery Manager complete.

For anyone who has spent a night trying to catch up a standby database, the above solution looks brilliant.
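One step the example leaves implicit: after the incremental catch-up finishes, managed recovery has to be restarted on the standby. A sketch:

```sql
-- Resume managed recovery on the standby after the RMAN catch-up completes.
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```

Depending on how far the standby lagged, a refresh of the standby controlfile from the primary may also be needed before recovery can continue.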

regards,
Marcin

Saturday, June 29, 2013

PL/SQL function in SQL - short example - Oracle 12c

This is my first post about Oracle 12c. When I first heard about PL/SQL functions inside SQL, my first thought was: can I use them in my topaas.sql script to run it in a loop? I decided to give PL/SQL functions in SQL a quick try, and here are my results. My goal was to display results from SQL line by line with a small delay. This is the code I decided to use:
SQL> set arraysize 1
SQL> col slow_me noprint
SQL> with function slow_me return number is 
  2   begin
  3     dbms_lock.sleep(1);
  4     return 1;
  5   end;
  6  select level, slow_me() slow_me from dual connect by level < 10;
  7  /

     LEVEL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9

9 rows selected.
Unfortunately, only the first row is displayed alone. The rest of the rows are displayed in pairs, even though arraysize is set to 1. I'm not sure why this happens and I would like to find out some day. After these problems with pure SQL I decided to check how the SQL*Plus refcursor print functionality works. Here is the second version of the code:
SQL> var c refcursor
SQL> declare
  2  stm varchar2(1000);
  3  begin
  4  stm:='with function slow_me return varchar2 is
  5   begin
  6     dbms_lock.sleep(1);
  7     return '''';
  8  end;
  9  select level, slow_me() slow_me from dual connect by level < 10';
 10  open :c for stm;
 11  end;
 12  /

PL/SQL procedure successfully completed.

SQL> print c

     LEVEL
----------
         1
         2
         3
         4
         5
         6
         7
         8
         9

9 rows selected.
Now it's working – this code displays row by row with a 1 second delay. So now it is time to display some more interesting results. The code below displays Average Active Sessions taken from the v$sysmetric view (the short-interval metric, INTSIZE_CSEC < 5000), with a delay defined by the user – aas.sql
var c refcursor
set arraysize 1 

declare
sqlst varchar2(2000):='
with function aas RETURN varchar2 IS
v varchar2(100);
begin
select VALUE into v from v$sysmetric where METRIC_NAME = ''Average Active Sessions'' and INTSIZE_CSEC < 5000;
dbms_lock.sleep(&SLEEP_SEC);
v:=''Average Active Sessions '' || v;
return v;
end;
select aas() from dual connect by level < 1000';

begin
 open :c for sqlst;
end;
/

print c
It can be run from SQL*Plus and works as a kind of top-like utility.
[oracle@ora12c ~]$ rlwrap sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jun 29 13:34:37 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> @aas.sql
Enter value for sleep_sec: 1
old   7: dbms_lock.sleep(&SLEEP_SEC);
new   7: dbms_lock.sleep(1);

PL/SQL procedure successfully completed.


AAS()
--------------------------------------------------------------------------------
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857
Average Active Sessions .0000153231179213857

Those are only examples of the new functionality. Now I have to review the topaas code again and see if I can use this feature there. 

regards, 
Marcin

Friday, June 7, 2013

DataGuard FastStart Failover configuration with Oracle Wallet

The last two months were very busy for me, with a lot of work and moving house as well. There are a lot of thoughts in my head I want to write about, but some of them have to wait until my lab is online again. Recently I was working on an Oracle Data Guard configuration with Fast-Start Failover, and this is what I want to share with you today.

Fast-Start Failover is a good solution for automatically promoting a standby database to primary and allowing the application to keep running in case of a primary database failure. I don't want to describe the whole solution here but want to focus on the Observer problems I had recently.

The Observer is a part of Data Guard Broker running on a 3rd server, and it is used to prevent a split brain between the primary and standby database. The Observer is started by the DGMGRL program and it requires sys user credentials. The customer didn't want to keep the sys password in the scripts used to start and stop the Observer, so Oracle Wallet looked like a good solution to address this issue.
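The wallet setup itself is not shown in this post; a typical sequence looks roughly like this (the wallet path and TNS aliases match the configuration below, but treat this as a sketch rather than the exact commands used):

```shell
# Create the wallet (prompts for a wallet password)
mkstore -wrl /home/oracle/testwallet -create

# Store sys credentials under the TNS aliases used by DGMGRL
# (prompts for the sys password and the wallet password)
mkstore -wrl /home/oracle/testwallet -createCredential TESTDB_PRIMARY sys
mkstore -wrl /home/oracle/testwallet -createCredential TESTDB_STANDBY sys

# sqlnet.ora on the Observer host must point at the wallet:
#   WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/oracle/testwallet)))
#   SQLNET.WALLET_OVERRIDE=TRUE
```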

Configuration overview:
- database name - TESTDB
- unique database names - TESTDB_PRIMARY and TESTDB_STANDBY

A wallet has been created, and the sys user with aliases for the primary and standby databases has been added to it.
I was able to connect to both databases:

[oracle@orasvr3 ~]$ mkstore -wrl /home/oracle/testwallet -listCredential
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
2: TESTDB_STANDBY sys
1: TESTDB_PRIMARY sys
[oracle@orasvr3 ~]$ dgmgrl
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@TESTDB_PRIMARY
Connected.
DGMGRL> connect /@TESTDB_STANDBY
Connected.
DGMGRL>

The Observer has been started and I began the failover tests. Here is the first attempt:
12:09:25.21  Thursday, May 30, 2013
Initiating Fast-Start Failover to database "TESTDB_PRIMARY"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "TESTDB_PRIMARY"
12:09:31.78  Thursday, May 30, 2013

12:10:47.91  Thursday, May 30, 2013
Initiating reinstatement for database "TESTDB_STANDBY"...
Reinstating database "TESTDB_STANDBY", please wait...
Operation requires shutdown of instance "TESTDB" on database "TESTDB_STANDBY"
Shutting down instance "TESTDB"...
ORA-01031: insufficient privileges

Warning: You are no longer connected to ORACLE.

Please complete the following steps and reissue the REINSTATE command:
        shut down instance "TESTDB" of database "TESTDB_STANDBY"
        start up and mount instance "TESTDB" of database "TESTDB_STANDBY"

12:11:06.36  Thursday, May 30, 2013

It looks like the Observer was unable to shut down the TESTDB_STANDBY database. This is a typical error when there are no DGMGRL services registered in the listeners, but I was sure the network configuration was fine, as everything worked perfectly when the Observer was started with a user and password in the connection string. So this problem had to be related to Oracle Wallet only. There is a not well known switch for the DGMGRL program that enables debug mode and gives some more information about the issue, so I decided to use it. I started with switchover tests, as they need the same starting and stopping of instances as a failover and are faster to perform.

Here is the output with the debug option enabled:
[oracle@orasvr3 ~]$ dgmgrl -debug
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@TESTDB_PRIMARY
[W000 05/30 12:25:02.04] Connecting to database using TESTDB_PRIMARY.
[W000 05/30 12:25:02.09] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 05/30 12:25:02.10] Broker version is '11.2.0.3.0'
Connected.

DGMGRL> switchover to TESTDB_PRIMARY;
Performing switchover NOW, please wait...
New primary database "TESTDB_PRIMARY" is opening...
Operation requires shutdown of instance "TESTDB" on database "TESTDB_STANDBY"
Shutting down instance "TESTDB"...
[W000 05/30 12:26:23.24] Connecting to database using (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated))).
ORA-01031: insufficient privileges

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        shut down instance "TESTDB" of database "TESTDB_STANDBY"
        start up instance "TESTDB" of database "TESTDB_STANDBY"

It is trying to connect to TESTDB_STANDBY using the following connection description:

(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated)))

but this one doesn't exist in the wallet. The _DGB service is created by the DMON process and is registered in the default listener, or in the listeners specified by the local_listener parameter.
I have added the above connection string to the Oracle Wallet and completed all the manual steps before the next try. Here is the output of the second switchover run:
[oracle@orasvr3 ~]$ dgmgrl -debug
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@TESTDB_PRIMARY
[W000 06/06 12:50:31.01] Connecting to database using TESTDB_PRIMARY.
[W000 06/06 12:50:31.05] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 06/06 12:50:31.06] Broker version is '11.2.0.3.0'
Connected.
DGMGRL> switchover to TESTDB_PRIMARY;
Performing switchover NOW, please wait...
New primary database "TESTDB_PRIMARY" is opening...
Operation requires shutdown of instance "TESTDB" on database "TESTDB_STANDBY"
Shutting down instance "TESTDB"...
[W000 06/06 12:50:45.95] Connecting to database using (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated))).
ORACLE instance shut down.
Operation requires startup of instance "TESTDB" on database "TESTDB_STANDBY"
Starting instance "TESTDB"...
[W000 06/06 12:50:47.09] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))).
ORA-01031: insufficient privileges
Warning: You are no longer connected to ORACLE.
Please complete the following steps to finish switchover:
        start up and mount instance "TESTDB" of database "TESTDB_STANDBY"
There is still a manual step to do, but this time the database was shut down automatically; the Observer was then unable to connect to start the database up. There is another connection string:

(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))).

This one can be found here
DGMGRL> show database verbose TESTDB_STANDBY;

Database - TESTDB_STANDBY

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds
  Apply Lag:       0 seconds
  Real Time Query: OFF
  Instance(s):
    TESTDB

  Properties:
    DGConnectIdentifier             = 'TESTDB_STANDBY'
    ObserverConnectIdentifier       = ''
    LogXptMode                      = 'SYNC'
    DelayMins                       = '0'
    Binding                         = 'optional'
    MaxFailure                      = '0'
    MaxConnections                  = '1'
    ReopenSecs                      = '300'
    NetTimeout                      = '10'
    RedoCompression                 = 'DISABLE'
    LogShipping                     = 'ON'
    PreferredApplyInstance          = ''
    ApplyInstanceTimeout            = '0'
    ApplyParallel                   = 'AUTO'
    StandbyFileManagement           = 'AUTO'
    ArchiveLagTarget                = '0'
    LogArchiveMaxProcesses          = '4'
    LogArchiveMinSucceedDest        = '1'
    DbFileNameConvert               = ''
    LogFileNameConvert              = 'a, a'
    FastStartFailoverTarget         = 'TESTDB_PRIMARY'
    InconsistentProperties          = '(monitor)'
    InconsistentLogXptProps         = '(monitor)'
    SendQEntries                    = '(monitor)'
    LogXptStatus                    = '(monitor)'
    RecvQEntries                    = '(monitor)'
    SidName                         = 'TESTDB'
    StaticConnectIdentifier         = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED)))'
    StandbyArchiveLocation          = '/oralogs/arch/TESTDB'
    AlternateLocation               = ''
    LogArchiveTrace                 = '0'
    LogArchiveFormat                = 'arch_%t_%s_%r.arc'
    TopWaitEvents                   = '(monitor)'

Database Status:
SUCCESS
It has been added to the Oracle Wallet as well, and now my wallet looks like this:
[oracle@orasvr3 ~]$ mkstore -wrl /home/oracle/testwallet -listCredential
Oracle Secret Store Tool : Version 11.2.0.3.0 - Production
Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved.
Enter wallet password:
List credential (index: connect_string username)
6: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))) sys
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr1)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_PRIMARY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))) sys
4: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated))) sys
3: (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr1)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_PRIMARY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated))) sys
2: TESTDB_STANDBY sys
1: TESTDB_PRIMARY sys
After manually completing the required steps I ran the switchover again, and this time it completed without any errors:
DGMGRL> switchover to TESTDB_PRIMARY;
Performing switchover NOW, please wait...
New primary database "TESTDB_PRIMARY" is opening...
Operation requires shutdown of instance "TESTDB" on database "TESTDB_STANDBY"
Shutting down instance "TESTDB"...
[W000 05/30 12:36:51.39] Connecting to database using (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522)))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGB)(INSTANCE_NAME=TESTDB)(SERVER=dedicated))).
ORACLE instance shut down.
Operation requires startup of instance "TESTDB" on database "TESTDB_STANDBY"
Starting instance "TESTDB"...
[W000 05/30 12:36:52.54] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))).
[W000 05/30 12:36:52.58] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 363 Serial number: 5

ORACLE instance started.
[W000 05/30 12:36:54.81] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))).
[W000 05/30 12:36:54.84] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 05/30 12:36:54.84] Broker version is '11.2.0.3.0'
alter database  mount

Database mounted.
[W000 05/30 12:37:02.74] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasvr2)(PORT=1522))(CONNECT_DATA=(SERVICE_NAME=TESTDB_STANDBY_DGMGRL)(INSTANCE_NAME=TESTDB)(SERVER=DEDICATED))).
[W000 05/30 12:37:02.77] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 05/30 12:37:02.78] Broker version is '11.2.0.3.0'
Switchover succeeded, new primary is "TESTDB_PRIMARY"
DGMGRL>

Lessons learned:
  • The FSFO Observer can work with Oracle Wallet
  • 3 wallet entries are required per database; the service names to cover are: <db unique name>, <db unique name>_DGB and <db unique name>_DGMGRL
  • The DGMGRL entry (StaticConnectIdentifier) can be checked in the database configuration in the Broker
  • For the DGB entry – use -debug mode if you can't find the proper connection string 
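With the wallet entries in place, the Observer itself can be started without a password on the command line. A sketch (the log file path is my own choice, not from this configuration):

```shell
# Start the Observer in the background using wallet-based authentication;
# the connect string /@TESTDB_PRIMARY resolves credentials from the wallet.
dgmgrl -silent -logfile /home/oracle/observer.log \
  /@TESTDB_PRIMARY "start observer" &
```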
I hope this will help you with your Data Guard Observer configuration.

regards,
Marcin


Friday, March 29, 2013

dbms_dnfs and clone.pl

If you are using dbms_dnfs to clone your database as described in MOS note 1210656.1 or on Tim Hall's blog – Direct NFS (DNFS) Clonedb in Oracle Database 11g Release 2 (Patchset 11.2.0.2) – and your database version is 11.2.0.3, there is one additional step which is not covered by the clone.pl script and has to be done manually.

In 11.2.0.2 it was enough to run the crtdb.sql and dbren.sql scripts generated by clone.pl, but this is what happens when you run dbren.sql on 11.2.0.3:
SQL> !cat dbren.sql
declare
begin
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSAUX_FNO-2_22o1pnm0' , '/u01/clone/ora_data_CLONE2.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSTEM_FNO-1_20o1pnjq' , '/u01/clone/ora_data_CLONE3.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-UNDOTBS1_FNO-3_23o1pnmp' , '/u01/clone/ora_data_CLONE4.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-USERS_FNO-4_21o1pnkt' , '/u01/clone/ora_data_CLONE5.dbf');
end;
/
show errors;
alter database open resetlogs;
drop tablespace TEMP;
create temporary tablespace TEMP;

SQL> begin
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSAUX_FNO-2_22o1pnm0' , '/u01/clone/ora_data_CLONE2.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSTEM_FNO-1_20o1pnjq' , '/u01/clone/ora_data_CLONE3.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-UNDOTBS1_FNO-3_23o1pnmp' , '/u01/clone/ora_data_CLONE4.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-USERS_FNO-4_21o1pnkt' , '/u01/clone/ora_data_CLONE5.dbf');
end;
  7  /
begin
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01141: error renaming data file 2 - new file '/u01/clone/ora_data_CLONE2.dbf' not found
ORA-01110: data file 2: '/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSAUX_FNO-2_22o1pnm0'
ORA-17515: Creation of clonedb failed using snapshot file /u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSAUX_FNO-2_22o1pnm0
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 2
After that operation I found the following errors in the alert log:
Fri Mar 29 13:31:20 2013
ERROR: clonedb parameter not set. Make sure clonedb=TRUE is set
Fri Mar 29 13:31:20 2013
Signalling error 1152 for datafile 1!
Signalling error 1152 for datafile 2!
Signalling error 1152 for datafile 4!
Checker run found 4 new persistent data failures

Let's check the parameter: 
SQL> show parameter clone

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
clonedb                              boolean     FALSE


I changed that parameter in init.ora and restarted the instance.
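For reference, with an spfile the same change can be made without editing a file by hand; a sketch (clonedb is a static parameter, so a restart is still required either way):

```sql
-- clonedb is static: it can only be set in the spfile (or init.ora)
-- and is picked up on the next startup.
ALTER SYSTEM SET clonedb=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
```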
SQL> show parameter clone

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
clonedb                              boolean     TRUE

SQL> begin
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSAUX_FNO-2_22o1pnm0' , '/u01/clone/ora_data_CLONE2.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-SYSTEM_FNO-1_20o1pnjq' , '/u01/clone/ora_data_CLONE3.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-UNDOTBS1_FNO-3_23o1pnmp' , '/u01/clone/ora_data_CLONE4.dbf');
dbms_dnfs.clonedb_renamefile('/u01/app/oracle/backup/data_D-TESTSE_I-4169104998_TS-USERS_FNO-4_21o1pnkt' , '/u01/clone/ora_data_CLONE5.dbf');
end;
  7  /

PL/SQL procedure successfully completed.
Now it is working again, so it's time to come back to other tests with cloning.

regards,
Marcin

Wednesday, March 20, 2013

Next OakTable 2012 video is online

Hi,

This time Alex Gorbachev presentation about ASM failures - next videos on the way.


regards,
Marcin

Sunday, March 3, 2013

New OakTable World 2012 video is published

Just published Kevin Closson video from OakTable World 2012.

Enjoy,

Marcin

Thursday, February 21, 2013

Oracle on AIX - where's my cpu time ?

This story began a couple of weeks ago, when I was asked to look at a performance problem with critical processes before go-live. There were differences in run times between different test systems, and between runs with 10046 trace enabled and disabled.

First of all I started with the tracing issue. I checked server CPU and disk utilization using NMON (the tool name is important here). There wasn't any problem with CPU utilization – or at least this is what I thought at the time. Tracing was enabled with level 12 (binds and waits) and it generated 2 GB of trace file for 30 minutes of work. First I decided to limit the trace level to 8 and give it another try. In the meantime, I double-checked with the Oracle community on Twitter (thanks @CaryMillsap and @MartinBerx) that there are no known Oracle bugs related to tracing and performance. Changing the tracing level from 12 to 8 solved the issue – I mean, there was no longer any difference between the process running with trace and without it – and it also limited the size of the trace file from 2 GB to 130 MB (don't ask).

In the next step I focused on analyzing the differences in run times between the two systems. This is quite easy to do with trace files and I was hoping for a quick solution.

My first surprise was that the profile of the SQL in both trace files was almost identical – there were no OS calls (waits) and the query spent its whole time on CPU. The execution plans were the same, and the amount of data was the same as well. The elapsed time of the two executions was different and was equal to the wall clock time recorded in the process logs. To clarify my findings I will focus now on one UPDATE statement. 

When I compared elapsed time with CPU time for the EXEC db call I saw a huge gap between the two values – on the faster system the ratio of CPU time to elapsed time was around 60%, and on the slower one around 37% – and remember, there are no OS calls (wait events).
According to everything I know, if there are no OS calls then CPU time should be equal to elapsed time, with perhaps some small difference related to measurement or rounding. If there is a difference, there is a strong chance that the Oracle process was preempted from the CPU and was waiting in the CPU run queue. Well, according to the NMON stats the system was only 30% utilized, so why would the Oracle process have to wait in a queue when there are lots of CPUs free? Or maybe NMON reports system utilization in the wrong way and the server itself is overloaded? That question had to be answered.
I was lucky enough to have a chance to run Oracle processes on an idle server when no one else was connected. I got a small LPAR with one core and 4 logical CPUs, as AIX SMT was enabled. I created a small test: 

declare
 a number;
 begin
 for i in 1..100000 loop
 select count(*) into a from dba_users;
 end loop;
 end;
/
and I was surprised by the results. I didn't trust the nmon output anymore, so I ran topas to observe CPU utilization – it showed around 63% utilization for my session! Why not 95–100%? There was nothing else running there.
I was even more surprised when I checked the trace file – here is the output formatted by tkprof for the query I was looking at:

SQL ID: 821zd9kr9k4yr Plan Hash: 3759210642

SELECT COUNT(*) 
FROM
 DBA_USERS


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      1.66       2.73          0          0          0           0
Fetch   100000     53.75      86.03          0    4200000          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   200001     55.41      88.77          0    4200000          0      100000
55.41 / 88.77 * 100 % = 62.4 %

Hey! Dude, where is my CPU time? Did my process spend 33 seconds in the CPU run queue? On an idle box?

I needed more data, and at that stage I had three hypotheses – an Oracle bug, a configuration problem, or the SMT option. I had a funny feeling that the SMT option was somehow related to these results, but I checked MOS looking for bugs and asked Twitter friends to run the same test on another system (@MartinDBA confirmed my findings – thanks for your help).

The next step was to disable the SMT option, and I learned that AIX is nice enough to let you do this online. I reran my test – bingo – topas showed 95–98% CPU utilization for my Oracle process, and the trace file confirmed it:

SQL ID: 821zd9kr9k4yr Plan Hash: 3759210642

SELECT COUNT(*) 
FROM
 DBA_USERS


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      2.69       2.78          0          0          0           0
Fetch   100000     85.08      88.33          0    4200000          0      100000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   200001     87.78      91.12          0    4200000          0      100000
87.78 / 91.12 * 100 % = 96.3 %

This is what I was looking for. Based on both observations I was sure that there are no bugs in the tracing functionality for this version of Oracle (11.2.0.3), and I was sure that an Oracle process can spin a CPU to almost 100%. There was still a question – why is there such a huge gap between CPU and elapsed time with SMT enabled? I started looking through IBM documentation and bug reports, and after some time I found this page – Understanding Processor Utilization on Power Systems - AIX – and this picture (linked from the IBM wiki page):



So on the new POWER7, if your process runs for 1 sec on a logical CPU (thread), the reported CPU utilization (user plus system) is only 0.63 sec. Compare this with SMT disabled (or a previous CPU generation): if a process ran for 1 sec on a logical CPU, it used to be reported as 1 sec of CPU time in user and system mode.

How do tools calculate CPU utilization for one process over 1 second? You divide the CPU time used by the process by that 1 second. So for pre-POWER7 CPUs, or for POWER7 without SMT:

1 s on cpu / 1 s real time * 100 % = 100 %

For POWER7 with SMT4:

0.63 s on cpu / 1 s * 100 % = 63 %
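The two ratios quoted from the tkprof outputs can be recomputed directly from the totals:

```shell
# CPU time / elapsed time from the tkprof totals, with SMT4 on and off
awk 'BEGIN {
  printf "SMT4 on : %.1f%%\n", 55.41 / 88.77 * 100
  printf "SMT  off: %.1f%%\n", 87.78 / 91.12 * 100
}'
# prints:
# SMT4 on : 62.4%
# SMT  off: 96.3%
```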

Have I seen this number before? Yes – this is the CPU time taken from the trace file divided by the elapsed time! My mystery has been solved – 63% is now 100%, at least for this version of AIX and Oracle.


Where is the problem? 

If you are using Cary Millsap's Method R methodology to diagnose performance problems, you need to remember that the unaccounted-for time in trace files will be affected by this feature/bug/you name it.

PS.
I think this post will be updated when I have more details.

regards,
Marcin

Saturday, February 9, 2013

OakTable World 2012 - Greg Rahn video

New video from OakTable World 2012 is available - Greg Rahn - Beyond Relational Databases

 

regards
Marcin

Friday, February 1, 2013

New OakTable Video - Delphix presenation

New OakTable World 2012 video from Kyle Hailey presentation has been uploaded into YouTube.


regards,
Marcin

Sunday, January 27, 2013

How dNFS database clone works – part 1

There is a new feature in Oracle 11.2.0.2 called dNFS clone. It has been described by Kevin Closson in his blog post – Oracle Database 11g Direct NFS Clonedb Feature – and a very good configuration description has been posted by Tim Hall on his blog – Direct NFS (DNFS) Clonedb in Oracle Database 11g Release 2 (Patchset 11.2.0.2). I played with it just after I found both blogs but never thought about how Oracle implemented the feature. I came back to it when a colleague asked me if I had ever used it in a production environment and what the performance implications of using it are. 

Oracle uses the Direct NFS option as an Oracle Disk Manager: 

Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 3.0
(Is there any other ODM still in use? I remember one – the Veritas Quick I/O module for the Veritas filesystem / volume manager.) As far as I understand, ODM is a library which provides I/O functionality for the database. Oracle can run on different hardware, and the Oracle kernel calls ODM functions to read or write data from storage; from the Oracle kernel's perspective it doesn't really matter how ODM fulfils the I/O request.

When I started thinking about how Oracle implemented the feature, two ideas came to mind:
  • Oracle keeps an in-memory map of changed blocks and knows where to find the latest block – actually, I never believed in this, as it is too costly a solution
  • Oracle simply reads from both sources (backup and NFS) and, if necessary, compares which block is newer
But guesswork is against BAAG, so I wanted to check what the database is actually doing when we use a dNFS clone. I started with two scenarios – reading data that have been changed after the clone was created, and reading unchanged data. Thanks to Oracle instrumentation (event 10046 trace), strace and Wireshark I can see many more details now, but there is still a long way to go to get the whole picture. Let me share my findings with you.

Read changed data in clone 

I added one row to the scott.bonus table, which was empty when I cloned the database, and executed a simple query – select * from scott.bonus. Below is a mixed output from all three monitoring sources:
EXEC #140195143601328:c=0,e=42,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=953918076,tim=2013-01-17 14:29:43:108500
WAIT #140195143601328: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=13891 tim=2013-01-17 14:29:43:108660
WAIT #140195143601328: nam='Disk file operations I/O' ela= 5 FileOperation=2 fileno=4 filetype=2 obj#=75337 tim=2013-01-17 14:29:43:108800
The first WAIT line with a disk OS call (thanks to @CaryMillsap's Method R training I have started thinking about WAIT lines the way I should have before) has no corresponding strace or Wireshark activity, and the call itself is very short – 5 us – so I just ignored it for the time being.
statfs("/u01/app/oracle/backup/data_data_D-TESTSE_I-4169104998_TS-USERS_FNO-4_10nvidnv", {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=5158559, f_bfree=2226212, f_bavail=1964171, f_files=1310720, f_ffree=1268869, f_fsid={1982750833, 2010996514}, f_namelen=255, f_frsize=4096})
open("/u01/app/oracle/backup/data_data_D-TESTSE_I-4169104998_TS-USERS_FNO-4_10nvidnv", O_RDWR|O_DSYNC) = 11
fcntl(11, F_DUPFD, 256) = 257
close(11)               = 0
fcntl(257, F_SETFD, FD_CLOEXEC) = 0
fstatfs(257, {f_type="EXT2_SUPER_MAGIC", f_bsize=4096, f_blocks=5158559, f_bfree=2226212, f_bavail=1964171, f_files=1310720, f_ffree=1268869, f_fsid={1982750833, 2010996514}, f_namelen=255, f_frsize=4096}) = 0
WAIT #140195143601328: nam='Disk file operations I/O' ela= 182 FileOperation=2 fileno=4 filetype=2 obj#=75337 tim=2013-01-17 14:29:43:109050
WAIT #140195143601328: nam='Disk file operations I/O' ela= 160 FileOperation=2 fileno=4 filetype=2 obj#=75337 tim=2013-01-17 14:29:43:109150
Next there are two I/O related calls, and above we can see that the backup file has been opened, its file handle duplicated, and an fstatfs call executed. There is another mystery here: I saw a couple of system calls and then two entries in the Oracle trace file, and I am not sure why it looks like this. As we are dealing with the microsecond level, it is hard to match tim / ela from the Oracle trace file with the actual OS calls from strace – but I will try to do it later, probably in the next post. Right now we know that the Oracle process has opened the backup file and is ready to read from it.
The query is using a full scan on the table, so first of all Oracle needs to read the segment header:
sendmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\0\214\252\16\236\235\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\6\0\0\0\1\0\0\0000\253\315\357\253\0\0\0\24oraclese.localdomain\0\0\3241\0\0\3241\0\0\0\2\0\0\3241\0\0\3242\0\0\0\0\0\0\0\0\0\0\0$\1\0\7\1\1\240\202\3\0\0\0\0\222\364,

V3 READ Call (Reply In 44853), FH:0x8f3a5fe4 Offset:256131072 Len:8192,44846,2013-01-17 14:29:43.109264,192.168.1.60,192.168.1.20,NFS,210

poll([{fd=32, events=POLLIN}], 1, 500) = 1 ([{fd=32, revents=POLLIN}])
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0 \200\252\16\236\235\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0 \0\0\0\0\0\0004\24\0\0\0\0\0\0!z\0\1\1\0\0\0 z\0\1!z\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0I&\1\0\0\0\0\20 z\0\1\10\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\

V3 READ Reply (Call In 44846) Len:8192,44853,2013-01-17 14:29:43.144922,192.168.1.20,192.168.1.60,NFS,1150

recvmsg(32, 0x7fffe701bc20, MSG_DONTWAIT|MSG_NOSIGNAL) = -1 EAGAIN (Resource temporarily unavailable)

WAIT #140195143601328: nam='db file sequential read' ela= 36202 file#=4 block#=31266 blocks=1 obj#=75337 tim=2013-01-17 14:29:43:145410
We can see that the process uses the sendmsg function to send an NFS READ call (captured by Wireshark) and then waits for the answer. Due to my NFS and network configuration the response has been split into two recvmsg system calls, but it is one NFS READ Reply – one database block (8 kB) has been read, and this action has been registered in the trace file as db file sequential read. Now the process is going to execute the full scan and read blocks from the table:
sendmsg(32, {msg_name(0)=NULL, msg_iov(2)=[{"\200\0\0\214\252\16\236\237\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\6\0\0\0\1\0\0\0000\253\315\357\253\0\0\0\24oraclese.localdomain\0\0\3241\0\0\3241\0\0\0\2\0\0\3241\0\0\3242\0\0\0\0\0\0\0\0\0\0\0$\1\0\7\1\1\240\202\3\0\0\0\0\222\364,V3 READ Call (Reply In 44888), FH:0x8f3a5fe4 Offset:256139264 Len:32768  ; 
V3 READ Call (Reply In 44888), FH:0x8f3a5fe4 Offset:256139264 Len:32768
V3 READ Call (Reply In 44862), FH:0x8f3a5fe4 Offset:256172032 Len:8192
poll([{fd=32, events=POLLIN}], 1, 500) = 1 ([{fd=32, revents=POLLIN}])
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0 \200\252\16\236\240\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0,
V3 READ Reply (Call In 44855) Len:8192,44862,2013-01-17 14:29:43.158250,192.168.1.20,192.168.1.60,NFS,1150,,,,
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\200\200\252\16\236\237\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
V3 READ Reply (Call In 44855) Len:32768,44888,2013-01-17 14:29:43.159262,192.168.1.20,192.168.1.60,NFS,1110,,,,
recvmsg(32, 0x7fffe7016aa0, MSG_DONTWAIT|MSG_NOSIGNAL) = -1 EAGAIN (Resource temporarily unavailable)
WAIT #140195143601328: nam='db file scattered read' ela= 13870 file#=4 block#=31267 blocks=5 obj#=75337 tim=2013-01-17 14:29:43:159580
There are two NFS READ calls and two NFS READ Replies, and 5 blocks have been read – this exactly matches the Oracle trace file entry – db file scattered read – with blocks equal to 5. So this scenario is straightforward – Oracle goes directly to NFS to read changed blocks – but I still can't be sure how it decides to go there. Let's check now what happens when we try to read unchanged blocks.
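The byte counts line up nicely: 5 database blocks of 8 kB is 40960 bytes, and in these captures a single NFS READ apparently carries at most 32768 bytes, so the multiblock read is split into a 32768-byte and an 8192-byte call. A quick sketch of that arithmetic (the 32 kB cap is just what I observed in this setup, not a general rule):

```python
DB_BLOCK = 8192
NFS_MAX_READ = 32768   # observed per-READ size cap in the captures above

def nfs_reads_for(blocks):
    """Split a multiblock read into NFS READ calls of at most NFS_MAX_READ bytes."""
    total = blocks * DB_BLOCK
    reads = []
    while total > 0:
        chunk = min(total, NFS_MAX_READ)
        reads.append(chunk)
        total -= chunk
    return reads

print(nfs_reads_for(5))   # [32768, 8192] - the two READ calls captured
print(nfs_reads_for(1))   # [8192] - the single-block segment header read
```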

Read unchanged blocks.  

Oracle executes the query – select * from scott.salgrade – and here is the trace / strace / Wireshark output. The process is executing a table full scan, so it needs to read the segment header first:
sendmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\0\214\252\16\236\245\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\6\0\0\0\1\0\0\0000\253\315\357\253\0\0\0\24oraclese.localdomain\0\0\3241\0\0\3241\0\0\0\2\0\0\3241\0\0\3242\0\0\0\0\0\0\0\0\0\0\0$\1\0\7\1\1\240\202\3\0\0\0\0\222\364\16\216\312.\252\227\0\0\0\0\0\0\0\0\25`\ 
V3 READ Call (Reply In 44938), FH:0x8f3a5fe4 Offset:1327104 Len:8192,44930,2013-01-17 14:29:55.162780,192.168.1.60,192.168.1.20,NFS,210    
sendmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\0X\252\16\236\246\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\0\0\0\0\1\0\0\0000\253\315\357\253\0\0\0\24oraclese.localdomain\0\0\3241\0\0\3241\0\0\0\2\0\0\3241\0\0\3242\0\0\0\0\0\0\0\0", 92}], msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 92
poll([{fd=32, events=POLLIN}], 1, 500) = 1 ([{fd=32, revents=POLLIN}])
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0 \200\252\16\236\245\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\0P\370\nZ\0\0\0\0P\370\nZP\366\241\37\0\0
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 7996}], msg_con
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\0\30\252\16\236\246\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\0P\370\nZ\0\0\0\0P\370\nZP\366\241\37\0\0
V3 READ Reply (Call In 44930) Len:8192 44938 2013-01-17 14:29:55.163827,192.168.1.20,192.168.1.60,NFS,1178
recvmsg(32, 0x7fffe701bc20, MSG_DONTWAIT|MSG_NOSIGNAL) = -1 EAGAIN (Resource temporarily unavailable)         
WAIT #140195143601328: nam='db file sequential read' ela= 1323 file#=4 block#=162 blocks=1 obj#=75338 tim=2013-01-17 14:29:55:164020
At first sight nothing has changed – Oracle makes an NFS READ call and gets a response. But if you take a look at the returned data – the database block is full of zeros. This isn't a surprise, as the NFS files are sparse and should contain only changed blocks. That read activity has been recorded in the trace file as db file sequential read, but we still have no data – so what happens next?
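The sparse-file behaviour is easy to reproduce outside Oracle. A small sketch (plain Python on a local filesystem, nothing dNFS-specific): writing one "changed block" deep into an empty file leaves a hole before it, and reading inside the hole returns zeros, exactly like the zero-filled NFS READ replies above.

```python
import os, tempfile

# Create a sparse file: write one 8 kB "block" at offset 3 blocks in,
# leaving a hole before it, like a clone file holding only changed blocks.
blk = 8192
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"x" * blk, 3 * blk)

# Reading inside the hole returns zeros, not an error.
hole = os.pread(fd, blk, 0)
data = os.pread(fd, blk, 3 * blk)
print(hole == b"\x00" * blk)   # True - unchanged block: all zeros
print(data == b"x" * blk)      # True - changed block: real data
os.close(fd)
os.remove(path)
```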
pread(257, "#\242\0\0\242\0\0\1\250\34\17\0\0\0\1\4\317\215\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\10\0\0\0\234\n\0\0\0\0\0\0\10\0\0\0\10\0\0\0\250\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\10\0\0\0\10\0\0\0\250\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\240\0\0\1"..., 8192, 1327104) = 8192
WAIT #140195143601328: nam='Disk file operations I/O' ela= 49 FileOperation=9 fileno=4 filetype=2 obj#=75338 tim=2013-01-17 14:29:55:164170
A disk read is performed on the file with handle 257, which, as we can check in the first strace output, is the backup data file. Oracle reads the segment data from there and records that operation in the trace file as 'Disk file operations I/O'. Based on that behaviour, I think there is no map of changed blocks; instead, the process reads from NFS first, and if NFS returns a zeroed block, it reads the backup data file. Let's see the rest of the full scan operation.
sendmsg(32, {msg_name(0)=NULL, msg_iov(2)=[{"\200\0\0\214\252\16\236\250\0\0\0\0\0\0\0\2\0\1\206\243\0\0\0\3\0\0\0\6\0\0\0\1\0\0\0000\253\315\357\253\0\0\0\24oraclese.localdomain\0\0\3241\0\0\3241\0\0\0\2\0\0\3241\0\0\3242\0\0\0\0\0\0\0\0\0\0\0$\1\0\7\1\1\240\202\3\0\0\0\0\222\364\16\216\312.\252\227\0\0\0\0\0\0\0\0\25`\ 
V3 READ Call (Reply In 44972), FH:0x8f3a5fe4 Offset:1335296 Len:32768  ; V3 READ Call (Reply In 44978), FH:0x8f3a5fe4 Offset:1368064 Len:8192
poll([{fd=32, events=POLLIN}], 1, 500) = 1 ([{fd=32, revents=POLLIN}])
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0\200\200\252\16\236\250\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\0P\370\nZ\0\0\0\0P\370\nZP\366\241\37\
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32572}], msg_co
V3 READ Reply (Call In 44940) Len:32768 44972,2013-01-17 14:29:55.168776,192.168.1.20,192.168.1.60,NFS,1514    
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\200\0 \200\252\16\236\251\0\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1\0\0\0\1\0\0\201\240\0\0\0\1\0\0\3241\0\0\3241\0\0\0\0\21X \0\0\0\0\0\0\5\0\0\0\0\0\0\0\0\0\0\227\252.\312\216\16\364\222\0\0\0\0\4\207`\25P\370\n[\0\0\0\0P\370\nZ\0\0\0\0P\370\nZP\366\241\37\0\0
recvmsg(32, {msg_name(0)=NULL, msg_iov(1)=[{"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 7996}], msg_con
V3 READ Reply (Call In 44940) Len:8192,44978,2013-01-17 14:29:55.168790,192.168.1.20,192.168.1.60,NFS,746    
recvmsg(32, 0x7fffe7016aa0, MSG_DONTWAIT|MSG_NOSIGNAL) = -1 EAGAIN (Resource temporarily unavailable)           
WAIT #140195143601328: nam='db file scattered read' ela= 4762 file#=4 block#=163 blocks=5 obj#=75338 tim=2013-01-17 14:29:55:169040
Once again the process sends two NFS READ calls (even though the segment header wasn't on NFS), and again only zeroed blocks are sent back. The whole operation is recorded as db file scattered read, but the process actually still has no data.
pread(257, "\6\242\0\0\243\0\0\1\250\34\17\0\0\0\1\4\202\0\0\0\1\0\0\0J&\1\0\247\34\17\0\0\0\0\0\2\0002\0\240\0\0\1\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\377\377\16\0\230\37\212\37\212\37\0\0\0\0@\0\6\0\0\0`\37\0\0"..., 40960, 1335296) = 40 
WAIT #140195143601328: nam='Disk file operations I/O' ela= 6883 FileOperation=9 fileno=4 filetype=2 obj#=75338 tim=2013-01-17 14:29:55:176020      
Again the process reads file 257 (the backup data file) and records that activity as 'Disk file operations I/O'. Based on that observation, I can say that Oracle looks for data first on NFS and then in the backup file.
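The read path I observed can be sketched like this – to be clear, this is my reconstruction of the behaviour, not Oracle code, and the function names are mine: read the sparse clone file first, and fall back to the backup copy only when the block comes back zero-filled.

```python
BLOCK = 8192

def read_block(clone_file, backup_file, block_no, block_size=BLOCK):
    """Hypothetical clonedb-style read: the sparse clone file (on NFS)
    wins if it holds a changed block; an all-zero block means 'hole',
    so fall back to the unchanged backup image."""
    offset = block_no * block_size
    clone_file.seek(offset)
    block = clone_file.read(block_size)
    if block.strip(b"\x00"):                 # any non-zero byte => changed block
        return block                         # 'db file sequential/scattered read'
    backup_file.seek(offset)
    return backup_file.read(block_size)      # 'Disk file operations I/O'
```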

But is there any caching / learning algorithm in place? I don't think so, because when I ran the query select * from dba_source where line like 'marcin' twice, flushing the buffer cache between runs, the execution pattern looked similar – NFS activity followed by reads of the backup disk file.

This is the end of part one, and I will try to publish the next post on the same topic soon.

regards,
Marcin