Thursday, December 16, 2010

Drop a user whose objects are in different databases of a PG instance.

One of my colleagues asked about this. She was doing some testing and wanted to drop a user who has objects across several databases.

While dropping the user/role, the DBA/admin will get the following error message:
postgres=# drop user test;
ERROR:  role "test" cannot be dropped because some objects depend on it
DETAIL:  2 objects in database test
The above error message gives the admin sufficient information: some objects depending on/owned by the user exist in another database.
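Before choosing a method, the admin can also see which databases hold the dependent objects by querying the shared catalog pg_shdepend (a small sketch; 'test' is the role from the example above):
SELECT d.datname, count(*) AS dependent_objects
  FROM pg_shdepend s
  JOIN pg_database d ON d.oid = s.dbid
  JOIN pg_roles r ON r.oid = s.refobjid
 WHERE r.rolname = 'test'
 GROUP BY d.datname;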

For dropping such a user, there are two methods:
1. Reassign all the objects owned by the user to some other user, then drop the user.
This is very useful if an employee who left the company has written procedures/objects which are still being used by applications or processes.

The commands are the following:
REASSIGN OWNED BY old_role to new_role; 
DROP USER old_role;

Note: the REASSIGN command needs to be executed in every database of the PG instance.

2. First drop all the objects owned by the user, then drop the user.
This is useful if the admin doesn't want to keep the user's objects and wants to drop all the objects owned by the user.
The commands are the following:
DROP OWNED BY name [, ...] [ CASCADE | RESTRICT ];
DROP USER username;
Note: DROP OWNED BY name needs to be executed in every database.


To make this easier, I have written the following function, which uses dblink to connect to each database, execute the command there, and then drop the user:

CREATE OR REPLACE function reassign_drop_user(text,text) returns integer
AS
$$
Declare
    db record;
    cmd text;
    status text;
BEGIN
   cmd:='REASSIGN OWNED BY '||$1||' TO '||$2||';';
   for db in select datname from pg_database where datname!='template0'
   Loop
      -- connect to each database and reassign ownership of the objects there
      execute 'select dblink_connect('||''''||'dbname='||db.datname||''''||')' into status;
      RAISE NOTICE 'CONNECTION STATUS % :- %',db.datname,status;
      execute 'select dblink_exec('||''''||cmd||''''||')' into status;
      RAISE NOTICE 'COMMAND " % " STATUS :- %',cmd,status;
      execute 'select dblink_disconnect()' into status;
   END Loop;
   execute 'DROP USER '||$1;
   Return 0;
exception when others then
   return 1;
END;
$$ language plpgsql;
I have kept the return value integer, so that I can verify whether the function was successful (0) or unsuccessful (1).
Output:
select reassign_drop_user('fromuser','touser');
NOTICE:  CONNECTION STATUS template1 :- OK
NOTICE:  COMMAND " REASSIGN OWNED BY fromuser TO touser; " STATUS :- REASSIGN OWNED
NOTICE:  CONNECTION STATUS template_postgis :- OK
NOTICE:  COMMAND " REASSIGN OWNED BY fromuser TO touser; " STATUS :- REASSIGN OWNED
NOTICE:  CONNECTION STATUS test :- OK
NOTICE:  COMMAND " REASSIGN OWNED BY fromuser TO touser; " STATUS :- REASSIGN OWNED
NOTICE:  CONNECTION STATUS postgres :- OK
NOTICE:  COMMAND " REASSIGN OWNED BY fromuser TO touser; " STATUS :- REASSIGN OWNED 
reassign_drop_user 
--------------------                  
0
(1 row)


Similarly, for dropping the objects along with the user, the following function can be used:
CREATE OR REPLACE function drop_user_withobjects(text) returns integer
AS
$$
Declare
    db record;
    cmd text;
    status text;
BEGIN
   cmd:='DROP OWNED BY '||$1||' CASCADE;';
   for db in select datname from pg_database where datname!='template0'
   Loop
      -- connect to each database and drop the user's objects there
      execute 'select dblink_connect('||''''||'dbname='||db.datname||''''||')' into status;
      RAISE NOTICE 'CONNECTION STATUS % :- %',db.datname,status;
      execute 'select dblink_exec('||''''||cmd||''''||')' into status;
      RAISE NOTICE 'COMMAND " % " STATUS :- %',cmd,status;
      execute 'select dblink_disconnect()' into status;
   END Loop;
   execute 'DROP USER '||$1;
   Return 0;
exception when others then
   return 1;
END;
$$ language plpgsql;
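Usage mirrors the first function; a hypothetical call for a user named test would be:
select drop_user_withobjects('test');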

Wednesday, December 15, 2010

Rebuilding Pkey and Indexes without locking the table in PG 8.4.

On production systems, DBAs sometimes have to rebuild indexes. Since rebuilding an index locks the table, the DBA can use the CONCURRENTLY option.

For normal indexes CONCURRENTLY is the best option; however, rebuilding a primary key index still requires a lock on the table. To overcome this issue, I have made the following function:
SWAP Index with Pkey:

CREATE OR REPLACE FUNCTION swap_for_pkey(text,text,text) returns integer
AS
$$
   DECLARE
     cmd text;
     oid1 integer;
     oid2 integer;
     filenode1 integer;
     filenode2 integer;
     relation text;
   BEGIN
      select oid::integer into oid1 from pg_class where relname=$2 and relnamespace = (select oid from pg_namespace where nspname=$1);
     RAISE NOTICE 'PKEY OID: %',oid1;
      select relfilenode::integer into filenode1 from pg_class where oid=oid1;
      select oid::integer into oid2 from pg_class where relname=$3 and relnamespace = (select oid from pg_namespace where nspname=$1);
     RAISE NOTICE 'PKEY OID: %',oid2;
      select relfilenode::integer into filenode2 from pg_class where oid=oid2;
      select (indrelid::regclass)::text into relation from pg_index where indexrelid=oid1;
    RAISE NOTICE 'RELATION NAME: %',relation;
      cmd:='LOCK '||relation||';';
      RAISE NOTICE 'Executing :- %',cmd;
      Execute cmd;      
      cmd:='UPDATE pg_class SET relfilenode='||filenode2|| ' WHERE oid='||oid1||';';
      RAISE NOTICE 'Executing :- %',cmd;
      Execute cmd;      
      cmd:='UPDATE pg_class SET relfilenode='||filenode1|| ' WHERE oid='||oid2||';';
      RAISE NOTICE 'Executing :- %',cmd;
      Execute cmd;
      cmd:='DROP INDEX '||$1||'.'||$3||';';
      RAISE NOTICE 'Executing :- %',cmd;
      Execute cmd;
      return 0;
   END;
$$language plpgsql;


The concept is simple: create a UNIQUE INDEX on the primary key columns and swap the relfilenode.
CREATE OR REPLACE Function rebuild_pkey_index(text,text) returns setof text
AS
$$
  DECLARE
     reloid integer;
     cmd text;
     rec record;
     oid1 integer;
     oid2 integer;
     filenode1 integer;
     filenode2 integer;
     relname1 text;
  BEGIN
    select oid into reloid from pg_class where relname=$2 and relnamespace=(select oid from pg_namespace where nspname=$1);
    for rec in SELECT c2.relname as indexname,
           substring(pg_catalog.pg_get_indexdef(i.indexrelid, 0, true), 0,
                     position('INDEX' in pg_catalog.pg_get_indexdef(i.indexrelid, 0, true)) + 6)
           || 'CONCURRENTLY ' || c2.relname || '_new '
           || substring(pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
                        position('ON ' in pg_catalog.pg_get_indexdef(i.indexrelid, 0, true)))
           || ';' as command
        FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
        WHERE c.oid = reloid AND c.oid = i.indrelid AND i.indexrelid = c2.oid
          AND i.indisprimary = true
        ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname
    LOOP
       return next rec.command;
       cmd:='SELECT swap_for_pkey('||''''||$1||''''||','||''''||rec.indexname||''''||','||''''||rec.indexname||'_new'||''''||');';
       return next cmd;
   END LOOP;
   END;
$$language plpgsql;
Following is the output:
postgres=# select rebuild_pkey_index('public','test');                            
rebuild_pkey_index                             
--------------------------------------------------------------------------- 
CREATE UNIQUE INDEX CONCURRENTLY test_pkey1_new ON test USING btree (id); 
SELECT swap_for_pkey('public','test_pkey1','test_pkey1_new');
(2 rows)
Executing the commands provided by the above function gives the following result:
CREATE INDEX
NOTICE:  PKEY OID: 260669
NOTICE:  PKEY OID: 260679
NOTICE:  RELATION NAME: test
NOTICE:  Executing :- LOCK test;
NOTICE:  Executing :- UPDATE pg_class SET relfilenode=260679 WHERE oid=260669;
NOTICE:  Executing :- UPDATE pg_class SET relfilenode=260678 WHERE oid=260679;
NOTICE:  Executing :- DROP INDEX public.test_pkey1_new;
 swap_for_pkey 
---------------
             0
(1 row)


Similarly, for non-pkey indexes, I have written the following function, which provides all the commands for rebuilding them:

CREATE OR REPLACE Function rebuild_nonpkey_index(text,text) returns setof text
AS
$$
  DECLARE
     reloid integer;
     cmd text;
     rec record;
  BEGIN
 
    select oid into reloid from pg_class where relname=$2 and relnamespace=(select oid from pg_namespace where nspname=$1);
    for rec in SELECT c2.relname as indexname,
           substring(pg_catalog.pg_get_indexdef(i.indexrelid, 0, true), 0,
                     position('INDEX' in pg_catalog.pg_get_indexdef(i.indexrelid, 0, true)) + 6)
           || 'CONCURRENTLY ' || c2.relname || '_new '
           || substring(pg_catalog.pg_get_indexdef(i.indexrelid, 0, true),
                        position('ON ' in pg_catalog.pg_get_indexdef(i.indexrelid, 0, true)))
           || ';' as command
        FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
        WHERE c.oid = reloid AND c.oid = i.indrelid AND i.indexrelid = c2.oid
          AND i.indisprimary = false
        ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname
    LOOP
       return next rec.command;
       cmd:= 'DROP INDEX '||rec.indexname||';';
       return next cmd;
       cmd:='ALTER INDEX '||rec.indexname||'_new'||' RENAME TO '||rec.indexname||';';
       return next cmd;
     END LOOP;
   END;
$$language plpgsql;

Output:
postgres=# select rebuild_nonpkey_index('public','test');                                                                                           
rebuild_nonpkey_index                        
-------------------------------------------------------------------- 
CREATE INDEX CONCURRENTLY test_idx_new ON test USING btree (col1); 
DROP INDEX test_idx; 
ALTER INDEX test_idx_new RENAME TO test_idx;
(3 rows)

The above functions can also be used in PG9.0. For PG9.1, developers are working on an ALTER TABLE command which can be used for swapping in a unique index as the primary key.
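For reference, the 9.1 syntax being worked on is expected to look something like the following (hypothetical until 9.1 is released); it would replace the relfilenode swap above:
CREATE UNIQUE INDEX CONCURRENTLY test_pkey_new ON test USING btree (id);
ALTER TABLE test DROP CONSTRAINT test_pkey;
ALTER TABLE test ADD CONSTRAINT test_pkey PRIMARY KEY USING INDEX test_pkey_new;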

Saturday, November 27, 2010

Some Handy SQLs for PG DBAs

To use these queries, the DBA has to install the adminpack contrib module of PostgreSQL. If you are using the one-click installer, you can find adminpack.sql in the /PostgreSQL Installation Directory/share/postgresql/contrib directory.

Following are some SQLs based on the adminpack contrib module which the DBA can use to find the last logfile and its size, and to read it using PG client applications (psql, pgAdmin, etc.):
SELECT * FROM ( SELECT pg_ls_dir('pg_log')) AS t (filename) ORDER BY 1 DESC LIMIT 1 

SELECT size FROM pg_stat_file('pg_log/postgresql-2010-05-13.log')

SELECT pg_read_file('pg_log/postgresql-2010-05-13.log', 0, 21604)
Sometimes a user doesn't have access to the server, and to read the PG logfile they would have to log in on the server. Now they don't have to: if you have adminpack.sql installed in PostgreSQL, you can easily read the last logfile by creating the following function:
CREATE OR REPLACE FUNCTION pg_read_last_logfile() returns text
AS
$$
   DECLARE
     fname text;
     fsize integer;
   BEGIN
        SELECT filename into fname FROM ( SELECT pg_ls_dir('pg_log')) AS tmp (filename) ORDER BY 1 DESC LIMIT 1; -- to get the latest filename
        fname='pg_log/'||fname;
        SELECT size into fsize FROM pg_stat_file(fname); -- to get the size of the latest filename
   RETURN pg_read_file(fname, 0, fsize);
  END;
$$ language plpgsql;
Output is given below:
postgres=# select pg_read_last_logfile();
                        pg_read_last_logfile
----------------------------------------------------------------------
  2010-11-28 02:16:35 IST 3965   DBMESSAGELOG:  database system was shut down at 2010-11-24 03:31:24 IST
  2010-11-28 02:16:35 IST 3968   DBMESSAGELOG:  autovacuum launcher started
  2010-11-28 02:16:35 IST 3959   DBMESSAGELOG:  database system is ready to accept connections
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: /* $PostgreSQL: pgsql/contrib/adminpack/adminpack.sql.in,v 1.6 2007/11/13 04:24:27 momjian Exp $ */
         /* ***********************************************
          * Administrative functions for PostgreSQL
          * *********************************************** */
         /* generic file access functions */
         CREATE OR REPLACE FUNCTION pg_catalog.pg_file_write(text, text, bool)
         RETURNS bigint
         AS '$libdir/adminpack', 'pg_file_write'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 332.562 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_rename(text, text, text)
         RETURNS bool
         AS '$libdir/adminpack', 'pg_file_rename'
         LANGUAGE C VOLATILE;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 1.004 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_rename(text, text)
         RETURNS bool
         AS 'SELECT pg_catalog.pg_file_rename($1, $2, NULL::pg_catalog.text);'
         LANGUAGE SQL VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 2.472 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_unlink(text)
         RETURNS bool
         AS '$libdir/adminpack', 'pg_file_unlink'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 0.575 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_logdir_ls()
         RETURNS setof record
         AS '$libdir/adminpack', 'pg_logdir_ls'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 41.991 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: /* Renaming of existing backend functions for pgAdmin compatibility */
         CREATE OR REPLACE FUNCTION pg_catalog.pg_file_read(text, bigint, bigint)



Sometimes the DBA has to re-verify parameters, and they stick to the "show all" command, or, if they know the parameter name, they use the show parametername command.

A nice part of PostgreSQL is that it has categorized parameters. To view the parameters category-wise, the DBA can use the following SQL:

SELECT name, setting, unit, short_desc, extra_desc, boot_val, reset_val, sourcefile, sourceline, context, vartype, source, min_val, max_val FROM pg_settings WHERE category='Version and Platform Compatibility / Previous PostgreSQL Versions' ORDER BY name;
where the categories in postgresql.conf are the following (a query to regenerate this list is given after it):
Write-Ahead Log / Settings
Client Connection Defaults / Locale and Formatting
Version and Platform Compatibility / Other Platforms and Clients
Query Tuning / Genetic Query Optimizer
Resource Usage / Memory
Statistics / Monitoring
Reporting and Logging / Where to Log
Resource Usage / Kernel Resources
Preset Options
Reporting and Logging
Resource Usage
Reporting and Logging / What to Log
Lock Management
Connections and Authentication / Security and Authentication
Autovacuum
Write-Ahead Log / Checkpoints
Query Tuning / Planner Method Configuration
Statistics / Query and Index Statistics Collector
Developer Options
Write-Ahead Log / Replication
Customized Options
Query Tuning / Other Planner Options
File Locations
Client Connection Defaults / Statement Behavior
Reporting and Logging / When to Log
Query Tuning / Planner Cost Constants
Client Connection Defaults / Other Defaults
Version and Platform Compatibility / Previous PostgreSQL Versions
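The category list above can be regenerated on any installation with a simple query:
SELECT DISTINCT category FROM pg_settings ORDER BY 1;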

Following are some more handy PG SQLs.
To get detailed information about databases, the user can use the following command:
SELECT datname as Database, pg_get_userbyid(datdba) AS dba, pg_catalog.pg_encoding_to_char(encoding) AS encoding, datcollate as Collate, datctype as CTYPE, datistemplate, datallowconn, datconnlimit, datlastsysoid, datfrozenxid, spcname as tablespace, pg_size_pretty(pg_database_size(datname)) AS size, datacl, age(datfrozenxid) AS freezeage, ROUND(100*(age(datfrozenxid)/freez::float)) AS perc FROM pg_database, pg_tablespace JOIN (SELECT setting AS freez FROM pg_settings WHERE name = 'autovacuum_freeze_max_age') AS param ON (true) WHERE dattablespace = pg_tablespace.oid ORDER BY datname

To get the details of non-idle processes:
SELECT extract(epoch FROM (now() - query_start))::numeric(10,2) AS age, procpid, usename, client_addr, application_name, current_query FROM pg_stat_activity WHERE current_query <> '' ORDER BY 1

Cursors in Use:
SELECT name, statement, is_holdable, is_binary, is_scrollable, creation_time FROM pg_cursors ORDER BY name

To get the Prepared Statements information:
SELECT name, statement, prepare_time, from_sql FROM pg_prepared_statements ORDER BY name

Prepared Transactions query:
SELECT transaction, gid, prepared, owner, database FROM pg_prepared_xacts ORDER BY owner, database

All Locks Information of a Database:
SELECT locktype, CASE WHEN datname IS NOT NULL THEN datname ELSE database::text END AS database, nspname, relname, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, mode, granted FROM pg_locks LEFT JOIN pg_database ON pg_database.oid = database LEFT JOIN pg_class ON pg_class.oid = relation LEFT JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace

Get Exclusive Lock Information:
SELECT locktype, CASE WHEN datname IS NOT NULL THEN datname ELSE database::text END AS database, nspname, relname, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, granted FROM pg_locks LEFT JOIN pg_database ON pg_database.oid = database LEFT JOIN pg_class ON pg_class.oid = relation LEFT JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace WHERE mode='ExclusiveLock' AND locktype NOT IN ('virtualxid', 'transactionid')

Thursday, November 25, 2010

xDB Replication from Oracle to PPAS

The EnterpriseDB replication tool has now been modified and new functionality has been added to it.

The previous replication console used the DBA Management server, and publications & subscriptions were dependent on each other.

Now, xDB Replication has been divided into two components:
1. Publication Server : (Master Server)
2. Subscription Server: (Slave Server)

Some background on the Publication Server. The Publication Server has the following components:
1. A Java-based daemon.
2. The metadatabase of the Publication Server.

The Publication Server keeps its information (information about the primary database, table details, etc.) in the metadatabase under the schema _edb_replicator_pub.

The Publication Server is independent and has no dependency on the Subscription Server. By default the Publication Server uses port 9011. However, if the user wants, (s)he can use a different port too.

For running the Publication Server on a different port, use the following command:
$EDBHOME/jre/bin/java -Djava.awt.headless=true -jar $EDBHOME/bin/edb-repserver.jar pubserver <port>

Similarly, the Subscription Server has two components:
1. A Java-based daemon
2. The metadatabase of the subscription (which the Subscription Server uses for keeping the information about subscriptions)

In the metadatabase, the xDB Subscription Server keeps its information in the schema _edb_replicator_sub.
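To confirm that the metadatabase schemas are in place, a quick catalog query against the metadatabase can be used (a small sketch, assuming the two schema names mentioned above):
SELECT nspname FROM pg_namespace WHERE nspname IN ('_edb_replicator_pub','_edb_replicator_sub');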

By default the xDB Subscription Server runs on port 9012. However, if the user wants, (s)he can run the daemon on a different port too, by using the following command:
$EDBHOME/jre/bin/java -Djava.awt.headless=true -jar $EDBHOME/bin/edb-repserver.jar subserver <port>
Now the question comes: how to use these daemons for publications and subscriptions?

EnterpriseDB provides a jar file, edb-repcli.jar, in $EDBHOME/bin which can be used for this.

edb-repcli.jar is equipped with lots of options, using which the user can create a publication/subscription, add databases, etc.

To see all of its options, the user can use the following command:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -help

In addition, the user has to create two configuration files:
1. A publication configuration file
2. A subscription configuration file

The format of the publication and subscription configuration files is the same.
Following is a snapshot and description of a configuration file:
user=enterprisedb # User with which the xDB Publication/Subscription server connects to the metadatabase
port=9011 # Port on which the Publication/Subscription daemon is running
password=
type=enterprisedb # There are two types of database the xDB replication daemon can use: enterprisedb (Postgres Plus Advanced Server) or postgres (PostgreSQL)
host=localhost # Host on which the Publication daemon is running

Along with the publication/subscription configuration files, there is one more xDB server configuration file. By default this file exists at /etc/edb-repl.conf. This file contains the information about the xDB metadatabase.
Following is a snapshot of this file:
user=enterprisedb
port=5445 (port of the database instance which holds the xDB metadatabase)
type=enterprisedb
host=localhost
database=edb

A few handy edb-repcli.jar commands are given below:

1. To list the publications, the user can use the following command:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpublist -repsvrfile <pubsvrfile>
2. To view the published tables of a publication:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpublishedtables <pubname> -repsvrfile <pubsvrfile>

3. To list the subscriptions of a particular subscription database, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printsublist -repsvrfile <subsvrfile> -subdbid <dbid>

4. To print the subscribed database ids, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printsubdbidsdetails -repsvrfile <subsvrfile>

5. To add databases [Oracle|EnterpriseDB] for publication, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -addpubdb -repsvrfile <pubsvrfile> -dbtype {oracle | enterprisedb | postgresql} -dbhost <host> -dbport <port> -dbuser <user> -dbpassword <password> -database {<sid> | <dbname>} [-oraconnectiontype {sid | servicename}]

6. To print the published databases in the Publication Server, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpubdbidsdetails -repsvrfile <pubsvrfile>

To control the behavior/performance of the Publication/Subscription daemons, EnterpriseDB provides two important files:
1. xdb_pubserver.conf
2. xdb_subserver.conf

The location of the above two files is $EDBHOME/etc.

Information about the contents of these two files can be found at the following link:
http://www.enterprisedb.com/docs/en/8.4/repserver/Postgres_Plus_Advanced_Server_Replication_Server_Users_Guide-29.htm#TopOfPage


Details of permitted source and target databases can be found at the following link:
http://www.enterprisedb.com/docs/en/8.4/repserver/Postgres_Plus_Advanced_Server_Replication_Server_Users_Guide-27.htm#TopOfPage

Sunday, November 7, 2010

Making Dynamic Copy Command

There was a question put by a user about loading data in PostgreSQL with the COPY command while skipping columns which have default values.

Along with the above requirement, he also mentioned that the table's columns change over time, and the default values too.

So, he wanted plpgsql function code which can be used for any table and file to load the data.

Following is a sample plpgsql function which can be used to make this happen:
CREATE OR REPLACE FUNCTION Copy_test(text,text,text) returns boolean
AS
$$
 DECLARE
    cmd text;
    colstring text;
 BEGIN
    -- build a comma-separated list of the columns which have no default value
    cmd:='select array_to_string(array(select column_name::text from information_schema.columns where table_name='||''''||$2||''''||' and table_schema='||''''||$1||''''||' and column_default is null),'||''''||','||''''||')';
    EXECUTE cmd into colstring;
    -- COPY only into those columns; the defaulted columns get filled automatically
    cmd:='COPY "'||$1||'"."'||$2||'"('||colstring||') from '||''''||$3||'''';
    EXECUTE cmd;
    Return true;
 EXCEPTION when others then
    return false;
 END;
$$ language plpgsql;
Usage:
select copy_test('schemaname','tablename','filename');

Usage Example is given below:
postgres=# select * from test_copy;
 id | id1 
----+-----
(0 rows)

postgres=# select Copy_test('public','test_copy','/Users/postgres/test_data');
 copy_test 
-----------
 t
(1 row)

postgres=# select * from test_copy;
 id | id1 
----+-----
  1 |   1
  2 |   2
  3 |   3
  4 |   4
  5 |   5
(5 rows)

Wednesday, November 3, 2010

Does UPDATE Change ROWID in Oracle?

Yesterday, there was a discussion going on on OTN (Oracle Forum). I was also part of that discussion. The discussion was about "Does the UPDATE statement change the rowid in Oracle?"

Regarding my comment that "UPDATE does not change the rowid", one of the users pointed out to me that the UPDATE statement sometimes changes the rowid in Oracle, and gave me the following example.
SQL> ed
Wrote file afiedt.buf
 
  1  create table moving_rowid (
  2    col1 number primary key,
  3    col2 number,
  4    col3 number,
  5    col4 varchar2(10)
  6  )
  7* organization index
SQL> / 
 Table created.
SQL> insert into moving_rowid values( 1, 2, 3, 'foo' );
1 row created.

SQL> insert into moving_rowid values( 2, 3, 4, 'bar' );
1 row created.
  
SQL> select rowid, col1, col2, col3, col4
  2    from moving_rowid;
ROWID                  COL1       COL2       COL3 COL4
---------------- ---------- ---------- ---------- ----------
*BAEADxsCwQL+             1          2          3 foo
*BAEADxsCwQP+             2          3          4 bar
 
SQL> update moving_rowid set col1=col1+1;
2 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> select rowid, col1, col2, col3, col4
  2    from moving_rowid;
ROWID                  COL1       COL2       COL3 COL4
---------------- ---------- ---------- ---------- ----------
*BAEADxsCwQP+             2          2          3 foo
*BAEADxsCwQT+             3          3          4 bar
The above example made me think, and I gave the following answer on the forum, with an example to show that UPDATE does not change the rowid:
SQL> select rowid, col1,col2,col3,col4 from moving_rowid;
ROWID    COL1  COL2 COL3 COL4
AAAFiAAAEAAAAhgAAA  1  2  3 foo
AAAFiAAAEAAAAhgAAB  2  3  4 bar

SQL> update moving_rowid set col1=col1+1;
2 rows updated.

SQL> commit;
Commit complete.

SQL> select rowid, col1,col2,col3,col4 from moving_rowid;
ROWID    COL1  COL2 COL3 COL4
AAAFiAAAEAAAAhgAAA  2  2  3 foo
AAAFiAAAEAAAAhgAAB  3  3  4 bar
No changes in the rowid; the reason is that no organization has been done as per the index (this is a plain heap table, not an index-organized one).

Please note: if you change an indexed column and the table organizes its data by that index, then the rowid will change. This is what happens with partitions too.

Now, see the example given below with an organization index:
SQL> update moving_rowid set col1=col1+1;
2 rows updated.

SQL> commit;
Commit complete.

SQL> select rowid,col1,col2,col3,col4 from moving_rowid;
ROWID   COL1  COL2 COL3  COL4
*BAEACGwCwQP+  2  2   3   foo
*BAEACGwCwQT+  3  3  4  bar

Now let's update a non-indexed column:
SQL> select rowid,col1,col2,col3,col4 from moving_rowid;
ROWID   COL1  COL2  COL3 COL4
*BAEACGwCwQP+  2  3  3 foo
*BAEACGwCwQT+  3  4  4 bar
I hope the above has clarified the behavior. UPDATE does not change the rowid. What users see as a change in rowid is actually movement of the data pointer. The data pointer changes if the organization is done on the basis of an index (and each update changes the indexed columns too) or if row movement has happened.

Therefore, it's not a best practice to do an UPDATE on the basis of rowid.

Forum link is given below:
http://forums.oracle.com/forums/message.jspa?messageID=9115665#9115665

Tuesday, November 2, 2010

Some trigger and programming feature in PostgreSQL

Anonymous Block Support:
Now, users are able to create anonymous blocks using plpgsql, as given below:
do
$$
declare
   id1 varchar;
begin
   select id into id1 from test_sort limit 1;
   raise notice '%',id1;
end;
$$ language plpgsql;
Output:
NOTICE:  1.1
DO

Column Triggers:
PG9.0 also supports column-level and WHEN triggers.

Following is an example:
create or replace function test_col_trigger() returns trigger
as
$$
  BEGIN
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;
       RETURN null;
  END;
$$ language plpgsql;

create trigger col_trigger after update of "a" on test_def FOR EACH ROW EXECUTE procedure test_col_trigger();
Output:
postgres=# update test_def set a='Second';
NOTICE:  INSERTED VALUE in Column a 1
UPDATE 1

WHEN Triggers in PG9.0:

PG9.0 has WHEN triggers, which allow the user to create a trigger that executes its procedure only for a particular condition.
create or replace function test_col_trigger() returns trigger
as
$$
  BEGIN
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;
       RETURN null;
  END;
$$ language plpgsql;

create trigger trigger_when AFTER update on test_Def for each row when (old.col1 = new.col2) execute procedure test_col_trigger();
Output:
postgres=# update test_def set a='Second';
NOTICE:  INSERTED VALUE in Column a 1
UPDATE 1

Some other new features have also been added in PG9.0.

Important features which DBAs must know are the following:

1. New VACUUM FULL

Prior to PG9.0, VACUUM FULL was slow, since it used to move blocks to positions closer to the beginning of the table. This process used to consume a lot of CPU and do a lot of random I/O. On top of this, after VACUUM FULL the DBA had to reindex the indexes associated with the table, since VACUUM FULL used to leave bloat/fragmentation in the indexes.

Overall it was not an efficient method.

However, from PG9.0, VACUUM FULL has been coded to work smartly. In place of moving tuples, VACUUM FULL now creates a fresh table with new indexes and destroys the old bloated table. This method is faster than the old one.
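The rewrite can be seen from the table's relfilenode, which points at the underlying physical file (a small demonstration, assuming a table named test):
SELECT relfilenode FROM pg_class WHERE relname = 'test';
VACUUM FULL test;
SELECT relfilenode FROM pg_class WHERE relname = 'test';  -- the value changes: the table was rewritten into a new file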

2. New column added in pg_stat_activity

application_name is a new column which has been added in pg_stat_activity, giving information about the application from which the connection has been made.

Example is given below:
postgres=# select * from pg_stat_activity where procpid=pg_backend_pid();
 datid | datname  | procpid | usesysid | usename  | application_name | client_addr | client_port |          backend_start           |            xact_start            |           query_start            | waiting |                         current_query                          
-------+----------+---------+----------+----------+------------------+-------------+-------------+----------------------------------+----------------------------------+----------------------------------+---------+----------------------------------------------------------------
 11874 | postgres |    4533 |       10 | postgres | psql             |             |          -1 | 2010-11-02 11:39:13.170051+05:30 | 2010-11-02 12:55:04.278363+05:30 | 2010-11-02 12:55:04.278363+05:30 | f       | select * from pg_stat_activity where procpid=pg_backend_pid();
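application_name can also be set by the application itself, either in the connection string or with a simple SET; the name below is just an illustrative example:
SET application_name = 'nightly_batch';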
3. Per database and per role configuration.
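For illustration, settings of this kind look like the following (the database, role and values here are hypothetical):
ALTER DATABASE mydb SET work_mem = '32MB';
ALTER ROLE reporting_user IN DATABASE mydb SET statement_timeout = '5min';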

4. A new --analyze-only option has been added to the vacuumdb command.
This option allows the user to run only analyze using the vacuumdb command.
The output of vacuumdb --help shows:
-Z, --analyze-only              only update optimizer statistics

For more information on other features in PG9.0, the following page is helpful:
http://wiki.postgresql.org/wiki/Illustrated_9_0

Monday, November 1, 2010

pg_get_tabledef function in Postgres Plus

Until Advanced Server 8.3, there was a function pg_get_tabledef which some people were using to get the definition of a table.

This function is no longer part of the newer versions of Advanced Server (8.3R2AS and 8.4AS), so users will be missing this function by default.

However, the same functionality can be obtained using the following function, which provides a result similar to pg_get_tabledef.

Function code is given below:
CREATE OR REPLACE Function pg_get_tabledef(text) RETURNS text
AS
$$
  DECLARE
     tabledef TEXT;
     dotpos integer;
     tablename text;
     schemaname text;
     prevcol text;
     coltype text;
     notnull1 boolean;
     rec record;
     oidcheck boolean;
  BEGIN
   dotpos:=strpos($1,'.');
   if dotpos = 0 then
     schemaname:='public';
     tablename:=substr($1,dotpos+1);
   else
     schemaname:=substr($1,1,dotpos-1);
     tablename:=substr($1,dotpos+1);
  end if;
  select relhasoids into oidcheck from pg_class,pg_namespace where pg_class.relnamespace=pg_namespace.oid and pg_namespace.nspname=schemaname and pg_class.relname=tablename and pg_class.relkind='r';
   if not found then
     tabledef:='Table Does not exists!';
     return tabledef;
   end if;
  tabledef:= 'CREATE TABLE '|| schemaname||'.'||tablename;
   for rec in SELECT a.attname as columnname,
          pg_catalog.format_type(a.atttypid, a.atttypmod) as coltype,
          (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128)
             FROM pg_catalog.pg_attrdef d
            WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),
          a.attnotnull as notnull, a.attnum
       FROM pg_catalog.pg_attribute a
       WHERE a.attrelid = (select pg_class.oid from pg_class,pg_namespace
                            where relname=tablename and pg_class.relnamespace=pg_namespace.oid
                              and pg_namespace.nspname=schemaname)
         AND a.attnum > 0 AND NOT a.attisdropped
       ORDER BY a.attnum
  loop
      if prevcol is null  then
 tabledef:=tabledef||' (';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
      elsif notnull1 then
        tabledef:=tabledef||' '||E'\n'||prevcol||' '||coltype||' NOT NULL ,';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
     else
        tabledef:=tabledef||' '||E'\n'||prevcol||' '||coltype||' ,';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
     end if;
   end loop;
      if oidcheck = true and notnull1 = true then
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' NOT NULL ) WITH OIDS;';
      elsif oidcheck = true and notnull1 = false then
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' ) WITH OIDS;';
      elsif oidcheck=false and notnull1=true then
        tabledef:=tabledef||E'\n'|| prevcol||' '||coltype||' NOT NULL ) WITHOUT OIDS;';
      else
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' ) WITHOUT OIDS;';
      end if;
   
   return tabledef;
   end;
$$ language plpgsql;   

Sample Output is given below:

postgres=# select pg_get_tabledef('public.test_def');
         pg_get_tabledef         
---------------------------------
 CREATE TABLE public.test_def ( +
 id numeric(9,0) NOT NULL ,     +
 a text ) WITHOUT OIDS;
(1 row)

postgres=# select pg_get_tabledef('public.test_des');
    pg_get_tabledef     
------------------------
 Table Does not exists!
(1 row)
Note: the function does not give information about constraints defined on the table or index definitions. However, if somebody wants, they can modify it to get that as well.
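As a starting point for such a modification, the constraint definitions can be pulled from pg_constraint (a small sketch, using the test_def table from the example above):
SELECT conname, pg_get_constraintdef(oid) FROM pg_constraint WHERE conrelid = 'public.test_def'::regclass;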

Sunday, October 31, 2010

Some UNIX Utilities for performance monitoring

vmstat command:
vmstat is a command which can be used to display system statistics. The syntax is given below:
vmstat <# seconds> <# of times>
Example
vmstat 1 10

The above command displays the system statistics every second, 10 times.

Sample Output of the command:
vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 950888  57536 512868    0    0   559    45  479  671  9  2 73 15
Let's understand the column definitions:
The procs column has three subcolumns:
r: count (processes waiting for run time)
b: count (processes in uninterruptible sleep)
w: count (processes which are swapped out and runnable) (on RHEL)

If any processes show in the b or w columns, then the DBA/system admin has to check the system.

The memory column has the following subcolumns:
swpd: amount of swap space currently used (in KB)
free: amount of idle memory (KB)
buff: memory used as buffers (KB)

Swap information:
si: memory swapped in from disk (KB/s)
so: memory swapped out to disk (KB/s)

I/O information:
bi: count (number of blocks received from a block device) (blocks/s)
bo: count (number of blocks sent to a block device) (blocks/s)

System information:
in: interrupts per second, including the clock
cs: context switches per second

CPU percentage information:
us: user time
sy: system time
id: idle time

If id is zero, then the CPU is overburdened. Also, if si and so are high, then a lot of swapping is happening in the system.

To find more information on the CPU, the sar command can be used. sar gives information about CPU utilization.
Syntax: sar <# seconds> <# of times>
The first row of sar output gives information about the system, including the OS version, machine name, etc.
The real information comes in five columns. The first column (Time) is the timestamp, i.e. when the information was collected.
The second column (%user) gives the load caused by users.
The third column (%system) gives the load caused by system processes, including daemons.
The fourth column (%iowait) gives the load from processes waiting for I/O (i.e. hardware).
The fifth column (%idle) gives the average idle percentage; in other words, the system is waiting for tasks to start.

If the idle percentage stays at or below 20 for a long time, the system will be performing slowly and the system admin needs to look into it. To reach any conclusion, the output of sar must be watched for at least one day; if the system is consistently <=20% idle, then the other processes and the load need to be analyzed.

To find information on the I/O of the system, the iostat command can be used.

Syntax: iostat <# seconds> <# of times> [disk#]
The iostat command gives the I/O information of all the disks.

The first column is the device (i.e. hard disk).
The second column (tps) gives the number of transfers per second, where a transfer is an I/O request to the disk.
The third column gives the blocks read per second (Blk_read/s).
The fourth column gives the blocks written per second (Blk_wrtn/s).
The fifth column gives the total number of blocks read from the device (Blk_read).
The sixth column gives the total number of blocks written (Blk_wrtn).

Sample Output is given below:
Linux 2.6.28-19-generic (vibhore-laptop)  10/31/2010  _i686_ (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          22.58    0.83    5.56   43.94    0.00   27.50

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             138.18      6067.27       446.48     809920      59600
sda1            137.70      6052.41       446.48     807936      59600
sda2              0.02         0.04         0.00          6          0
sda5              0.31        12.42         0.00       1658          0

Thursday, October 28, 2010

Some New Security Features in PG9.0

1. New Grant and Revoke in PG9.0

In previous versions of PG (7.x, 8.x), DBAs and users used to miss a GRANT/REVOKE command which could give permissions on all the tables inside a schema. Now, they don't have to.

From PG9.0, a user can execute a single GRANT or REVOKE command to set permissions on all the tables in a schema:
GRANT SELECT ON ALL TABLES IN SCHEMA test TO test_user;

Here is the output of a query which shows that the above command has given SELECT privileges on all the tables in schema test.
postgres=# select * from information_schema.table_privileges where grantee ='test_user';
 grantor  |  grantee  | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy 
----------+-----------+---------------+--------------+------------+----------------+--------------+----------------
 postgres | test_user | postgres      | test         | test       | SELECT         | NO           | NO
 postgres | test_user | postgres      | test         | test2      | SELECT         | NO           | NO
Similarly, the user can execute a single REVOKE command to revoke a privilege from all the tables in a schema:
REVOKE SELECT ON ALL TABLES IN SCHEMA test FROM test_user;
2. Assign default privileges to a role.

In PG9.0, managing role privileges is now easier.
PG9.0 now supports ALTER DEFAULT PRIVILEGES, as given below:
postgres=# alter default privileges for role newrole GRANT SELECT ON TABLES to public;
ALTER DEFAULT PRIVILEGES

3. Now a user can put a check in place for verifying the strength of passwords given by users. The passwordcheck module comes with PG9.0 by default.

To enable this module, the user has to add the following to the postgresql.conf file and restart the PG instance:
shared_preload_libraries = '$libdir/passwordcheck'

Working example is given below:
postgres=# alter user test_user password 'test_user';
ERROR:  password must not contain user name
postgres=# alter user test_user password 'test123';
ERROR:  password is too short
postgres=# alter user test_user password 'test45678';
ALTER ROLE

This feature also has some limitations. It does not work properly if somebody passes an encrypted password. This feature is not recommended if some security mechanism is already implemented, i.e. if pre-encrypted passwords are already being passed to the DB.

Wednesday, October 27, 2010

Understanding Real Application Cluster Processes

Along with all the processes of a stand-alone Oracle database, Real Application Clusters provides some other processes which play an important role. These processes are:

1. LMON: Global Enqueue Service Monitor
Responsibilities Assigned:

This process is responsible for monitoring the entire cluster's global enqueues and resources. It also manages instance failures and process failures and the associated recovery for the Global Cache Service and Global Enqueue Service.
The main responsibility of this process is recovery of global resources. The services provided by this process are also known as Cluster Group Services.

View: So, we have a watcher and recovery agent for global resources.

2. LMDx: Global Enqueue Service Daemon (Lock Agent)
Responsibility Assigned:

This process controls the access to global enqueues and resources. It also handles deadlock detection and remote enqueue requests (i.e. requests from other instances).

View: Hmm, we have one recorder agent and resource information keeper.

3. LMSx: Global Cache Service Processes
Responsibility Assigned:

This is the process which works as the message handler. RAC spawns up to 10 LMSx processes. It handles acquisition interrupts and blocking interrupt requests from remote instances. It also keeps read versions of blocks consistent and controls the flow of messages. The overall responsibilities, described in bullets, are given below:
a. Manage resource requests and cross-instance calls for shared resources
b. List all invalid lock elements and validate the locks during recovery
c. Handle deadlock detection and lock conversion timeouts

View: So, we have clerk which can keep the records of locks and request for Global resources.

4. LCKx: Broadcaster
Responsibility Assigned:

This mainly manages requests and broadcasts information across the instances. The workload of this process is automatically balanced/shared if we have multiple LMSx processes.

View: This is a Broadcaster. I understand we require it.

5. DIAG: Diagnosability Daemon
Responsibility assigned:

It monitors the health of instances and keeps data about instance and process failures. The naming convention followed for this process is ora__NYDB1

View: Hmm Auditor and Nursing Assistant process.

Now let's understand some important services and who is responsible for them.

1. Global Cache Service:
Responsibility comes under it:

a. Track the location and status of data blocks, i.e. the mode, role and access privileges of the various instances.
b. Transfer blocks between instances with the appropriate lock mode conversion

Who is responsible for it: LMSx and LMD

2. Global Enqueue service:
Responsibility comes under it:

a. Manage and track the status of the enqueuing mechanism, which includes all non-Cache-Fusion intra-instance operations.
b. Perform concurrency control on dictionary cache locks, library cache locks and transactions.

Who is responsible: Almost all the RAC specific processes are responsible for it.

So, these are the important processes/managers which we get in RAC for Oracle database management, providing the global services of GCS and GES.

Monday, October 25, 2010

Making Slony source compatible with EnterpriseDB Advanced Server

Since the edb-replication which comes with Advanced Server is built compactly against one particular version, sometimes a user is not able to replicate data between two different versions of Advanced Server.

For replicating data between two Advanced Server versions, it is important to have the same version of Slony on the source and target databases.

I did some research around the Slony source code and tried to make it compatible with Advanced Server.

If a user plainly compiles the Slony source code against an Advanced Server database, then while configuring replication they will start to get messages like "failed to parse ... for DB version".

The reason for such messages is that Slony is not able to parse the Advanced Server version string, and therefore it will not continue further with replication.

Going through the Slony source code, I found that Slony uses a file called dbutils.c for finding the version of PostgreSQL and checking the compatibility of the PostgreSQL version with Slony.

Following are the main programs which use dbutils.c:
1. slon
2. slonik

If the user makes the following changes in the Slony source code, they will be able to compile it against different Advanced Server versions for replication.
The changes required are given below:
File: src/slon/dbutils.c 

static int db_get_version(PGconn *conn)
{
 PGresult    *res;
 SlonDString query;
 char     versionstr[7];
 int     version=0;
 int     major=0;
 int     minor=0;
 int     patch=0;
 dstring_init(&query);
 slon_mkquery(&query, "SELECT version();");
 res = PQexec(conn, dstring_data(&query));

 if ( !res || PQresultStatus(res) != PGRES_TUPLES_OK )
        {
  PQclear(res);
  return -1;
 }
 if (sscanf(PQgetvalue(res, 0, 0), "PostgreSQL %d.%d.%d", &major, &minor, &patch) < 2)
 {
  PQclear(res);
  return -1;
 }
 PQclear(res);
 snprintf(versionstr, 7, "%.2d%.2d%.2d", major, minor, patch);
 version=atoi(versionstr);
 dstring_free(&query);
 return version;
}
Replace the above function with the following function:
static int db_get_version(PGconn *conn)
{
 PGresult    *res;
 SlonDString query;
 char     versionstr[7];
 int     version=0;
 int     major=0;
 int     minor=0;
 int     patch=0;
 int     dummy=0;
 dstring_init(&query);
 slon_mkquery(&query, "SELECT version();");
 res = PQexec(conn, dstring_data(&query));

 if ( !res || PQresultStatus(res) != PGRES_TUPLES_OK )
        {
  PQclear(res);
  return -1;
 }
 if (sscanf(PQgetvalue(res, 0, 0), "EnterpriseDB %d.%d.%d.%d", &major, &minor, &patch, &dummy) < 2)
 {
  PQclear(res);
  return -1;
 }
 PQclear(res);
 snprintf(versionstr, 7, "%.2d%.2d%.2d", major, minor, patch);
 version=atoi(versionstr);
 dstring_free(&query);
 return version;
}
File:src/slonik/dbutils.c

db_get_version(SlonikStmt * stmt, SlonikAdmInfo * adminfo)
{
 PGresult   *res;
 SlonDString query;
 char        versionstr[7];
 int     major=0;
 int     minor=0;
 int         patch=0;
 int         version=0;

 if (db_begin_xact(stmt, adminfo) < 0)
  return -1;

 dstring_init(&query);
 slon_mkquery(&query, "select version();");
 res = db_exec_select(stmt, adminfo, &query);
 dstring_free(&query);

 if (res == NULL)
  return -1;

 if (sscanf(PQgetvalue(res, 0, 0), "PostgreSQL %d.%d.%d", &major, &minor, &patch) < 2)
 {
  fprintf(stderr, "%s:%d: failed to parse %s for DB version\n",
    stmt->stmt_filename, stmt->stmt_lno,
    PQgetvalue(res, 0, 0));
  PQclear(res);
  return -1;
 }
 PQclear(res);
 snprintf(versionstr, 7, "%.2d%.2d%.2d", major, minor, patch);
 version=atoi(versionstr);
 return version;
}
Replace the above function with the following function:
db_get_version(SlonikStmt * stmt, SlonikAdmInfo * adminfo)
{
 PGresult   *res;
 SlonDString query;
 char        versionstr[7];
 int     major=0;
 int     minor=0;
 int         patch=0;
 int         version=0;
 int      dummy=0;

 if (db_begin_xact(stmt, adminfo) < 0)
  return -1;

 dstring_init(&query);
 slon_mkquery(&query, "select version();");
 res = db_exec_select(stmt, adminfo, &query);
 dstring_free(&query);

 if (res == NULL)
  return -1;

 if (sscanf(PQgetvalue(res, 0, 0), "EnterpriseDB %d.%d.%d.%d", &major, &minor, &patch, &dummy) < 2)
 {
  fprintf(stderr, "%s:%d: failed to parse %s for DB version\n",
    stmt->stmt_filename, stmt->stmt_lno,
    PQgetvalue(res, 0, 0));
  PQclear(res);
  return -1;
 }
 PQclear(res);
 snprintf(versionstr, 7, "%.2d%.2d%.2d", major, minor, patch);
 version=atoi(versionstr);
 return version;
}

Now, if you have made the above changes, you will be able to compile Slony against Advanced Server and use it. Please note: I have tested the above changes with Slony version 1.2.21 for replicating data between databases of version "EnterpriseDB 8.3.0.112" and "EnterpriseDB 8.4.5.16".

Interesting !! :)

Thursday, October 21, 2010

pg_hotbackup utility for Backup of PG

Users always look for a backup utility which gives some good options and can be used with all instances of PG to take backups.

I have thought along the same lines and created a pg_hotbackup script. The pg_hotbackup utility is a server-side utility which takes the backup on the server side and keeps the backups in a backup directory.

The options which I have included in it are the following:
Compulsory options:
-H bindir    (Bin directory location; the utility will use the psql command of the PG instance)
-b backupdir (Directory where the user wants to keep the backup)
-p port      (Port number of the PG instance)
-U username  (Username)
-P passwd    (Password)
Some other options:
-a: Archive Only option [1|0]
-r: Retention policy [in days]
-l: List backups
-n: Backup file name
-v: Validate Only [1|0]
-R: Retention Only [1|0]
\?: Help
So, I have all the options with me.

Now, let's understand what we need:
1. We need a catalog file in which the utility can keep the backup information and validate it against new backups and the retention policy.
If the same binaries are being used, I have preferred to keep that catalog file in the binary location.

2. A function which can update the catalog file with each archive/hotbackup backup.
Following is code which can help achieve this objective:
 updatecatalog()
  {
 BCKLIST=`ls -1 $BIN/.$port* 2>/dev/null`
 if [ $? -ne 0 ];then
                echo "No Catalog Found! Please check the Home Directory and Backup Directory"
                exit 1
        fi
 # /tmp/list holds the backups expired by the retention policy; without it there is nothing to do
 if [ ! -f /tmp/list ];then
  echo "Nothing to Update"
  exit
 fi
 for i in `cat /tmp/list`
 do
  cat $BCKLIST|awk -F';' -v f="$i" '{if ($1==f){printf "%s;%s;%s;%s;%s;%s;",$1,$2,$3,$4,$5,"Expired";}else {printf "%s;%s;%s;%s;%s;%s;",$1,$2,$3,$4,$5,$6}}'>>/tmp/newcatlog
 done
 if [ -f /tmp/newcatlog ];then
  cat /tmp/newcatlog>$BCKLIST
  rm /tmp/newcatlog
 fi
 rm /tmp/list 2>/dev/null
 echo "Update Catalog is successful!"
  }
3. A function which can validate the backups with every backup. The validation function should validate the backup information in the catalog, i.e. it should check whether the backups are available in the defined destination or not.
Following is a validation function:
 validate_backups()
   {
 BCKLIST=`ls -1 $BIN/.$port* 2>/dev/null`
 if [ $? -ne 0 ];then
  echo "No catalog Available for Mentioned Bin and Cluster"
  exit 0
 fi
 # drop catalog entries whose backup file no longer exists in the backup directory
 for i in `cat $BCKLIST|awk -F';' '{print $1}'`
 do 
    ls -1 $BCKDIR/$i 2> /dev/null
    if [ $? -ne 0 ];then
  cat $BCKLIST|grep -v $i >/tmp/valcatalog.log
  cat /tmp/valcatalog.log >$BCKLIST
    fi
 done
 rm /tmp/valcatalog.log 2>/dev/null
 echo "All Backups have been validated!" 
   }
4. Along with this, we also need a function which can implement a retention policy on the backups, i.e. how many days of backups the user wants to keep:
 backup_retention()
  {
 if [ -z $RETENTION ];then
  echo "No retention days..."
  echo "Skipping the retention.."
  return 0
       else
  find $BCKDIR -mtime +$RETENTION -exec ls {} \; >/tmp/list;
  find $BCKDIR -mtime +$RETENTION -exec rm {} \;
 fi
 echo "Updating Catalog.."
 validate_backups
 updatecatalog
 }


Along with retention, the above function also updates the catalog information.

5. Now, some functions which can be used for the hot backup itself, i.e. start and stop backup functions:
 startbackup() 
   {
     QUERY="select pg_start_backup('HOTBACKUP `date +"%d-%m-%Y"`');"
     export PGPASSWORD="$PASSWD"
      $BIN/psql -t -p $port  -U $USER -c "$QUERY" template1
   }

 stopbackup()
   {
 QUERY="select pg_stop_backup();"
 export PGPASSWORD="$PASSWD"
 $BIN/psql -t -p $port -U $USER -c "$QUERY" template1
  }

6. One backup function which can be used for the backup itself; here is that function:
 backupdata()
    {
 if [ -z $FILENAME ];then
   NEWFILENAME=hotbackup_`echo $port`_`date +"%d-%m-%Y"`.tar.gz
   LABEL=HOTBACKUP
 else
   NEWFILENAME=`echo $FILENAME`_hotbackup.tar.gz
   LABEL=$FILENAME
 fi
     if [ -f /tmp/.backup.lock ];then 
 echo "One backup is already running.."
 exit 1
    fi
      LISTFILE=".`echo $port`_`echo $BCKDIR|sed 's/\//-/g'`"
      TYPE="Full Backup"
      STARTTIME=`date +"%d/%m/%Y %H:%M:%S"`
      touch /tmp/.backup.lock
      startbackup
      export PGPASSWORD=$PASSWD
      DATADIR=`$BIN/psql -t -U $USER -p $port -c "show data_directory;"|sed '/^$/d'`
      cmd="tar -zchvf $BCKDIR/$NEWFILENAME $DATADIR"
     if [ -d $BCKDIR ];then
         :
     else
        mkdir -p $BCKDIR
     fi
       $cmd >/tmp/Hot_backup_`date +"%d-%m-%Y"`.log 2>&1
       if [ $? -ne 0 ];then
  echo "Backup Failed.. Please /tmp/Hot_backup_`date +"%d-%m-%Y"`.log"
  stopbackup
  ENDTIME=`date +"%d/%m/%Y %H:%M:%S"`
  STATUS="Failed!"
     echo "$NEWFILENAME;$LABEL;$STARTTIME;$ENDTIME;$TYPE;$STATUS">>$BIN/$LISTFILE 
  rm /tmp/.backup.lock
         exit 1
       else 
  stopbackup
  rm /tmp/.backup.lock
  ENDTIME=`date +"%d/%m/%Y %H:%M:%S"`
  STATUS="Successful"
  echo "$NEWFILENAME;$LABEL;$STARTTIME;$ENDTIME;$TYPE;$STATUS">>$BIN/$LISTFILE
  echo "Data Directory of port $port has been backed up successfully!"
       fi
    } 
The above function will also update the catalog file, as shell scripters can figure out from the function definition.

7. Along with this, we also need an archive backup function. Below is an archive function which updates the status in the catalog and also takes the backup:
 backuparchive() 
 {
 touch /tmp/.archive.lock
 TYPE="Archive Backup"
 STARTTIME=`date +"%d/%m/%Y %H:%M:%S"`
        LISTFILE=".`echo $port`_`echo $BCKDIR|sed 's/\//-/g'`"
 if [ -z $FILENAME ];then
   NEWFILENAME=archivebackup_`echo $port`_`date +"%d-%m-%Y-%H:%M"`.tar.gz
   LABEL="Archive"
 else
   NEWFILENAME=`echo $FILENAME`_archive.tar.gz
   LABEL=$FILENAME
 fi
 archiveloc=`$BIN/psql -t -U $USER -p $port -c "show archive_command"|sed '/^$/d'|awk  '{k=split($0,arr," "); for(i=0;i<=k;i++){ if (match(arr[i],"%f")) print arr[i];}}'|xargs echo|awk '{print $1}'|sed 's/%f//g'`
        cmd="tar -zchvf $BCKDIR/$NEWFILENAME $archiveloc/*" 
        $cmd >/tmp/backup_archive_`date +"%d-%m-%Y"`.log 2>&1
 if [ $? -ne 0 ];then
  echo "Backup Failed!. Please check log file in /tmp/backup_archive_`date +"%d-%-%Y"`"
  rm /tmp/.archive.lock
  ENDTIME=`date +"%d/%m/%Y %H:%M:%S"`
  STATUS="Failed!"
         echo "$NEWFILENAME;$LABEL;$STARTTIME;$ENDTIME;$TYPE;$STATUS">>$BIN/$LISTFILE 
  exit 1
 fi
 echo "Archive Backup is successful"
 ENDTIME=`date +"%d/%m/%Y %H:%M:%S"`
 STATUS="Successful"
 echo "$NEWFILENAME;$LABEL;$STARTTIME;$ENDTIME;$TYPE;$STATUS">>$BIN/$LISTFILE
 rm /tmp/.archive.lock
}


So, I have all the necessary functions; what is left is a function for the user who wants to view the status of backups and list them. For that purpose, following is a list backup function:
listbackups()
 {
  BCKLIST=`ls -1 $BIN/.$port* 2>/dev/null`
  if [ $? -ne 0 ];then
    echo "No Catalog Found! Please check the Home Directory and Backup Directory"
    exit 1
  fi
  BCKDIR=`echo $BCKLIST|awk -F"/" '{print $NF}'|cut -d"." -f2|awk -F"_" '{print $2}'|sed 's/-/\//g'`
  DATADIR=`$BIN/psql -t -U $USER -p $port -c "show data_directory;"|sed '/^$/d'`
  if [ $? -ne 0 ];then
    echo "Unable to Connect to Database for Verification. Exiting"
    exit 1
  fi
  echo ""|awk 'BEGIN{printf "\t\t%-20s\n","LISTING BACKUPS"}END{printf "\t\t%-20s\n\n","---------------"}'
  echo "PORT: $port"
  echo "DATADIR: $DATADIR"
  echo "LOCATION: $BCKDIR"
  echo ""
  awk -F';' 'BEGIN{ printf "%-20s\t%-20s\t%-20s\t%-20s\t%-20s\n","LABEL","START DATE","END DATE","TYPE","STATUS"}{printf "%-20s\t%-20s\t%-20s\t%-20s\t%-20s\n",$2,$3,$4,$5,$6}' $BCKLIST
 }

Now, I have all the required functions. Using the above functions, I can write a command, pg_hotbackup, which can be used for base backup of the data directory, archive log backup, implementing a retention policy, and listing all the backups.

A few snapshots are given below:
Hotbackup:
./pg_hotbackup -H $PGHOME/bin -b /Users/postgres/BackupDir -U postgres -p 5432 -P postgres
0/46000020
NOTICE: pg_stop_backup complete, all required WAL segments have been archived
0/460000D8
Data Directory of port 5432 has been backed up successfully!
Archive Backup is successful
No retention days...
Skipping the retention..
  LISTING BACKUPS     
  ---------------     

PORT: 5432
DATADIR:  /Library/PostgreSQL/9.0/data
LOCATION: /Users/postgres/BackupDir

LABEL                START DATE           END DATE             TYPE                 STATUS               
HOTBACKUP            19/10/2010 06:11:41  19/10/2010 06:12:07  Full Backup          Successful          
Archive              19/10/2010 06:12:07  19/10/2010 06:12:29  Archive Backup       Successful          
HOTBACKUP            22/10/2010 02:04:40  22/10/2010 02:05:16  Full Backup          Successful          
Archive              22/10/2010 02:05:16  22/10/2010 02:05:44  Archive Backup       Successful   

Monday, October 18, 2010

ROLLUP Analytical function in PostgreSQL.

Currently, there is no version of PG which supports ROLLUP.

However, people do look for this analytical feature.

A ROLLUP query's result can be achieved using a UNION of queries.
First, let's understand what ROLLUP does:

If a SQL query has col1, col2, col3, aggregate(col4), then the ROLLUP processing would be something like this:
1. Show the aggregate of col4 grouped by col1, col2, col3.
2. Then ROLLUP computes the subtotal, showing the aggregate grouped by col1, col2.
3. Then it shows the aggregate/subtotal grouped by col1 alone.
4. And at the end, the grand total.

In short, it creates progressively higher-level subtotals, moving from right to left through the list of grouping columns. Finally, it creates a grand total.

In PG, this can be achieved by writing the grouped queries separately and combining them with UNION ALL (UNION ALL rather than UNION, so that genuinely duplicate rows are not collapsed).
So, if the rollup query is something like given below:
select col1,col2,col3,agg(col4) from relation group by rollup(col1,col2,col3)
Then in PG, the above can be written as:
select col1,col2,col3,agg(col4) from relation group by col1,col2,col3
UNION ALL
select col1,col2,NULL,agg(col4) from relation group by col1,col2
UNION ALL
select col1,NULL,NULL,agg(col4) from relation group by col1
UNION ALL
select NULL,NULL,NULL,agg(col4) from relation;

Following is an example:
In Oracle:
select manager_id,job_id,sum(salary) from hr.employees group by rollup(manager_id,job_id);
In PG:
select manager_id,job_id,sum(salary) from hr.employees group by manager_id,job_id
UNION ALL
select manager_id,NULL,sum(salary) from hr.employees group by manager_id
UNION ALL
select NULL,NULL,sum(salary) from hr.employees;
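As a quick illustration with a hypothetical sales table (the table and data below are made up for this example):

CREATE TABLE sales (region text, product text, amount numeric);
INSERT INTO sales VALUES ('EAST','pen',10),('EAST','ink',20),('WEST','pen',5);

-- Emulation of GROUP BY ROLLUP(region, product):
SELECT region, product, sum(amount) FROM sales GROUP BY region, product
UNION ALL
SELECT region, NULL, sum(amount) FROM sales GROUP BY region
UNION ALL
SELECT NULL, NULL, sum(amount) FROM sales
ORDER BY 1, 2;

This returns the detail rows, the per-region subtotals (EAST 30, WEST 5), and the grand total (35), which is exactly the row set ROLLUP(region, product) would produce.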

Sunday, October 10, 2010

PG_REORG Utility for VACUUM FULL online

pg_reorg is a utility made by NTT for reorganizing the table structure.

The concept is simple: if all the required pointers and data are in the same page, then accessing them is much faster. This is what pg_reorg provides to a user.

Following are some options which pg_reorg provides:
-o [--order-by] columns:
This option makes pg_reorg organise the table data as per the mentioned columns. At the backend, pg_reorg creates a new table using CTAS, with a SELECT query that includes an ORDER BY clause on the columns given with -o.
-n [--no-order] tablename:
When this option is used, pg_reorg does an online VACUUM FULL. Now, the question is how it does this. The concept is simple: create a new table using CTAS, and create a trigger on the current table to track the DML. Once the new table is created, replay the tracked DML on the new table (a conceptual sketch is given below). It works well. This option is only for tables which have a primary key.
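To make the trigger-plus-CTAS idea concrete, here is a minimal hand-written sketch. This is not pg_reorg's actual internal SQL; all names (mytable, mytable_log, mytable_new) are hypothetical:

-- Hypothetical example table (pg_reorg's -n requires a primary key)
CREATE TABLE mytable (id int PRIMARY KEY, val text);

-- 1. A log table and a trigger to capture DML happening during the rebuild
CREATE TABLE mytable_log (op text, row_data mytable);

CREATE OR REPLACE FUNCTION mytable_track() RETURNS trigger AS $$
BEGIN
   IF TG_OP = 'DELETE' THEN
      INSERT INTO mytable_log VALUES (TG_OP, OLD);
   ELSE
      INSERT INTO mytable_log VALUES (TG_OP, NEW);
   END IF;
   RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_track AFTER INSERT OR UPDATE OR DELETE
ON mytable FOR EACH ROW EXECUTE PROCEDURE mytable_track();

-- 2. Build the fresh, tightly-packed copy (CTAS)
CREATE TABLE mytable_new AS SELECT * FROM mytable;

-- 3. Replay the rows recorded in mytable_log against mytable_new, then
--    swap the two tables. pg_reorg does the swap at the storage level and
--    rebuilds the indexes, holding only a short lock.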

pg_reorg by default does a CLUSTER of the tables, and it follows the same concept, i.e., it does the CLUSTER without locking the table.

After performing any of these operations, pg_reorg does an ANALYZE on the table.

Following is some elog information which it emits at the backend (at DEBUG2 level):
elog(DEBUG2, "---- reorg_one_table ----");
elog(DEBUG2, "target_name    : %s", table->target_name);
elog(DEBUG2, "target_oid     : %u", table->target_oid);
elog(DEBUG2, "target_toast   : %u", table->target_toast);
elog(DEBUG2, "target_tidx    : %u", table->target_tidx);
elog(DEBUG2, "pkid           : %u", table->pkid);
elog(DEBUG2, "ckid           : %u", table->ckid);
elog(DEBUG2, "create_pktype  : %s", table->create_pktype);
elog(DEBUG2, "create_log     : %s", table->create_log);
elog(DEBUG2, "create_trigger : %s", table->create_trigger);
elog(DEBUG2, "create_table   : %s", table->create_table);
elog(DEBUG2, "delete_log     : %s", table->delete_log);
elog(DEBUG2, "lock_table     : %s", table->lock_table);
elog(DEBUG2, "sql_peek       : %s", table->sql_peek);
elog(DEBUG2, "sql_insert     : %s", table->sql_insert);
elog(DEBUG2, "sql_delete     : %s", table->sql_delete);
elog(DEBUG2, "sql_update     : %s", table->sql_update);
elog(DEBUG2, "sql_pop        : %s", table->sql_pop);
Interesting, huh?

A comparison between clusterdb and pg_reorg is given at the link below:
http://reorg.projects.postgresql.org/index.html

With everything, the DBA has to take care of a few things:
While pg_reorg is running on a table, the DBA should not let anyone execute DDL changes or create indexes on the table currently being reorganized.

Thursday, October 7, 2010

Slony Vs PG9.0 Built in Streaming Replication.

People generally ask this kind of question, as PG9.0 comes with Streaming Replication.

Following are some points which people need to think about before deciding which replication they should use:

1. Slony has some overhead on the database compared to Streaming Replication + Hot Standby in 9.0.
2. With Slony, all the changes must be applied via the SLONIK command.
3. Slony gives the advantage of replicating some tables and allows ignoring others.
4. Slony also gives the advantage of replication between different versions of PG, and between PG on different operating systems.

Monday, October 4, 2010

PG9.0:: Monitoring Hot Standby

Now PG9.0 is in the market with the new features of Hot Standby and Streaming Replication.

So, I have started to explore ways of monitoring the Hot Standby, and was in the process of writing my own code for it.

For this purpose, I have written a shell script to work out a way of calculating the lag.

In pgpool-II, the developer has used the following formula to calculate the lag:

lsn = xlogid * 16 * 1024 * 1024 * 255 + xrecoff;

Following is an explanation of the meaning of xlogid and xrecoff:

postgres=# select pg_current_xlog_location();

pg_current_xlog_location 
--------------------------
0/13000078
(1 row)
Here, 0 is the xlogid and 13000078 is the xrecoff (both hexadecimal).

With this, the concept for finding the lag is to compare the current WAL write location on the primary with the last WAL location received/replayed by the standby. These can be found using the pg_current_xlog_location function on the primary, and the pg_last_xlog_receive_location/pg_last_xlog_replay_location functions on the standby, respectively.
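As a minimal sketch of that arithmetic, here is a hypothetical helper function, xlog_to_bytes, written for this post; it directly encodes the pgpool-II formula above (both halves of the location string are hexadecimal, and the standby value in the final example is made up for illustration):

CREATE OR REPLACE FUNCTION xlog_to_bytes(loc text) RETURNS bigint AS $$
  -- lsn = xlogid * 16MB * 255 + xrecoff (pgpool-II formula)
  SELECT ('x'||lpad(split_part($1,'/',1),16,'0'))::bit(64)::bigint
         * 16::bigint * 1024 * 1024 * 255
       + ('x'||lpad(split_part($1,'/',2),16,'0'))::bit(64)::bigint;
$$ LANGUAGE sql IMMUTABLE;

-- On the primary:  SELECT pg_current_xlog_location();      e.g. '0/13000078'
-- On the standby:  SELECT pg_last_xlog_replay_location();  e.g. '0/13000020'
-- Lag in bytes = primary position minus standby position:
SELECT xlog_to_bytes('0/13000078') - xlog_to_bytes('0/13000020') AS lag_bytes;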

Following is also code made by Karl Denninger:
http://www.denninger.net/check-replication.c

The above program checks the status of the master and slave; if both are in sync, it stays silent, and if there is lag, it prints a proper message to stdout.

Saturday, October 2, 2010

pgFouine: PostgreSQL Log Analyzer

pgFouine is an interesting PostgreSQL log analyzer tool which generates reports in HTML format.

The advantage of using this tool is that the user gets the report in text or HTML format, which is easy to analyze.

Using pgFouine, a user can make the following types of reports:
1. Error reports
2. Slow query reports
3. VACUUM VERBOSE reports, etc.

Installation of pgFouine is simple.

Download the source from the following location:
http://pgfoundry.org/frs/download.php/2575/pgfouine-1.2.tar.gz

and then extract the pgFouine source using the following command:
tar zxvf pgfouine-1.2.tar.gz
Please note, before using pgFouine, the user has to make sure that PHP is installed on the server.

pgFouine has some restrictions on the log files it can analyze. It analyzes PostgreSQL logfiles only if log_line_prefix has one of the following formats:
log_line_prefix = 'user=%u,db=%d ' (Filter on database/user, with syslog)
log_line_prefix = '%t [%p]: [%l-1] ' (For stderr)
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d ' (Filter on database/user, with stderr)

For more information on Installation and configuration of pgfouine, following is a link:
http://pgfouine.projects.postgresql.org/tutorial.html

Usage of pgFouine:
php pgfouine.php -file <logfile> -logtype [syslog|stderr|csvlog] > report.html
pgFouine can also be used for analyzing VACUUM VERBOSE output.

(Samples of the VACUUM VERBOSE report generated by pgFouine were shown here as screenshots.)
Thursday, September 30, 2010

Physical Standby Vs Hot Standby

Some thoughts always come to mind about standby terminologies. Once, someone asked a question about the Physical Standby of Oracle 10g.

Is Oracle10g Physical Standby a Hot Standby Or WarmStandby?

Till Oracle 10g, the Physical Standby of Oracle is a standby which has two modes: 1. Managed Recovery mode, 2. Read-Only mode. It cannot be in both modes at the same time.
If the standby is in recovery mode, it's a Warm Standby; and when it's in read-only mode, it lags behind the primary but is able to respond to SELECT queries.
Either way, till Oracle 10g the Physical Standby was not meeting the requirement of a Hot Standby. It was playing the role of a Warm Standby.

However, from Oracle 11g, the Physical Standby can be in recovery mode and read-only mode at the same time, which now fulfills the definition of Hot Standby. With the complexity of managing the standby, licensing also comes into the picture.
From PG9.0 onwards, PostgreSQL has Cold Standby, Warm Standby and Hot Standby, at zero cost. The community is making PostgreSQL technology more advanced with new features.

Tuesday, September 28, 2010

Hot Standby in PostgreSQL 9.0:

As per the definition of Hot Standby, it's a method of redundancy in which the primary and the secondary (backup server) run simultaneously. The data is mirrored to the secondary so that both contain identical data in real time. With this, a user is able to execute queries against the database while the secondary is in archive recovery mode.

This is what is introduced in PG9.0: in-built Hot Standby.

The PostgreSQL community always tries to make things as simple as possible, and they have proven this again in the Hot Standby implementation.

Following are the steps of configuring Hot Standby:
1. Make sure the following parameters are set in the configuration file, postgresql.conf, of the primary:
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'cp %p /Library/PostgreSQL/9.0/data/archivelog/%f'
2. After setting the above parameters, now take a hot backup of the PG9.0 instance. The steps for a hot backup are simple:
a) Execute the following command on the primary:
select pg_start_backup('Hot Standby Backup');
b) Take a file system backup of the PG9.0 data directory, as given below:
cp -r $PRIMARY/* $HTStandby/
c) Execute the following command to stop the hot backup:
select pg_stop_backup();
3. After taking the hot backup and restoring it at the desired location, set the following parameter in the postgresql.conf file of the Hot Standby:
hot_standby = on
4. Create a recovery.conf file in the Hot Standby restore location and set the following parameters:
standby_mode = 'on'
restore_command = 'cp /Library/PostgreSQL/9.0/data/archivelog/%f %p'
5. Clean the pg_xlog and pg_xlog/archive_status directories of the Hot Standby data directory.
6. Remove the postmaster.pid file from the Hot Standby data directory.
7. Now, start the instance using the following command:
pg_ctl -D $HTStandby start
Now, hot standby is up and running.
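As a quick sanity check (a minimal sketch; all three functions are available in PG9.0), you can verify that the standby is actually in recovery and applying WAL:

-- On the Hot Standby: returns t while the server is in recovery
select pg_is_in_recovery();

-- On the Hot Standby: last WAL location received and last replayed
select pg_last_xlog_receive_location(), pg_last_xlog_replay_location();

-- On the Primary: current WAL write location, for comparison
select pg_current_xlog_location();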

Following are a few snapshots (primary port is 5432 and Hot Standby port is 5433):

1. Here I have inserted one value in table "test" and created a new table test2 with some records (screenshot).

2. Now, on the standby, following is a snapshot (screenshot).
Monday, September 27, 2010

pgAgent Configuration on Windows

Here are the steps which someone can use to configure pgAgent. These steps have been tested on my machine:

1. Login to the system as test_user:

Please create a pgpass.conf file in test_user's %APPDATA%\postgresql directory.

The entry for pgpass.conf should be as given below:
hostname:port:database:username:password
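For example, a hypothetical entry (all values here are placeholders) would look like:

localhost:5432:*:postgres:secret

where * matches any database name.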
2. Connect to the database as given below:
C:\"Program Files"\PostgresPlus\8.4\bin\psql.exe -U postgres
(The above command will also verify whether pgpass.conf is picked up by the psql command or not.)
3. After connecting to the database, create the plpgsql language as given below:
CREATE LANGUAGE plpgsql;
4. Now, run the pgagent.sql file.

5. Create the pgAgent service, as given below:
pgagent.exe INSTALL pgAgent -u test_user -p test hostaddr= dbname= user=
6. Start the created service using the following command:
net start pgAgent
7. While creating a job, please keep the output of the following command in the "Host Agent" field of the "pgAgent Job" window:
select jagstation from pgagent.pga_jobagent;

Monitor PG WarmStandBy

While working with databases, some people have asked me to write a monitoring script for a Warm Standby. Here is a script which can be used for it:
#!/bin/bash

BIN=/opt/PostgresPlus/8.4SS/bin
WARMDATA=/sata/data
PSQL=$BIN/psql
PRIM_HOST=primarserver.ic
ST_LOGFILE=/usr/local/pgsql/pg_standby/standby.log
STANDBYLOGLOC=/sata/backups/pg8.4/standby_wal
echo -e ""
echo -e "+-----------------------------------------------------------+"
echo -e "|Finding the Minimum Recovery:                              |"
echo -e "+-----------------------------------------------------------+"
MINRECOV=`$BIN/pg_controldata $WARMDATA |grep 'Minimum recovery'|awk -F": " '{print $2}'|sed 's/ *//g'`
MINRECOVWAL=`$PSQL -h $PRIM_HOST -t -p 5432 -c "select pg_xlogfile_name('$MINRECOV');"|sed 's/ *//g'`
if [ -n "$MINRECOVWAL" ];then
echo -e "|Minimum Recovery WAL file require: $MINRECOVWAL|"
echo -e "+-----------------------------------------------------------+"
fi
ls -1 $STANDBYLOGLOC >/tmp/WAL.lis
WALNUM=`cat -n /tmp/WAL.lis|grep $MINRECOVWAL|awk '{print $1}'`
TOTALWAL=`ls -1 $STANDBYLOGLOC|wc -l`
NOTAPWAL=`expr $TOTALWAL - $WALNUM`
CURRENT_XLOG=`$PSQL -h $PRIM_HOST -t -p 5432 -c "select pg_xlogfile_name(pg_current_xlog_location());"|sed 's/ *//g'`
if [ $NOTAPWAL -eq 0 ];then
echo -e ""
echo -e "+-----------------------------------------+"
echo -e "|       WARM STANDBY is in SYNC           |"
echo -e "+-----------------------------------------+"
echo -e "|Current Status On Warm Standb:           |"
echo -e "+-----------------------------------------+"
echo -e "|All Archived WAL Restored on Warm Standby|"
echo -e "+-----------------------------------------+"
echo ""
echo -e "+-------------------------------------------------+"
echo -e "|Current Status On Production:                    |"
echo -e "+-------------------------------------------------+"
echo -e "|Current_XLOG of Primary: $CURRENT_XLOG|"
echo -e "|Unarchived WAL SEGMENT: $CURRENT_XLOG |"
echo -e "+-------------------------------------------------+"
else
echo -e "+-----------------------------------------------------+"
echo -e "|WARM STANDBY IS NOT in Sync                          |"
echo -e "+-----------------------------------------------------+"
echo -e "|List of Archived WAL which has not been implemented: |"
echo -e "+-----------------------------------------------------+"
grep -A $NOTAPWAL $MINRECOVWAL /tmp/WAL.lis|grep -v $MINRECOVWAL|awk '{printf "|%s                             |\n",$1}'
echo -e "+-----------------------------------------------------+"
fi


The above script will give you the following information:
+-----------------------------------------------------------+
|Finding the Minimum Recovery:                              |
+-----------------------------------------------------------+
|Minimum Recovery WAL file require: 000000010000015A00000071|
+-----------------------------------------------------------+

+-----------------------------------------+
|       WARM STANDBY is in SYNC           |
+-----------------------------------------+
|Current Status On Warm Standby:          |
+-----------------------------------------+
|All Archived WAL Restored on Warm Standby|
+-----------------------------------------+

+-------------------------------------------------+
|Current Status On Production:                    |
+-------------------------------------------------+
|Current_XLOG of Primary: 000000010000015A00000072|
|Unarchived WAL SEGMENT: 000000010000015A00000072 |
+-------------------------------------------------+

Friday, January 1, 2010

EnterpriseDB's Postgres Plus Advanced Server

EnterpriseDB's Postgres Plus Advanced Server is really a nice product which has been launched by the company.

I have worked on it and found it a suitable replacement for Oracle.

While learning Postgres Plus Advanced Server, people also learn about PostgreSQL, the world's most advanced open source database.

It has almost all the features which Oracle developers require.

Here is a link which talks about PostgreSQL and Postgres Plus:
http://www.enterprisedb.com/products/postgres_plus/overview.do

And following is a link which describes the Oracle compatibility provided by EnterpriseDB in Postgres Plus Advanced Server:
http://www.enterprisedb.com/docs/en/8.3R2/oracompat/Table%20of%20Contents.htm