Saturday, November 27, 2010

Some Handy SQLs for PG DBAs

To use these queries, the DBA has to install the adminpack contrib module of PostgreSQL. If you are using the one-click installer, you can find adminpack.sql in the /PostgreSQL Installation Directory/share/postgresql/contrib directory.
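A quick way to load it, assuming superuser access, is to run the script from psql; the path below is only an example of that installer layout and should be adjusted to your own installation directory:
postgres=# \i /opt/PostgreSQL/9.0/share/postgresql/contrib/adminpack.sql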

Following are some SQLs, based on the adminpack contrib module, that a DBA can use to find the latest logfile, get its size, and read it using PG client applications (psql, pgAdmin, etc.):
SELECT * FROM (SELECT pg_ls_dir('pg_log')) AS t (filename) ORDER BY 1 DESC LIMIT 1

SELECT size FROM pg_stat_file('pg_log/postgresql-2010-05-13.log')

SELECT pg_read_file('pg_log/postgresql-2010-05-13.log', 0, 21604)
Sometimes users don't have access to the server, and to read the PG logfile they would have to log in on the server. Now they don't have to: if adminpack.sql is installed in PostgreSQL, a user can easily read the latest logfile by creating the following function:
CREATE OR REPLACE FUNCTION pg_read_last_logfile() returns text
AS
$$
   DECLARE
     fname text;
     fsize integer;
   BEGIN
        SELECT filename INTO fname FROM (SELECT pg_ls_dir('pg_log')) AS tmp (filename) ORDER BY 1 DESC LIMIT 1; -- to get the latest filename
        fname:='pg_log/'||fname;
        SELECT size into fsize FROM pg_stat_file(fname); -- to get the size of the latest filename
   RETURN pg_read_file(fname, 0, fsize);
  END;
$$ language plpgsql;
Output is given below:
postgres=# select pg_read_last_logfile();
 pg_read_last_logfile
------------------------
  2010-11-28 02:16:35 IST 3965   DBMESSAGELOG:  database system was shut down at 2010-11-24 03:31:24 IST
  2010-11-28 02:16:35 IST 3968   DBMESSAGELOG:  autovacuum launcher started
  2010-11-28 02:16:35 IST 3959   DBMESSAGELOG:  database system is ready to accept connections
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: /* $PostgreSQL: pgsql/contrib/adminpack/adminpack.sql.in,v 1.6 2007/11/13 04:24:27 momjian Exp $ */
         /* ***********************************************
          * Administrative functions for PostgreSQL
          * *********************************************** */
         /* generic file access functions */
         CREATE OR REPLACE FUNCTION pg_catalog.pg_file_write(text, text, bool)
         RETURNS bigint
         AS '$libdir/adminpack', 'pg_file_write'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 332.562 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_rename(text, text, text)
         RETURNS bool
         AS '$libdir/adminpack', 'pg_file_rename'
         LANGUAGE C VOLATILE;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 1.004 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_rename(text, text)
         RETURNS bool
         AS 'SELECT pg_catalog.pg_file_rename($1, $2, NULL::pg_catalog.text);'
         LANGUAGE SQL VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 2.472 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_file_unlink(text)
         RETURNS bool
         AS '$libdir/adminpack', 'pg_file_unlink'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 0.575 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: CREATE OR REPLACE FUNCTION pg_catalog.pg_logdir_ls()
         RETURNS setof record
         AS '$libdir/adminpack', 'pg_logdir_ls'
         LANGUAGE C VOLATILE STRICT;
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  duration: 41.991 ms
  2010-11-28 02:16:40 IST 3973 postgres postgres DBMESSAGELOG:  statement: /* Renaming of existing backend functions for pgAdmin compatibility */
         CREATE OR REPLACE FUNCTION pg_catalog.pg_file_read(text, bigint, bigint)



Sometimes a DBA has to re-verify the parameters, and they usually stick to the "show all" command or, if they know the parameter name, they use the show parametername command.

The best part of PostgreSQL is that it has categorized parameters. To view the parameters category-wise, a DBA can use the following SQL:

SELECT name, setting, unit, short_desc, extra_desc, boot_val, reset_val, sourcefile, sourceline, context, vartype, source, min_val, max_val FROM pg_settings WHERE category='Version and Platform Compatibility / Previous PostgreSQL Versions' ORDER BY name;
where the parameter categories in postgresql.conf are the following (a query to list them directly is shown after this list):
Write-Ahead Log / Settings
Client Connection Defaults / Locale and Formatting
Version and Platform Compatibility / Other Platforms and Clients
Query Tuning / Genetic Query Optimizer
Resource Usage / Memory
Statistics / Monitoring
Reporting and Logging / Where to Log
Resource Usage / Kernel Resources
Preset Options
Reporting and Logging
Resource Usage
Reporting and Logging / What to Log
Lock Management
Connections and Authentication / Security and Authentication
Autovacuum
Write-Ahead Log / Checkpoints
Query Tuning / Planner Method Configuration
Statistics / Query and Index Statistics Collector
Developer Options
Write-Ahead Log / Replication
Customized Options
Query Tuning / Other Planner Options
File Locations
Client Connection Defaults / Statement Behavior
Reporting and Logging / When to Log
Query Tuning / Planner Cost Constants
Client Connection Defaults / Other Defaults
Version and Platform Compatibility / Previous PostgreSQL Versions
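The category names above do not need to be typed from memory; the list for your own server version can be pulled straight from the pg_settings view (a minimal sketch using only the standard view):
SELECT DISTINCT category FROM pg_settings ORDER BY 1;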

Following are some more handy PG SQLs:
To get detailed information about databases, a user can use the following command:
SELECT datname as Database, pg_get_userbyid(datdba) AS dba, pg_catalog.pg_encoding_to_char(encoding) AS encoding, datcollate as Collate, datctype as CTYPE, datistemplate, datallowconn, datconnlimit, datlastsysoid, datfrozenxid, spcname as tablespace, pg_size_pretty(pg_database_size(datname)) AS size, datacl, age(datfrozenxid) AS freezeage, ROUND(100*(age(datfrozenxid)/freez::float)) AS perc FROM pg_database, pg_tablespace JOIN (SELECT setting AS freez FROM pg_settings WHERE name = 'autovacuum_freeze_max_age') AS param ON (true) WHERE dattablespace = pg_tablespace.oid ORDER BY datname

To get the details of Non Idle Processes:
SELECT extract(epoch FROM (now() - query_start))::numeric(10,2) AS age, procpid, usename, client_addr, application_name, current_query FROM pg_stat_activity WHERE current_query <> '' ORDER BY 1
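If one of these sessions turns out to be a runaway query, it can be cancelled by its backend pid; a small sketch (the pid is hypothetical and should be taken from the procpid column of the query above):
SELECT pg_cancel_backend(4533);
-- or, to drop the connection entirely (available since 8.4):
-- SELECT pg_terminate_backend(4533);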

Cursors in Use:
SELECT name, statement, is_holdable, is_binary, is_scrollable, creation_time FROM pg_cursors ORDER BY name

To get the Prepared Statements information:
SELECT name, statement, prepare_time, from_sql FROM pg_prepared_statements ORDER BY name

Prepared Transactions Query:
SELECT transaction, gid, prepared, owner, database FROM pg_prepared_xacts ORDER BY owner, database
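Orphaned prepared transactions reported by this query can hold locks and hold back VACUUM, so they are worth resolving with COMMIT PREPARED or ROLLBACK PREPARED; a sketch (the gid is hypothetical, taken from the gid column above):
ROLLBACK PREPARED 'stale_gid_from_above';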

All Locks Information of a Database:
SELECT locktype, CASE WHEN datname IS NOT NULL THEN datname ELSE database::text END AS database, nspname, relname, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, mode, granted FROM pg_locks LEFT JOIN pg_database ON pg_database.oid = database LEFT JOIN pg_class ON pg_class.oid = relation LEFT JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace

Get Exclusive Lock Information:
SELECT locktype, CASE WHEN datname IS NOT NULL THEN datname ELSE database::text END AS database, nspname, relname, page, tuple, virtualxid, transactionid, classid, objid, objsubid, virtualtransaction, pid, granted FROM pg_locks LEFT JOIN pg_database ON pg_database.oid = database LEFT JOIN pg_class ON pg_class.oid = relation LEFT JOIN pg_namespace ON pg_namespace.oid=pg_class.relnamespace WHERE mode='ExclusiveLock' AND locktype NOT IN ('virtualxid', 'transactionid')
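A related query that often comes in handy pairs each waiting session with the session holding the conflicting lock. The following is a simplified sketch; it matches only on the main lock identity fields, so in rare cases it can over-report:
SELECT waiting.pid AS waiting_pid, waiting.mode AS waiting_mode, blocking.pid AS blocking_pid, blocking.mode AS blocking_mode, waiting.locktype, waiting.relation::regclass AS relation FROM pg_locks waiting JOIN pg_locks blocking ON blocking.locktype = waiting.locktype AND blocking.database IS NOT DISTINCT FROM waiting.database AND blocking.relation IS NOT DISTINCT FROM waiting.relation AND blocking.page IS NOT DISTINCT FROM waiting.page AND blocking.tuple IS NOT DISTINCT FROM waiting.tuple AND blocking.transactionid IS NOT DISTINCT FROM waiting.transactionid AND blocking.virtualxid IS NOT DISTINCT FROM waiting.virtualxid AND blocking.classid IS NOT DISTINCT FROM waiting.classid AND blocking.objid IS NOT DISTINCT FROM waiting.objid AND blocking.objsubid IS NOT DISTINCT FROM waiting.objsubid AND blocking.pid <> waiting.pid WHERE NOT waiting.granted AND blocking.granted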

Thursday, November 25, 2010

xDB Replication from Oracle to PPAS

The EnterpriseDB Replication tool has now been modified, and new functionality has been added to it.

The previous Replication Console used the DBA Management Server, and Publications & Subscriptions were dependent on each other.

Now, xDB Replication has been divided into two components:
1. Publication Server (Master Server)
2. Subscription Server (Slave Server)

Some background on the Publication Server. The Publication Server has the following components:
1. JAVA Based Daemon.
2. MetaDatabase of Publication Server.

The Publication Server keeps its information (information about the primary database, table details, etc.) in the metadatabase under the schema _edb_replicator_pub.

The Publication Server is independent and has no dependency on the Subscription Server. By default the Publication Server uses port 9011; however, users can choose a different port if they want.

To run the Publication Server on a different port, use the following command (the last argument is the port number):
$EDBHOME/jre/bin/java -Djava.awt.headless=true -jar $EDBHOME/bin/edb-repserver.jar pubserver <port>

Similarly, Subscription Server has two components:
1. JAVA Based Daemon
2. MetaDatabase of the Subscription Server (which it uses for keeping the information about subscriptions).

In the metadatabase, the xDB Subscription Server keeps its information under the schema _edb_replicator_sub.
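A quick sanity check that a metadatabase has been initialized is to look for these schemas in its catalog; a small sketch, run while connected to the metadatabase:
SELECT nspname FROM pg_namespace WHERE nspname IN ('_edb_replicator_pub', '_edb_replicator_sub');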

By default the xDB Subscription Server runs on port 9012. However, if users want, they can run the daemon on a different port too, using the following command (again, the last argument is the port number):
$EDBHOME/jre/bin/java -Djava.awt.headless=true -jar $EDBHOME/bin/edb-repserver.jar subserver <port>
Now the question comes: how do we use these daemons for Publications and Subscriptions?

EnterpriseDB provides a jar file, edb-repcli.jar, in $EDBHOME/bin which can be used for this.

edb-repcli.jar comes with lots of options, using which a user can create a Publication/Subscription, add databases, and so on.

To see more of its options, the user can run the following command:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -help

Along with the above, the user has to create two configuration files:
1. Publication Configuration file
2. Subscription Configuration file.

The format of the Publication and Subscription configuration files is the same.
Following is a snapshot and description of the configuration file:
user=enterprisedb # User with which the xDB Publication/Subscription daemon connects to the metadatabase
port=9011 # Port on which the Publication/Subscription daemon is running
password=
type=enterprisedb # Database type the xDB Replication daemon uses: enterprisedb (Postgres Plus Advanced Server) or postgres (PostgreSQL)
host=localhost # Host on which the Publication daemon is running

Besides the Publication/Subscription configuration files above, there is one more xDB server configuration file. By default this file exists at /etc/edb-repl.conf and contains the information about the xDB metadatabase.
Following is a snapshot of this file:
user=enterprisedb
port=5445 (port on which the xDB metadatabase server is listening)
type=enterprisedb
host=localhost
database=edb

A few handy commands of edb-repcli.jar are given below (items in angle brackets are placeholders for your own values; <pubsvrfile> and <subsvrfile> refer to the Publication and Subscription configuration files described above):

1. To view the list of publications, the user can use the following command:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpublist -repsvrfile <pubsvrfile>
2. To view the published tables of a publication, use the following command:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpublishedtables <pubname> -repsvrfile <pubsvrfile>

3. To list the subscriptions of a particular subscription database, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printsublist -repsvrfile <subsvrfile> -subdbid <dbid>

4. To print the subscribed database ids, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printsubdbidsdetails -repsvrfile <subsvrfile>

5. To add databases [Oracle|EnterpriseDB] for publication, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -addpubdb -repsvrfile <pubsvrfile> -dbtype {oracle | enterprisedb | postgresql} -dbhost <host> -dbport <port> -dbuser <user> -dbpassword <password> -database {<dbname> | <sid>} [-oraconnectiontype {sid | servicename}]

6. To print the published database ids in the Publication Server, the following command can be used:
$EDBHOME/jre/bin/java -jar $EDBHOME/bin/edb-repcli.jar -printpubdbidsdetails -repsvrfile <pubsvrfile>

To control the behavior/performance of the Publication/Subscription daemons, EnterpriseDB provides two important files:
1. xdb_pubserver.conf
2. xdb_subserver.conf

The location of the above two files is $EDBHOME/etc.

Information about the contents of these two files can be found at the following link:
http://www.enterprisedb.com/docs/en/8.4/repserver/Postgres_Plus_Advanced_Server_Replication_Server_Users_Guide-29.htm#TopOfPage


Details of the permitted source and target databases can be found at the following link:
http://www.enterprisedb.com/docs/en/8.4/repserver/Postgres_Plus_Advanced_Server_Replication_Server_Users_Guide-27.htm#TopOfPage

Sunday, November 7, 2010

Making Dynamic Copy Command

There was a question put by one of the users about loading data into PostgreSQL with the COPY command while skipping columns which have default values.

Along with the above requirement, he also mentioned that the table's columns keep changing, and the default values too.

So, he wanted plpgsql function code which could be used with any table and file to load the data.

Following is a sample plpgsql function which can be used to make that happen:
CREATE OR REPLACE FUNCTION Copy_test(text,text,text) returns boolean
AS
$$
 DECLARE
    rec record;
    cmd text;
    colstring text;
 BEGIN
        -- build the list of columns that have no default value
        cmd:='select array_to_string(array(select column_name::text from information_schema.columns where table_name='||quote_literal($2)||' and table_schema='||quote_literal($1)||' and column_default is null),'','')';
        EXECUTE cmd INTO colstring;
        -- load the file into only those columns
        cmd:='COPY "'||$1||'"."'||$2||'"('||colstring||') FROM '||quote_literal($3);
        EXECUTE cmd;
        RETURN true;
     EXCEPTION WHEN others THEN
        RETURN false;
  END;
$$ language plpgsql;
Usage:
select copy_test('schemaname','tablename','filename');

Usage Example is given below:
postgres=# select * from test_copy;
 id | id1 
----+-----
(0 rows)

postgres=# select Copy_test('public','test_copy','/Users/postgres/test_data');
 copy_test 
-----------
 t
(1 row)

postgres=# select * from test_copy;
 id | id1 
----+-----
  1 |   1
  2 |   2
  3 |   3
  4 |   4
  5 |   5
(5 rows)

Wednesday, November 3, 2010

Does UPDATE Change ROWID in Oracle?

Yesterday, there was a discussion going on at OTN (the Oracle forum), and I was part of it. The discussion was about "Does an UPDATE statement change the rowid in Oracle?"

In response to my comment that "UPDATE does not change the rowid", one of the users pointed out that an UPDATE statement sometimes does change the rowid in Oracle, and gave me the following example.
SQL> ed
Wrote file afiedt.buf
 
  1  create table moving_rowid (
  2    col1 number primary key,
  3    col2 number,
  4    col3 number,
  5    col4 varchar2(10)
  6  )
  7* organization index
SQL> / 
 Table created.
SQL> insert into moving_rowid values( 1, 2, 3, 'foo' );
1 row created.

SQL> insert into moving_rowid values( 2, 3, 4, 'bar' );
1 row created.
  
SQL> select rowid, col1, col2, col3, col4
  2    from moving_rowid;
ROWID                  COL1       COL2       COL3 COL4
---------------- ---------- ---------- ---------- ----------
*BAEADxsCwQL+             1          2          3 foo
*BAEADxsCwQP+             2          3          4 bar
 
SQL> update moving_rowid set col1=col1+1;
2 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> select rowid, col1, col2, col3, col4
  2    from moving_rowid;
ROWID                  COL1       COL2       COL3 COL4
---------------- ---------- ---------- ---------- ----------
*BAEADxsCwQP+             2          2          3 foo
*BAEADxsCwQT+             3          3          4 bar
The above example made me think, and I gave the following answer on the forum, with an example to show that UPDATE does not change the rowid:
SQL> select rowid, col1,col2,col3,col4 from moving_rowid;
ROWID    COL1  COL2 COL3 COL4
AAAFiAAAEAAAAhgAAA  1  2  3 foo
AAAFiAAAEAAAAhgAAB  2  3  4 bar

SQL> update moving_rowid set col1=col1+1;
2 rows updated.

SQL> commit;
Commit complete.

SQL> select rowid, col1,col2,col3,col4 from moving_rowid;
ROWID    COL1  COL2 COL3 COL4
AAAFiAAAEAAAAhgAAA  2  2  3 foo
AAAFiAAAEAAAAhgAAB  3  3  4 bar
No change in the rowid; the reason is that here no organization of data has been done as per the index (it is an ordinary heap table, not an index-organized one).

Please note: if you change an indexed column and the table uses that index to organize its data, then the index, and with it the rowid, will change. This is what happens with partitions too.

Now, see the example given below with an index-organized table:
SQL> update moving_rowid set col1=col1+1;
2 rows updated.

SQL> commit;
Commit complete.

SQL> select rowid,col1,col2,col3,col4 from moving_rowid;
ROWID   COL1  COL2 COL3  COL4
*BAEACGwCwQP+  2  2   3   foo
*BAEACGwCwQT+  3  3  4  bar

Now let's update a non-indexed column:
SQL> select rowid,col1,col2,col3,col4 from moving_rowid;
ROWID   COL1  COL2  COL3 COL4
*BAEACGwCwQP+  2  3  3 foo
*BAEACGwCwQT+  3  4  4 bar
I hope the above clarifies the behavior: UPDATE does not change the rowid. What users see as a change in the rowid is actually movement of the data pointer. The data pointer changes if the data is organized on the basis of an index (and each update changes that index too) or if row movement has happened.

Therefore it is not a best practice to update rows on the basis of rowid.

Forum link is given below:
http://forums.oracle.com/forums/message.jspa?messageID=9115665#9115665

Tuesday, November 2, 2010

Some trigger and programming features in PostgreSQL

Anonymous Block Support:
Now users are able to create anonymous blocks using plpgsql, as given below:
do
$$
declare
   id1 varchar;
begin
   select id into id1 from test_sort limit 1;
   raise notice '%',id1;
end;
$$ language plpgsql;
Output:
NOTICE:  1.1
DO

Column Triggers:
PG 9.0 also supports column-level triggers and WHEN triggers.

Following is an example:
create or replace function test_col_trigger() returns trigger
as
$$
  BEGIN
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;
       RETURN null;
  END;
$$ language plpgsql;

create trigger col_trigger after update of "a" on test_def FOR EACH ROW EXECUTE procedure test_col_trigger();
Output:
postgres=# update test_def set a='Second';
NOTICE:  INSERTED VALUE in Column a 1
UPDATE 1

WHEN Triggers in PG 9.0:

PG 9.0 has WHEN triggers, which allow the user to create a trigger that executes its procedure only when a particular condition is met.
create or replace function test_col_trigger() returns trigger
as
$$
  BEGIN
      RAISE NOTICE 'INSERTED VALUE in Column a %',new.id;
       RETURN null;
  END;
$$ language plpgsql;

create trigger trigger_when AFTER update on test_Def for each row when (old.col1 = new.col2) execute procedure test_col_trigger();
Output:
postgres=# update test_def set a='Second';
NOTICE:  SAME Result inserted a 1
UPDATE 1

Some other new features have also been added in PG 9.0.

Important features which a DBA must know are the following:

1. New VACUUM FULL

Prior to PG 9.0, VACUUM FULL was slow, since it used to move tuples to blocks closer to the beginning of the table. This process consumed a lot of CPU and did a lot of random I/O. On top of that, after VACUUM FULL the DBA had to reindex the indexes associated with the table, since VACUUM FULL used to leave bloat/fragmentation in the indexes.

Overall it was not an efficient method.

However, from PG 9.0 onwards VACUUM FULL works more intelligently. In place of moving tuples around, VACUUM FULL now writes a fresh copy of the table with new indexes and destroys the old bloated table. This method is faster than the old one.
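A minimal illustration of the effect (the table name is hypothetical) is to compare the on-disk size before and after the rewrite:
SELECT pg_size_pretty(pg_total_relation_size('my_bloated_table'));
VACUUM FULL my_bloated_table;
SELECT pg_size_pretty(pg_total_relation_size('my_bloated_table'));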

2. New column added in pg_stat_activity

application_name is a new column which has been added to pg_stat_activity; it gives information about the application from which each connection has been made.

Example is given below:
postgres=# select * from pg_stat_activity where procpid=pg_backend_pid();
 datid | datname  | procpid | usesysid | usename  | application_name | client_addr | client_port |          backend_start           |            xact_start            |           query_start            | waiting |                         current_query                          
-------+----------+---------+----------+----------+------------------+-------------+-------------+----------------------------------+----------------------------------+----------------------------------+---------+----------------------------------------------------------------
 11874 | postgres |    4533 |       10 | postgres | psql             |             |          -1 | 2010-11-02 11:39:13.170051+05:30 | 2010-11-02 12:55:04.278363+05:30 | 2010-11-02 12:55:04.278363+05:30 | f       | select * from pg_stat_activity where procpid=pg_backend_pid();
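The value comes from the application_name connection parameter, and it can also be changed from within a session; a quick sketch (the name below is hypothetical):
SET application_name TO 'nightly_batch';
SELECT application_name FROM pg_stat_activity WHERE procpid = pg_backend_pid();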
3. Per database and per role configuration.
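A sketch of what this looks like in practice (database, role and values are hypothetical):
ALTER DATABASE reporting_db SET work_mem = '64MB';
ALTER ROLE batch_user SET statement_timeout = '10min';
-- per-database and per-role settings are stored in the pg_db_role_setting catalog, new in 9.0
SELECT * FROM pg_db_role_setting;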

4. A new option, --analyze-only, has been added to the vacuumdb command.
This option allows the user to run only ANALYZE through the vacuumdb command.
The relevant line from the vacuumdb --help output is:
-Z, --analyze-only              only update optimizer statistics

For more information on other features in PG 9.0, the following page is helpful:
http://wiki.postgresql.org/wiki/Illustrated_9_0

Monday, November 1, 2010

pg_get_tabledef function in Postgres Plus

Up to Advanced Server 8.3, there was a function pg_get_tabledef, which some people were using to get the definition of a table.

This function is no longer part of the newer versions of Advanced Server (8.3R2 AS and 8.4 AS), so by default it is missing there.

However, the same functionality can be obtained with the following function, which provides a result similar to pg_get_tabledef.

Function code is given below:
CREATE OR REPLACE Function pg_get_tabledef(text) RETURNS text
AS
$$
  DECLARE
     tabledef TEXT;
     dotpos integer;
     tablename text;
     schemaname text;
     prevcol text;
     coltype text;
     notnull1 boolean;
     rec record;
     oidcheck boolean;
  BEGIN
   dotpos:=strpos($1,'.');
   if dotpos = 0 then
     schemaname:='public';
     tablename:=substr($1,dotpos+1);
   else
     schemaname:=substr($1,1,dotpos-1);
     tablename:=substr($1,dotpos+1);
  end if;
  select relhasoids into oidcheck from pg_class,pg_namespace where pg_class.relnamespace=pg_namespace.oid and pg_namespace.nspname=schemaname and pg_class.relname=tablename and pg_class.relkind='r';
   if not found then
     tabledef:='Table Does not exists!';
     return tabledef;
   end if;
  tabledef:= 'CREATE TABLE '|| schemaname||'.'||tablename;
   for rec in SELECT a.attname as columnname ,pg_catalog.format_type(a.atttypid, a.atttypmod) as coltype, (SELECT substring(pg_catalog.pg_get_expr(d.adbin, d.adrelid) for 128) FROM pg_catalog.pg_attrdef d WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND a.atthasdef),a.attnotnull as notnull, a.attnum FROM pg_catalog.pg_attribute a WHERE a.attrelid = (select pg_class.oid from pg_class,pg_namespace where relname=tablename and pg_class.relnamespace=pg_namespace.oid and pg_namespace.nspname=schemaname) AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum  
  loop
      if prevcol is null  then
 tabledef:=tabledef||' (';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
      elsif notnull1 then
        tabledef:=tabledef||' '||E'\n'||prevcol||' '||coltype||' NOT NULL ,';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
     else
        tabledef:=tabledef||' '||E'\n'||prevcol||' '||coltype||' ,';
        prevcol:=rec.columnname;
        coltype:=rec.coltype;
        notnull1:=rec.notnull;
     end if;
   end loop;
      if oidcheck = true and notnull1 = true then
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' NOT NULL ) WITH OIDS;';
      elsif oidcheck = true and notnull1 = false then
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' ) WITH OIDS;';
      elsif oidcheck=false and notnull1=true then
        tabledef:=tabledef||E'\n'|| prevcol||' '||coltype||' NOT NULL ) WITHOUT OIDS;';
      else
        tabledef:=tabledef||E'\n'||prevcol||' '||coltype||' ) WITHOUT OIDS;';
      end if;
   
   return tabledef;
   end;
$$ language plpgsql;   

Sample Output is given below:

postgres=# select pg_get_tabledef('public.test_def');
         pg_get_tabledef         
---------------------------------
 CREATE TABLE public.test_def ( +
 id numeric(9,0) NOT NULL ,     +
 a text ) WITHOUT OIDS;
(1 row)

postgres=# select pg_get_tabledef('public.test_des');
    pg_get_tabledef     
------------------------
 Table Does not exists!
(1 row)
Note: the function does not give information about constraints defined on the table, nor index definitions. However, if somebody needs that, the function can be modified to produce it.
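If constraint and index definitions are also needed, they can be generated separately from the catalogs; for example, using the sample table above:
SELECT conname, pg_get_constraintdef(oid) AS condef FROM pg_constraint WHERE conrelid = 'public.test_def'::regclass;
SELECT indexdef FROM pg_indexes WHERE schemaname = 'public' AND tablename = 'test_def';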