Sunday, October 7, 2012

New in Postgres Plus Advanced Server 9.2

Good news: the Postgres Plus Advanced Server 9.2 Beta is now available. It includes all of the new features of PostgreSQL 9.2, and it also has new features which are specific to Advanced Server.
I will cover the new features of PostgreSQL 9.2 in a later post, as per the release notes. In the meantime, the following link covers the major features of PostgreSQL 9.2 and can be used as a reference.
http://wiki.postgresql.org/wiki/What's_new_in_PostgreSQL_9.2
In this blog post, I am going to cover only the features specific to the Advanced Server 9.2 core, which are:

1. INSERT APPEND hint in PPAS 9.2
PPAS 9.2 now supports the INSERT APPEND hint. This is a very interesting feature, and it is very useful for users who frequently delete records in bulk and then do bulk INSERTs. With this hint, PPAS does not consult the Free Space Map and instead appends the rows at the end of the relation (table), which can speed up bulk INSERTs. Its usage is given below:
INSERT /*+append*/ INTO tab21115 VALUES(1,'abc',sysdate);
2. Procedures callable like functions.
PPAS 9.2 now allows calling a procedure with the following syntax:
SELECT * FROM procedure(arg1,arg2,…);
This type of procedure call is allowed for procedures that have OUT/INOUT parameters, in addition to the existing EXEC command.

Let's see how it works:

a. Create a Procedure as Given below:
CREATE OR REPLACE PROCEDURE foo_proc(A IN INT, B INOUT INT, C OUT INT)
AS
BEGIN
   b:=a+b;
   c:=b+1;
END;
Up to 9.1, and still in 9.2, a user can do something like the following:
DECLARE
  d int:=1;
  e int:=2;
  f int;
BEGIN
   foo_proc(d,e,f);
   DBMS_OUTPUT.PUT_LINE('e = '||e);
   DBMS_OUTPUT.PUT_LINE('d = '||d);
   DBMS_OUTPUT.PUT_LINE('f= '||f);
END;
e = 3
d = 1
f= 4
Now in 9.2, user can also do this:
edb=# select * from foo_proc(1,2);
 b | c 
---+---
 3 | 4
(1 row)
If the user has a procedure and a function with the same name and wants to execute the procedure using the SELECT command, then SELECT gives preference to the function over the procedure, as shown below:
Function Definition:
CREATE OR REPLACE FUNCTION foo(a INT) RETURN INT
AS
BEGIN
  RETURN 2;
END;

CREATE PROCEDURE foo(A int, B OUT int)
AS
BEGIN
   B:=a+1;
END;

edb=# select * from foo(4);
 foo 
-----
   2
(1 row)
However, the user can still use the EXEC command to execute the procedure, as shown below:
edb=# exec foo(4);
 ?column? 
----------
        5
(1 row)

Or, in an anonymous block, PERFORM will also work with a procedure, as shown below:

edb=# DECLARE
edb-#     a int:=4;
edb$#     b int;
edb$# BEGIN
edb$#    PERFORM foo(a,b);
edb$#    DBMS_OUTPUT.PUT_LINE('b = '||b);
edb$# END;
b = 5

EDB-SPL Procedure successfully completed
3. Object Type Support:
PPAS already has object type support. However, the new version comes with enhancements to that support. In the new version, users are able to create object types with attributes, member functions, and member procedures. This reduces the coding effort needed to define and maintain object types.

Example is given below:

a. Define a Type:
    CREATE OR REPLACE TYPE PersonObj AS OBJECT (
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  date_of_birth  DATE,
  MEMBER FUNCTION getAge RETURN NUMBER
);
b. Define a Type Body:
CREATE OR REPLACE TYPE BODY PersonObj AS
  MEMBER FUNCTION getAge RETURN NUMBER AS
  BEGIN
    RETURN Trunc(Months_Between(Sysdate, date_of_birth)/12);
  END getAge;
END;
c. Define a table that uses the type:
CREATE TABLE people (
  id      NUMBER(10) NOT NULL,
  person  PersonObj
);
d. To insert data, the default constructor can be used as given below:
INSERT INTO people
VALUES (1, PersonObj('John','Doe',
        TO_DATE('01/01/1999','DD/MM/YYYY')));
e. The data in the table can then be accessed as follows:
SELECT p.id,
       (p.person).first_name,
       p.person.getAge() age
FROM   people p;
 id | first_name | age 
----+------------+-----
  1 | John       |  13
(1 row)
4. PL/SQL Subtypes:
A subtype is a TYPE defined on a base type or user-defined type, optionally with constraints. The syntax for declaring a SUBTYPE in PL/SQL is given below:
SUBTYPE subtype_name IS base_type [(constraint)] [NOT NULL];
Example is given below:
Example 1:
DECLARE
  SUBTYPE INT2 IS NUMBER;
  v2 INT2 (3,0);
BEGIN
  v2:= 123;
  DBMS_OUTPUT.PUT_LINE ('V2 = '|| v2);
END;
Example 2:
DECLARE
  SUBTYPE v_word IS VARCHAR2(10) NOT NULL;
  verb  v_word  := 'verb';
  noun  v_word  := 'noun';
BEGIN
  noun :='n1';
  DBMS_OUTPUT.PUT_LINE (UPPER(verb));
  DBMS_OUTPUT.PUT_LINE (UPPER(noun));
END;
5. DROP TABLE CASCADE CONSTRAINTS in 9.2:
PPAS 9.2 now supports DROP TABLE CASCADE CONSTRAINTS. This is a very interesting feature that is not available in PostgreSQL 9.2.

Using this command, a user can drop all referential integrity constraints that refer to primary and unique keys in the dropped table. If this clause is omitted and such referential integrity constraints exist, the database returns an error and does not drop the table.

This is useful when somebody wants to drop a master table but keep the child table that has a foreign key constraint referencing the master.

An example is given below:

CREATE TABLE master(id numeric primary key);
CREATE table child(id numeric references master(id));
insert into master values(1);
insert into child values(1);
edb=# drop table master cascade constraints;
NOTICE:  drop cascades to constraint child_id_fkey on table child
DROP TABLE
edb=# \d child
  Table "enterprisedb.child"
 Column |  Type   | Modifiers 
--------+---------+-----------
 id     | numeric | 
As you can see above, I have dropped the master table on which the child table has a referential integrity constraint, yet the child table still exists. PostgreSQL 9.2 doesn't have CASCADE CONSTRAINTS, so the user first has to drop the constraints on every table that references the master table's primary key and then drop the master table. That process requires finding all tables with referential integrity constraints that refer to the master table. This command saves DBAs/users from having to track down the child tables.

6. Multiple errors reported as warnings when compiling procedures/functions in SPL.
This is also a very interesting enhancement for SPL. Whenever a user compiles a function/procedure in SPL, it now reports all the error messages in that function/procedure body, so the user doesn't have to revisit the code repeatedly to fix one issue at a time. An example is given below:
CREATE OR REPLACE FUNCTION foo() RETURN integer
AS
BEGIN
   a:=a=1
;
B:=b + 1;
I am making fool;
END;

Output:
ERROR:  "a" is not a known variable
LINE 4:    a:=a=1
           ^
ERROR:  "b" is not a known variable
LINE 6: B:=b + 1;
        ^
ERROR:  syntax error at or near "I"
LINE 7: I am making fool;
        ^
ERROR:  compilation of SPL function/procedure "foo" failed due to 3 errors
As you can see above, it reported all the mistakes I made in the function body, so I can fix all of them in one go instead of compiling again and again.
This is very useful for Developers/DBAs.

7. DEFAULT for variables declared in packages:
Now, in PPAS 9.2, a user can define a DEFAULT value for a package variable, as given below:
CREATE OR REPLACE PACKAGE INITPKG_pro_b4_default AS
PROCEDURE show_default_values;
 n1 NUMBER DEFAULT 20;
 c1 VARCHAR2 DEFAULT 'Default';
END;

CREATE OR REPLACE PACKAGE BODY INITPKG_pro_b4_default AS
 PROCEDURE show_default_values IS
  n number;
 BEGIN
  dbms_output.put_line(c1);
  dbms_output.put_line(n1);
  n1 := n1*n1;
  n1 := SQRT(n1);
  dbms_output.put_line(n1);
 END;
END;

edb=# exec INITPKG_pro_b4_default.show_default_values;
Default
20
20.000000000000000

EDB-SPL Procedure successfully completed

8. TABLE expression support for nested tables.
Now, in 9.2, users can use TABLE expressions for nested tables. This feature was missing and was requested by many Oracle DBAs. A TABLE expression allows the user to query a collection in the FROM clause like a table.
edb=# CREATE OR REPLACE TYPE comp_typ_str_21189 IS TABLE OF VARCHAR2(100);
CREATE TYPE
edb=# 
edb=# select * from table(comp_typ_str_21189('PPP','QQQ')) ;
 column_value 
--------------
 PPP
 QQQ
(2 rows)

9. INPUT/OUTPUT functions for nested tables:
PPAS 9.2 supports user-defined input/output functions for nested tables. This feature was missing in 9.1.
Some examples are given below:
edb=# create or replace type mytab as table of varchar2(90);
CREATE TYPE
edb=# Create or replace function fun return mytab
edb-# as
edb$# begin
edb$#  return mytab('a','b',3);
edb$# end;
CREATE FUNCTION
edb=# select fun;
   fun   
----------
 {a,b,3}
(1 row)

edb=# 
edb=# create or replace function fun return mytab
edb-# as
edb$#  nvar mytab;
edb$# begin
edb$#  nvar := mytab();
edb$#  nvar.extend(4);
edb$#  nvar(1) := 'foo';
edb$#  nvar(2) := NULL;
edb$#  nvar(3) := 'deleteme';
edb$#  nvar(4) := 'bar';
edb$#  return nvar;
edb$# end;
CREATE FUNCTION
edb=# --Verify the user is able to see the data
edb=# select fun;
           fun           
-------------------------
 {foo,NULL,deleteme,bar}
(1 row)

10. LOG is no longer a reserved keyword for functions.

Up to PPAS 9.1, LOG was a reserved keyword and users were not allowed to create functions with this name. In 9.2, users can use it.
edb=# CREATE OR REPLACE FUNCTION log( a int ) return int as
edb$# BEGIN
edb$# dbms_output.put_line('Function LOG is called');
edb$# return a*2;
edb$# END;
CREATE FUNCTION
edb=# SELECT LOG(10);
Function LOG is called
 log 
-----
  20
(1 row)
11. Variables can be named current_date.
We know that current_date is a special function which returns the current date in PostgreSQL/PPAS. Up to PPAS 9.1, users were not allowed to use this reserved name for variables. Now, in 9.2, they can, as shown below:
edb=# create table t_currentdate(current_date int);
CREATE TABLE
edb=# desc t_currentdate
    Table "enterprisedb.t_currentdate"
    Column    |  Type   | Modifiers 
--------------+---------+-----------
 current_date | integer | 

create or replace procedure proc_currentdate(current_date date) is
begin
 dbms_output.put_line(current_date);
end;

edb=# exec proc_currentdate(current_date);
05-OCT-12 00:00:00

EDB-SPL Procedure successfully completed
12. New data type STRING.
Oracle supports STRING as a data type, but it wasn't available in PPAS 9.1, so we have included it in 9.2. STRING is an alias for the VARCHAR data type. An example is given below:
edb=# CREATE TABLE test_string(col string)
edb-# ;
CREATE TABLE
edb=# insert into test_string values('As an Example');
INSERT 0 1
edb=# select * from test_string;
      col      
---------------
 As an Example
(1 row)

edb=# \d test_string
    Table "enterprisedb.test_string"
 Column |       Type        | Modifiers 
--------+-------------------+-----------
 col    | character varying | 
13. NVARCHAR2 data type support in 9.2
PPAS 9.2 allows users to use NVARCHAR2 as a data type. In PPAS 9.2, NVARCHAR2 is mapped to the VARCHAR data type. This was a frequent request from Oracle users.
Example is given below:
edb=# CREATE TABLE test_nvarchar(col NVARCHAR2(10))
edb-# ;
CREATE TABLE
edb=# \d test_nvarchar
     Table "enterprisedb.test_nvarchar"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 col    | character varying(10) | 
14. Multibyte string delimiter support in EDB*Loader.
EDB*Loader in PPAS 9.2 is more efficient at loading data than in 9.1. In some cases it is difficult to find a single-character delimiter that does not occur in the data itself, because the data contains all possible characters. To load such data files, users can now define their own multi-character string delimiter. An example is given below:
Data File: data.log:
1$$abc
2$$ccc
3$$dddddddd

Control FIle:
LOAD DATA
INFILE '/tmp/data.log'
BADFILE '/tmp/data.bad'
truncate INTO table edb_loader
FIELDS TERMINATED BY '$$' optionally enclosed by '"'
(id , col )

-bash-4.1$ edbldr userid=enterprisedb/ control=/tmp/edb.ctl log=/tmp/edb.log
EDB*Loader: Copyright (c) 2007-2012, EnterpriseDB Corporation.

Successfully processed (3) records
-bash-4.1$ psql
psql (9.2.0.1)
Type "help" for help.

edb=# select * from edb_loader ;
 id |   col    
----+----------
  1 | abc
  2 | ccc
  3 | dddddddd
(3 rows)

Sunday, July 29, 2012

List user privileges in PostgreSQL/PPAS 9.1

PostgreSQL has some useful functions which can be used to find out the privileges of a user on a particular database object. Those functions are documented at the following link:
http://www.postgresql.org/docs/9.1/static/functions-info.html

The has_*_privilege functions in PostgreSQL/PPAS are good for checking the privilege a user has on a single database object; they return a boolean value, true or false.

DBAs/users are often interested in listing the objects and privileges of a database user, and currently PostgreSQL doesn't have a view that a DBA can use to list a user's privileges on objects in a particular database. Therefore, I wrote some functions that can be used to list a user's privileges, based on what is available in PostgreSQL/PPAS 9.1. These are basic functions and can be extended, as needed, to show more detail such as WITH GRANT OPTION.

The following functions can be used to get the privileges of a particular user:

1. Function for table privileges:
CREATE OR REPLACE FUNCTION table_privs(text) RETURNS table(username text, relname regclass, privs text[])
AS
$$
SELECT  $1,c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_table_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'INSERT') THEN 'INSERT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'DELETE') THEN 'DELETE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRUNCATE') THEN 'TRUNCATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'REFERENCES') THEN 'REFERENCES' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRIGGER') THEN 'TRIGGER' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='r' and
has_table_privilege($1,c.oid,'SELECT,INSERT,UPDATE,DELETE,TRUNCATE,REFERENCES,TRIGGER') AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;

Example:
edb=# select * from table_privs('test_user');
 username  | relname |         privs          
-----------+---------+------------------------
 test_user | test_id | {SELECT,UPDATE,DELETE}
(1 row)

2. Database privileges:
CREATE OR REPLACE FUNCTION database_privs(text) RETURNS table(username text,dbname name,privileges  text[])
AS
$$
SELECT $1, datname, array(select privs from unnest(ARRAY[
( CASE WHEN has_database_privilege($1,c.oid,'CONNECT') THEN 'CONNECT' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'TEMPORARY') THEN 'TEMPORARY' ELSE NULL END),
(CASE WHEN has_database_privilege($1,c.oid,'TEMP') THEN 'TEMP' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL) FROM pg_database c WHERE 
has_database_privilege($1,c.oid,'CONNECT,CREATE,TEMPORARY,TEMP') AND datname <> 'template0';
$$ language sql;

Example:
edb=# select * from database_privs('test_user');
 username  |  dbname   |         privileges          
-----------+-----------+-----------------------------
 test_user | template1 | {CONNECT}
 test_user | edb       | {CONNECT,TEMPORARY,TEMP}
(2 rows)

3. Tablespace privileges:
CREATE OR REPLACE FUNCTION tablespace_privs(text) RETURNS table(username text,spcname name,privileges text[])
AS
$$
   SELECT $1, spcname, ARRAY[
(CASE WHEN has_tablespace_privilege($1,spcname,'CREATE') THEN 'CREATE' ELSE NULL END)] FROM pg_tablespace WHERE has_tablespace_privilege($1,spcname,'CREATE');
$$ language sql;
Example:
edb=# select * from tablespace_privs('test_user');
 username  | spcname | privileges 
-----------+---------+------------
 test_user | test    | {CREATE}
(1 row)

4. Foreign Data Wrapper privileges:
CREATE OR REPLACE FUNCTION fdw_wrapper_privs(text) RETURNS table(username text,fdwname name,privileges text[])
AS
$$
  SELECT $1, fdwname, ARRAY[
(CASE WHEN has_foreign_data_wrapper_privilege($1,fdwname,'USAGE') THEN 'USAGE' ELSE NULL END)] FROM pg_catalog.pg_foreign_data_wrapper WHERE has_foreign_data_wrapper_privilege($1,fdwname,'USAGE');
$$ language sql;

Example:
edb=# select * from fdw_wrapper_privs('test_user');
 username  |   fdwname    | privileges 
-----------+--------------+-----------
 test_user | libpq_dblink | {USAGE}
(1 row)

5. To find foreign server privileges, the following function can be used:
CREATE OR REPLACE FUNCTION foreign_server_privs(text) RETURNS table(username text, srvname name, privileges text[])
AS
$$
  SELECT $1, s.srvname ,  ARRAY[
(CASE WHEN has_server_privilege($1,srvname,'USAGE') THEN 'USAGE' ELSE NULL END)] from pg_catalog.pg_foreign_server s  WHERE has_server_privilege ($1,srvname,'USAGE');
$$
language sql;
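Usage follows the same pattern as the earlier functions; assuming a role named test_user and at least one foreign server defined, a call would look like the sketch below (the output depends entirely on the foreign servers and grants in your database):

```sql
-- List foreign servers on which test_user has USAGE
SELECT * FROM foreign_server_privs('test_user');
```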

6. To find language privileges, the following function can be used:
CREATE OR REPLACE FUNCTION language_privs(text) RETURNS table(username text, lanname name, privileges text[])
AS
$$
SELECT $1, l.lanname, ARRAY[(CASE WHEN has_language_privilege($1,lanname,'USAGE') THEN 'USAGE' ELSE NULL END)] FROM pg_catalog.pg_language l where has_language_privilege($1,lanname,'USAGE');
$$ language sql;
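Usage mirrors the other functions; for example, to list the procedural languages the role test_user can use (the languages shown will vary by installation):

```sql
-- Languages (e.g. plpgsql, edbspl) on which test_user has USAGE
SELECT * FROM language_privs('test_user');
```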

7. To find the schema privileges of a user, the following function can be used:
CREATE OR REPLACE FUNCTION schema_privs(text) RETURNS table(username text, schemaname name, privileges text[])
AS
$$
  SELECT $1, c.nspname, array(select privs from unnest(ARRAY[
( CASE WHEN has_schema_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_schema_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL)
FROM pg_namespace c where has_schema_privilege($1,c.oid,'CREATE,USAGE') AND c.nspparent=0;
$$ language sql;


Note: The above function can be used in Advanced Server. For schema privileges in PostgreSQL, the user can use the following function:
CREATE OR REPLACE FUNCTION schema_privs(text) RETURNS table(username text, schemaname name, privileges text[])
AS
$$
  SELECT $1, c.nspname, array(select privs from unnest(ARRAY[
( CASE WHEN has_schema_privilege($1,c.oid,'CREATE') THEN 'CREATE' ELSE NULL END),
(CASE WHEN has_schema_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)])foo(privs) WHERE privs IS NOT NULL)
FROM pg_namespace c where has_schema_privilege($1,c.oid,'CREATE,USAGE');
$$ language sql;
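Usage of the PostgreSQL variant is identical; assuming a role named test_user, a call would look like the following (the schemas listed depend on the grants in your database):

```sql
-- Schemas on which test_user has CREATE and/or USAGE
SELECT * FROM schema_privs('test_user');
```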

8. To get the privileges of a user on views, the following function can be used:
CREATE OR REPLACE FUNCTION view_privs(text) returns table(username text, viewname regclass, privileges text[])
AS
$$
SELECT  $1, c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_table_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'INSERT') THEN 'INSERT' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'DELETE') THEN 'DELETE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRUNCATE') THEN 'TRUNCATE' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'REFERENCES') THEN 'REFERENCES' ELSE NULL END),
(CASE WHEN has_table_privilege($1,c.oid,'TRIGGER') THEN 'TRIGGER' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='v' and has_table_privilege($1,c.oid,'SELECT,INSERT,UPDATE,DELETE,TRUNCATE,REFERENCES,TRIGGER') AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;
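This can be called the same way as table_privs; assuming test_user has been granted some privilege on at least one view, a call would look like:

```sql
-- Views on which test_user holds any privilege, with the privilege list
SELECT * FROM view_privs('test_user');
```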

9. For sequence privileges, the following function can be used (note that sequences have SELECT, UPDATE, and USAGE privileges, checked via has_sequence_privilege):
CREATE OR REPLACE FUNCTION sequence_privs(text) RETURNS table(username text, sequence regclass, privileges text[])
AS
$$
  SELECT $1, c.oid::regclass, array(select privs from unnest(ARRAY [ 
( CASE WHEN has_sequence_privilege($1,c.oid,'SELECT') THEN 'SELECT' ELSE NULL END),
(CASE WHEN has_sequence_privilege($1,c.oid,'UPDATE') THEN 'UPDATE' ELSE NULL END),
(CASE WHEN has_sequence_privilege($1,c.oid,'USAGE') THEN 'USAGE' ELSE NULL END)]) foo(privs) where privs is not null) FROM pg_class c JOIN pg_namespace n on c.relnamespace=n.oid where n.nspname not in ('information_schema','pg_catalog','sys') and nspparent=0 and c.relkind='S' and 
has_sequence_privilege($1,c.oid,'SELECT,UPDATE,USAGE')  AND has_schema_privilege($1,c.relnamespace,'USAGE')
$$ language sql;
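A quick sketch of how this might be exercised, assuming an illustrative sequence and grant such as the ones below (names are hypothetical):

```sql
-- Hypothetical setup: a sequence with USAGE and SELECT granted to test_user
CREATE SEQUENCE test_seq;
GRANT USAGE, SELECT ON SEQUENCE test_seq TO test_user;

-- test_seq should then appear in the result with its privilege list
SELECT * FROM sequence_privs('test_user');
```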

Users can also combine the above functions to view the privileges on several object types at once, something like the following:
select * from ( 
select username,'SCHEMA' as object_type,schemaname as object_name,privileges 
    FROM schema_privs('test_user') 
 UNION ALL
SELECT username,'TABLE' as object_type ,relname::name as object_name ,privs 
    FROM table_privs('test_user')
 ) order by 2;

The above gives the table and schema privileges of the user test_user. The output is below:
edb=# select * from  
edb-# (select username,'SCHEMA' as object_type,schemaname as object_name,privileges FROM schema_privs('test_user') 
edb(#  UNION ALL
edb(# SELECT username,'TABLE' as object_type ,relname::name as object_name ,privs  FROM table_privs('test_user')
edb(# ) order by 2;

 username  | object_type |    object_name     |      privileges       
-----------+-------------+--------------------+------------------------
 test_user | SCHEMA      | pg_catalog         | {USAGE}
 test_user | SCHEMA      | public             | {CREATE,USAGE}
 test_user | SCHEMA      | information_schema | {USAGE}
 test_user | SCHEMA      | sys                | {USAGE}
 test_user | SCHEMA      | dbo                | {USAGE}
 test_user | SCHEMA      | test               | {USAGE}
 test_user | TABLE       | test_id            | {SELECT,UPDATE,DELETE}
(7 rows)

Monday, July 23, 2012

Monitor CPU and MEMORY percentage used by each process in PostgreSQL/PPAS 9.1

PostgreSQL has the pg_stat_activity view, which can be used to get session details. This view gives the following information:
1. datid: database OID
2. datname: database name
3. procpid: process ID
4. usesysid: user OID 
5. usename: user name
6. application_name: application name
7. client_addr: client's address
8. client_hostname: host name (if available)
9. client_port: Clients port number
10. backend_start: time at which the server process started
11. xact_start: time at which current transaction started
12. query_start: time at which the current query began execution
13. waiting: process's waiting status
14. current_query: text of the current query.
The above is good for monitoring sessions in PostgreSQL. However, suppose a user wants to know the percentage of CPU and memory used by a particular session/user; this information can be obtained using a plperlu function.

To create the plperlu function, the plperlu language must be installed in the database. To create it, connect to the database as a superuser and execute the following command:
edb=# CREATE EXTENSION plperlu;
CREATE EXTENSION

Now create following functions:
CREATE OR REPLACE FUNCTION get_pid_cpu_mem(int) returns table(PID INT,CPU_perc float,MEM_perc float) 
as
$$
  # Filter "ps aux" output down to the requested pid and print pid:cpu%:mem%
  my $ps = "ps aux";
  my $awk = "awk '{if (\$2==".$_[0]."){print \$2\":\"\$3\":\"\$4}}'";
  my $cmd = $ps."|".$awk;
  my $output = `$cmd 2>&1`;
  my @output = split(/[\n\r]+/,$output);
  foreach my $out (@output)
  { 
    my @line = split(/:/,$out);
    return_next({'pid' => $line[0],'cpu_perc' => $line[1], 'mem_perc' => $line[2]});
  }
  return;
$$ language plperlu;
Note: The above function is written for PostgreSQL/PPAS running on a Linux/Unix system.

Now the user can combine the above function with pg_stat_activity to monitor the percentage of CPU and memory used by a particular user/process.

Following is one snapshot:
edb=# select procpid,usename,application_name, get_pid_cpu_mem(procpid).cpu_perc,get_pid_cpu_mem(procpid).mem_perc,current_query from pg_stat_activity;
-[ RECORD 1 ]----+--------------------------------------------------------------------------------------------------------------------------------------------------
procpid          | 12991
usename          | enterprisedb
application_name | psql
cpu_perc         | 0
mem_perc         | 0.6
current_query    | select procpid,usename,application_name, get_pid_cpu_mem(procpid).cpu_perc,get_pid_cpu_mem(procpid).mem_perc,current_query from pg_stat_activity;

Monday, June 11, 2012

Creating user probe and alert in PEM 2.1

This is another topic people are very interested in: how do you create probes other than the PEM built-in probes, and how do you create an alert based on a probe?
Well, the answer is simple: understand what a probe is in PEM and then understand the PEM data details.
A probe in PEM requires the following:
1. SQL code which pemagent can use for gathering data.
2. A table in the pemdata schema, which will be used for storing the current status/data of the SQL.
3. A history table (in the pemhistory schema) where all historical data will reside.

Suppose a user wants to monitor Slony replication using PEM (since PEM doesn't have built-in Slony replication monitoring). The user can do the following:
1. Write SQL code which can be used for Slony replication monitoring.
We know that Slony replication monitoring can be done using the Slony view sl_status. So, the user can write SQL code like the following:
select '_test_slony' as schema_name, st_origin,st_received, st_lag_num_events , st_lag_time from _test_slony.sl_status
Now, you must be wondering why I included the Slony schema name in the SQL code. The reason is that the schema name represents the Slony cluster name, so if the user has more than one Slony replication cluster in the database, those can be monitored with a small modification of the SQL code. The PEM data table and PEM history table can likewise be used for monitoring multiple Slony replication clusters.

Let's start with creating a probe based on the SQL code. The standard way of creating a probe in Postgres Enterprise Manager:
1. Insert the details of the probe, including its SQL code for monitoring, into the probe table:
INSERT INTO probe(display_name,
internal_name,
collection_method, 
target_type_id, 
applies_to_id,
probe_code,
enabled_by_default, default_execution_frequency,default_lifetime,any_server_version,force_enabled,probe_key_list) 
VALUES('slon replication monitor','slony_replication','s',300,600,'select ''_test_slony'' as schema_name, st_origin,st_received, st_lag_num_events , st_lag_time from  _test_slony.sl_status',false,300,180,true,false,ARRAY['st_origin','st_received']);
A column description of the probe table is given below:
display_name: Name as presented in Alerting Box.
internal_name: Internal name of the probe to the PEM server. Based on this, the function pem.create_data_and_history_tables() creates the pemdata and pemhistory tables.
collection_method: whether collection is based on SQL ('s') or on internal pemagent code ('i')
target_type_id: Target type id. Type ids and their descriptions are below:
  Global  : 50
  Agent   : Agent
  Server  : 200
  Database: 300
  Schema :   400
  Table :   500
  Index :  600
  Sequence: 700
  Function: 800
If the probe is for the database level, then target_type_id would be 300.
applies_to_id: the type id the probe applies to, i.e. a user can make a database-level probe which can also apply at the table level (e.g. IndexSize can be a database-level or a table-level alert).
probe_code: SQL code/function to collect data
enabled_by_default: if true, the probe will be enabled for all pemagent servers
default_execution_frequency: Interval
default_lifetime: Data retention period
any_server_version: Is it PPAS specific alert or PG/PPAS both (true/false)
force_enabled: Enabled by default (true/false).
probe_key_list: Key columns

2. Insert the details of the data columns returned by the SQL code into probe_column, as given below:
INSERT INTO pem.probe_column (probe_id, internal_name, display_name, display_position, classification,
       sql_data_type, unit_of_value, calculate_pit, discard_history, pit_by_default, is_graphable)

SELECT 
   (SELECT max(id) FROM PEM.probe), v.internal_name, v.display_name, v.display_position, v.classification,
       v.sql_data_type, v.unit_of_value, v.calculate_pit, v.discard_history, v.pit_by_default, v.is_graphable FROM (
    VALUES('database_name','Database Name',1,'k','text','',false,false,false,false),
    ('schema_name','Slony Cluster',2,'k','text','',false,false,false,false),
    ('st_origin','Master Node ID',3,'k','INTEGER','',false,false,false,false),
    ('st_received','Slave Node ID',4,'k','INTEGER','',false,false,false,false),
    ('st_lag_num_events','Lag Events',5,'m','INTEGER','',false,false,false,false),
    ('st_lag_time','Lag Interval',6,'m','INTERVAL','',false,false,false,false)
)
v(internal_name, display_name, display_position, classification,
               sql_data_type, unit_of_value, calculate_pit, discard_history, pit_by_default, is_graphable);
A description of the columns is given below:
probe_id : Probe id assigned to the column (the max(id) of the pem.probe table).
internal_name : Internal name to PEM server.
display_name : Column Display name to user.
display_position : Position of column in SQL Code.
classification : If the column is considered a key, i.e. a record can be identified based on it, then the value should be 'k', else 'm'.
sql_data_type : Data type of Column
unit_of_value : Unit of data of column.
calculate_pit : Point in time data
discard_history : Discard any history.
pit_by_default : Default Point in time representation true/false
is_graphable : Can be used for a graph (always keep false).

3. Now use the PEM server function to create the data and history tables in the pemdata and pemhistory schemas of the PEM server.
SELECT pem.create_data_and_history_tables();
The above will create tables named after the internal_name mentioned in probe_column, i.e. pemdata.slony_replication and pemhistory.slony_replication. 

Those are the three simple steps to create a probe in the PEM server. As you can see, if you know SQL, you can create a probe in PEM. To verify the probe, you can view it in the PEM Client:
Open PEM Client -> go to the PEM Server directory (in the left pane) -> connect to the PostgreSQL cluster -> expand the databases -> right-click on any database -> select Probe Configuration.
This will open the "Probe configuration" dialog, where you should be able to see the Slony replication probe.

Now that we have created the probe, we can create a template alert based on it.
Creating a template alert is simple. For a new template alert, you have to do the following:
1. Identify the right SQL code for monitoring based on the probe, as given below for Slony replication:
SELECT extract ('epoch' from st_lag_time) from  pemdata.slony_replication WHERE server_id=${server_id} AND database_name='${database_name}' AND st_origin='${param_1}' AND st_received='${param_2}' and st_lag_time >=interval '1 sec' AND schema_name='_test_slony'
Above, I am using the epoch extraction to express the lag in seconds, so that the alert can have threshold values for Low, Medium, and High severity.

2. Use the PEM server function pem.create_alert_template() to create the template alert, as given below:
SELECT pem.create_alert_template('Slony Replication','Slony Replication Monitoring Alert',$SQL$ SELECT extract ('epoch' from st_lag_time) from  pemdata.slony_replication WHERE server_id=${server_id} AND database_name='${database_name}' AND st_origin='${param_1}' AND st_received='${param_2}' and st_lag_time >=interval '1 sec' AND schema_name='_test_slony' $SQL$,300,ARRAY['st_origin','st_received'],ARRAY['INTEGER','INTEGER']::alert_param_type[],NULL,'seconds',ARRAY['slony_replication'],2);

A description of the arguments of the pem.create_alert_template() function is given below:
1. name : Name of the alert in single quotes
2. description : Short description of the alert
3. sql : SQL code for getting the threshold value
4. object_type : Level of the alert (Agent Level, Server Level, Database Level, etc.)
5. param_names : Parameter names the SQL code depends on (in my SQL code, '${param_1}' (origin node id) and '${param_2}' (subscriber node id))
6. param_types : Data type of each parameter (INTEGER, INTEGER in the Slony example)
7. param_units : Unit assigned to each parameter (NULL in the Slony example)
8. threshold_unit : Unit of the threshold (since I am checking the time lag, it is seconds)
9. probe_dependency_list : Probes whose pemdata tables the alert depends on (here, slony_replication)
10. snmp_oid : OID for SNMP
11. applicable_on_server : Whether this is applicable to Advanced Server only or ALL
12. default_check_frequency : Interval
13. default_history_retention : Data retention in number of days.

That's it! As you can see, creating a new user-defined alert and probe in PEM is very simple. The only thing the user has to focus on is identifying the right SQL code for the alert/probe.

Have fun with Postgres Enterprise Manager!

Monday, June 4, 2012

Automating Binding of Servers with PEM Agent in Postgres Enterprise Manager 2.1

This is the second post of the Postgres Enterprise Manager 2.1 series.

A question people often ask: is Postgres Enterprise Manager useful for companies which provide services for PostgreSQL? The answer is yes. What about companies which provide infrastructure support, supplying the server and the PostgreSQL database: is it a good tool for them to monitor PostgreSQL? Again, yes.

Companies which give infrastructure support and provide Database as a Service want everything automated: when provisioning a server, it should come with PostgreSQL and its components installed, plus an automatic installation of pemagent, which is also achievable. However, they often get stumped on automatic binding of the PEM Agent with PostgreSQL.

The simple method of binding PostgreSQL with a PEM Agent is the PEM Client: add the PostgreSQL cluster in the PEM directory, then bind the PEM agent to it:
File -> Add Server -> this pops up the "New Server Registration" screen -> add the server information (also click on "Store PEM Server"),
then in the "New Server Registration" window ->
click on the "PEM Agent" tab -> choose an agent from the "Bound Agent" drop-down and fill in the remaining information.
The above is one method. However, a company providing infrastructure support doesn't want to do this manual work; it wants the activity automated. To automate this process, the user can do the following:

1. Install the PEM agent in unattended mode, using a command like the following:
./pem_agent-2.1.0.2-linux-x64.run --prefix /opt/PEM --mode unattended --pghost 'pem server hostname' --pguser 'pem server username'  --pgpassword 'password' --agent_description 'Agent Hostname'
2. Create a .pgpass file in the root/admin user's home directory (on Linux, root's home directory is /root):
   
# cat .pgpass
 #hostname:port:database:username:password
 *:5432:*:postgres:'your postgres password'
Change the permission on .pgpass:
  
 chmod 600 .pgpass
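Steps like the above can be scripted for provisioning; a minimal sketch is below (the password "secret" is a placeholder, and note that libpq ignores .pgpass unless its mode is 0600):

```shell
# Write ~/.pgpass for non-interactive authentication, then lock down its permissions.
PGPASSFILE="$HOME/.pgpass"
cat > "$PGPASSFILE" <<'EOF'
#hostname:port:database:username:password
*:5432:*:postgres:secret
EOF
chmod 600 "$PGPASSFILE"   # required: libpq skips the file if group/other can read it
```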
3. After creating the .pgpass file in the home directory, execute the following SQL statements:
/opt/PEM/agent/bin/edb-psql -h 'PEM Server hostname' -p 5432 -c "
INSERT INTO pem.server(description,server,port,database) VALUES('New server','IP Address',5432,'postgres');" -U postgres

/opt/PEM/agent/bin/edb-psql -h 'PEM Server hostname' -p 5432 -c "INSERT INTO pem.server_option(server_id,pem_user,username,server_group) VALUES((SELECT id FROM pem.server WHERE server='IP address' AND active=true),'postgres','postgres','PEM Server Directory');" -U postgres

/opt/PEM/agent/bin/edb-psql -h 'PEM Server hostname' -p 5432 -c "INSERT INTO pem.agent_server_binding(agent_id,server_id,server,port,username,database)
VALUES((SELECT id FROM pem.agent WHERE description ='Agent Hostname' AND active=true),(SELECT id FROM pem.server WHERE server='IP address' AND active=true),'IP address',5432,'postgres','postgres');" -U postgres
After executing the above SQL statements, you have successfully bound the agent with the server for monitoring.
The user can also put the above steps in a shell script and execute it while provisioning a new server for a client.

Have Fun!

Saturday, June 2, 2012

Types of Alerts in Postgres Enterprise Manager: 2.1.

It's been two months since I last blogged, so I thought I'd share some information on PEM.

This post is for users who look for the types of in-built alerts in PEM. In this blog, I am listing the categories of alerts, and the alerts in each category, which one can find in Postgres Enterprise Manager 2.1.
With this list in hand, you can decide which alerts you would like to configure.

Postgres Enterprise Manager (PEM) is a very useful tool for monitoring PostgreSQL, and it has all the alerts required for monitoring PostgreSQL. These alerts fall into the following categories:
1. Server Level Alerts.
2. PG Cluster Level Alerts.
3. Database Level Alerts.
4. Schema Level Alerts
5. Table Level Alerts.
All of these alerts have been defined with PostgreSQL in mind. Let's see the alerts in each category:
1. Server Level Alerts: These alerts monitor server components like memory, disk and CPU, plus server-level monitoring of PostgreSQL:
 1. Average table bloat on host (Bloats impact CPU and Disk)
 2. CPU utilization
 3. Database size on host
 4. Disk Available
 5. Disk busy percentage
 6. Disk Consumption
 7. Disk consumption percentage
 8. Free memory percentage
 9. Highest table bloat on host 
 10. Load Average (15 minutes)
 11. Load Average (1 minute)
 12. Load Average (5 minutes)
 13. Load Average per CPU Core (15 minutes)
 14. Load Average per CPU Core (1 minutes)
 15. Load Average per CPU Core (5 minutes)
 16. Memory used percentage
 17. Most used disk percentage
 18. Number of CPUs running higher than a threshold
 19. Swap consumption
 20. Swap consumption percentage
 21. Table size on host
 22. Total table bloat on host
As you can see, the above alerts cover server monitoring, including memory, disk, CPU, and bloat and the biggest tables on the host.


2. PG Cluster Level Alerts: These alerts monitor PostgreSQL at the cluster level, i.e. number of connections, database-level stats, user monitoring, etc. Following is the list of PG Cluster Level alerts:
 1. A user expires in N days
 2. Average table bloat in server
 3. Buffers allocated per second
 4. Buffers written per second
 5. Committed transactions percentage
 6. Connections in idle-in-transaction state
 7. Connections in idle-in-transaction state, as a percentage of max_connections
 8. Connections in idle state
 9. Database size in server
 10. Dead Tuples
 11. Dead tuples percentage
 12. Function Count
 13. Highest table bloat in server
 14. Hot update percentage
 15. Index Scans
 16. Index size as a percentage of table size
 17. InfiniteCache buffers hit percentage
 18. Largest index by table-size percentage
 19. Largest table (by multiple of unbloated size)
 20. Last Analyze
 21. Last AutoAnalyze
 22. Last AutoVacuum
 23. Last Vacuum
 24. Live Tuples
 25. Long-running autovacuums
 26. Long-running idle connections
 27. Long-running idle connections and idle transactions
 28. Long-running idle transactions
 29. Long-running queries
 30. Long-running transactions
 31. Long-running vacuums
 32. Number of prepared transactions
 33. Number of WAL files
 34. Percentage of buffers written by backends
 35. Percentage of buffers written by backends over last N minutes
 36. Percentage of buffers written by checkpoint
 37. Sequence Count
 38. Sequential Scans
 39. Shared buffers hit percentage
 40. Table Count
 41. Table size in server
 42. Total connections
 43. Total connections as percentage of max_connections
 44. Total table bloat in server
 45. Tuples deleted
 46. Tuples fetched
 47. Tuples hot updated
 48. Tuples inserted
 49. Tuples returned
 50. Tuples updated
 51. Ungranted locks
 52. Unused, non-superuser connections
 53. Unused, non-superuser connections as percentage of max_connections


3. Database Level Alerts: These alerts monitor a specific database in the PostgreSQL cluster. This is useful when you have a database which is important for your business and whose monitoring matters to you:
 1. Average table bloat in database
 2. Committed transactions percentage
 3. Connections in idle-in-transaction state
 4. Connections in idle-in-transaction state, as a percentage of max_connections
 5. Connections in idle state
 6. Database Frozen XID
 7. Database size
 8. Dead Tuples
 9. Dead tuples percentage
 10. Function Count
 11. Highest table bloat in database
 12. Hot update percentage
 13. Index Scans
 14. Index size as a percentage of table size
 15. InfiniteCache buffers hit percentage
 16. Largest index by table-size percentage
 17. Largest table (by multiple of unbloated size)
 18. Last Analyze
 19. Last AutoAnalyze
 20. Last AutoVacuum
 21. Last Vacuum
 22. Live Tuples
 23. Long-running autovacuums
 24. Long-running idle connections
 25. Long-running idle connections and idle transactions
 26. Long-running idle transactions
 27. Long-running queries
 28. Long-running transactions
 29. Long-running vacuums
 30. Sequence Count
 31. Sequential Scans
 32. Shared buffers hit percentage
 33. Table Count
 34. Table size in database
 35. Total connections
 36. Total connections as percentage of max_connections
 37. Total table bloat in database
 38. Tuples deleted
 39. Tuples fetched
 40. Tuples hot updated
 41. Tuples inserted
 42. Tuples returned
 43. Tuples updated
 44. Ungranted locks


4. Schema Level Alerts: The user can also configure alerts for a specific schema in a database. This is important when you have a schema related to important business objects and you have to monitor the performance of the tables in that schema. The list of those alerts is given below:
 1. Average table bloat in schema
 2. Dead Tuples
 3. Dead tuples percentage
 4. Function Count
 5. Highest table bloat in schema
 6. Hot update percentage
 7. Index Scans
 8. Index size as a percentage of table size
 9. Largest index by table-size percentage
 10. Largest table (by multiple of unbloated size)
 11. Last Analyze
 12. Last AutoAnalyze
 13. Last AutoVacuum
 14. Last Vacuum
 15. Live Tuples
 16. Sequence Count
 17. Sequential Scans
 18. Table Count
 19. Table size in schema
 20. Total table bloat in schema
 21. Tuples deleted
 22. Tuples hot updated
 23. Tuples inserted
 24. Tuples updated


5. Table Level Alerts: The user can also create alerts at the table level. Sometimes users are interested in monitoring a specific table which has business importance. For them, these alerts are important for maintaining the performance of PG.
 1. Dead Tuples
 2. Dead tuples percentage
 3. Hot update percentage
 4. Index Scans
 5. Index size as a percentage of table size
 6. Last Analyze
 7. Last AutoAnalyze
 8. Last AutoVacuum
 9. Last Vacuum
 10. Live Tuples
 11. Row Count
 12. Sequential Scans
 13. Table bloat
 14. Table Frozen XID
 15. Table size
 16. Table size as a multiple of unbloated size
 17. Tuples deleted
 18. Tuples hot updated
 19. Tuples inserted
 20. Tuples updated
You can see that Postgres Enterprise Manager covers PostgreSQL monitoring from all the aspects which are directly or indirectly responsible for PostgreSQL performance.

Now, readers of my blog must be thinking: these are just the names of the alerts; how can they get more information about them? The answer is simple. You can get more detail about the above alerts by three methods:
1. Using PEM Client HELP
 Open PEM Client -> Go to Help


2. Using Alerting:
  Open PEM Client ->
   For server Level, go to -> PEM Agents -> Right click on particular agent -> click on Alerting
For PG Cluster Level,
go to -> PEM Server Directory -> Right Click on particular PG Cluster -> select Alerting 
Similarly for database alerts and table alerts.

3. Using SQL: Connect to PEM Server Database:
   psql -p 5432 -U postgres pem
And use following SQLs:
Server Level:
SELECT row_number() over (order by display_name)||'. '|| at.display_name AS display_name,description FROM pem.alert_template at WHERE (at.object_type = 100) ORDER BY at.display_name;

Cluster Level:
SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 200) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;
DB Level:
SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 300) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

Schema Level:
SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 400) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;

Table Level:
SELECT row_number() over (order by display_name)||'. '|| at.display_name, description FROM pem.alert_template at WHERE (at.object_type = 500) AND at.applicable_on_server IN ('ALL' , 'POSTGRES_SERVER') ORDER BY at.display_name;
Have fun, and have perfect monitoring of PostgreSQL!

Wednesday, March 21, 2012

READ-ONLY user,READ-ONLY Database,READ-ONLY backup user in PostgreSQL/PPAS

I have been asked this many times: how can one create a READ-ONLY user, a READ-ONLY database, or a backup user which can be used only for backups (no modifications)?

Well, the answer lies in the parameter called default_transaction_read_only.

If you want to make a user READ-ONLY, follow the steps given below:
1. Create a normal user.
2. Use the ALTER USER command to set this parameter for that user, as given below:
ALTER USER username SET default_transaction_read_only = on;
3. GRANT SELECT on the required tables/objects to this user, and you are good to go.
Below is a snapshot:
postgres=# create user readonly password 'test';
CREATE ROLE
postgres=# alter user readonly set default_transaction_read_only = on;
ALTER ROLE
postgres=# GRANT select on employees to readonly;
GRANT
edbs-MacBook-Pro:data postgres$ psql -U readonly -W
Password for user readonly: 
psql (9.1.1)
Type "help" for help.

postgres=> select * from employees ;
 employee_name | entity_name 
---------------+-------------
 Smith         | HR
 Jones         | HR
 Taylor        | SALES
 Brown         | SALES
(4 rows)

postgres=> CREATE table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
postgres=> 
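One caveat worth stating plainly: default_transaction_read_only is only a default, not a hard security boundary. The user it is set for can override it in their own session, so rely on GRANT/REVOKE for real enforcement. A sketch, continuing the "readonly" user from the snapshot above:

```sql
-- Connected as the "readonly" user created above:
SET default_transaction_read_only = off;  -- allowed; the setting is only a session default
-- From this point on, writes succeed again wherever the user's privileges permit them.
```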
Similarly, if you want to make a database READ-ONLY, you can use the following command:
ALTER DATABASE dbname SET default_transaction_read_only = on;
Below is snapshot:
postgres=# CREATE database readonly;
CREATE DATABASE
postgres=# alter database readonly set default_transaction_read_only = on;
ALTER DATABASE
postgres=# \q
edbs-MacBook-Pro:data postgres$ psql readonly
psql (9.1.1)
Type "help" for help.

readonly=# create table test_readonly(id numeric);
ERROR:  cannot execute CREATE TABLE in a read-only transaction
readonly=# 
Now, if you want a separate backup user for online/hot backups or logical backups (using pg_dump), you can create a superuser with default_transaction_read_only = on and use it for backup purposes, as given below:
CREATE USER backupuser SUPERUSER  password 'backupuser';
ALTER USER backupuser set default_transaction_read_only = on;
Using the default_transaction_read_only parameter, a user can also make a single session read-only by executing the following command:
set default_transaction_read_only=on;

I hope the above helps anyone who is interested in having a READ-ONLY user, database or backup user.

Monday, February 13, 2012

Partition Syntax Support in PPAS 9.1

In the series on new features in Advanced Server 9.1, today I thought I'd write about the partitioned table syntax supported in PPAS. In PostgreSQL, and in PPAS up to 9.0, the user has to follow the method given below to partition a table:
1. Create the parent table.
2. Create child tables using the inheritance feature.
3. Create a trigger on the parent, so that it redirects inserts to the right partition.
4. And whenever a new child table is needed, steps 2 and 3 have to be repeated.


Now, in PPAS 9.1, the user doesn't have to perform the above activities: PPAS 9.1 supports a PARTITION TABLE syntax.

Let's see how the PARTITION syntax in PPAS can make users'/DBAs' lives easier.

We know that PPAS/PostgreSQL supports two types of partitioning (range and list). So, let's see how simple it becomes with the CREATE PARTITION syntax.
First, let's create a range-partitioned table the way we used to in PPAS 9.0/PostgreSQL:
1. Create the master table and the child tables (with their check constraints), as given below:
CREATE TABLE partition_master(id numeric primary key,val text);
CREATE TABLE partition_child1(CHECK (id > 0 and id <=10)) inherits(partition_master);
CREATE TABLE partition_child2(CHECK (id >10 and id <=20)) inherits(partition_master);
2. Then add a primary key to each child table:
alter table partition_child1 add primary key (id);
alter table partition_child2 add primary key (id);
3. Then create a trigger which redirects inserts to the right child table:
CREATE OR REPLACE FUNCTION part_trig_insrt() RETURNS trigger
AS
$$
BEGIN
   IF TG_OP='INSERT' THEN
      IF NEW.id >0 and NEW.id <=10 THEN
           INSERT INTO partition_child1 VALUES(NEW.*);
      ELSIF NEW.id >10 and NEW.id <=20 THEN
           INSERT INTO partition_child2 VALUES(NEW.*);
      ELSE
          RAISE 'inserted partition key doesnt map to any partition';      
     END IF;
  END IF;
  RETURN NULL;  -- skip the insert into the parent; the row has gone to a child
END;
$$ language plpgsql;
CREATE TRIGGER partition_trig_insert BEFORE INSERT ON partition_master FOR EACH ROW execute procedure part_trig_insrt();
Similarly, you have to write a trigger which handles updates of the partition key. And whenever you want to add a new partition, you have to update the trigger function, create the new partition table, and so on. Now, from 9.1AS onwards, the user can run a single command for partitioning and PPAS takes care of everything, as given below:
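The update-handling trigger is not shown in this post; a minimal sketch is below (untested here, and the function/trigger names are mine). Note that with inheritance partitioning, row-level triggers fire on the child tables, not the parent, so it must be attached to every child, which is exactly the bookkeeping the new syntax removes:

```sql
CREATE OR REPLACE FUNCTION part_trig_updt() RETURNS trigger AS
$$
BEGIN
   -- If the partition key changed, move the row: delete it from its current
   -- child and re-insert through the parent so the INSERT trigger reroutes it.
   IF NEW.id IS DISTINCT FROM OLD.id THEN
      DELETE FROM partition_master WHERE id = OLD.id;
      INSERT INTO partition_master VALUES(NEW.*);
      RETURN NULL;  -- the row has already been moved
   END IF;
   RETURN NEW;      -- key unchanged: update in place
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER partition_child1_upd BEFORE UPDATE ON partition_child1
   FOR EACH ROW EXECUTE PROCEDURE part_trig_updt();
CREATE TRIGGER partition_child2_upd BEFORE UPDATE ON partition_child2
   FOR EACH ROW EXECUTE PROCEDURE part_trig_updt();
```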
CREATE TABLE partition_master (id numeric primary key,val text)
PARTITION BY RANGE(id)
(PARTITION partition_child1 VALUES LESS THAN (11),
PARTITION partition_child2 VALUES LESS THAN (21));

edb=# insert into partition_master values(1,'First');
INSERT 0 0
edb=# insert into partition_master values(11,'Eleventh');
INSERT 0 0
edb=# select * from partition_master;
 id |   val    
----+----------
  1 | First
 11 | Eleventh
(2 rows)

edb=# select * from only partition_master_partition_child1;
 id |  val  
----+-------
  1 | First
(1 row)

edb=# select * from partition_master_partition_child2;
 id |   val    
----+----------
 11 | Eleventh
(1 row)
That was easy: with a single command you have your partitioned table in PPAS. Suppose the user later wants to add one more partition; then he can execute the following single command:
ALTER TABLE partition_master add partition partition_child3 VALUES LESS THAN (31);

edb=# insert into partition_master values(30,'Thirty');
INSERT 0 0
edb=# select * from partition_master;
 id |   val    
----+----------
  1 | First
 11 | Eleventh
 30 | Thirty
(3 rows)

edb=# select * from partition_master_partition_child3;
 id |  val   
----+--------
 30 | Thirty
(1 row)
That's simple, isn't it? In addition, the PPAS 9.1 partition syntax allows swapping an existing table with a partition or subpartition, as given below:
CREATE TABLE single_table (id numeric primary key,val text);
insert into single_table select generate_series(1,10);
INSERT 0 10

ALTER TABLE partition_master 
EXCHANGE PARTITION partition_child1
WITH TABLE single_table;
The other supported syntaxes are given below:
1.  ALTER TABLE… ADD PARTITION
2.  ALTER TABLE… ADD SUBPARTITION
3.  ALTER TABLE… DROP PARTITION
4.  ALTER TABLE… DROP SUBPARTITION
5.  ALTER TABLE… SPLIT PARTITION
6.  ALTER TABLE… SPLIT SUBPARTITION
7.  ALTER TABLE… TRUNCATE PARTITION
8.  ALTER TABLE… TRUNCATE SUBPARTITION
9.  ALTER TABLE… EXCHANGE PARTITION
10. ALTER TABLE… MOVE PARTITION
11. ALTER TABLE… RENAME PARTITION
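As one example from the list above, a SPLIT PARTITION sketch (following the Oracle-style syntax PPAS implements; the boundary value and new partition names here are mine, continuing the earlier example):

```sql
-- Split partition_child3 (values less than 31) into two partitions at 25:
ALTER TABLE partition_master SPLIT PARTITION partition_child3
  AT (25)
  INTO (PARTITION partition_child3a, PARTITION partition_child3b);
```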
Support for the above partition syntaxes has really made management of partitioned tables easier in PPAS 9.1!

Friday, February 10, 2012

Virtual Private Database (VPD) in PPAS 9.1

The great news is that Postgres Plus Advanced Server 9.1 is now available for users. So, I thought I'd write something about the Virtual Private Database feature, which is part of 9.1.

Virtual Private Database (VPD) is a feature which enables an administrator to create security around the actual data (i.e. rows/columns) so that multiple users can access only the data relevant to them. The steps required to create a Virtual Private Database are given below:

1. Create an Application Context
2. Create security policies functions
3. Apply security policies to tables

Let's see how a user can implement it in Advanced Server.
1. Set up an environment as given below:
CREATE user merry identified by edb;
CREATE user john identified by edb;
CREATE TABLE public.john_merry(userid varchar2(200),val numeric);
grant select,update,delete,insert on john_merry to john;
grant select,update,delete,insert on john_merry to merry;
2. Now create a policy function as given below:
CREATE OR REPLACE FUNCTION verify_user (
    p_schema        VARCHAR2,
    p_object        VARCHAR2
)
RETURN VARCHAR2
IS
BEGIN
    RETURN 'userid = SYS_CONTEXT(''USERENV'', ''SESSION_USER'')';
END;

3. Apply the security policy using the policy function, as given below:
DECLARE
    v_object_schema         VARCHAR2(30) := 'public';
    v_object_name           VARCHAR2(30) := 'john_merry';
    v_policy_name           VARCHAR2(30) := 'secure_data';
    v_function_schema       VARCHAR2(30) := 'enterprisedb';
    v_policy_function       VARCHAR2(30) := 'verify_user';
    v_statement_types       VARCHAR2(30) := 'INSERT,UPDATE,DELETE,SELECT';
    v_update_check          BOOLEAN      := TRUE;
    v_enable                BOOLEAN      := TRUE;
BEGIN
    DBMS_RLS.ADD_POLICY(
        v_object_schema,
        v_object_name,
        v_policy_name,
        v_function_schema,
        v_policy_function,
        v_statement_types,
        v_update_check,
        v_enable
    );
END;

Now, let's see how it works.
First, insert some records for John and Merry, as given below:
1. Connect as user john and insert some rows:
edb=> insert into john_merry values('john',1);                 
INSERT 0 1
edb=> insert into john_merry values('john',2); 
INSERT 0 1
edb=> insert into john_merry values('john',3);
INSERT 0 1
2. Now connect as merry and insert some records as merry:
edb=> insert into john_merry values('merry',1);
INSERT 0 1
edb=> insert into john_merry values('merry',2);
INSERT 0 1
edb=> insert into john_merry values('merry',3);
INSERT 0 1
OK, now we have some data for both users. Let's see how VPD works:
1. Connect as user john and try to insert a record for merry:
edb=> insert into john_merry values('merry',4);
ERROR:  policy with check option violation
DETAIL:  Policy predicate was evaluated to FALSE with the updated values
Hmm, this is expected; the VPD security policy won't allow this activity.

2. Now try to update Merry's data as user john:
edb=> update john_merry set val=4 where val=3 and userid='merry';
UPDATE 0
No rows were updated. This is expected, since as per the security policy merry's data is not visible to john. However, john can update his own records, as given below:
edb=> update john_merry set val=4 where val=3 and userid='john'; 
UPDATE 1
3. Now, lets execute SELECT query on table to view data:
edb=> select current_user;
 current_user 
--------------
 john
(1 row)

edb=> select * from john_merry ;
 userid | val 
--------+-----
 john   |   1
 john   |   2
 john   |   4
(3 rows)

As you can see, john can see his records but not merry's. Similarly, merry sees only hers. Below is a snapshot of a SELECT query executed by merry:
edb=> select current_user;      
 current_user 
--------------
 merry
(1 row)

edb=> select * from john_merry ;
 userid | val 
--------+-----
 merry  |   1
 merry  |   2
 merry  |   3
(3 rows)
Interesting!!
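If you later want to remove the policy, DBMS_RLS also provides DROP_POLICY; a sketch using the schema, table and policy names from the ADD_POLICY call above:

```sql
DECLARE
BEGIN
    DBMS_RLS.DROP_POLICY('public', 'john_merry', 'secure_data');
END;
```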