Posts

Showing posts from 2010

Drop a user whose objects are in different databases of a PG Instance.

One of my colleagues asked about this. She was doing some testing and wanted to drop a user that owns objects across several databases. While dropping such a user/role, the DBA/Admin will get the following error message:

postgres=# drop user test;
ERROR:  role "test" cannot be dropped because some objects depend on it
DETAIL:  2 objects in database test

The error message gives the Admin enough information: some objects owned by (or depending on) the user exist in another database. For dropping such a user, there are two methods:

1. Reassign all the objects owned by the user to some other user, and then drop the user. This is very useful if an employee who left the company wrote procedures/objects which are still used by an application/process. The commands are:

REASSIGN OWNED BY old_role TO new_role;
DROP USER old_role;

Note: the REASSIGN command needs to be executed in every database of the PG instance.

2. First drop all the objects owned ...
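A quick sketch of both options (the role test and the new owner postgres are only placeholders; DROP OWNED is the usual command behind option 2, and both commands have to be run in each database before the final DROP USER):

-- Option 1: hand ownership over, database by database
REASSIGN OWNED BY test TO postgres;   -- run in every database where test owns objects
DROP USER test;                       -- then drop the role once, at instance level

-- Option 2: drop everything the role owns, database by database
DROP OWNED BY test;                   -- run in every database where test owns objects
DROP USER test;                       -- then drop the role once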

Rebuilding Pkey and Indexes without locking the table in PG 8.4.

On a production system, DBAs sometimes have to rebuild indexes. Since rebuilding an index locks the table, the DBA can use the CONCURRENTLY option. For normal indexes CONCURRENTLY is the best option; however, rebuilding a primary key index still requires a lock on the table. To overcome this issue, I have written the following function to swap an index with the pkey:

CREATE OR REPLACE FUNCTION swap_for_pkey(text, text, text) RETURNS integer AS $$
DECLARE
  cmd text;
  oid1 integer;
  oid2 integer;
  filenode1 integer;
  filenode2 integer;
  relation text;
BEGIN
  SELECT oid::integer INTO oid1 FROM pg_class
   WHERE relname = $2
     AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = $1);
  RAISE NOTICE 'PKEY OID: %', oid1;
  SELECT relfilenode::integer INTO filenode1 FROM pg_class WHERE oid = oid1;
  SELECT oid::integer INTO oid2 FROM pg_class
   WHERE relname = $3
     AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = $1);
  RAISE NOTICE 'PKEY OID: %', oi ...
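The function body is cut off above; for context, the usual first step before the swap is to build the replacement index online. A minimal sketch (table emp, column emp_id and the index name are only placeholders, and the swap_for_pkey call shown is my reading of its schema/pkey/index arguments):

-- Build the new unique index without blocking writes:
CREATE UNIQUE INDEX CONCURRENTLY emp_pkey_new ON emp (emp_id);

-- swap_for_pkey('public', 'emp_pkey', 'emp_pkey_new') would then exchange the
-- relfilenodes, so the primary key constraint keeps its name but uses the
-- freshly built index underneath.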

Some Handy SQLs for PG DBAs

To use these queries, the DBA has to install the adminpack contrib module of PostgreSQL. If you are using the one-click installer, you can find adminpack.sql in the /PostgreSQL Installation Directory/share/postgresql/contrib directory. Based on the adminpack contrib module, the following SQL statements let a DBA find the last logfile and its size, and read it from PG client applications (psql, pgAdmin, etc.):

SELECT * FROM (SELECT pg_ls_dir('pg_log')) AS t (filename) ORDER BY 1 DESC LIMIT 1;
SELECT size FROM pg_stat_file('pg_log/postgresql-2010-05-13.log');
SELECT pg_read_file('pg_log/postgresql-2010-05-13.log', 0, 21604);

Sometimes a user doesn't have access to the server, yet has to log in on the server just to read the PG logfile. Now they don't have to: if you have adminpack.sql installed in PostgreSQL, the user can easily read the last logfile by creating the following function:

CREATE OR REPLACE FUNCTION pg_read_last_logfile() RETURNS text AS $$
DECLARE ...
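The function is cut off at DECLARE; a minimal sketch of such a function could look like the following (it assumes the log files live under pg_log and sort correctly by file name, exactly as in the queries above):

CREATE OR REPLACE FUNCTION pg_read_last_logfile() RETURNS text AS $$
DECLARE
    lastfile text;
    filesize bigint;
BEGIN
    -- newest logfile by name, same trick as the pg_ls_dir query above
    SELECT filename INTO lastfile
      FROM (SELECT pg_ls_dir('pg_log') AS filename) t
     ORDER BY 1 DESC LIMIT 1;

    -- its size, so we can read the whole file
    SELECT size INTO filesize FROM pg_stat_file('pg_log/' || lastfile);

    RETURN pg_read_file('pg_log/' || lastfile, 0, filesize);
END;
$$ LANGUAGE plpgsql;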

xDB Replication from Oracle to PPAS

The EnterpriseDB replication tool has now been modified and new functionality has been added to it. The previous Replication Console used the DBA Management Server, and Publications & Subscriptions were dependent on it. Now, xDB Replication has been divided into two components: 1. Publication Server (master side) 2. Subscription Server (slave side). Some background on the Publication Server: it has the following components: 1. A Java-based daemon. 2. A metadatabase of the Publication Server. The Publication Server keeps its information (details about the primary database, tables, etc.) in the metadatabase under the schema _edb_replicator_pub. The Publication Server is independent and has no dependency on the Subscription Server. By default the Publication Server uses port 9011; however, a user can use a different port too. To run the Publication Server on a different port, use the following command: $EDBHOME/jre/bin/java -Djava.awt.headless=true -jar $ ...

Making Dynamic Copy Command

A user asked about loading data into PostgreSQL with the COPY command while skipping columns that have a default value. Along with this requirement, he also mentioned that the table's columns and their defaults keep changing. So he wanted plpgsql code which can be used for any table and any file to load the data. Following is a sample plpgsql function which can be used to make this happen:

CREATE OR REPLACE FUNCTION Copy_test(text, text, text) RETURNS boolean AS $$
DECLARE
  rec record;
  cmd text;
  colstring text;
BEGIN
  cmd := 'select array_to_string(array(select column_name::text from information_schema.columns where table_name=' || '''' || $2 || '''' || ' and table_schema=' || '''' || $1 || '''' || ' and column_default is null),' || '''' || ',' || '''' || ')';
  EXECUTE cmd INTO colstring;
  cmd := 'COPY "' || $1 || '" ...
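The function above is cut off; a minimal, 8.4-compatible sketch of the same idea (the function name copy_skip_defaults and the parameter order schema, table, file path are my assumptions, not the original code):

CREATE OR REPLACE FUNCTION copy_skip_defaults(schemaname text, tablename text, filepath text)
RETURNS boolean AS $$
DECLARE
    collist text;
BEGIN
    -- build the column list, leaving out every column that has a default value
    SELECT array_to_string(ARRAY(
              SELECT quote_ident(column_name::text)
                FROM information_schema.columns
               WHERE table_schema = schemaname
                 AND table_name   = tablename
                 AND column_default IS NULL
               ORDER BY ordinal_position), ',')
      INTO collist;

    -- load only those columns; the skipped ones pick up their defaults
    EXECUTE 'COPY ' || quote_ident(schemaname) || '.' || quote_ident(tablename)
         || ' (' || collist || ') FROM ' || quote_literal(filepath);
    RETURN true;
END;
$$ LANGUAGE plpgsql;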

Does UPDATE Change ROWID in Oracle?

Yesterday there was a discussion going on at OTN (the Oracle forum), and I was part of it. The discussion was about "Does an UPDATE statement change the rowid in Oracle?" In response to my comment that "UPDATE does not change the rowid", one of the users pointed out that an UPDATE statement sometimes does change the rowid in Oracle, and gave me the following example:

SQL> ed
Wrote file afiedt.buf

  1  create table moving_rowid (
  2    col1 number primary key,
  3    col2 number,
  4    col3 number,
  5    col4 varchar2(10)
  6  )
  7* organization index
SQL> /

Table created.

SQL> insert into moving_rowid values( 1, 2, 3, 'foo' );
1 row created.

SQL> insert into moving_rowid values( 2, 3, 4, 'bar' );
1 row created.

SQL> select rowid, col1, col2, col3, col4
  2    from moving_rowid;

ROWID                  COL1       COL2       COL3 COL4
---------------- ---------- ---------- ---------- ----------
*BAEADxsCwQL+             1          2 ...

Some trigger and programming features in PostgreSQL

Anonymous block support: a user can now create an anonymous block using plpgsql, as given below:

do $$
declare
  id1 varchar;
begin
  select id into id1 from test_sort limit 1;
  raise notice '%', id1;
end;
$$ language plpgsql;

Output:
NOTICE:  1.1
DO

Column triggers: PG9.0 also supports column-level and WHEN triggers. Following is an example:

create or replace function test_col_trigger() returns trigger as $$
BEGIN
  RAISE NOTICE 'INSERTED VALUE in Column a %', new.id;
  RETURN null;
END;
$$ language plpgsql;

create trigger col_trigger after update of "a" on test_def
  FOR EACH ROW EXECUTE procedure test_col_trigger();

Output:
postgres=# update test_def set a='Second';
NOTICE:  INSERTED VALUE in Column a 1
UPDATE 1

WHEN triggers in PG9.0: PG9.0 has WHEN triggers, which allow the user to execute a trigger procedure only for a particular condition.

create or replace function test_col_trigger() returns t ...
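The WHEN-trigger example is cut off above; a minimal sketch, reusing the test_def table and the trigger function from this post (the condition itself is just an illustration):

CREATE TRIGGER when_col_trigger
    AFTER UPDATE OF "a" ON test_def
    FOR EACH ROW
    WHEN (OLD.a IS DISTINCT FROM NEW.a)
    EXECUTE PROCEDURE test_col_trigger();

-- the trigger function now fires only when column "a" actually changes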

pg_get_tabledef function in Postgres Plus

Till Advanced Server 8.3 there was a function, pg_get_tabledef, which some people were using to get the definition of a table. This function is no longer part of the newer versions of Advanced Server (8.3R2 AS and 8.4 AS), so it is missing by default. However, you can get the same functionality with the following function, which provides a result similar to pg_get_tabledef. The function code is given below:

CREATE OR REPLACE FUNCTION pg_get_tabledef(text) RETURNS text AS $$
DECLARE
  tabledef TEXT;
  dotpos integer;
  tablename text;
  schemaname text;
  prevcol text;
  coltype text;
  notnull1 boolean;
  rec record;
  oidcheck boolean;
BEGIN
  dotpos := strpos($1, '.');
  if dotpos = 0 then
    schemaname := 'public';
    tablename := substr($1, dotpos+1);
  else
    schemaname := substr($1, 1, dotpos-1);
    tablename := substr($1, dotpos+1);
  end if;
  select relhasoids into oidcheck from pg_class, pg_namespace where pg_class.r ...
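If you only need the column list rather than the full DDL, information_schema already gets you most of the way there; a small sketch (schema public and table emp are only placeholders):

SELECT column_name, data_type, is_nullable, column_default
  FROM information_schema.columns
 WHERE table_schema = 'public'
   AND table_name   = 'emp'
 ORDER BY ordinal_position;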

Some UNIX Utilities for performance monitoring

vmstat command: vmstat is a command which can be used to display system statistics. The syntax is: vmstat <seconds> <count>. Example: vmstat 1 10. The above command displays the system statistics every second, 10 times. Sample output of the command:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 950888  57536 512868    0    0   559    45  479  671  9  2 73 15

Let's understand the column definitions. The procs column has three subcolumns: r: count of processes waiting for run time; b: count of processes in uninterruptible sleep; w: count of processes which are swapped out and runnable (on RHEL). If any process shows up in the b or w column, then the DBA/system admin has to check the system. The memory column has the following subcolumns: swpd: swap space currently used (in kB); free: amount of idle memory (kB); buff: memory ...

Some New Security Features in PG9.0

1. New GRANT and REVOKE in PG9.0. In previous versions of PG (7.x, 8.x), DBAs and users always missed a GRANT/REVOKE command that could set permissions on all the tables inside a schema. Now they don't have to: from PG9.0, a user can execute a single GRANT or REVOKE command to set the permission on all the tables in a schema:

GRANT SELECT ON ALL TABLES IN SCHEMA test TO test_user;

Here is the output of a query which shows that the above command has granted SELECT privileges on all the tables in schema test:

postgres=# select * from information_schema.table_privileges where grantee = 'test_user';
 grantor  |  grantee  | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy
----------+-----------+---------------+--------------+------------+----------------+--------------+----------------
 postgres | test_user | postgres      | test         | test       | SELECT         | NO           | NO
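For completeness, the same wholesale form exists for REVOKE, and PG9.0 also lets you cover tables created later; a short sketch using the same schema and role as above:

REVOKE SELECT ON ALL TABLES IN SCHEMA test FROM test_user;

ALTER DEFAULT PRIVILEGES IN SCHEMA test
    GRANT SELECT ON TABLES TO test_user;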

Understanding Real Application Cluster Processes

Along with all the processes of a stand-alone Oracle database, Real Application Clusters provides some other processes which play an important role. These processes are:

1. LMON: Global Enqueue Service Monitor. Responsibilities: this process monitors the entire cluster's global enqueues and resources. It also manages instance failures and process failures, and the associated recovery for the Global Cache Service and Global Enqueue Service. Its main responsibility is the recovery of global resources; the services provided by this process are also known as Cluster Group Services. View: so, we have one watcher and recovery agent for global resources.

2. LMDx: Global Enqueue Service Daemon (Lock Agent). Responsibilities: this process controls access to global enqueues and resources. It also handles deadlock detection and remote enqueue requests (i.e. requests from other instances). View: hmm, we have one recorder agent and resource informa ...

Making Slony source compatible with EnterpriseDB Advanced Server

Since the edb-replication that comes with Advanced Server is built tightly against one particular version, users are sometimes not able to replicate data between two different versions of Advanced Server. For replicating data between two Advanced Server versions, it is important to have the same version of Slony on the source and target databases. I did some research on the Slony source code and tried to make it compatible with Advanced Server. If a user plainly compiles the Slony source code against the Advanced Server database, the user will start getting messages like "version/ ..." while configuring the replication. The reason for such messages is that Slony is not able to parse the Advanced Server version string, and therefore cannot continue further with replication. I went through the Slony source code: Slony uses a file called dbutil*.c for finding the version of PostgreSQL and checking the compatibility of the PostgreSQL version with Slony. F ...

pg_hotbackup utility for Backup of PG

Users always look for a backup utility that offers some good options and can be used with all PG instances to take backups. I thought along the same lines and created a pg_hotbackup script. The pg_hotbackup utility I worked on is a server-side utility which takes the backup on the server and keeps it in a backup directory. The options I have included are the following.

Compulsory options:
-H (bin directory location; the utility will use the psql command of the PG instance)
-b (directory where the user wants to keep the backup)
-p (port number of the PG instance)
-U username
-P password

Some other options:
-a: archive only [1|0]
-r: retention policy [in days]
-l: list backups
-n: backup file name
-v: validate only [1|0]
-R: retention only [1|0]
\?: help

So, I have all the options with me. Now, let's understand what we need: 1. We need a catalog file in which the utility can keep the backup information and validate the ...

ROLLUP Analytical function in PostgreSQL.

Currently there is no version of PG which supports ROLLUP; however, people look for this analytical feature. A ROLLUP query's result can be achieved using a UNION of queries. First, let's understand what ROLLUP does. If a SQL query has col1, col2, col3, aggregate(col4), then ROLLUP processing works like this: 1. Show the aggregate of col4 grouped by col1, col2, col3. 2. Then ROLLUP computes the subtotal grouped by col1, col2. 3. Then it shows the aggregate/subtotal grouped by col1. 4. And at the end, the grand total. In short, it creates progressively higher-level subtotals, moving from right to left through the list of grouping columns, and finally it creates a grand total. In PG this can be achieved by writing subqueries and UNIONing them. So, if the ROLLUP query is something like: select col1, col2, col3, agg(col4) from relation group by rollup(col1, col2, col3) then in PG the above can be writte ...
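For the query above, the rewrite follows this shape (a sketch, using sum() in place of the generic agg(); NULLs stand in for the rolled-up columns, exactly matching the four levels described above):

SELECT col1, col2, col3, sum(col4) FROM relation GROUP BY col1, col2, col3
UNION ALL
SELECT col1, col2, NULL, sum(col4) FROM relation GROUP BY col1, col2
UNION ALL
SELECT col1, NULL, NULL, sum(col4) FROM relation GROUP BY col1
UNION ALL
SELECT NULL, NULL, NULL, sum(col4) FROM relation;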

PG_REORG Utility for VACUUM FULL online

pg_reorg is a utility made by NTT for reorganizing the table structure. The concept is simple: if you have all the required pointers and data in the same page, then accessing them is much faster. This is what pg_reorg provides to a user. Following are some options which pg_reorg provides. -o [--order-by] columns: this option makes pg_reorg organize the table data according to the mentioned columns. At the backend, pg_reorg creates a new table using CTAS, with a SELECT query that includes an ORDER BY clause on the columns given with -o. -n [--no-order] tablename: when this option is used, pg_reorg does an online VACUUM FULL. Now, how does it do that? Simple concept: create a new table using CTAS and create a trigger on the current table to track the DML. Once the new table is created, replay the tracked DML on the new table. It works well. This option is only for tables which have a primary key. pg_reorg by default does a CLUSTER of tables, and it follows the same concept, i.e. wit ...
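A rough illustration of the idea behind -o, stripped of everything pg_reorg actually adds on top (table orders and column customer_id are only placeholders):

-- what the ordered rebuild boils down to:
CREATE TABLE orders_sorted AS
    SELECT * FROM orders ORDER BY customer_id;

-- pg_reorg additionally captures concurrent DML with a trigger, replays it on
-- the new table, and swaps the relfilenodes at the end, which a plain CTAS
-- obviously does not do.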

Slony vs. PG9.0 Built-in Streaming Replication.

People generally ask this kind of question now that PG9.0 comes with streaming replication. Following are some points to think about before deciding which replication to use: 1. Slony has more overhead on the database than streaming replication + Hot Standby in 9.0. 2. All schema changes must be applied via the SLONIK command. 3. Slony gives the advantage of replicating some tables while ignoring others. 4. Slony also gives the advantage of replication between different versions of PG, and between PG on different operating systems.

PG9.0:: Monitoring Hot Standby

Now PG9.0 is on the market with the new features of Hot Standby and Streaming Replication, so I have started to explore ways of monitoring Hot Standby. I was in the process of writing my own code for monitoring Hot Standby, and for this purpose I wrote a shell script to calculate the lag. In pgpool-II, the developer has used the following formula to calculate the lag:

lsn = xlogid * 16 * 1024 * 1024 * 255 + xrecoff;

Following is an explanation of the meaning of xlogid and xrecoff:

postgres=# select pg_current_xlog_location();
 pg_current_xlog_location
--------------------------
 0/13000078
(1 row)

Here 0 is the xlogid and 13000078 is the xrecoff. With this, the concept of finding the lag is to compare the current WAL write location on the primary with the last WAL location received/replayed by the standby. These can be found using the pg_current_xlog_location function on the primary and the pg_last_xlog_receive_location/pg_last_xl ...
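Putting the pieces together, the raw inputs for the lag calculation come from one query on each side (a sketch; the arithmetic itself is the pgpool-II formula quoted above):

-- on the primary:
SELECT pg_current_xlog_location();            -- e.g. 0/13000078

-- on the standby:
SELECT pg_last_xlog_receive_location(),       -- last WAL location received
       pg_last_xlog_replay_location();        -- last WAL location replayed

-- lag (in bytes) = lsn(primary) - lsn(standby), where a value "xlogid/xrecoff"
-- maps to: lsn = xlogid * 16 * 1024 * 1024 * 255 + xrecoff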

pgFouine: PostgreSQL Log Analyzer

pgFouine is an interesting PostgreSQL log analyzer tool which generates reports in HTML format. The advantage of using this tool is that the user gets a report in text or HTML format, which is easy to analyze. Using pgFouine a user can make the following types of reports: 1. Error reports 2. Slow query reports 3. VACUUM VERBOSE reports, etc. Installation of pgFouine is simple. Download the source from the following location: http://pgfoundry.org/frs/download.php/2575/pgfouine-1.2.tar.gz and then extract the pgFouine source using the following command: tar zxvf pgfouine-1.2.tar.gz. Please note, before using pgFouine the user has to make sure that PHP is installed on the server. pgFouine has some restrictions on analyzing the log file: it analyzes PostgreSQL logfiles only if log_line_prefix has one of the following formats:

log_line_prefix = 'user=%u,db=%d ' (filter on database and user, with syslog)
log_line_prefix = '%t [%p]: [%l-1] ' (for standard errors)
log_l ...

Physical Standby Vs Hot Standby

Some thoughts always come to mind about standby terminologies. Once, someone asked a question about the physical standby in Oracle 10g: is an Oracle 10g physical standby a hot standby or a warm standby? Till Oracle 10g, the physical standby has two modes: 1. Managed recovery mode 2. Read-only mode. It cannot be in both modes at the same time. If the standby is in recovery mode, it is a warm standby; when it is in read-only mode, it lags behind the primary but is able to answer SELECT queries. Either way, till Oracle 10g the physical standby did not meet the definition of a hot standby; it played the role of a warm standby. However, from Oracle 11g, the physical standby can be in recovery mode and read-only mode at the same time, which now fulfills the definition of a hot standby. With the complexity of managing the standby, licensing also comes into the picture. From PG9.0 onwards, PostgreSQL has cold standby, warm standby and hot standby, at zero cost.

Hot Standby in PostgreSQL 9.0:

As per the definition of Hot Standby, it is a method of redundancy in which the primary and secondary (backup server) run simultaneously. The data is mirrored to the secondary so that both contain identical data in real time. With this, a user is able to execute queries against the database while the secondary is in archive recovery mode. This is what is introduced in PG9.0: built-in Hot Standby. The PostgreSQL community always tries to make things as simple as possible, and they have proven that again in the Hot Standby implementation. Following are the steps for configuring Hot Standby: 1. Make sure the following parameters are set in the configuration file postgresql.conf of the primary:

wal_level = 'hot_standby'
archive_mode = on
archive_command = 'cp %p /Library/PostgreSQL/9.0/data/archivelog/%f'

2. After setting the above parameters, take a hot backup of the PG9.0 instance. The steps for a hot backup are simple: a) Execute the following command on the primary: select pg_start_backup('Hot Standby Backup'); b) Take the fi ...
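The steps are cut off inside the backup step; for orientation, the hot backup ends on the primary with pg_stop_backup(), and once the standby is up you can confirm it is answering queries while still in recovery (a sketch of those two follow-up calls, not the full procedure from the post):

-- on the primary, after copying the data directory:
SELECT pg_stop_backup();

-- on the standby, once it accepts connections:
SELECT pg_is_in_recovery();   -- returns true on a working hot standby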

pgAgent Configuration on Windows

Here are the steps which someone can use to configure pgAgent. These steps are tested on my machine: 1. Log in to the system as test_user and create a pgpass.conf file in test_user's %APPDATA%\postgresql directory. The entry in pgpass.conf should be as given below: hostname:port:database:username:password 2. Connect to the database as given below: C:\"Program Files"\PostgresPlus\8.4\bin\psql.exe -U postgres (the above command will also verify whether pgpass.conf is picked up by the psql command or not) 3. After connecting to the database, create the plpgsql language as given below: CREATE LANGUAGE plpgsql; 4. Now run the pgagent.sql file. 5. Create the pgAgent service, as given below: pgagent.exe INSTALL pgAgent -u test_user -p test hostaddr= dbname= user= 6. Start the created service using the following command: net start pgAgent 7. While creating a job, put the output of the following command in the "Host Agent" field of the "pgAgent Job" window: select jagstation from pgagent.pga_jobagent;

Monitor PG WarmStandBy

While working with the database, some people have asked me to write a monitoring script for a warm standby. Here is a script which can be used for it:

#!/bin/bash
BIN=/opt/PostgresPlus/8.4SS/bin
WARMDATA=/sata/data
PSQL=$BIN/psql
PRIM_HOST=primarserver.ic
ST_LOGFILE=/usr/local/pgsql/pg_standby/standby.log
STANDBYLOGLOC=/sata/backups/pg8.4/standby_wal
echo -e ""
echo -e "+-----------------------------------------------------------+"
echo -e "|Finding the Minimum Recovery: |"
echo -e "+-----------------------------------------------------------+"
MINRECOV=`$BIN/pg_controldata $WARMDATA | grep 'Minimum recovery' | awk -F": " '{print $2}' | sed 's/ *//g'`
MINRECOVWAL=`$PSQL -h $PRIM_HOST -t -p 5432 -c "select pg_xlogfile_name('$MINRECOV');" | sed 's/ *//g'`
if [ -n $MINRECOVWAL ]; then
  echo -e "|Minimum Recovery WAL file require: $MINRECOVWAL|"
  echo -e "+-------- ...

EnterpriseDB's Postgres Plus Advanced Server

EnterpriseDB's Postgres Plus Advanced Server is really a nice product launched by the company. I have worked on it and found it a suitable replacement for Oracle. While learning Postgres Plus Advanced Server, people also learn about PostgreSQL, the world's most advanced open source database. It has almost all the features that Oracle developers require. Here is a link which talks about PostgreSQL and Postgres Plus: http://www.enterprisedb.com/products/postgres_plus/overview.do And the following link has a description of the Oracle compatibility provided by EnterpriseDB in Postgres Plus Advanced Server: http://www.enterprisedb.com/docs/en/8.3R2/oracompat/Table%20of%20Contents.htm