Posts

Showing posts from October, 2010

Some UNIX Utilities for performance monitoring

vmstat command: vmstat is a command which can be used to display system statistics. The syntax is given below:

vmstat <seconds> <number of times>

Example: vmstat 1 10

The above command displays the system statistics every second, 10 times. Sample output of the command:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0      0 950888  57536 512868    0    0   559    45  479  671  9  2 73 15

Let's understand the column definitions. The procs column has three subcolumns:
r: count of processes waiting for run time
b: count of processes in uninterruptible sleep
w: count of processes which are swapped out and runnable (on RHEL)
If any process shows up in the b or w column, then the DBA/system admin has to check the system.
The memory column has the following subcolumns:
swpd: swap space currently used (in kB)
free: amount of idle memory (in kB)
buff: Memory
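When scripting around vmstat (for example, to alert on the b or wa columns mentioned above), the data lines are easy to split into named fields. A minimal sketch, assuming the column order shown in the sample output above:

```python
# Parse one vmstat data line into named fields. Field names follow the
# vmstat headers in the sample output above.
FIELDS = ["r", "b", "swpd", "free", "buff", "cache", "si", "so",
          "bi", "bo", "in", "cs", "us", "sy", "id", "wa"]

def parse_vmstat_line(line):
    """Return a dict mapping vmstat column names to integer values."""
    values = [int(v) for v in line.split()]
    return dict(zip(FIELDS, values))

sample = "1 0 0 950888 57536 512868 0 0 559 45 479 671 9 2 73 15"
stats = parse_vmstat_line(sample)
# b = processes in uninterruptible sleep, wa = % CPU time in IO wait
print(stats["b"], stats["wa"])   # → 0 15
```

A script could run `vmstat 1 10`, feed each data line through this, and flag samples where `stats["b"]` is non-zero.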

Some New Security Features in PG9.0

1. New GRANT and REVOKE in PG9.0
In previous versions of PG (7.x, 8.x), DBAs and users used to miss a GRANT and REVOKE command which could be used to give permissions on all the tables inside a schema. Now they don't have to: from PG9.0, a user can execute a single GRANT or REVOKE command to give the permission on all the tables in a schema.

GRANT SELECT ON ALL TABLES IN SCHEMA test TO test_user;

Here is the output of a query which shows that the above command has given SELECT privileges on all the tables in schema test:

postgres=# select * from information_schema.table_privileges where grantee = 'test_user';
 grantor  |  grantee  | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy
----------+-----------+---------------+--------------+------------+----------------+--------------+----------------
 postgres | test_user | postgres      | test         | test       | SELECT         | NO           | NO
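Before 9.0, the usual workaround was to generate one GRANT per table (typically from a catalog query such as one against pg_tables) and run the resulting script. A small sketch of that statement generation — the schema, table, and user names are just the example's:

```python
def grant_statements(tables, schema, user, privilege="SELECT"):
    """Generate one GRANT per table: the pre-9.0 workaround for
    GRANT ... ON ALL TABLES IN SCHEMA. `tables` would normally come
    from a catalog query against the target schema."""
    return ["GRANT %s ON %s.%s TO %s;" % (privilege, schema, t, user)
            for t in tables]

for stmt in grant_statements(["test", "accounts"], "test", "test_user"):
    print(stmt)
```

With 9.0's new syntax, this whole loop collapses into the single GRANT shown above.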

Understanding Real Application Cluster Processes

Along with all the processes of a stand-alone Oracle database, Real Application Cluster provides some other processes which play an important role. These processes are:
1. LMON: Global Enqueue Service Monitor
Responsibilities assigned: This process is responsible for monitoring the entire cluster's global enqueues and resources. It also manages instance failures and process failures, and the associated recovery for the Global Cache Service and Global Enqueue Service. The main responsibility of this process is the recovery of global resources. The services provided by this process are also known as Cluster Group Services.
View: So, we have one watcher and recovery agent for global resources.
2. LMDx: Global Enqueue Service Daemon (Lock Agent)
Responsibility assigned: This process controls access to global enqueues and resources. It also handles deadlock detection and remote enqueue requests (i.e. requests from other instances).
View: Hmm, we have one recorder agent and resource informa

Making Slony source compatible with EnterpriseDB Advanced Server

Since the edb-replication which comes with Advanced Server is compactly built against one particular version, sometimes users are not able to replicate data between two different versions of Advanced Server. For replicating data between two Advanced Server versions, it is important to have the same version of Slony on the source and target databases. I did some research around the Slony source code and tried to make it compatible with Advanced Server. If a user plainly compiles the Slony source code against the Advanced Server database, then the user will start to get messages like: version/ while configuring the replication. The reason for such messages is that Slony is not able to parse the Advanced Server version, therefore it is not able to continue further with replication. I did some research and went through the Slony source code. Slony uses a program called dbutil*.c for finding the version of PostgreSQL and checking the compatibility of the PostgreSQL version with Slony. F
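The core of the problem is that the compatibility check expects a parseable major.minor version, and the Advanced Server banner differs from community PostgreSQL's. A hypothetical sketch of tolerant version extraction (the exact banner strings vary by build, so treat the patterns as assumptions, not Slony's actual logic):

```python
import re

def parse_pg_version(banner):
    """Extract (major, minor) from a version() banner such as
    'PostgreSQL 8.4.4 on ...' or 'EnterpriseDB 8.4.5.5 on ...'.
    Hypothetical sketch: real banners vary by build and vendor."""
    m = re.search(r"(?:PostgreSQL|EnterpriseDB)\s+(\d+)\.(\d+)", banner)
    if m is None:
        raise ValueError("cannot parse version from: %r" % banner)
    return int(m.group(1)), int(m.group(2))

print(parse_pg_version("PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu"))
print(parse_pg_version("EnterpriseDB 8.4.5.5 on x86_64-unknown-linux-gnu"))
```

A parser strict about the "PostgreSQL x.y" prefix would fail on the EnterpriseDB banner, which matches the symptom described above.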

pg_hotbackup utility for Backup of PG

Users always look for a backup utility which gives some good options and can be used with all instances of PG to take backups. I thought along the same lines and created a pg_hotbackup script. The pg_hotbackup utility I worked on is a server-side utility which takes the backup on the server side and keeps the backups in a backup directory. The options which I have included in it are the following:
Compulsory options:
-H (bin directory location; the utility will use the psql command of the PG instance)
-b (directory where the user wants to keep the backup)
-p (port number of the PG instance)
-U username (username)
-P passwd (password)
Some other options:
-a: Archive Only option [1|0]
-r: Retention Policy [in days]
-l: List Backups
-n: Backup File Name
-v: Validate Only [1|0]
-R: Retention Only [|0]
\?: Help
So, I have all the options with me. Now, let's understand what we need:
1. We need a catalog file, in which the utility can keep the backups' information and validate the
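The catalog file mentioned in point 1 can be as simple as one line per backup. A hypothetical sketch of what such a catalog might record — the field layout (name, timestamp, port, status) is my assumption for illustration, not the utility's actual on-disk format:

```python
import csv
import io

def add_entry(catalog, name, taken_at, port, status="OK"):
    """Append one backup record to the catalog stream.
    (Hypothetical record layout, not pg_hotbackup's actual format.)"""
    catalog.write("%s,%s,%s,%s\n" % (name, taken_at, port, status))

def list_backups(catalog_text):
    """Return all catalog rows, for a -l style listing."""
    return [row for row in csv.reader(io.StringIO(catalog_text))]

buf = io.StringIO()
add_entry(buf, "nightly_2010_10_12", "2010-10-12 01:00", 5432)
add_entry(buf, "nightly_2010_10_13", "2010-10-13 01:00", 5432)
print(list_backups(buf.getvalue()))
```

A retention pass (-r) would then just drop rows (and their backup files) older than the retention window.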

ROLLUP Analytical function in PostgreSQL.

Currently, there is no version of PG which supports ROLLUP. However, people look for this analytical function's features. A ROLLUP query's result can be achieved using a UNION of queries. First, let's understand what ROLLUP does. If a SQL query has col1, col2, col3, aggregate(col4), then ROLLUP processing would be something like this:
1. Show the aggregate of col4 as per col1, col2, col3.
2. Then ROLLUP will do the subtotal and show the result as the aggregate based on col1, col2.
3. Then it will show the aggregate/subtotal as per col1.
4. And at the end, the total/sum.
In short, it creates progressively higher-level subtotals, moving from right to left through the list of grouping columns. Finally, it creates a grand total. In PG, this can be achieved by writing subqueries and UNIONing those. So, if the ROLLUP query is something like given below:
select col1, col2, col3, agg(col4) from relation group by rollup(col1, col2, col3)
Then in PG the above can be writte
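To make the right-to-left subtotalling concrete, here is the same progressive-subtotal logic sketched in Python over a list of rows, with SUM as the aggregate (None plays the role of the NULL that ROLLUP puts in rolled-up columns):

```python
from itertools import groupby

def rollup_sum(rows):
    """Emulate GROUP BY ROLLUP(col1, col2, col3) with SUM(col4).
    Each row is (col1, col2, col3, col4); None marks a rolled-up level."""
    out = []
    for depth in (3, 2, 1, 0):            # group by the first `depth` columns
        key = lambda r: r[:depth]
        for k, grp in groupby(sorted(rows, key=key), key=key):
            total = sum(r[3] for r in grp)
            out.append(k + (None,) * (3 - depth) + (total,))
    return out

rows = [("a", "x", "p", 1), ("a", "x", "q", 2), ("a", "y", "p", 4)]
for line in rollup_sum(rows):
    print(line)
# The last line is the grand total: (None, None, None, 7)
```

Each pass of the loop corresponds to one branch of the UNION in the SQL rewrite: full grouping, then (col1, col2), then (col1), then the grand total.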

PG_REORG Utility for VACUUM FULL online

pg_reorg is a utility made by NTT for reorganizing a table's structure. The concept is simple: if you have all the required pointers and data in the same page, then accessing those is much faster. This is what pg_reorg provides to a user. Following are some options which pg_reorg provides.
-o [--order-by] columns: This option makes pg_reorg organise the table data as per the mentioned columns. At the backend, pg_reorg will create a new table using CTAS, with a SELECT query including an ORDER BY clause on the columns mentioned with -o.
-n [--no-order] tablename: When this option is used, pg_reorg does a VACUUM FULL online. Now, the question is how it must be doing this. Simple concept: create a new table using CTAS, and create a trigger on the current table to track the DML. Once the new table is created, play the tracked DML on the new table. It works well. This option is only for tables which have a primary key. pg_reorg by default does a CLUSTER of tables, and it follows the same concept, i.e. wit
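The track-and-replay idea behind the no-order mode can be illustrated in miniature: snapshot the "table", queue the DML that arrives during the copy, then replay it on the new copy. This is a toy concept sketch, not pg_reorg's actual implementation:

```python
# Toy illustration of pg_reorg's --no-order mode: snapshot the table,
# log DML that happens during the copy, replay it on the new copy.
old_table = {1: "alice", 2: "bob"}        # pk -> row

new_table = dict(old_table)               # step 1: CTAS-style snapshot
dml_log = []                              # step 2: the trigger's DML log

# DML arriving while the copy runs is captured by the "trigger":
dml_log.append(("INSERT", 3, "carol"))
dml_log.append(("DELETE", 1, None))

# step 3: replay the tracked DML on the new table, keyed by primary key
for op, pk, value in dml_log:
    if op == "INSERT":
        new_table[pk] = value
    elif op == "DELETE":
        new_table.pop(pk, None)

print(sorted(new_table.items()))          # → [(2, 'bob'), (3, 'carol')]
```

The replay is keyed on the primary key, which is why this option only works for tables that have one.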

Slony Vs PG9.0 Built in Streaming Replication.

People generally ask such questions now that PG9.0 comes with streaming replication. Following are some points which people need to think about before deciding which replication they should use:
1. Slony has some overhead on the database compared to streaming replication + Hot Standby in 9.0.
2. With Slony, all the changes must be applied via the SLONIK command.
3. Slony gives the advantage of replicating some tables while ignoring others.
4. Slony also gives the advantage of replication between different versions of PG, and between PG on different OSes.

PG9.0:: Monitoring Hot Standby

Now PG9.0 is in the market with the new features of Hot Standby and Streaming Replication. So, I have started to explore ways of monitoring Hot Standby, and was in the process of writing my own code for it. For this purpose I have written a shell script to find a way of calculating the lag. In pgpool-II, the developer has used the following formula to calculate the lag:
lsn = xlogid * 16 * 1024 * 1024 * 255 + xrecoff;
Following is an explanation of the meaning of xlogid and xrecoff:
postgres=# select pg_current_xlog_location();
 pg_current_xlog_location
--------------------------
 0/13000078
(1 row)
Here 0 is the xlogid and 13000078 is the xrecoff. With this, the concept for finding the lag is to compare the current WAL write location on the primary with the last WAL location received/replayed by the standby. These can be found using the pg_current_xlog_location function on the primary and the pg_last_xlog_receive_location/pg_last_xl
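Putting the formula to work: parse the hex "xlogid/xrecoff" pair that pg_current_xlog_location() returns, convert each side with the pgpool-II formula quoted above, and subtract. A small sketch (the standby location in the example is made up for illustration):

```python
def lsn_to_bytes(location):
    """Convert an 'xlogid/xrecoff' location (both parts hex) to a byte
    offset using the pgpool-II formula: xlogid * 16MB * 255 + xrecoff."""
    xlogid, xrecoff = (int(part, 16) for part in location.split("/"))
    return xlogid * 16 * 1024 * 1024 * 255 + xrecoff

def replication_lag(primary_loc, standby_loc):
    """Lag in bytes between the primary's current WAL write location
    and the standby's last received/replayed location."""
    return lsn_to_bytes(primary_loc) - lsn_to_bytes(standby_loc)

print(lsn_to_bytes("0/13000078"))                 # → 318767224
print(replication_lag("0/13000078", "0/12FFFFF0"))  # → 136
```

In a monitoring script, the two locations would come from psql calls to the primary and the standby respectively.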

pgFouine: PostgreSQL Log Analyzer

pgFouine is an interesting PostgreSQL log analyzer tool which is available for generating reports in HTML format. The advantage of using this tool is that the user gets a report in text or HTML format, which is easy to analyze. Using pgFouine a user can make the following types of reports:
1. Error reports
2. Slow query reports
3. VACUUM VERBOSE reports
etc...
Installation of pgFouine is simple. Download the source from the following location:
http://pgfoundry.org/frs/download.php/2575/pgfouine-1.2.tar.gz
and then extract the pgFouine source using the following command:
tar zxvf pgfouine-1.2.tar.gz
Please note, before using pgFouine the user has to make sure that PHP is installed on the server. pgFouine has some restrictions on analyzing the log file. It analyzes PostgreSQL logfiles if the log_line_prefix has one of the following formats:
log_line_prefix = 'user=%u,db=%d ' ( filter on database and user with syslog )
log_line_prefix = '%t [%p]: [%l-1] ' ( for standard errors )
log_l