Posts

Some UNIX Utilities for performance monitoring

vmstat command: vmstat is a command which can be used to display system statistics. The syntax is given below:

vmstat <interval in seconds> <number of times>

Example: vmstat 1 10

The command above displays the system statistics every second, 10 times. Sample output of the command:

procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0 950888  57536 512868    0    0   559    45  479   671  9  2 73 15

Let's understand the column definitions.

The procs column has three subcolumns:
r: count of processes waiting for run time
b: count of processes in uninterruptible sleep
w: count of processes which are swapped out and runnable (on RHEL)

If any process shows up in the b or w column, then the DBA/system admin has to check the system.

The memory column has the following subcolumns:
swpd: amount of swap space currently in use (in KB)
free: amount of idle memory (in KB)
buff: memory
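When scripting around vmstat (for example, to alert when processes pile up in the b column), its output has to be split into named fields. Below is a minimal, hypothetical Python sketch of that idea; the field names follow the vmstat header shown above, and nothing here is part of vmstat itself.

```python
# Column names taken from the vmstat header above (r, b, swpd, free, ...).
FIELDS = ["r", "b", "swpd", "free", "buff", "cache",
          "si", "so", "bi", "bo", "in", "cs",
          "us", "sy", "id", "wa"]

def parse_vmstat_line(line):
    """Split one vmstat data line and pair each value with its column name."""
    values = [int(v) for v in line.split()]
    return dict(zip(FIELDS, values))

# The sample data line from the output above.
sample = "1 0 0 950888 57536 512868 0 0 559 45 479 671 9 2 73 15"
stats = parse_vmstat_line(sample)
print(stats["free"])   # idle memory in KB
print(stats["b"])      # processes in uninterruptible sleep
```

A monitoring script could read `vmstat 1 10` line by line through such a parser and flag any sample where `stats["b"]` is non-zero.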

Some New Security Features in PG9.0

1. New GRANT and REVOKE in PG 9.0

In previous versions of PG (7.x, 8.x), all the DBAs and users used to miss a GRANT and REVOKE command which could be used to set permissions on all the tables inside a schema. Now they don't have to: from PG 9.0, a user can execute a single GRANT or REVOKE command to set permissions on all the tables in a schema.

GRANT SELECT ON ALL TABLES IN SCHEMA test TO test_user;

Here is the output of a query which shows that the above command has granted SELECT privileges on all the tables in schema test:

postgres=# select * from information_schema.table_privileges where grantee = 'test_user';
 grantor  |  grantee  | table_catalog | table_schema | table_name | privilege_type | is_grantable | with_hierarchy
----------+-----------+---------------+--------------+------------+----------------+--------------+----------------
 postgres | test_user | postgres      | test         | test       | SELECT         | NO           | NO
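Before 9.0, the usual workaround was to generate one GRANT statement per table from the catalog. The hypothetical Python sketch below only illustrates that statement-generation step; in practice the table list would come from `information_schema.tables` or `pg_tables` for the schema, not from a hard-coded list.

```python
# Sketch of the pre-9.0 workaround: emit one GRANT per table.
# The table names here are illustrative placeholders.
def grant_all(tables, schema, user, privilege="SELECT"):
    """Build a GRANT statement for each table in the schema."""
    return [
        "GRANT {p} ON {s}.{t} TO {u};".format(p=privilege, s=schema, t=t, u=user)
        for t in tables
    ]

stmts = grant_all(["test", "accounts"], "test", "test_user")
for s in stmts:
    print(s)
```

From 9.0 onwards, the single `GRANT ... ON ALL TABLES IN SCHEMA` statement shown above replaces this whole loop.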

Understanding Real Application Cluster Processes

In addition to all the processes of a stand-alone Oracle database, Real Application Clusters provides some other processes which play an important role. These processes are:

1. LMON: Global Enqueue Service Monitor
Responsibilities assigned: This process is responsible for monitoring the entire cluster's global enqueues and resources. It also manages instance failures and process failures, and the associated recovery for the Global Cache Service and Global Enqueue Service. The main responsibility of this process is the recovery of global resources. The services provided by this process are also known as Cluster Group Services.
View: So, we have one watcher and recovery agent for global resources.

2. LMDx: Global Enqueue Service Daemon (Lock Agent)
Responsibilities assigned: This process controls access to global enqueues and resources. It also handles deadlock detection and remote enqueue requests (i.e. requests from other instances).
View: Hmm, we have one recorder agent and resource informa

Making Slony source compatible with EnterpriseDB Advanced Server

Since the edb-replication package which comes with Advanced Server is built compactly against one particular version, users are sometimes not able to replicate data between two different versions of Advanced Server. For replicating data between two Advanced Server versions, it is important to have the same version of Slony on the source and target databases. I researched the Slony source code and tried to make it compatible with Advanced Server. If a user plainly compiles the Slony source code against the Advanced Server database, then the user will start to get messages like: version/ while configuring the replication. The reason for such messages is that Slony is not able to parse the Advanced Server version string, so it cannot continue further with replication. I did some research and went through the Slony source code. Slony uses a program called dbutil*.c for finding the version of PostgreSQL and checking the compatibility of the PostgreSQL version with Slony. F
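The core of the problem can be sketched in a few lines. Assuming (as an illustration, not Slony's actual C code) that the version check expects a banner beginning with "PostgreSQL", an Advanced Server banner starting with "EnterpriseDB" would fail to match; widening the pattern to accept both prefixes is the shape of the fix. The exact banner strings below are assumptions for demonstration.

```python
import re

# Illustrative sketch of the version-parsing issue: a pattern that
# accepts only "PostgreSQL X.Y" rejects an Advanced Server banner,
# so the patched pattern also accepts the "EnterpriseDB" prefix.
VERSION_RE = re.compile(r"(?:PostgreSQL|EnterpriseDB)\s+(\d+)\.(\d+)")

def parse_version(banner):
    """Return (major, minor) from a server version banner, or None."""
    m = VERSION_RE.match(banner)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_version("PostgreSQL 8.4.7 on x86_64"))     # community banner
print(parse_version("EnterpriseDB 8.4.7.14 on x86_64")) # assumed AS banner
```

With only the `PostgreSQL` alternative in the pattern, the second call would return None, which mirrors the "cannot parse the version" failure described above.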

pg_hotbackup utility for Backup of PG

Users always look for a backup utility which offers some good options and can be used with all PG instances to take backups. I thought along the same lines and created a pg_hotbackup script. The pg_hotbackup utility I worked on is a server-side utility which takes the backup on the server side and keeps the backups in a backup directory. The options I have included are the following:

Compulsory options:
-H: bin directory location (the utility will use the psql command of the PG instance)
-b: directory where the user wants to keep the backups
-p: port number of the PG instance
-U username: username
-P passwd: password

Some other options:
-a: archive only [1|0]
-r: retention policy [in days]
-l: list backups
-n: backup file name
-v: validate only [1|0]
-R: retention only [1|0]
\?: help

So, I have all the options with me. Now, let's understand what we need:
1. We need a catalog file, in which the utility can keep the backup information and validate the
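To make the retention option (-r, in days) concrete, here is a small hypothetical sketch of how a catalog-driven retention pass could work. The catalog structure, field names, and backup names are all illustrative assumptions, not the utility's actual on-disk format.

```python
from datetime import datetime, timedelta

# Hypothetical catalog: one dict per backup with a name and timestamp.
def apply_retention(catalog, days, now=None):
    """Split catalog entries into (keep, expired) by a cutoff of N days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    keep = [e for e in catalog if e["taken_at"] >= cutoff]
    expired = [e for e in catalog if e["taken_at"] < cutoff]
    return keep, expired

# Illustrative data: a 7-day retention policy evaluated on Jan 10.
now = datetime(2011, 1, 10)
catalog = [
    {"name": "bkp_0101", "taken_at": datetime(2011, 1, 1)},
    {"name": "bkp_0109", "taken_at": datetime(2011, 1, 9)},
]
keep, expired = apply_retention(catalog, days=7, now=now)
print([e["name"] for e in expired])  # backups older than the cutoff
```

The expired list is what a retention-only run (-R) would then delete from the backup directory and prune from the catalog file.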

ROLLUP Analytical function in PostgreSQL.

Currently, there is no version of PG which supports ROLLUP. However, people look for this analytical function's features. A ROLLUP query's result can be achieved using a UNION of queries. First, let's understand what ROLLUP does. If an SQL query has col1, col2, col3, aggregate(col4), then the ROLLUP processing would be something like this:
1. Show the aggregate of col4 grouped by col1, col2, col3.
2. Then ROLLUP computes the subtotal and shows the aggregate grouped by col1, col2.
3. Then it shows the aggregate/subtotal grouped by col1.
4. And at the end, the grand total/sum.

In short, it creates progressively higher-level subtotals, moving from right to left through the list of grouping columns. Finally, it creates a grand total. In PG, this can be achieved by writing subqueries and UNIONing them. So, if the ROLLUP query is something like the one given below:

select col1, col2, col3, agg(col4) from relation group by rollup(col1, col2, col3)

then in PG the above can be writte
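The steps above can be demonstrated end to end with an in-memory SQLite database, which (like PG of that era) has no ROLLUP: a UNION ALL of progressively coarser GROUP BY queries produces the detail rows, the subtotals, and the grand total. The table and data are illustrative.

```python
import sqlite3

# Emulate GROUP BY ROLLUP(region, product) with a UNION ALL of
# progressively coarser GROUP BY queries, as described above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount INT)")
conn.executemany("INSERT INTO sales VALUES (?,?,?)",
                 [("east", "a", 10), ("east", "b", 20), ("west", "a", 5)])

rollup_sql = """
SELECT region, product, SUM(amount) FROM sales GROUP BY region, product
UNION ALL
SELECT region, NULL,    SUM(amount) FROM sales GROUP BY region
UNION ALL
SELECT NULL,   NULL,    SUM(amount) FROM sales
"""
rows = conn.execute(rollup_sql).fetchall()
for row in rows:
    print(row)  # detail rows, then per-region subtotals, then grand total
```

Each extra grouping column costs one more SELECT in the UNION, which is exactly why the built-in ROLLUP syntax is so much more convenient when it is available.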

PG_REORG Utility for VACUUM FULL online

pg_reorg is a utility made by NTT for reorganizing a table's structure. The concept is simple: if you have all the required pointers and data in the same page, then accessing them is much faster. This is what pg_reorg provides to the user. Following are some options which pg_reorg provides:

-o [--order-by] columns: This option makes pg_reorg organize the table data according to the mentioned columns. At the backend, pg_reorg creates a new table using CTAS with a SELECT query that includes an ORDER BY clause on the columns mentioned with -o.

-n [--no-order] tablename: When this option is used, pg_reorg does an online VACUUM FULL. Now, the question is how it does this. The concept is simple: create a new table using CTAS, and create a trigger on the current table to track DML. Once the new table has been created, replay the tracked DML on the new table. It works well. This option is only for tables which have a primary key.

pg_reorg by default does a CLUSTER of tables, and it follows the same concept, i.e. wit
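The CTAS-plus-swap idea behind -o/--order-by can be sketched with an in-memory SQLite database. This is only a toy illustration of the rebuild-and-rename step; the trigger-based DML tracking that pg_reorg runs during the rebuild is omitted, and table names are made up.

```python
import sqlite3

# Sketch of the --order-by rebuild: CTAS with ORDER BY lays rows out
# in the requested physical order, then the tables are swapped.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INT, payload TEXT)")
conn.executemany("INSERT INTO t VALUES (?,?)", [(3, "c"), (1, "a"), (2, "b")])

# Rebuild ordered by id, then swap the new table into place.
conn.execute("CREATE TABLE t_new AS SELECT * FROM t ORDER BY id")
conn.execute("DROP TABLE t")
conn.execute("ALTER TABLE t_new RENAME TO t")

# A plain scan now returns rows in their new physical (insertion) order.
ordered = [row[0] for row in conn.execute("SELECT id FROM t")]
print(ordered)
```

In the real utility the swap is done transactionally and the DML captured by the tracking trigger is replayed onto the new table before the switch, so readers and writers see a consistent table throughout.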