Tuesday, December 21, 2010

SQL Query to show current activities of the database server in Postgresql

I executed the following query on Postgresql 9.0.1; it lists all active sessions except the current one, ordered by query start time:

SELECT pg_stat_activity.datname, pg_stat_activity.procpid,
pg_stat_activity.usename, 
pg_stat_activity.current_query,
pg_stat_activity.waiting,
pg_stat_activity.query_start,pg_stat_activity.client_addr
FROM pg_stat_activity
WHERE ((pg_stat_activity.procpid <> pg_backend_pid())
AND (pg_stat_activity.current_query <> ''))
ORDER BY pg_stat_activity.query_start;
Thursday, December 9, 2010

Postgresql in Mission-Critical Financial System

The following presentation is based on Caixa, the third-largest bank in Brazil. I found it very interesting for planning Postgresql in a 24x7 system, and I hope readers will enjoy it too.


Query to find unused indexes in Postgresql

The following query finds unused indexes among the top 20 most-used tables in Postgresql:

SELECT
i.relname,i.indexrelname,i.idx_scan,i.idx_tup_read,i.idx_tup_fetch, c.reltuples, c.relpages
FROM
pg_stat_user_indexes as i
INNER JOIN pg_class as c
     ON i.indexrelname = c.relname
WHERE
i.relname IN
    (
    --top 20 most used tables
    SELECT c.relname as name
    FROM pg_stat_user_tables as t
    INNER JOIN pg_class as c
                ON t.relid = c.oid
    WHERE c.relkind ='r'
    ORDER BY
    COALESCE(t.idx_tup_fetch,0)+COALESCE(t.seq_tup_read,0) DESC
    LIMIT 20
    )
ORDER BY i.idx_scan ASC;
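An index that shows idx_scan = 0 here has never been used since statistics were last reset. Before dropping one, confirm it does not back a primary key or unique constraint; a hypothetical cleanup:

DROP INDEX my_unused_index;  -- hypothetical index name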
Thursday, December 2, 2010

How to get the number of rows returned by a query using MOVE in Postgresql

Create and populate data:
CREATE TABLE emp
(
empno int,
empname varchar(100),
city varchar(100)
);

ALTER TABLE emp
ADD CONSTRAINT empno_pk PRIMARY KEY (empno);

INSERT INTO emp
select q.c as empno, 'testemp_'||q.c as empname, 'testcity_'||q.c as city
FROM
(SELECT generate_series(1,100) as c) AS q
;
Create function:
CREATE FUNCTION reffunc(refcursor) RETURNS refcursor AS $$
BEGIN
    OPEN $1 FOR
    SELECT  *
    FROM emp
    ;
    RETURN $1;
END;
$$ LANGUAGE plpgsql;
Get the number of rows returned:
BEGIN;
SELECT reffunc('funccursor');
MOVE ALL IN funccursor;
COMMIT;
The command tag printed by MOVE ALL gives the row count: with the 100 rows inserted above, psql reports "MOVE 100".
Further Study:
http://www.postgresql.org/docs/current/static/sql-move.html
Wednesday, November 17, 2010

Enable SYSLOG facility for Postgresql instance

Modified the values of the following parameters in postgresql.conf:
log_destination = 'syslog'
syslog_facility = 'LOCAL0'
silent_mode = on
Added this line to /etc/syslog.conf:
LOCAL0.*              -/var/log/pgsql/pgsql.log
Ignored the PostgreSQL facility for the default log file, otherwise the queries would be logged twice:
*.info;...;local0.none        /var/log/messages

Restarted the SYSLOG service:
/etc/init.d/sysklogd restart
Now, we get the postgresql log at /var/log/pgsql/pgsql.log.

Top 20 tables according to sequential scan in Postgresql

select c.relname as name, c.reltuples as No_of_records, c.relpages as  No_of_pages,
   pg_size_pretty(pg_relation_size(c.relname)) as size,
   t.seq_scan, t.seq_tup_read, t.idx_scan, t.idx_tup_fetch, t.n_tup_ins, t.n_tup_upd, t.n_tup_del,
   COALESCE(t.idx_tup_fetch,0) + COALESCE(t.seq_tup_read,0) as total_read
from pg_stat_user_tables as t inner join pg_class as c
         on (t.relid = c.oid)
where c.relkind ='r'
order by seq_scan desc
limit 20;
Sunday, October 31, 2010

DB performance degrade and the action steps in Postgresql

Once I had to improve performance for a CRM application running on Postgresql 8.1. Customers were complaining that after 1 pm the system got slow.
At first, I ensured that no background process was running at that time. Then I studied a lot and corresponded with the pgsql mailing lists. I planned some action steps and sent a mail to top management. The mail was as follows:
The prime suggestion is to upgrade the DB, as only a year of support remains for 8.1.
As that is not possible right now, we may go for the following:

1. nohup psql crm -c "vacuum verbose analyze" > /tmp/vacuum_crm.log
- to get a vacuum log
- restart not required


2. Defragmenting the DB will increase performance: take a dump and then restore. It will reduce DB bloat; the DB size will shrink from 12 GB to 3 GB.

DB wise:

3. default_statistics_target = 200
- increasing this value improves the quality of the planner's estimates
- restart not required
- set only for the crm database
- current value 100

4. effective_cache_size = 17 GB, then 20 GB incrementally
- increasing this value encourages more index scans
- restart not required
- current value 1835010 (about 14 GB)
- statistics suggest we regularly have free memory
- this may cause paging on the server; we need to check it out


System wide:

5. log_min_error_statement = error
- to find out which statements are causing errors in the DB
- restart not required
- current value panic

6. log_min_messages = notice
- to get more information about DB activities
- restart not required
- current value error

8. checkpoint_warning = 3600
- to measure checkpoint frequency at peak time and find good values for checkpoint_segments and checkpoint_timeout
- I was advised to go for 8-12, but I first want to confirm that checkpoint_segments is a performance issue in our case
- restart not required

9. max_fsm_pages = 2546960
- to reduce table bloat
- the new value comes from the vacuum verbose output
- restart required
- every night during vacuum we get a message in the DB log asking to increase it

10. shared_buffers = 2 GB
- the main shared memory for the DB
- the docs suggest increasing it incrementally, up to 25% of total RAM
- current value 958 MB
- restart required

I have completed task #1 to find a good value for step #9.
I suggest implementing these changes incrementally so that we can see whether we are on the right track.
The steps requiring the most caution are #3, #4, #9 and #10. We may go for step #9 first.
Steps #5, #6 and #7 only affect DB logging, so we should apply them as soon as possible.
Query tuning is an ongoing process and will be continued.

So far, I have modified two values, random_page_cost and work_mem, and I got benefits.
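These two can be changed without a restart; a minimal sketch with hypothetical values (tune them against your own workload and RAM):

-- check the current values from psql
SHOW random_page_cost;
SHOW work_mem;
-- persist new values for the crm database (hypothetical numbers)
ALTER DATABASE crm SET random_page_cost = 2.5;
ALTER DATABASE crm SET work_mem = 16384;  -- in kB on 8.1, i.e. 16 MB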

I think this mail may help people in finding the root cause of a performance decrease.
Thursday, October 21, 2010

List all tables from a specific schema in Postgresql


List of schemas in my database:
postgres=# \dn
        List of schemas
        Name        |   Owner
--------------------+------------
 information_schema | postgres84
 pg_catalog         | postgres84
 pg_toast           | postgres84
 pg_toast_temp_1    | postgres84
 public             | postgres84
 statsinfo          | postgres84
 statsrepo          | postgres84
I need to get the list of tables from  the schema "statsrepo".

postgres=# set search_path to statsrepo;
postgres=# \d
              List of relations
  Schema   |     Name      | Type  |   Owner
-----------+---------------+-------+------------
 statsrepo | activity      | table | postgres84
 statsrepo | autoanalyze   | table | postgres84
 ...
 statsrepo | tablespace    | table | postgres84
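Alternatively, the same list can be pulled without changing search_path; a small sketch using the built-in pg_tables view:

SELECT tablename
FROM pg_tables
WHERE schemaname = 'statsrepo'
ORDER BY tablename;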
Monday, October 18, 2010

Install dblink from contrib in Postgres 8.4

Postgresql 8.4.4 was installed from source into /var/lib/pgsql84, with the source tree at /root/postgresql-8.4.4.
To install the dblink module from contrib, I followed these steps:


cd /root/postgresql-8.4.4/contrib/dblink
make install
------
/bin/sh ../../config/mkinstalldirs '/var/lib/pgsql84/lib'
mkdir -p -- /var/lib/pgsql84/share/contrib
/bin/sh ../../config/install-sh -c -m 755  dblink.so '/var/lib/pgsql84/lib/dblink.so'
/bin/sh ../../config/install-sh -c -m 644 ./uninstall_dblink.sql '/var/lib/pgsql84/share/contrib'
/bin/sh ../../config/install-sh -c -m 644 dblink.sql '/var/lib/pgsql84/share/contrib'
------

ls
dblink.c  dblink.h  dblink.o  dblink.so  dblink.sql  dblink.sql.in  expected  Makefile  sql  uninstall_dblink.sql

Now dblink.sql was executed in order to create the required functions in the template1 database.

/var/lib/pgsql84/bin/psql -U postgres84 template1 -f dblink.sql
/var/lib/pgsql84/bin/psql -U postgres84 template1
select dblink_connect('statcollector51','hostaddr=10.0.0.51 port=5432 dbname=testdb user=testuser password=testpwd');
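Once connected, the named connection can run remote statements through dblink; a minimal sketch (the remote query here is only an illustration):

select * from dblink('statcollector51', 'SELECT 1') as t(x int);
select dblink_disconnect('statcollector51');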

Install Postgresql 8.3 using YUM in CentOS 5

In order to install Postgresql 8.3 using YUM in CentOS 5, the pgdg repo configuration file can be used.
su - root
cd /etc/yum.repos.d
ls
C5base.repo   C5updates.repo   CentOS-Base.repo   CentOS-Media.repo

Download pgdg-centos-8.3-7.noarch.rpm from
yum.pgsqlrpms.org/reporpms/repoview/pgdg-centos.html.
rpm -ivh pgdg-centos-8.3-7.noarch.rpm
This put pgdg-83-centos.repo in my /etc/yum.repos.d; now I had to make sure the yum installer used this repo configuration file for postgresql.

For this, I added
exclude=postgresql*
at the bottom of each section of the other repos in order to exclude the distro packages.

vi C5base.repo
-------------------
[C5base]
name=C5base
baseurl=http://10.0.0.7/cobbler/repo_mirror/C5base
enabled=1
priority=1
gpgcheck=0
exclude=postgresql*
-------------------

vi CentOS-Base.repo
--------------------
[base]
name=CentOS-5 - Base
enabled=1
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=i386&repo=os
#baseurl=http://mirror.centos.org/centos/5/os/i386/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
priority=1
exclude=postgresql*

#released updates
[updates]
name=CentOS-5 - Updates
enabled=1
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=i386&repo=updates
#baseurl=http://mirror.centos.org/centos/5/updates/i386/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
priority=1
exclude=postgresql*
-------------------------
I checked that the pgdg repo configuration file was being used.
yum list postgresql
...
Loading mirror speeds from cached hostfile
 * pgdg83: yum.pgrpms.org
...
postgresql.i386     8.3.12-1PGDG.rhel5
...
Everything was correct and I started the Postgresql installation from yum.

yum install postgresql
yum install postgresql-server

adduser postgres
groupadd postgres
chown -R postgres:postgres /var/lib/pgsql
su - postgres
vi .bash_profile
----------------
...
export PGDATA=/var/lib/pgsql/data
...
----------------
Now initdb is used to create the postgresql cluster, and the other configuration steps were followed according to the postgresql manual.

For reference, read the following:
http://yum.pgrpms.org/howtoyum.php
http://www.postgresonline.com/journal/index.php?/archives/45-An-Almost-Idiots-Guide-to-PostgreSQL-YUM.html
Sunday, October 17, 2010

Migrate a database from Postgresql 8.1 to 8.4

I needed to migrate an 11 GB database from Postgresql 8.1 to Postgresql 8.4. The database was for a CRM application.
At the source server:
Take backup of the Database:
pg_dump <db_name> -Fc -v > /tmp/db_backup.sqlc
Copy this backup to the new server:
scp /tmp/db_backup.sqlc postgres@x.x.x.x:/tmp/
At the destination server:
Connected to the new server using an ssh client.

Installed Postgresql 8.4 using the following rpms on my Centos 5:
- postgresql-8.4.4-1PGDG.rhel5.x86_64.rpm
- postgresql-libs-8.4.4-1PGDG.rhel5.x86_64.rpm
- postgresql-server-8.4.4-1PGDG.rhel5.x86_64.rpm

Check that the Postgresql server is running:
$su - postgres
$pg_ctl status
Set maintenance_work_mem to a high value to speed up the DB restore operation. I set it to 1 GB.
-----------------------------
vi $PGDATA/postgresql.conf
...
maintenance_work_mem = 1024MB
...
:wq
-----------------------------
 At psql console:
create user <my_user> with password 'password' superuser;
create database <db_name> owner <my_user>;
\q
At shell console:
pg_restore -d <db_name> -v /tmp/db_backup.sqlc
When the restore operation completed, I found some FATAL errors in my application due to query failures; these occurred because Postgresql 8.4 does not have some of the implicit casts that 8.1 had.
To solve it, I executed the following commands at the psql console:
---------------------------------------------------------------
CREATE FUNCTION pg_catalog.text(integer) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int4out($1));';
CREATE CAST (integer AS text) WITH FUNCTION pg_catalog.text(integer) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(smallint) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int2out($1));';
CREATE CAST (smallint AS text) WITH FUNCTION pg_catalog.text(smallint) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(oid) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(oidout($1));';
CREATE CAST (oid AS text) WITH FUNCTION pg_catalog.text(oid) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(date) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(date_out($1));';
CREATE CAST (date AS text) WITH FUNCTION pg_catalog.text(date) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(double precision) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(float8out($1));';
CREATE CAST (double precision AS text) WITH FUNCTION pg_catalog.text(double precision) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(real) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(float4out($1));';
CREATE CAST (real AS text) WITH FUNCTION pg_catalog.text(real) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(time with time zone) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(timetz_out($1));';
CREATE CAST (time with time zone AS text) WITH FUNCTION pg_catalog.text(time with time zone) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(time without time zone) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(time_out($1));';
CREATE CAST (time without time zone AS text) WITH FUNCTION pg_catalog.text(time without time zone) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(timestamp with time zone) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(timestamptz_out($1));';
CREATE CAST (timestamp with time zone AS text) WITH FUNCTION pg_catalog.text(timestamp with time zone) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(interval) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(interval_out($1));';
CREATE CAST (interval AS text) WITH FUNCTION pg_catalog.text(interval) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(bigint) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(int8out($1));';
CREATE CAST (bigint AS text) WITH FUNCTION pg_catalog.text(bigint) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(numeric) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(numeric_out($1));';
CREATE CAST (numeric AS text) WITH FUNCTION pg_catalog.text(numeric) AS IMPLICIT;
CREATE FUNCTION pg_catalog.text(timestamp without time zone) RETURNS text STRICT IMMUTABLE LANGUAGE SQL AS 'SELECT textin(timestamp_out($1));';
CREATE CAST (timestamp without time zone AS text) WITH FUNCTION pg_catalog.text(timestamp without time zone) AS IMPLICIT;
---------------------------------------------------------------
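After creating these casts, statements that relied on 8.1's implicit integer-to-text conversion resolve again. A hypothetical example (emp stands in for any application table with an integer column):

-- fails on stock 8.4 with: function lower(integer) does not exist
-- works once the implicit integer-to-text cast above is in place
SELECT lower(empno) FROM emp;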

Modify the value of DB parameters in Postgresql

Often for different purposes we need to modify the value of database parameters.
The following commands can be used from the psql console to change the value of a parameter.
$psql -d bench_replication -U newuser
psql=#  show statement_timeout;
0ms
psql=# set statement_timeout=10;
psql=#  show statement_timeout;
10ms
Sometimes the DB server must be restarted for a parameter change to take effect, and sometimes a SIGHUP signal is enough. Whether a restart is needed can be identified from the context of the parameter.

Postgresql DB parameters can be categorized into 6 types on the basis of context, and these are:
• user
• superuser
• sighup
• backend
• postmaster
• internal
The following query gives us the context of a parameter:
SELECT context
FROM pg_settings
WHERE name = '<parameter_name>';
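For example, checking a few common parameters at once:

SELECT name, context
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'log_min_messages');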
The following table shows when we need to restart the DB server for a parameter change to take effect:

Context      | How the change takes effect
-------------+--------------------------------------------------
internal     | cannot be changed (fixed at compile/initdb time)
postmaster   | requires a DB server restart
sighup       | a reload (SIGHUP / pg_ctl reload) is enough
backend      | reload; applies only to new connections
superuser    | SET by a superuser, or a reload for the default
user         | SET by any user, or a reload for the default

Saturday, October 16, 2010

Modify all primary key ID columns' data type in Postgresql 8.1

I had to change the data type of all primary key ID columns from integer to bigint in Postgresql 8.1. I created two functions for this purpose.

Function unnest:
This is used for expanding an array into a set of rows (Postgresql 8.1 has no built-in unnest).
I got it from wiki.postgresql.org/wiki/Array_Unnest
CREATE OR REPLACE FUNCTION unnest(anyarray)
  RETURNS SETOF anyelement AS
$BODY$
SELECT $1[i] FROM
    generate_series(array_lower($1,1),
                    array_upper($1,1)) i;
$BODY$
  LANGUAGE sql IMMUTABLE;
Function modify_id_type_f:
This function modifies the data type of the primary key columns.
CREATE OR REPLACE FUNCTION modify_id_type_f() RETURNS void AS $$
DECLARE
v_rec RECORD;

BEGIN

FOR v_rec IN
(
select tab_cols.table_name, tab_cols.col_name, tab_cols.col_type ,
'alter table '|| tab_cols.table_name ||' alter '|| tab_cols.col_name  ||' type '|| ' bigint ' as str
from
(
select relid as table_id,relname as table_name, a.attname as col_name,  t.typname as col_type, a.attnum, a.atttypid
from pg_stat_user_tables as c
inner join   pg_attribute as a  on c.relid = a.attrelid  and a.attnum > 0
inner join pg_type as t on a.atttypid = t.oid
) as tab_cols
inner join
(
select relid as table_id, relname, si.indexrelname, unnest(i.indkey) as ind_cols
from pg_stat_user_indexes as si
inner join pg_index as i on si.indexrelid  = i.indexrelid
where i.indisprimary = true
) as ind_cols
on tab_cols.table_id =  ind_cols.table_id and   tab_cols.attnum = ind_cols.ind_cols
where tab_cols.col_type = 'int4'
)
LOOP

--RAISE NOTICE 'Dname=%',v_rec.str;
EXECUTE v_rec.str;
END LOOP;
END;
$$ LANGUAGE plpgsql;
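To apply the change, call the function. Note that each ALTER TABLE rewrites its table and takes an exclusive lock, so run it in a maintenance window:

SELECT modify_id_type_f();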

Query to get database size and location in Postgresql

Postgresql 8.1

select d.oid, datname,datdba, pg_size_pretty(pg_database_size(datname)) as size, t.*
from pg_database as d
inner join pg_tablespace as t on  ( d.dattablespace  = t.oid)
order by pg_database_size(datname) desc;

Index tune using job scheduler in Oracle 10g

Indexes are a key element of any database tuning. In Oracle 10g, REBUILD ONLINE is one of the key features that make it possible to reorganize indexes during production time.
An index should be considered for rebuilding under any of the following conditions:
• The percentage of deleted rows exceeds 30% of the total, i.e. DEL_LF_ROWS / LF_ROWS > 0.3.
• The HEIGHT is greater than 4.
• The number of rows in the index (LF_ROWS) is significantly smaller than LF_BLKS; this can indicate a large number of deletes, meaning the index should be rebuilt.
• BLKS_GETS_PER_ACCESS is greater than 4 (the threshold used in the procedure below).

In our system we have 180,000 indexes, so rebuilding them manually is not an easy task. So we considered the job scheduler for index rebuilding.
Steps:
1. Create a table TEMP_INDEX_STAT, which is similar to INDEX_STATS
2. Create a procedure GEN_INDEX_STAT. It will analyze an index and store that data in TEMP_INDEX_STAT
3. Create a job class LOW_PRIORITY_CLASS with LOW_GROUP resource consumer group, so that the job might not hamper production transaction.
4. Create a job GEN_INDEX_STAT_JOB
5. Create a table RUNNING_CMDS
6. Create a procedure REBUILD_INDEX, which will REBUILD the indexes
7. Create another job REBUILD_INDEX_JOB. This job will call the above procedure to rebuild the indexes.

Please make sure the load on the database is not at its peak; otherwise you may get resource-timeout errors during the job execution period and the job may fail.

Scripts:

1. A temporary table TEMP_INDEX_STAT is created to store the index statistics.

CREATE TABLE TEMP_INDEX_STAT
AS
SELECT *
FROM INDEX_STATS
WHERE 1=2;

2. Create procedure GEN_INDEX_STAT to generate and store index statistics

CREATE OR REPLACE PROCEDURE GEN_INDEX_STAT
(PRM_SCHEMA VARCHAR2)
IS
CURSOR INDEX_CUR IS
SELECT INDEX_NAME
FROM DBA_INDEXES d
WHERE OWNER = prm_schema
AND NOT EXISTS
(
SELECT 1
FROM temp_index_stat b
WHERE d.index_Name = b.NAME
)
AND INDEX_TYPE = 'NORMAL';
v_str VARCHAR2(500);
BEGIN
FOR INDEX_REC IN INDEX_CUR LOOP
v_str := 'ANALYZE INDEX
'||PRM_SCHEMA||'.'||INDEX_REC.INDEX_NAME||'
VALIDATE STRUCTURE ';
EXECUTE IMMEDIATE v_str;
v_str := 'insert into TEMP_INDEX_STAT select * from index_stats';
EXECUTE IMMEDIATE v_str;
COMMIT;
END LOOP; --
END GEN_INDEX_STAT;

3. Create a job class

begin
dbms_scheduler.create_job_class(
job_class_name => 'LOW_PRIORITY_CLASS',
resource_consumer_group => 'LOW_GROUP',
logging_level => DBMS_SCHEDULER.LOGGING_FULL,
log_history => 60,
comments => 'LOW PRIORITY JOB CLASS');
end;

4. Create a job

You may set START_DATE if you want to schedule the following job.

BEGIN
dbms_scheduler.create_job
(
job_name=> 'GEN_INDEX_STAT_JOB',
job_type=> 'PLSQL_BLOCK',
job_action=>'begin GEN_INDEX_STAT(''T24''); end;',
START_DATE=>NULL,
REPEAT_INTERVAL=>NULL,
ENABLED=>TRUE,
AUTO_DROP=>FALSE,
COMMENTS=>'Generate Index Stat',
job_class=>'LOW_PRIORITY_CLASS'
);
END;

Up to this point, we have collected statistics for all indexes in our schema.
Now we need to REBUILD the indexes according to the conditions described above. The candidates can be previewed first, as sketched below.
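A quick look at which indexes currently meet the rebuild criteria (a sketch against the statistics table collected above):

SELECT name, height, lf_rows, del_lf_rows
FROM temp_index_stat
WHERE height >= 4
   OR (lf_rows > 0 AND del_lf_rows / lf_rows > 0.3);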

5. Create a table to log the operations

CREATE TABLE RUNNING_CMDS
(
CMD VARCHAR2(200)
);
This table stores the commands so that we can easily identify which index is being rebuilt while the following job runs.

6. Procedure to rebuild the indexes

CREATE OR REPLACE PROCEDURE REBUILD_INDEX(PRM_SCHEMA VARCHAR2)
IS
CURSOR CUR IS
SELECT NAME
FROM TEMP_INDEX_STAT a
WHERE (HEIGHT >= 4
OR
(del_lf_rows/lf_rows > 0.3 AND lf_rows > 0)
or (blks_gets_per_access > 4)
)
AND EXISTS
(
SELECT 1
FROM dba_indexes d
WHERE a.NAME = D.index_name
AND d.index_type <> 'LOB'
);
v_str varchar2(200);

BEGIN
execute immediate 'truncate table RUNNING_CMDS';
FOR REC IN CUR LOOP
v_str := 'ALTER INDEX '||PRM_SCHEMA||'.'||REC.NAME||' REBUILD ONLINE';
INSERT INTO RUNNING_CMDS
VALUES(V_STR);
COMMIT;
EXECUTE IMMEDIATE v_str;
END LOOP;
END REBUILD_INDEX;

7. Create a job to execute the index rebuilding process

BEGIN
dbms_scheduler.create_job
(
job_name=> 'REBUILD_INDEX_JOB',
job_type=> 'PLSQL_BLOCK',
job_action=>'begin REBUILD_INDEX(''T24''); end;',
START_DATE=>NULL,
REPEAT_INTERVAL=>NULL,
ENABLED=>TRUE,
AUTO_DROP=>FALSE,
COMMENTS=>'Rebuild Index',
job_class=>'LOW_PRIORITY_CLASS'
);
END;

You may set START_DATE according to your requirements.
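The outcome of both jobs can then be checked from the scheduler views; a sketch (assumes SELECT access to the DBA views):

SELECT job_name, status, actual_start_date, run_duration
FROM dba_scheduler_job_run_details
WHERE job_name IN ('GEN_INDEX_STAT_JOB', 'REBUILD_INDEX_JOB')
ORDER BY actual_start_date DESC;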
Friday, October 15, 2010

Delete all records of all tables of a database schema in Oracle

Once I found a question in a forum where a user of Oracle needed to delete all records from all tables in a schema, and he had only DML privileges on that schema. He was not even able to execute expdp on the database. I wrote the following two procedures for him, and he was quite happy.

CREATE OR REPLACE PROCEDURE p_del_tab
 ( prm_table VARCHAR2 )

IS
    /*
    This procedure deletes all records from a table; if ORA-02292 occurs, it recursively calls itself for the referencing child tables first, then retries.
    */
    v_str VARCHAR2(500);

BEGIN

    EXECUTE IMMEDIATE 'DELETE FROM '||prm_table;

EXCEPTION

    WHEN OTHERS THEN

        --ORA-02292: integrity constraint (SCOTT.FK_DEPTNO) violated - child record found
        v_str := SQLERRM;
        v_str := SUBSTR(v_str,INSTR(v_str,'.')+1, INSTR(v_str,')')  -INSTR(v_str,'.')-1  );

    DECLARE

        CURSOR cur IS
        SELECT table_name
        FROM user_constraints
        WHERE constraint_name = v_str;

    BEGIN

        FOR rec IN cur LOOP
            P_DEL_tab(rec.table_name);
        END LOOP;
    END;

    p_del_tab(prm_table);

END p_del_tab;

CREATE OR REPLACE PROCEDURE p_del_all_data
IS
/* This procedure calls p_del_tab */

    CURSOR cur IS
    SELECT table_name
    FROM user_tables;

BEGIN

   FOR rec IN cur LOOP
       p_del_tab(rec.table_name);
   END LOOP;

EXCEPTION

    WHEN OTHERS THEN
       DBMS_OUTPUT.PUT_LINE('p_del_all_data: '||SQLERRM);

END p_del_all_data;
Now execute p_del_all_data:
BEGIN

    p_del_all_data;

END;

Remember, you will be in trouble if you have a circular reference, e.g. DEPT has a column referencing COUNTRY and COUNTRY has a column referencing DEPT; in that case you will end up in an infinite recursive call loop.

Mysql-proxy installation and filter “SHOW TABLES”

What is Mysql-proxy?

MySQL Proxy is a simple program that sits between the client and MySQL server(s) and can monitor, analyze, or transform their communication. Its flexibility allows for unlimited uses; common ones include:
• load balancing
• failover
• query analysis
• query filtering and modification
• ... and many more

Reason for mysql-proxy installation   

I have been maintaining a 150 GB Mysql 5.1 Community Edition database with 1,300,000 tables, all of them MyISAM. So whenever a "SHOW TABLES" command is executed in the database, the server grinds to a halt. As Mysql 5.1 has no privilege to restrict SHOW TABLES, I used mysql-proxy between the web server and the DB server.

Download required software

I downloaded the mysql-proxy binary distribution for "Linux - Generic" from dev.mysql.com/downloads/mysql-proxy/, as my server's operating system was Centos 5.
Lua is a scripting language which is required for mysql-proxy scripting. I downloaded the following Lua rpms from rpm.pbone.net.

Step 1: Install LUA

rpm -ivh lua50-5.0.3-1.i386.rpm lua50-devel-5.0.3-1.i386.rpm lua50-libs-5.0.3-1.i386.rpm

Step 2: Extract mysql-proxy

cd /root/
tar -zxvf mysql-proxy-0.8.1-linux-glibc2.3-x86-32bit.tar.gz
cp -R mysql-proxy-0.8.1-linux-glibc2.3-x86-32bit /var/lib/
cd /var/lib/
mv mysql-proxy-0.8.1-linux-glibc2.3-x86-32bit mysql-proxy

Step 3: Edit $PATH variable

vi .bash_profile
----------
...
export PATH=$PATH:/var/lib/mysql-proxy/bin
export MYSQLPROXYHOME=/var/lib/mysql-proxy
----------

Step 4: Create mysql-proxy configuration file

cd $MYSQLPROXYHOME


vi mysql-proxy.cnf
-------------------
[mysql-proxy]
admin-address=10.0.0.44:4401
proxy-address=10.0.0.44:4402
proxy-backend-addresses=10.0.0.44:3306
admin-username=admin
admin-password=std123
admin-lua-script=/var/lib/mysql-proxy/lib/mysql-proxy/lua/admin.lua
proxy-lua-script=/var/lib/mysql-proxy/lib/mysql-proxy/lua/block_show_tables.lua
pid-file=/var/lib/mysql-proxy/mysqlproxy.pid
log-file=/var/log/mysql-proxy.log
log-level=message
-------------------

Here all the addresses contain the same IP address, as I installed mysql-proxy and the MySQL server on the same host.

Step 5: Add scripts to block SHOW TABLES


cd /var/lib/mysql-proxy/lib/mysql-proxy/lua/

vi block_show_tables.lua

------------------------------------------
...
local tokenizer = require("proxy.tokenizer")

---
-- make_regexp_from_command()
--
-- creates a regular expression for fast scanning of the command
--
-- @param cmd the command to be converted to regexp
--
function make_regexp_from_command(cmd)
    local regexp = '^%s*'
    for ch in cmd:gmatch('(.)') do
        regexp = regexp .. '[' .. ch:upper() .. ch:lower() .. ']'
    end
    return regexp
end

local SHOW_REGEXP    = make_regexp_from_command('show')
local EXECUTE_REGEXP = make_regexp_from_command('execute')
local SELECT_REGEXP  = make_regexp_from_command('select')

queries_to_filter = {
    { prefix = SHOW_REGEXP,    keywords = { 'SHOW', 'TABLE', 'STATUS' } },
    { prefix = SHOW_REGEXP,    keywords = { 'SHOW', 'TABLES' } },
    { prefix = SELECT_REGEXP,  keywords = { 'SELECT', 'FROM', 'INFORMATION_SCHEMA', 'TABLES' } },
    { prefix = SELECT_REGEXP,  keywords = { 'SELECT', 'FROM', 'INFORMATION_SCHEMA', 'COLUMNS' } },
    { prefix = SELECT_REGEXP,  keywords = { 'SELECT', 'FROM', 'TABLES' } },
    { prefix = SELECT_REGEXP,  keywords = { 'SELECT', 'FROM', 'COLUMNS' } },
    { prefix = EXECUTE_REGEXP, keywords = { 'EXECUTE' } },
}

function error_result(msg)
    proxy.response = {
        type     = proxy.MYSQLD_PACKET_ERR,
        errmsg   = msg,
        errcode  = 7777,
        sqlstate = 'X7777',
    }
    return proxy.PROXY_SEND_RESULT
end

function read_query(packet)
    if (packet:byte() ~= proxy.COM_QUERY) then
        return
    end
    local query = packet:sub(2)
    for i, query_to_filter in pairs(queries_to_filter) do
        if query:match(query_to_filter.prefix) then
            local full_tokens = tokenizer.tokenize(query)
            local tokens      = tokenizer.bare_tokens(full_tokens, true)
            local found       = 0
            local requested   = #query_to_filter.keywords
            for j, keyword in pairs(query_to_filter.keywords) do
                for k, token in pairs(tokens) do
                    if token:upper() == keyword then
                        found = found + 1
                        break
                    end
                end
            end
            if found == requested then -- to be filtered off
                local log_file = os.getenv("MYSQLPROXYHOME") .. '/mysql-proxy.log'
                print(log_file)
                local str = os.date('%Y-%m-%d %H:%M:%S') .. '  : '
                    .. proxy.connection.client.src.name .. ' executed '
                    .. table.concat(query_to_filter.keywords, ' ')
                local file = io.open(log_file, 'a+')
                file:write(string.format("%s \n", str))
                file:flush()
                return error_result('command <' .. table.concat(query_to_filter.keywords, ' ') .. '> is not allowed')
            end
        end
    end
end
------------------------------------------

The above script was downloaded from http://forge.mysql.com/wiki/Lua_Scripts_For_MySQL_Proxy_Examples


Step 6: Set file permission

chown -R mysql:mysql $MYSQLPROXYHOME
chmod 660 $MYSQLPROXYHOME/mysql-proxy.cnf
cd /var/lib/mysql-proxy/lib/mysql-proxy/lua/
chmod 644 block_show_tables.lua
ls -lh
-rw-r--r-- 1 mysql mysql 2741 Sep 3 16:52 admin.lua
-rw-r--r-- 1 mysql mysql 2979 Oct 12 17:00 block_show_tables.lua

Step 7: Check that mysql-proxy starts

/var/lib/mysql-proxy/bin/mysql-proxy --defaults-file=/var/lib/mysql-proxy/mysql-proxy.cnf --daemon
pidof -s "$MYSQLPROXYHOME/libexec/mysql-proxy"
6411
cat $MYSQLPROXYHOME/mysqlproxy.pid
6411

Step 8: Stop mysql-proxy

kill -9 6411

Step 9: Set mysql-proxy service script at /etc/init.d/

vi /etc/init.d/mysqlproxy
--------------------------
#!/bin/sh

# Source function library.
. /etc/rc.d/init.d/functions

# Find the name of the script
NAME=`basename $0`
if [ ${NAME:0:1} = "S" -o ${NAME:0:1} = "K" ]
then
    NAME=${NAME:3}
fi

export MYSQLPROXYHOME=/var/lib/mysql-proxy
export MYSQLPROXYENGINE=/var/lib/mysql-proxy/bin
script_result=0
STARTLOG=$MYSQLPROXYHOME/startup_failure.log
i=0

message_display()
{
    ret=$?
    if [ $ret -eq 0 ]
    then
        #success "$DISPLAY"
        echo_success
    else
        #failure "$DISPLAY"
        echo_failure
        script_result=1
    fi
    echo
}

start()
{
    DISPLAY=$"Starting ${NAME} service: "
    echo -n $DISPLAY
    pid=`pidof -s "$MYSQLPROXYHOME/libexec/mysql-proxy"`
    if [ $pid ] && [ -f "$MYSQLPROXYHOME/mysqlproxy.pid" ]
    then
        echo "ERROR: mysql-proxy already running at pid $pid " > $STARTLOG
        sleep 1
        failure "$DISPLAY"
        echo
        script_result=1
        exit 1
    fi
    su -l mysql -c "$MYSQLPROXYENGINE/mysql-proxy --defaults-file=$MYSQLPROXYHOME/mysql-proxy.cnf --daemon"
    sleep 2
    pid=`pidof -s "$MYSQLPROXYHOME/libexec/mysql-proxy"`
    if [ $pid ] && [ -f "$MYSQLPROXYHOME/mysqlproxy.pid" ]
    then
        success "$DISPLAY"
        touch /var/lock/subsys/${NAME}
        head -n 1 "$MYSQLPROXYHOME/mysqlproxy.pid" > "/var/run/mysqlproxy.pid"
        echo
        script_result=0
    else
        failure "$DISPLAY"
        echo
        script_result=1
    fi
    #message_display;
} #end of start

stop()
{
    DISPLAY=$"Stopping ${NAME} service: "
    echo -n $DISPLAY
    sleep 1
    pid=`pidof -s "$MYSQLPROXYHOME/libexec/mysql-proxy"`
    if [ $pid ] && [ -f "$MYSQLPROXYHOME/mysqlproxy.pid" ]
    then
        kill -9 "$pid"
        rm -f $MYSQLPROXYHOME/mysqlproxy.pid
        rm -f /var/run/mysqlproxy.pid
        rm -f "/var/lock/subsys/${NAME}"
        success "$DISPLAY"
        echo
    else
        failure "$DISPLAY"
        echo
        script_result=1
    fi
    #message_display;
} # end of stop

restart()
{
    stop;
    start;
} #end of restart

case $1 in
    start) start;;
    stop) stop;;
    restart) restart;;
    *)
        echo $"Usage: $0 {start|stop|restart}"
        exit 1
esac
exit $script_result
----------------------------------------------------------------

Step 10: Start and Stop mysql-proxy with the service script
/etc/init.d/mysqlproxy start
/etc/init.d/mysqlproxy stop
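To verify the filter, connect a mysql client to the proxy port (10.0.0.44:4402 here) instead of the backend and try a blocked statement; a hypothetical session:

SHOW TABLES;
-- rejected by the proxy:
-- ERROR 7777 (X7777): command <SHOW TABLES> is not allowed
SELECT 1;
-- ordinary statements still pass through to the backend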

Filter anything that goes to the DB server and enjoy!

References

http://www.chriscalender.com/?tag=proxy-functions
http://lua-users.org/wiki/OsLibraryTutorial
http://dev.mysql.com/tech-resources/articles/proxy-gettingstarted.html


