Wednesday, March 13, 2013

Insert records instead of deleting them to reclaim space from a large table

A few days back, I got a task to reclaim space on a database server by deleting a good number of records from a table. The server was running PostgreSQL 9.1.
Initially, the developers planned to delete the rows and execute a vacuum full on the database.

delete from usage where cacheid in (
select id from cache_tab
where distid in ('1','2','3')
AND createddatetime < '2011-05-01');

delete from cache_tab where distid in ('1','2','3') AND createddatetime < '2011-05-01';

Vacuum Full Analyze;

The table had  612613 records. The table description was -

search_engine2=# \d cache_tab
                                           Table "public.cache_tab"
         Column         |            Type             |                          Modifiers                          
------------------------+-----------------------------+--------------------------------------------------------------
 id                     | integer                     | not null default nextval('cache_tab_id_seq'::regclass)
 cachekey               | character varying(60)       | not null
 userid                 | bigint                      |
 col1                   | character varying(255)      |
 col2                   | bigint                      | not null
 query                  | character varying(5000)     | not null
 resultxml              | text                        | not null
 col4                   | character varying(50)       | not null
 createddatetime        | timestamp without time zone | not null default now()
 col6                   | character varying(255)      | not null
 distid                 | character varying(1000)     |
 col7                   | character varying(1000)     |
 col8                   | character varying(1000)     |
 col9                   | character varying(10)       |
 col10                  | timestamp without time zone | default now()
 col11                  | integer                     | not null default 0
 col12                  | bigint                      |
 col13                  | character varying(15)       |
 col14                  | timestamp without time zone |
 col15                  | boolean                     | default false
 col16                  | character varying(1000)     |
 col17                  | boolean                     | default false
Indexes:
    "cache_tab_pkey" PRIMARY KEY, btree (id)
    "cache_tab_ccol1_pkey" UNIQUE CONSTRAINT, btree (col1)
    "cache_tab_col2_index" btree (col2)
    "cache_tab_col3_index" btree (col3)
    "cache_tab_col4_index" btree (col4)
    "cache_tab_col5_index" btree (col5)
    "cache_tab_query_index" btree (query)
    "cache_tab_distid_index" btree (distid)

Referenced by:
    TABLE "usage" CONSTRAINT "usage_cacheid_fkey" FOREIGN KEY (cacheid) REFERENCES cache_tab(id)

The table size was 88 GB, while the whole database was 90 GB. There was another table, usage, which was small, but the tricky part was that it had a foreign key constraint referencing cache_tab.

 \d usage
                                          Table "public.usage"
        Column        |            Type             |                         Modifiers                        
----------------------+-----------------------------+-----------------------------------------------------------
 id                   | integer                     | not null default nextval('usage_id_seq'::regclass)
 hitid                | bigint                      | not null
 cacheid              | bigint                      | not null
 selectionref         | character varying(1000)     |
 createddatetime      | timestamp without time zone | not null default now()
 populateddatetime    | timestamp without time zone |
 populationstatuscode | character varying(20)       | default 'PENDING'::character varying
 lastmodifieddatetime | timestamp without time zone | default now()
Indexes:
    "usage_pkey" PRIMARY KEY, btree (id)
    "usage_hitid_index" btree (hitid)
    "usage_populationstatuscode_index" btree (populationstatuscode)
    "usage_cacheid_index" btree (cacheid)
Foreign-key constraints:
    "usage_hitid_fkey" FOREIGN KEY (hitid) REFERENCES sch_hit(id)
    "usage_cacheid_fkey" FOREIGN KEY (cacheid) REFERENCES cache_tab(id)

Since VACUUM FULL would lock the table and a plain DELETE alone would not reclaim space, I changed the plan.

Instead of deleting records, I planned to insert the active records into a new table. My plan was -

1. Make a create table script for cache_tab using pg_dump
2. Disable the foreign key on usage
3. Rename the existing cache_tab to cache_tab_old
4. Create cache_tab from the script
5. Identify the indexes on cache_tab_old and drop them
6. Create the missing indexes on the new cache_tab
7. Insert into cache_tab from cache_tab_old according to the filter, in small chunks
8. Delete the data from usage that is no longer needed
9. Take a dump backup of cache_tab_old and drop the table to reclaim space
10. Enable the foreign key again

Step 1: Make a create table script for cache_tab using pg_dump

 

pg_dump search_engine2 -t cache_tab -s > cache_tab.sql.schema

 

Step 2: Disable Foreign Key at usage


ALTER TABLE usage  drop constraint usage_cacheid_fkey;

Step 3: Rename existing cache_tab to cache_tab_old

ALTER TABLE cache_tab rename to cache_tab_old;

 

Step 4. Create cache_tab from the script


psql search_engine2 < cache_tab.sql.schema

This created the table but without its indexes: the CREATE INDEX statements in the script failed because indexes with those names already existed, since renaming a table does not rename its indexes and cache_tab_old still owned them. I was aware of these errors, and I created the table at this point anyway to keep the downtime as short as possible. From this moment the application was ready to go again, and as the new table was still very small, running without indexes would not hurt performance.

Step 5: Identify indexes from cache_tab_old and drop those indexes


Created drop index script -
SELECT 'DROP INDEX '|| c2.relname || ';' as index_name ,pg_catalog.pg_get_indexdef(i.indexrelid, 0, true) as cmd
FROM pg_catalog.pg_class c, pg_catalog.pg_class c2, pg_catalog.pg_index i
  LEFT JOIN pg_catalog.pg_constraint con ON (conrelid = i.indrelid AND conindid = i.indexrelid AND contype IN ('p','u','x'))
WHERE c.relname = 'cache_tab_old' AND c.oid = i.indrelid AND i.indexrelid = c2.oid
ORDER BY i.indisprimary DESC, i.indisunique DESC, c2.relname;

Executed the generated DROP INDEX commands.
Kept the cmd column output (the CREATE INDEX definitions) for the next step.

Step 6. Create missing indexes


Replaced cache_tab_old with cache_tab in the captured CREATE INDEX commands and executed them to build the indexes on the new table.
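For example, one of the captured definitions looked roughly like this (a sketch based on the index list in \d cache_tab above, not the saved output):

-- definition as returned by pg_get_indexdef() against the renamed table:
--   CREATE INDEX cache_tab_distid_index ON cache_tab_old USING btree (distid)
-- after replacing cache_tab_old with cache_tab:
CREATE INDEX cache_tab_distid_index ON cache_tab USING btree (distid);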

Step 7. Insert into cache_tab from cache_tab_old according to filter in small chunk


At this point, I started transferring data from cache_tab_old to cache_tab. I modified the WHERE condition given by the developers and also wrote a function to do the copy in small chunks (a sketch of such a function follows the list of time frames below).
insert into cache_tab
select * from cache_tab_old
where (createddatetime >= :START_TIME and  createddatetime < :END_TIME )
and distid not in ('1','2','3');

I used two-month time frames like -
2011-05-01  - 2011-07-01
2011-07-01  - 2011-09-01
...
...
2013-03-01 - 2013-05-01
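
A minimal sketch of such a chunking function (the name copy_cache_chunks_fn and the exact loop are illustrative, not the original code):

DROP FUNCTION IF EXISTS copy_cache_chunks_fn(date, date);
CREATE OR REPLACE FUNCTION copy_cache_chunks_fn(v_start date, v_end date)
RETURNS void AS $$
DECLARE
    v_from timestamp := v_start;
    v_to   timestamp;
BEGIN
    WHILE v_from < v_end LOOP
        v_to := v_from + interval '2 months';
        -- copy one two-month window of the records we want to keep
        INSERT INTO cache_tab
        SELECT * FROM cache_tab_old
        WHERE createddatetime >= v_from AND createddatetime < v_to
          AND distid NOT IN ('1','2','3');
        RAISE NOTICE 'copied window % - %', v_from, v_to;
        v_from := v_to;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- select copy_cache_chunks_fn('2011-05-01','2013-05-01');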

Before executing the insert statement, I created an index on cache_tab_old on (createddatetime,distid).
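That was a plain btree index; the index name below is assumed:

CREATE INDEX cache_tab_old_created_distid_idx ON cache_tab_old (createddatetime, distid);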

It took almost an hour to insert all 444279 records into the new table.

Step 8. Delete data from usage that is no longer needed


I created a backup copy of usage as usage_bak and removed the no-longer-needed data from usage.
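A sketch of those two steps (the exact DELETE condition is not preserved; the NOT IN form below, derived from the foreign key to cache_tab(id), is an assumption):

CREATE TABLE usage_bak AS SELECT * FROM usage;
-- keep only the usage rows that still point to a surviving cache_tab record
DELETE FROM usage WHERE cacheid NOT IN (SELECT id FROM cache_tab);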

Step 9. Make a dump backup of cache_tab_old and drop the table to reclaim space


Before creating a dump backup of cache_tab_old for archiving, I wanted to exclude the records that had already been copied into cache_tab.
Again, instead of deleting, I created another table holding only the records that were being discarded.
CREATE TABLE cache_tab_deleted_data as
select * from cache_tab_old where distid in ('1','2','3') AND createddatetime < '2011-05-01';
DROP TABLE cache_tab_old;

pg_dump search_engine2 -t cache_tab_deleted_data -Fc > cache_tab_deleted_data.sql.mar122013
pg_dump search_engine2 -t usage_bak -Fc > usage_bak.sql.mar122013

Step 10. Enable Foreign key


Re-created the foreign key that I dropped in step 2.
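Based on the definition shown in \d usage above, that was:

ALTER TABLE usage
    ADD CONSTRAINT usage_cacheid_fkey FOREIGN KEY (cacheid) REFERENCES cache_tab(id);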

Now the cache_tab table was 26 GB, down from the earlier 88 GB.

In this way, I reclaimed 62 GB of space without any downtime or locking.








Thursday, March 7, 2013

Postgresql dynamic crosstab function

PostgreSQL 9.0 has a contrib module named tablefunc which provides crosstab functionality.
http://www.postgresql.org/docs/9.1/static/tablefunc.html
It has two types of crosstab functions:
  • crosstab(text sql)
  • crosstab(text source_sql, text category_sql)
We generally use the second form, where we provide a source SQL and a category SQL.
source_sql is a SQL statement that produces the source set of data. This statement must return one row_name column, one category column, and one value column.
category_sql is a SQL statement that produces the set of categories. This statement must return only one column. It must produce at least one row, or an error will be generated. Also, it must not produce duplicate values, or an error will be generated.
The biggest problem with this crosstab function is that you have to write out every return column in the SQL query to generate the report.
 
SELECT * FROM crosstab('...', '...')
    AS ct(row_name text, cat1 text, cat2 text, cat3 text, cat4 text);
One must know in advance how many categories are going to be returned and list them as cat1 text, cat2 text, cat3 text, cat4 text. So it did not fit our goal of generating the report dynamically.
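For example, for the sales table used later in this post, the static call would have to spell out every month column by hand (the column list below mirrors the generated SQL shown further down):

SELECT * FROM crosstab(
    'select year, month, qty from sales order by 1',
    'select distinct month from sales'
) AS ct(year text, "1" int, "5" int, "11" int, "12" int, "2" int, "7" int);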
To overcome this, I created a function called crosstab_dynamic_sql_gen_fn. It takes the following parameters as input:
  • source_sql text - described above
  • category_sql text - described above
  • v_matrix_col_type varchar(100)  - data type for category values
  • v_matrix_rows_name_and_type varchar(100) - row name with data type like 'username text'
  • debug bool default false - to get debug output
And it returns the generated sql.
 
DROP FUNCTION IF EXISTS crosstab_dynamic_sql_gen_fn (source_sql text, category_sql text, v_matrix_col_type varchar(100), v_matrix_rows_name_and_type varchar(100), debug bool);
CREATE OR REPLACE FUNCTION crosstab_dynamic_sql_gen_fn (source_sql text, category_sql text, v_matrix_col_type varchar(100), v_matrix_rows_name_and_type varchar(100), debug bool default false)
RETURNS text AS $$
DECLARE
    v_sql text;
    curs1 refcursor;
    v_val text;
BEGIN
    -- start the column definition list with the row-name column, e.g. 'year text'
    v_sql := v_matrix_rows_name_and_type;
    OPEN curs1 FOR EXECUTE category_sql;
    LOOP
        FETCH curs1 INTO v_val;
        EXIT WHEN v_val IS NULL;
        -- append one quoted column per category value, e.g. , "1" int
        v_sql := v_sql || ' , "' || v_val || '" ' || v_matrix_col_type;
        IF debug THEN
            RAISE NOTICE 'v_val = %', v_val;
        END IF;
    END LOOP;
    CLOSE curs1;
    -- wrap everything into the final crosstab(...) AS (...) statement
    v_sql := 'SELECT * from crosstab(' || chr(10)
          || E' \'' || source_sql   || E'\',' || chr(10)
          || E' \'' || category_sql || E'\''  || chr(10)
          || ' ) AS (' || v_sql || ')';
    IF debug THEN
        RAISE NOTICE 'v_sql = %', v_sql;
    END IF;
    RETURN v_sql;
END;
$$ LANGUAGE plpgsql;

Now, when I call the function, I get the SQL to generate the crosstab report.
select crosstab_dynamic_sql_gen_fn('select year, month, qty from sales order by 1','select distinct month from sales','int','year text');
crosstab_dynamic_sql_gen_fn
---------------------------------------------------------------------------------
SELECT * from crosstab( +
'select year, month, qty from sales order by 1', +
'select distinct month from sales' +
) AS (year text , "1" int , "5" int , "11" int , "12" int , "2" int , "7" int)
(1 row)
Once you have the generated SQL, you can do whatever you like with it: wrap it in a function, a view, and so on.
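For example, one way to turn the generated SQL into a view (just an illustration, not part of the original workflow; the view name sales_matrix_v is made up) is to EXECUTE it from a DO block:

DO $$
BEGIN
    EXECUTE 'CREATE OR REPLACE VIEW sales_matrix_v AS '
         || crosstab_dynamic_sql_gen_fn(
                'select year, month, qty from sales order by 1',
                'select distinct month from sales',
                'int',
                'year text');
END;
$$;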
 
Up to this point, the function can be used for any dynamic crosstab operation. The next step is to use it for our specific purpose.
For this, I created another function populate_matrix_rep_fn which takes two parameters -
  • v_pkid int  - subsetid
  • v_outfile varchar(100) - output csv file path
It creates the CSV file at the given path.
DROP FUNCTION IF EXISTS populate_matrix_rep_fn(v_pkid int, v_outfile varchar(100));
CREATE OR REPLACE FUNCTION populate_matrix_rep_fn(v_pkid int, v_outfile varchar(100))
RETURNS void AS $$
DECLARE
    v_source_sql text;
    v_category_sql text;
    v_sql text;
BEGIN
    -- source set: row name, category and value for the given subsetid
    v_source_sql := 'SELECT rowname, ii.category, value ' ||
                    'FROM tab1 r, tab2 ii where ii.id=r.itemid ' ||
                    'and pkid=' || v_pkid;

    -- category set: the distinct categories for the same subsetid
    v_category_sql := 'SELECT distinct ii.category ' ||
                      'FROM tab1 r, tab2 ii where ii.id=r.itemid ' ||
                      'and pkid= ' || v_pkid || ' order by 1 ';

    -- generate the crosstab SQL; "user" is quoted because it is a reserved word
    v_sql := crosstab_dynamic_sql_gen_fn(v_source_sql, v_category_sql, 'text', '"user" text', false);
    -- wrap it in COPY to write the report as CSV, printing NULLs as 0
    v_sql := E'COPY ( ' || v_sql || E' ) TO \'' || v_outfile || E'\' (FORMAT \'csv\', NULL \'0\' , HEADER)';
    --RAISE NOTICE 'v_sql = %', v_sql;
    EXECUTE v_sql;
END;
$$ LANGUAGE plpgsql
SET work_mem = 1048576;  -- 1 GB (value is in kB)

Now we can simply call the function to generate the crosstab report for different subsetid values.
 
select populate_matrix_rep_fn(522219,'/tmp/rumman/out.csv');



