<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-6018614372585319965</id><updated>2026-04-20T05:01:24.302+02:00</updated><category term="postgresql"/><category term="java"/><category term="python"/><category term="linux"/><category term="Google Education"/><category term="PocketPC"/><category term="pgadmin"/><title type='text'>Valentine&#39;s tech log</title><subtitle type='html'></subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://tech.valgog.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/-/postgresql'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/search/label/postgresql'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>14</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-273141301142857511</id><published>2013-08-24T13:11:00.001+02:00</published><updated>2025-11-25T01:55:59.160+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="linux"/><category scheme="http://www.blogger.com/atom/ns#" 
term="postgresql"/><category scheme="http://www.blogger.com/atom/ns#" term="python"/><title type='text'>Real-time console based monitoring of PostgreSQL databases (pg_view)</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
In many cases it is important to be able to keep your hand on the pulse of your database in real time: for example, when you are running a big migration that can introduce unexpected locks, or when you are trying to understand how a long-running query is affecting your IO subsystem.&lt;br /&gt;
&lt;br /&gt;
For a long time I was using a very simple bash alias, injected from the &lt;code&gt;.bashrc&lt;/code&gt; script, that combined calls to system utilities like &lt;code&gt;watch&lt;/code&gt;, &lt;code&gt;iostat&lt;/code&gt;, &lt;code&gt;uptime&lt;/code&gt; and &lt;code&gt;df&lt;/code&gt;, some additional statistics from &lt;code&gt;/proc/meminfo&lt;/code&gt;, and a &lt;code&gt;psql&lt;/code&gt; call that extracted information about the currently running queries and whether those queries were waiting for a lock. But this approach had several disadvantages. In many cases I was interested in the disk read/write information for query processes or PostgreSQL system processes, like the WAL and archive writers. I also wanted a really easy way to notice the queries that are waiting for locks, ideally highlighted by color.
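&lt;br /&gt;
Such an alias can be sketched roughly like this (a minimal reconstruction, not the original alias; it assumes PostgreSQL 9.2+, where &lt;code&gt;pg_stat_activity&lt;/code&gt; exposes the &lt;code&gt;query&lt;/code&gt; and &lt;code&gt;waiting&lt;/code&gt; columns, and a data directory under &lt;code&gt;/var/lib/postgresql&lt;/code&gt;):&lt;br /&gt;
&lt;pre class=&quot;brush: bash; light: true;&quot;&gt;# query text and lock-wait flag for everything that is not idle
PG_ACTIVITY_SQL=&quot;
SELECT pid, now() - query_start AS runtime, waiting, left(query, 60) AS query
  FROM pg_stat_activity
 WHERE state != &#39;idle&#39;
 ORDER BY query_start;&quot;

# refresh the system and database picture every 2 seconds
pg_pulse() {
    watch -n 2 &quot;uptime; iostat -x 1 1; df -h /var/lib/postgresql;
        grep -E &#39;^(MemFree|Dirty|Writeback):&#39; /proc/meminfo;
        psql -X -c \&quot;$PG_ACTIVITY_SQL\&quot;&quot;
}
&lt;/pre&gt;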
&lt;br /&gt;
Several weeks ago we finally &lt;a href=&quot;http://tech.zalando.com/getting-a-quick-view-of-your-postgresql-stats/&quot; target=&quot;_blank&quot;&gt;open-sourced our new tool&lt;/a&gt;, which makes our lives much easier. It combines all the features that I had been dreaming of for a long time. Here it is: &lt;a href=&quot;https://github.com/zalando/pg_view&quot; target=&quot;_blank&quot;&gt;pg_view&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
Actually, I already have some more feature requests, and I hope that Alexey will find some time to add them to the tool in the near future. So if somebody wants to contribute or share more ideas, please comment and open feature requests on the GitHub page :)

&lt;br /&gt;
&lt;br /&gt;
&lt;img border=&quot;0&quot; src=&quot;https://camo.githubusercontent.com/f34019724e9bc5b3dc647b250940b40e24f2348bff656609a265a38863590f7b/68747470733a2f2f7261772e6769746875622e636f6d2f7a616c616e646f2f70675f766965772f6d61737465722f696d616765732f70675f766965775f73637265656e73686f745f6e65772e706e67&quot; width=&quot;100%&quot; /&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/273141301142857511/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/273141301142857511' title='12 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/273141301142857511'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/273141301142857511'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2013/08/pgview.html' title='Real-time console based monitoring of PostgreSQL databases (pg_view)'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>12</thr:total><georss:featurename>Berlin, Germany</georss:featurename><georss:point>52.519171 13.406091199999992</georss:point><georss:box>52.2099005 12.760644199999991 52.8284415 14.051538199999992</georss:box></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-6493227707606246585</id><published>2012-01-22T20:25:00.001+01:00</published><updated>2012-01-26T11:38:36.541+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Schema based versioning and deployment for PostgreSQL</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
I am one of the supporters of keeping as much business logic as possible in the database itself. This reduces the access layer of the application to mostly dumb transport and data-transformation logic that can be implemented using different technologies and frameworks, without re-implementing critical data-consistency and data-distribution logic in several places. It gives you an easy way to control what you are doing with your data and how your applications are allowed to access it, and even to exchange the underlying data structures transparently to the upper levels of the code, without any downtime. It also makes it possible to add an additional layer of security by allowing access to the data only through stored procedures, which can change their security execution context as needed (the &lt;code&gt;&lt;a href=&quot;http://www.postgresql.org/docs/current/static/sql-createfunction.html&quot; target=&quot;_blank&quot;&gt;SECURITY DEFINER&lt;/a&gt;&lt;/code&gt; feature of PostgreSQL).&lt;br /&gt;
&lt;br /&gt;
This approach has some disadvantages, of course. One of the biggest technical problems, which very easily becomes an organizational problem if you have a relatively big team of developers, is how to rapidly roll out new features without touching old, functioning stored procedures, so that old versions of your upper-level applications can still access the previous versions of the stored procedures, while newly rolled-out nodes with the new software stack access the new stored procedures, which may do something more, or less, or return different data sets compared to their previous versions. And hundreds of stored procedures for accessing and manipulating the data are enough to turn any attempt to keep all their new versions backwards compatible into a nightmare.&lt;br /&gt;
&lt;br /&gt;
The &lt;i&gt;classical&lt;/i&gt; way to do this would be to keep all the changes backwards compatible and, where that is not possible, to create a new version of a stored procedure with a version suffix like &lt;code&gt;_v2&lt;/code&gt;, mark the previous version as deprecated, and, after the whole software stack has been rolled out to use the new function, &lt;i&gt;just&lt;/i&gt; drop the previous version. But if you are rolling out a new version of the whole stack once or twice a week, keeping track of what is used and what is not becomes quite a challenge... and the discipline of all the developers has to be really good as well. And stored procedures are not the only objects that change: their return and input types can change together with them. Changing a return type that is used by more than two stored procedures in a backwards-compatible fashion is pure horror, unless you create a new version of that type and new versions of all the stored procedures that use it. Dependency control becomes another problem.&lt;br /&gt;
&lt;br /&gt;
My solution to this problem was to introduce schema-based versioning of PostgreSQL stored procedures. It builds on PostgreSQL schemas and the per-session &lt;code&gt;search_path&lt;/code&gt;.&lt;br /&gt;
&lt;br /&gt;
All the stored procedures that are exposed to the client software stack are grouped into one API schema, which contains only stored procedures and the types they need.&lt;br /&gt;
&lt;br /&gt;
The schema name contains a version, like &lt;code&gt;&lt;i&gt;proj&lt;/i&gt;_api_v&lt;i&gt;X_Y_Z&lt;/i&gt;&lt;/code&gt;, where &lt;code&gt;&lt;i&gt;X_Y_Z&lt;/i&gt;&lt;/code&gt; is the version that a software stack is targeted at. The software stack executes &lt;code&gt;SET search_path TO &lt;i&gt;proj&lt;/i&gt;_api_v&lt;i&gt;X_Y_Z&lt;/i&gt;, public;&lt;/code&gt; immediately after it gets a connection from the pool. All calls to the API stored procedures are then made without explicitly specifying a schema name, and PostgreSQL finds the needed stored procedure in the schema specified by the &lt;code&gt;search_path&lt;/code&gt;.&lt;br /&gt;
&lt;br /&gt;
So when a branch is stabilized and its version is fixed, that version is used when setting the default &lt;code&gt;search_path&lt;/code&gt; for the software being deployed from that branch. In Java, for example, using the BoneCP JDBC pool, this means setting the &lt;code&gt;initSQL&lt;/code&gt; property of all the pools used to access the &lt;i&gt;&lt;code&gt;proj&lt;/code&gt;&lt;/i&gt; database.&lt;br /&gt;
&lt;br /&gt;
We store the sources of all the stored procedures (and other database objects) in a special database directory structure that is checked into a usual SCM system. All the files are sorted into corresponding folders and are prefixed with a two-digit numeric prefix to ensure the sort order (good old BASIC times :) ). Like:&lt;br /&gt;
&lt;table border=&quot;0&quot; cellpadding=&quot;2&quot; cellspacing=&quot;1&quot;&gt;
 &lt;tbody&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;code&gt;50_proj_api&lt;/code&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;00_create_schema.sql&lt;/code&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;20_types&lt;/code&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;20_simple_object_input_type.sql&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;30_stored_procedures&lt;/code&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;20_get_object.sql&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;tr&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;/td&gt;
  &lt;td&gt;&lt;code&gt;20_set_object.sql&lt;/code&gt;&lt;/td&gt;
 &lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;br /&gt;
Here the &lt;code&gt;00_create_schema.sql&lt;/code&gt; file contains the &lt;code&gt;CREATE SCHEMA proj_api;&lt;/code&gt; statement, statements that set the default security options for newly created stored procedures, and a &lt;code&gt;SET search_path TO proj_api, public;&lt;/code&gt; statement, which ensures that all the objects coming after this file are injected into the correct API schema. An example &lt;code&gt;00_create_schema.sql&lt;/code&gt; file can look like this:&lt;br /&gt;
&lt;pre class=&quot;brush: sql; collapse: true;&quot; title=&quot;00_create_schema.sql file example&quot;&gt;RESET role;

CREATE SCHEMA proj_api AUTHORIZATION proj_api_owner;

ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA proj_api REVOKE EXECUTE ON FUNCTIONS FROM public;

GRANT USAGE ON SCHEMA proj_api TO proj_api_usage;

ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA proj_api GRANT EXECUTE ON FUNCTIONS TO proj_api_executor;
ALTER DEFAULT PRIVILEGES FOR ROLE proj_api_owner IN SCHEMA proj_api GRANT EXECUTE ON FUNCTIONS TO proj_api_executor;
ALTER DEFAULT PRIVILEGES IN SCHEMA proj_api GRANT EXECUTE ON FUNCTIONS TO proj_api_executor;

SET search_path to proj_api, public;

DO $SQL$
BEGIN
  IF CURRENT_DATABASE() ~ &#39;^(prod|staging|integration)_proj_db$&#39; THEN
    -- change default search_path for production, staging and integration databases
    EXECUTE &#39;ALTER DATABASE &#39; || CURRENT_DATABASE() || &#39; SET search_path to proj_api, public;&#39;;
  END IF;
END
$SQL$;

SET role TO proj_api_owner;&lt;/pre&gt;
&lt;br /&gt;
This kind of layout makes it possible to bootstrap the API schema objects into any database easily and, very importantly, to keep track of all the database logic changes in the SCM system, which lets you review and compare the changes between releases.&lt;br /&gt;
&lt;br /&gt;
Bootstrapping into a development database can be done with a very simple script like:&lt;br /&gt;
&lt;pre class=&quot;brush: bash; light: true;&quot;&gt;(
echo &#39;DROP SCHEMA IF EXISTS proj_api CASCADE;&#39;
find 50_proj_api -type f -name &#39;*.sql&#39; \
  | sort \
  | xargs cat \
) | psql dev_proj_db -1 -f -
&lt;/pre&gt;
In the case of the development database, we actually bootstrap all the objects, including tables, into a freshly prepared database instance, so that integration tests can run and modify the data as they want.&lt;br /&gt;
&lt;br /&gt;
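Since the apply order is driven purely by the sorted file paths, it is the two-digit prefixes that guarantee that &lt;code&gt;00_create_schema.sql&lt;/code&gt; always runs first. A quick illustration, using a hypothetical throw-away copy of the layout shown above:&lt;br /&gt;
&lt;pre class=&quot;brush: bash; light: true;&quot;&gt;# build a disposable copy of the directory layout
tmp=$(mktemp -d)
mkdir -p &quot;$tmp/50_proj_api/20_types&quot; &quot;$tmp/50_proj_api/30_stored_procedures&quot;
touch &quot;$tmp/50_proj_api/00_create_schema.sql&quot; \
      &quot;$tmp/50_proj_api/20_types/20_simple_object_input_type.sql&quot; \
      &quot;$tmp/50_proj_api/30_stored_procedures/20_get_object.sql&quot;

# the same pipeline the bootstrap scripts use
apply_order=$( cd &quot;$tmp&quot;; find 50_proj_api -type f -name &#39;*.sql&#39; | sort )
echo &quot;$apply_order&quot;   # 00_create_schema.sql sorts first, then types, then procedures
rm -rf &quot;$tmp&quot;
&lt;/pre&gt;
&lt;br /&gt;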
Injecting into a production or staging database can be automated and extended with different kinds of additional checks, but in the end it is something like:&lt;br /&gt;
&lt;pre class=&quot;brush: bash; light: true;&quot;&gt;(
cat 50_proj_api/00_create_schema.sql | sed &#39;s/proj_api/proj_api_vX_Y_Z/g&#39;
find 50_proj_api -type f -name &#39;*.sql&#39; ! -name &#39;00_create_schema.sql&#39; \
  | sort \
  | xargs cat \
) | psql prod_proj_db -1 -f - 
&lt;/pre&gt;
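The &lt;code&gt;sed&lt;/code&gt; substitution is what turns the unversioned schema name from the sources into the versioned one. For a hypothetical target version &lt;code&gt;1_2_3&lt;/code&gt;:&lt;br /&gt;
&lt;pre class=&quot;brush: bash; light: true;&quot;&gt;line=&#39;SET search_path to proj_api, public;&#39;
rewritten=$(echo &quot;$line&quot; | sed &#39;s/proj_api/proj_api_v1_2_3/g&#39;)
echo &quot;$rewritten&quot;   # prints: SET search_path to proj_api_v1_2_3, public;
&lt;/pre&gt;
Note that the unanchored pattern also matches longer identifiers such as &lt;code&gt;proj_api_owner&lt;/code&gt;; if your role names share a prefix with the schema name, a word-boundary pattern like &lt;code&gt;s/\bproj_api\b/proj_api_v1_2_3/g&lt;/code&gt; (GNU sed) keeps the role names untouched.&lt;br /&gt;
&lt;br /&gt;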
So after that, we have a fresh copy of the whole shiny API schema, with all its dependencies, rolled out to the production database. These schema objects are accessed only by the software that is supposed to do so, software that has been tested to run with this very combination and these versions of the stored procedures and dependent types. And if we see any problems with the rollout, we can just roll back the software stack, so that it again accesses the old stored procedures located in the schema with the previous version of our API.&lt;br /&gt;
&lt;br /&gt;
This method does not solve the problem of versioning the tables in our data schema (we keep all the tables, related objects and low-level transformation stored procedures in the proj_data schema), but for that there is a very simple, yet very nice, solution suggested and implemented by Depesz: &lt;a href=&quot;http://www.depesz.com/index.php/2010/08/22/versioning/&quot;&gt;http://www.depesz.com/index.php/2010/08/22/versioning/&lt;/a&gt;. Of course, changes to the table structure should still be kept backwards compatible, and carefully written database diff rollout and rollback files should be provided for every such change.&lt;br /&gt;
&lt;br /&gt;
I am not going into the details of how to prepare the Spring configuration of the JDBC pools for the Java clients, or how to configure the bootstrapping for integration testing in your Maven project, as this information would not add any real value to this blog post, which has already become much longer than I expected at the beginning.&lt;br /&gt;
&lt;br /&gt;
NOTE: Because of a bug in the PostgreSQL JDBC driver, the types that are used as input parameters for stored procedures cannot be located in different schemas (type OIDs are looked up by name only, without consideration of the schema or &lt;code&gt;search_path&lt;/code&gt;). Patching the driver is very easy, and we did so in my company to be able to use schema-based versioning in our Java projects. I have already reported the bug twice (&lt;a href=&quot;http://archives.postgresql.org/pgsql-jdbc/2011-03/msg00007.php&quot;&gt;http://archives.postgresql.org/pgsql-jdbc/2011-03/msg00007.php&lt;/a&gt;, &lt;a href=&quot;http://archives.postgresql.org/pgsql-jdbc/2011-12/msg00083.php&quot;&gt;http://archives.postgresql.org/pgsql-jdbc/2011-12/msg00083.php&lt;/a&gt;), but unfortunately got no response from anybody. I will probably have to submit a patch myself sometime.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/6493227707606246585/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/6493227707606246585' title='5 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/6493227707606246585'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/6493227707606246585'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2012/01/schema-based-versioning-and-deployment.html' title='Schema based versioning and deployment for PostgreSQL'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>5</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-7076524815797925804</id><published>2011-04-06T17:21:00.001+02:00</published><updated>2011-04-06T17:22:07.222+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Index sizes depending on the type of the field being indexed</title><content type='html'>The sizes of indexes (and of tables, of course) directly influence lookup speed: the smaller the corresponding file system files are, the faster they can be scanned, to say nothing of the sizes that have to be kept in the memory caches.&lt;br /&gt;
&lt;br /&gt;
So I ran a simple experiment to demonstrate how the choice of the indexed field&#39;s type affects the size of the indexes and tables.&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true; collapse: false&quot; title=&quot;Script for generation of evaluation tables and indexes&quot;&gt;DROP DATABASE IF EXISTS eval;
CREATE DATABASE eval;
\c eval
SET work_mem TO &#39;128MB&#39;;
SET maintenance_work_mem TO &#39;128MB&#39;;

DROP SCHEMA IF EXISTS eval_schema CASCADE;
CREATE SCHEMA eval_schema;

SET search_path to eval_schema;

CREATE TABLE eval_config ( table_name text, id_field_type text, id_field_expression text, row_count integer DEFAULT 100000 );
INSERT INTO eval_config VALUES 
( &#39;integer_short_id_table&#39;, &#39;integer&#39;, &#39;s.i&#39; ),
( &#39;integer_large_id_table&#39;, &#39;integer&#39;, &#39;s.i * 123&#39; ),
( &#39;bigint_short_id_table&#39;, &#39;bigint&#39;, &#39;s.i&#39; ),
( &#39;bigint_large_id_table&#39;, &#39;bigint&#39;, &#39;s.i * 123&#39; ),
( &#39;text_number_short_id_table&#39;, &#39;text&#39;, &#39;s.i&#39; ),
( &#39;text_number_large_id_table&#39;, &#39;text&#39;, &#39;s.i * 123&#39; ),
( &#39;numeric_short_id_table&#39;, &#39;numeric&#39;, &#39;s.i&#39; ),
( &#39;numeric_large_id_table&#39;, &#39;numeric&#39;, &#39;s.i * 123&#39; ),
( &#39;binary_md5_text_table&#39;, &#39;bytea&#39;, $$decode( md5( s.i::text || &#39;-text-filler&#39;), &#39;hex&#39; ) $$ ),
( &#39;md5_text_table&#39;, &#39;text&#39;, $$md5( s.i::text || &#39;-text-filler&#39; )$$ );

CREATE VIEW eval_table_stats AS
  SELECT t.relname as &quot;Table name&quot;,
       c.id_field_type as &quot;Indexed field type&quot;,
       c.id_field_expression as &quot;Expression&quot;,
       c.row_count as &quot;Row count&quot;,
       s.stawidth as &quot;Average id field width&quot;,
       pg_size_pretty(pg_table_size(t.oid)) as &quot;Table size without index&quot;,
       pg_size_pretty(pg_indexes_size(t.oid)) as &quot;Index size&quot; /*,
       n_tup_ins,
       n_tup_upd,
       n_tup_del,
       n_tup_hot_upd,
       n_live_tup,
       n_dead_tup */
  FROM eval_config as c
  JOIN pg_class as t
    ON c.table_name = t.relname
   AND t.relkind = &#39;r&#39;
  JOIN pg_namespace as n
    ON relnamespace = n.oid
   AND n.nspname = &#39;eval_schema&#39;
  LEFT 
  JOIN pg_statistic as s
    ON s.starelid = t.oid
   AND s.staattnum = 1
  LEFT
  JOIN pg_stat_user_tables as w
    ON w.relid = t.oid;

DO $SQL$
DECLARE 
  config record;
BEGIN
  FOR config IN SELECT * FROM eval_config
  LOOP
    RAISE INFO &#39;Creating table %&#39;, quote_ident( config.table_name );
    EXECUTE $$
    CREATE TABLE $$ || quote_ident( config.table_name ) || $$
    ( id $$ || config.id_field_type || $$, data text );
    $$;
    RAISE INFO &#39;Filling table %&#39;, quote_ident( config.table_name );
    EXECUTE $$
    INSERT INTO $$ || quote_ident( config.table_name ) || $$
    SELECT ( $$ || config.id_field_expression || $$ )::$$ || config.id_field_type || $$, &#39;some filling data&#39;
      FROM generate_series(1, $$ || config.row_count || $$) as s(i);
    $$;
    RAISE INFO &#39;Building index on %&#39;, quote_ident( config.table_name );
    EXECUTE $$
    CREATE INDEX ON $$ || quote_ident( config.table_name ) || $$ ( id );
    $$;
    RAISE INFO &#39;Analyzing table %&#39;, quote_ident( config.table_name );
    EXECUTE $$
    ANALYZE $$ || quote_ident( config.table_name ) || $$;
    $$;
  END LOOP;
END;
$SQL$;

SELECT * FROM eval_table_stats;

DO $SQL$
DECLARE 
  config record;
BEGIN
  FOR config IN SELECT * FROM eval_config
  LOOP
    RAISE INFO &#39;Bloating table % (phase 1)&#39;, quote_ident( config.table_name );
    EXECUTE $$
    UPDATE $$ || quote_ident( config.table_name ) || $$
       SET data = data
     WHERE random() &gt; 0.5;
    $$;
    RAISE INFO &#39;Bloating table % (phase 2)&#39;, quote_ident( config.table_name );
    EXECUTE $$
    UPDATE $$ || quote_ident( config.table_name ) || $$
       SET data = data
     WHERE random() &gt; 0.5;
    $$;
    RAISE INFO &#39;Analyzing table %&#39;, quote_ident( config.table_name );
    EXECUTE $$
    ANALYZE $$ || quote_ident( config.table_name ) || $$;
    $$;
  END LOOP;
END;
$SQL$;

SELECT * FROM eval_table_stats;
&lt;/pre&gt;&lt;br /&gt;
As a result of executing this script, we get several tables and some statistics on the table and index sizes.&lt;br /&gt;
I created the tables with an &lt;code&gt;id&lt;/code&gt; field of the types &lt;code&gt;integer&lt;/code&gt;, &lt;code&gt;bigint&lt;/code&gt;, &lt;code&gt;text&lt;/code&gt; and &lt;code&gt;numeric&lt;/code&gt;, plus &lt;code&gt;bytea&lt;/code&gt; and &lt;code&gt;text&lt;/code&gt; for the md5-hash indexes. Note that the table size includes the filler text data, so it is not only the size of the fields being evaluated:&lt;br /&gt;
&lt;br /&gt;
Table sizes just after the insertion of 100K rows&lt;br /&gt;
&lt;br /&gt;
&lt;table border=1 cellspacing=0 cellpadding=1&gt;&lt;tr align=&quot;center&quot;&gt; &lt;th&gt;         Table name         &lt;/th&gt;&lt;th&gt; Indexed field type &lt;/th&gt;&lt;th&gt;                     Expression                      &lt;/th&gt;&lt;th&gt; Row count &lt;/th&gt;&lt;th&gt; Average id field width &lt;/th&gt;&lt;th&gt; Table size without index &lt;/th&gt;&lt;th&gt; Index size&lt;br /&gt;
&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_short_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 5128 kB                  &lt;/td&gt;&lt;td&gt; 1768 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_large_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 5128 kB                  &lt;/td&gt;&lt;td&gt; 1768 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_short_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 5552 kB                  &lt;/td&gt;&lt;td&gt; 2208 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_large_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 5552 kB                  &lt;/td&gt;&lt;td&gt; 2208 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_short_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      5 &lt;/td&gt;&lt;td&gt; 5128 kB                  &lt;/td&gt;&lt;td&gt; 2200 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_large_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 5552 kB                  &lt;/td&gt;&lt;td&gt; 2624 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_short_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 5552 kB                  &lt;/td&gt;&lt;td&gt; 2616 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_large_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 5624 kB                  &lt;/td&gt;&lt;td&gt; 2656 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; binary_md5_text_table      &lt;/td&gt;&lt;td&gt; bytea              &lt;/td&gt;&lt;td&gt; decode( md5( s.i::text || &#39;-text-filler&#39;), &#39;hex&#39; )  &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                     17 &lt;/td&gt;&lt;td&gt; 6336 kB                  &lt;/td&gt;&lt;td&gt; 3552 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; md5_text_table             &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; md5( s.i::text || &#39;-text-filler&#39; )                  &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                     33 &lt;/td&gt;&lt;td&gt; 7880 kB                  &lt;/td&gt;&lt;td&gt; 5328 kB&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
Table sizes after random bloating (two passes of updating a random ~50% of the rows)&lt;br /&gt;
&lt;br /&gt;
&lt;table border=1 cellspacing=0 cellpadding=1&gt;&lt;tr align=&quot;center&quot;&gt; &lt;th&gt;         Table name         &lt;/th&gt;&lt;th&gt; Indexed field type &lt;/th&gt;&lt;th&gt;                     Expression                      &lt;/th&gt;&lt;th&gt; Row count &lt;/th&gt;&lt;th&gt; Average id field width &lt;/th&gt;&lt;th&gt; Table size without index &lt;/th&gt;&lt;th&gt; Index size&lt;br /&gt;
&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_short_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 10232 kB                 &lt;/td&gt;&lt;td&gt; 5288 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_large_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 10232 kB                 &lt;/td&gt;&lt;td&gt; 5272 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_short_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 11 MB                    &lt;/td&gt;&lt;td&gt; 6592 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_large_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 11 MB                    &lt;/td&gt;&lt;td&gt; 6560 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_short_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      5 &lt;/td&gt;&lt;td&gt; 10232 kB                 &lt;/td&gt;&lt;td&gt; 5896 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_large_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 11 MB                    &lt;/td&gt;&lt;td&gt; 7096 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_short_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 11 MB                    &lt;/td&gt;&lt;td&gt; 7752 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_large_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i * 1234                                          &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 11 MB                    &lt;/td&gt;&lt;td&gt; 7880 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; binary_md5_text_table      &lt;/td&gt;&lt;td&gt; bytea              &lt;/td&gt;&lt;td&gt; decode( md5( s.i::text || &#39;-text-filler&#39;), &#39;hex&#39; )  &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                     17 &lt;/td&gt;&lt;td&gt; 12 MB                    &lt;/td&gt;&lt;td&gt; 7336 kB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; md5_text_table             &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; md5( s.i::text || &#39;-text-filler&#39; )                  &lt;/td&gt;&lt;td&gt;    100000 &lt;/td&gt;&lt;td&gt;                     33 &lt;/td&gt;&lt;td&gt; 15 MB                    &lt;/td&gt;&lt;td&gt; 11 MB&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
Table sizes just after insertion of 5M rows&lt;br /&gt;
&lt;br /&gt;
&lt;table border=1 cellspacing=0 cellpadding=1&gt;&lt;tr align=&quot;center&quot;&gt; &lt;th&gt;         Table name         &lt;/th&gt;&lt;th&gt; Indexed field type &lt;/th&gt;&lt;th&gt;                     Expression                      &lt;/th&gt;&lt;th&gt; Row count &lt;/th&gt;&lt;th&gt; Average id field width &lt;/th&gt;&lt;th&gt; Table size without index &lt;/th&gt;&lt;th&gt; Index size&lt;br /&gt;
&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_short_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 249 MB                   &lt;/td&gt;&lt;td&gt; 86 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_large_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 249 MB                   &lt;/td&gt;&lt;td&gt; 86 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_short_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 269 MB                   &lt;/td&gt;&lt;td&gt; 107 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_large_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 269 MB                   &lt;/td&gt;&lt;td&gt; 107 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_short_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      7 &lt;/td&gt;&lt;td&gt; 269 MB                   &lt;/td&gt;&lt;td&gt; 107 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_large_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 269 MB                   &lt;/td&gt;&lt;td&gt; 128 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_short_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 269 MB                   &lt;/td&gt;&lt;td&gt; 129 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_large_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     10 &lt;/td&gt;&lt;td&gt; 284 MB                   &lt;/td&gt;&lt;td&gt; 129 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; binary_md5_text_table      &lt;/td&gt;&lt;td&gt; bytea              &lt;/td&gt;&lt;td&gt; decode( md5( s.i::text || &#39;-text-filler&#39;), &#39;hex&#39; )  &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     17 &lt;/td&gt;&lt;td&gt; 308 MB                   &lt;/td&gt;&lt;td&gt; 172 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; md5_text_table             &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; md5( s.i::text || &#39;-text-filler&#39; )                  &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     33 &lt;/td&gt;&lt;td&gt; 383 MB                   &lt;/td&gt;&lt;td&gt; 259 MB&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
Table sizes after random bloating (2 times random update of 50% of rows)&lt;br /&gt;
&lt;br /&gt;
&lt;table border=1 cellspacing=0 cellpadding=1&gt;&lt;tr align=&quot;center&quot;&gt; &lt;th&gt;         Table name         &lt;/th&gt;&lt;th&gt; Indexed field type &lt;/th&gt;&lt;th&gt;                     Expression                      &lt;/th&gt;&lt;th&gt; Row count &lt;/th&gt;&lt;th&gt; Average id field width &lt;/th&gt;&lt;th&gt; Table size without index &lt;/th&gt;&lt;th&gt; Index size&lt;br /&gt;
&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_short_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 498 MB                   &lt;/td&gt;&lt;td&gt; 257 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; integer_large_id_table     &lt;/td&gt;&lt;td&gt; integer            &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      4 &lt;/td&gt;&lt;td&gt; 498 MB                   &lt;/td&gt;&lt;td&gt; 257 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_short_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 539 MB                   &lt;/td&gt;&lt;td&gt; 321 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; bigint_large_id_table      &lt;/td&gt;&lt;td&gt; bigint             &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 539 MB                   &lt;/td&gt;&lt;td&gt; 321 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_short_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      7 &lt;/td&gt;&lt;td&gt; 538 MB                   &lt;/td&gt;&lt;td&gt; 287 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; text_number_large_id_table &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      9 &lt;/td&gt;&lt;td&gt; 539 MB                   &lt;/td&gt;&lt;td&gt; 340 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_short_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i                                                 &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                      8 &lt;/td&gt;&lt;td&gt; 539 MB                   &lt;/td&gt;&lt;td&gt; 384 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; numeric_large_id_table     &lt;/td&gt;&lt;td&gt; numeric            &lt;/td&gt;&lt;td&gt; s.i * 123                                           &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     10 &lt;/td&gt;&lt;td&gt; 569 MB                   &lt;/td&gt;&lt;td&gt; 384 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; binary_md5_text_table      &lt;/td&gt;&lt;td&gt; bytea              &lt;/td&gt;&lt;td&gt; decode( md5( s.i::text || &#39;-text-filler&#39;), &#39;hex&#39; )  &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     17 &lt;/td&gt;&lt;td&gt; 615 MB                   &lt;/td&gt;&lt;td&gt; 354 MB&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt; md5_text_table             &lt;/td&gt;&lt;td&gt; text               &lt;/td&gt;&lt;td&gt; md5( s.i::text || &#39;-text-filler&#39; )                  &lt;/td&gt;&lt;td&gt;   5000000 &lt;/td&gt;&lt;td&gt;                     33 &lt;/td&gt;&lt;td&gt; 766 MB                   &lt;/td&gt;&lt;td&gt; 545 MB&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;&lt;br /&gt;
Ok, we see here that it is really the byte width of the field that matters. For small numbers the difference is not so significant, but an index on an &lt;code&gt;integer&lt;/code&gt; (4 bytes wide) field is still a little smaller than an index on the text representation of the same ids, even for tables that keep only 10000 records (those results are not posted here).&lt;br /&gt;
&lt;br /&gt;
I deliberately created the tables in two variants: with &lt;code&gt;id&lt;/code&gt; taken directly from a &lt;code&gt;1..N&lt;/code&gt; series, and with it multiplied by 1234 or 123 to make the numbers wider in their text representation, in order to see how the length of the text representation of a number influences the size of the index.&lt;br /&gt;
&lt;br /&gt;
Some more experimenting with the &lt;code&gt;smallint&lt;/code&gt; type showed that it does save space in the table itself, but the index size is the same as for &lt;code&gt;integer&lt;/code&gt;. &lt;br /&gt;
&lt;br /&gt;
What really surprised me was that the &lt;code&gt;numeric&lt;/code&gt; type produced bigger indexes than &lt;code&gt;text&lt;/code&gt; for the same numbers.&lt;br /&gt;
&lt;br /&gt;
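One practical application of these width effects: a long text field can be indexed via the &lt;code&gt;bytea&lt;/code&gt; form of its md5 hash. Just as a sketch (the table and column names here are made up):&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true&quot;&gt;-- index the 16 byte binary md5 digest instead of the long text itself
create index documents_body_md5_idx
    on documents ( decode( md5( body ), &#39;hex&#39; ) );

-- queries must use exactly the same expression to hit the index
select *
  from documents
 where decode( md5( body ), &#39;hex&#39; ) = decode( md5( &#39;searched text&#39; ), &#39;hex&#39; );
&lt;/pre&gt;
&lt;br /&gt;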
And, of course, again and again: an index on a long text field can instead be built on the md5 hash of that text and be much more performant, when you index the &lt;code&gt;bytea&lt;/code&gt; version of the md5 hash instead of the text representation of the md5 hash itself. Though md5 hash collisions can theoretically happen, I have not yet experienced them myself in practice.</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/7076524815797925804/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/7076524815797925804' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7076524815797925804'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7076524815797925804'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2011/04/index-sizes-depending-on-type-of-field.html' title='Index sizes depending on the type of the field being indexed'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-7029611350428822398</id><published>2011-02-17T17:22:00.002+01:00</published><updated>2013-09-01T18:42:03.792+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="pgadmin"/><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>pgAdmin III macros: get table fields</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
When writing SQL statements or stored procedure code in pgAdmin, it is often quite handy to know the names of, and the constraints on, the fields of some table. &lt;br /&gt;
&lt;br /&gt;
Here is a simple macro that can be assigned to a key combination in &lt;code&gt;pgAdmin -&amp;gt; Query Window -&amp;gt; Macros -&amp;gt; Manage macros&lt;/code&gt;... &lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true&quot;&gt;select quote_ident(nspname) || &#39;.&#39; || quote_ident(relname) as table_name, 
       quote_ident(attname) as field_name, 
       format_type(atttypid,atttypmod) as field_type, 
       case when attnotnull then &#39; NOT NULL&#39; else &#39;&#39; end as null_constraint,
       case when atthasdef then &#39;DEFAULT &#39; || 
                                ( select pg_get_expr(adbin, attrelid) 
                                    from pg_attrdef 
                                   where adrelid = attrelid and adnum = attnum )::text else &#39;&#39; 
       end as default_value,
       case when nullif(confrelid, 0) is not null 
            then confrelid::regclass::text || &#39;( &#39; || 
                 array_to_string( ARRAY( select quote_ident( fa.attname ) 
                                           from pg_attribute as fa 
                                          where fa.attnum = ANY ( confkey ) 
                                            and fa.attrelid = confrelid
                                          order by fa.attnum 
                                        ), &#39;,&#39; 
                                 ) || &#39; )&#39; 
            else &#39;&#39; end as references_to
  from pg_attribute 
       left outer join pg_constraint on conrelid = attrelid 
                                    and attnum = conkey[1] 
                                    and array_upper( conkey, 1 ) = 1,
       pg_class, 
       pg_namespace
 where pg_class.oid = attrelid
   and pg_namespace.oid = relnamespace
   and pg_class.oid = btrim( &#39;$SELECTION$&#39; )::regclass::oid
   and attnum &amp;gt; 0
   and not attisdropped
 order by attrelid, attnum;
&lt;/pre&gt;
&lt;br /&gt;
Just select the name of a table you are interested in and press the key binding you chose for the macro. pgAdmin will execute the query and show a list of all the columns of that table, including default values and foreign key references... &lt;br /&gt;
&lt;br /&gt;
My preferred key binding is &lt;code&gt;CTRL+1&lt;/code&gt;&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/7029611350428822398/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/7029611350428822398' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7029611350428822398'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7029611350428822398'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2011/02/pgadmin-iii-macros-get-table-fields.html' title='pgAdmin III macros: get table fields'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-1252146233309684037</id><published>2011-02-17T16:20:00.002+01:00</published><updated>2011-02-17T16:22:57.844+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Type of NULL is important</title><content type='html'>The following SQL query shows how different can be the result of concatenating arrays with NULL values:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true&quot;&gt;select &#39;{e1,e2}&#39;::text[] || &#39;t&#39;::text     as normal_array_concatenation,
       NULL::text[] || &#39;t&#39;::text         as appending_element_to_existing_NULL_value_array, 
       NULL || &#39;t&#39;::text                 as appending_element_to_NULL_value,
       &#39;{e1,e2}&#39;::text[] || NULL::text   as appending_typed_NULL,
       &#39;{e1,e2}&#39;::text[] || NULL::text[] as appending_typed_NULL_array;
&lt;/pre&gt;&lt;br /&gt;
The result of the execution (on 9.0) is:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: text; light: true&quot;&gt;─[ RECORD 1 ]──────────────────────────────────┬─────────────
normal_array_concatenation                     │ {e1,e2,t}
appending_element_to_existing_null_value_array │ {t}
appending_element_to_null_value                │ 
appending_typed_null                           │ {e1,e2,NULL}
appending_typed_null_array                     │ {e1,e2}
&lt;/pre&gt;&lt;br /&gt;
That explains why one can simply declare a new array variable in PL/pgSQL and immediately start concatenating values to it, without pre-initializing it with an empty array (something you do have to do when building up a text string... you cannot just take a &lt;code&gt;NULL::text&lt;/code&gt; variable and start concatenating strings to it, you first have to pre-initialize it with an empty string, but with arrays you can).&lt;br /&gt;
&lt;br /&gt;
Another very important issue related to this behaviour of arrays is that when building dynamic SQL queries for the EXECUTE command in PL/pgSQL you &lt;font color=red&gt;SHOULD&lt;/font&gt; always add explicit type casts to the variables that you quote using quote_literal() or quote_nullable():&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true&quot;&gt;DO $SQL$
DECLARE
  r text[];
  t text   := NULL;
  a text[] := NULL;
BEGIN

  SELECT &#39;{e1,e2}&#39;::text[] || a INTO r;
  RAISE INFO &#39;array concatenate: r is %&#39;, r;

  EXECUTE $$SELECT &#39;{e1,e2}&#39;::text[] || $$ || quote_nullable(a) || $$::text[] $$ INTO r;
  RAISE INFO &#39;array concatenate from EXECUTE: r is %&#39;, r;

  SELECT &#39;{e1,e2}&#39;::text[] || t INTO r;
  RAISE INFO &#39;array append: r is %&#39;, r;

  EXECUTE $$SELECT &#39;{e1,e2}&#39;::text[] || $$ || quote_nullable(t) || $$::text $$ INTO r;
  RAISE INFO &#39;array append from EXECUTE: r is %&#39;, r;

  /* These 2 examples will fail at runtime,
     throwing an exception (operator is not unique: text[] || unknown)
  --EXECUTE $$SELECT &#39;{e1,e2}&#39;::text[] || $$ || quote_nullable(a) INTO r;
  --RAISE INFO &#39;r is %&#39;, r;
  --
  --EXECUTE $$SELECT &#39;{e1,e2}&#39;::text[] || $$ || quote_nullable(t) INTO r;
  --RAISE INFO &#39;r is %&#39;, r;
  */

END;
$SQL$;
&lt;/pre&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/1252146233309684037/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/1252146233309684037' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/1252146233309684037'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/1252146233309684037'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2011/02/type-of-null-is-important.html' title='Type of NULL is important'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-339870404233962032</id><published>2010-08-08T14:57:00.010+02:00</published><updated>2012-03-01T12:47:30.200+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>ALTER ENUM in PostgreSQL</title><content type='html'>&lt;div dir=&quot;ltr&quot; style=&quot;text-align: left;&quot; trbidi=&quot;on&quot;&gt;
&lt;blockquote class=&quot;tr_bq&quot;&gt;
&lt;i&gt;Note:&lt;/i&gt; do not use this method on PostgreSQL 9.1. It can corrupt your catalog, as the new enum model&amp;nbsp;differentiates&amp;nbsp;between even and odd OIDs for enum values. But in PostgreSQL 9.1 there is a nice &lt;a href=&quot;http://www.postgresql.org/docs/current/static/sql-altertype.html&quot; target=&quot;_blank&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Courier New&#39;, Courier, monospace;&quot;&gt;ALTER TYPE&lt;/span&gt;&lt;/a&gt; command that can alter your enum&amp;nbsp;correctly.&lt;/blockquote&gt;
&lt;br /&gt;
Unfortunately PostgreSQL (9.0 and before) does not support altering &lt;a href=&quot;http://www.postgresql.org/docs/current/static/datatype-enum.html&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Courier New&#39;, Courier, monospace;&quot;&gt;ENUM&lt;/span&gt;&lt;/a&gt; types, which makes their use quite limited. Very often it is important to be able to extend an already existing enumeration type, and in a few cases to delete or rename an element of one. &lt;br /&gt;
&lt;br /&gt;
The standard way to do that would be to create a new enumeration type under a temporary name, convert the fields of your old enumeration type to the new one, then drop the old type and rename the new one to the original name. This is doable, but the column type conversion takes a full lock on the table and can be quite a long operation (maybe in 9.1 this will change).&lt;br /&gt;
&lt;br /&gt;
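Sketched out, that standard approach looks roughly like this (the type, table and label names here are illustrative):&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;-- the &quot;standard&quot;, locking way: swap in a new enum type
create type my_status_new as enum ( &#39;OPEN&#39;, &#39;CLOSED&#39;, &#39;ARCHIVED&#39; );
alter table my_table
  alter column status type my_status_new
        using status::text::my_status_new;  -- takes a full table lock
drop type my_status;
alter type my_status_new rename to my_status;
&lt;/pre&gt;
&lt;br /&gt;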
For now, the only way to do it without a table lock and without messing around with temporary enumeration types is to manipulate the catalog directly. &lt;br /&gt;
&lt;br /&gt;
Internally, PostgreSQL stores the OIDs of the enumeration type labels in the tables and types where the enumeration is used. So one can actually add new labels and rename existing labels of the needed enumeration without fear of destroying the integrity of existing data structures that already use those enumeration labels (i.e. the OIDs of those labels).&lt;br /&gt;
&lt;br /&gt;
To add one additional label to an existing &lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Courier New&#39;, Courier, monospace;&quot;&gt;ENUM&lt;/span&gt; one can do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;insert into pg_enum(enumtypid, enumlabel)
select t.oid, &#39;NEW_ENUM_VALUE&#39;
  from pg_type as t
 where t.typtype = &#39;e&#39;
   and t.oid = &#39;existing_enum&#39;::regtype::oid
   and string_to_array( split_part(version(), &#39; &#39;, 2), &#39;.&#39; )::int[] &amp;lt; ARRAY[9,1]
returning enumtypid::regtype::text, enumlabel;
&lt;/pre&gt;
&lt;br /&gt;
This will add the label &lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Courier New&#39;, Courier, monospace;&quot;&gt;NEW_ENUM_VALUE&lt;/span&gt; to the existing enumeration type &lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Courier New&#39;, Courier, monospace;&quot;&gt;existing_enum&lt;/span&gt; (be sure to use the right letter case for your enumeration type and enumeration label names).&lt;br /&gt;
&lt;br /&gt;
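On PostgreSQL 9.1 and later the same is achieved with the built-in command instead, no catalog surgery needed:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;ALTER TYPE existing_enum ADD VALUE &#39;NEW_ENUM_VALUE&#39;;
&lt;/pre&gt;
&lt;br /&gt;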
To rename an existing label one can do the following:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;update pg_enum as e
   set enumlabel = &#39;UPDATED_ENUM_VALUE&#39;
 where e.enumtypid = &#39;existing_enum&#39;::regtype::oid
   and e.enumlabel = &#39;OLD_ENUM_VALUE&#39;
   and string_to_array( split_part(version(), &#39; &#39;, 2), &#39;.&#39; )::int[] &amp;lt; ARRAY[9,1]
returning e.enumtypid::regtype::text, e.enumlabel;
&lt;/pre&gt;
&lt;br /&gt;
Deletion of the enumeration labels can be really dangerous as one can accidentally drop an OID of the label, that is still being used in the tables. &lt;a href=&quot;http://en.dklab.ru/lib/dklab_postgresql_enum/&quot;&gt;Dmitry Koterov wrote a stored procedure&lt;/a&gt;, that does some easiest checks on tables (not types or types of types) that use the enumeration that you want to change. So if you really need to drop an enumeration label from an existing enumeration, think twice first and then read &lt;a href=&quot;http://en.dklab.ru/lib/dklab_postgresql_enum/&quot;&gt;the post&lt;/a&gt; with the source of this stored procedure :)&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/339870404233962032/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/339870404233962032' title='5 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/339870404233962032'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/339870404233962032'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2010/08/alter-enum-in-postgresql.html' title='ALTER ENUM in PostgreSQL'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>5</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-4552412525300318101</id><published>2010-05-28T13:59:00.002+02:00</published><updated>2011-02-17T16:23:49.119+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Merging and manipulating arrays in PostgreSQL</title><content 
type='html'>Though, strictly speaking, using arrays in a relational database is not quite correct, we still use them, especially in stored procedures. &lt;br /&gt;
&lt;br /&gt;
Actually PostgreSQL provides us with quite a number of nice &lt;a href=&quot;http://www.postgresql.org/docs/8.0/interactive/functions-array.html&quot;&gt;array manipulation functions&lt;/a&gt;, available at least since PostgreSQL 8.0. But what about set operations? How do you sort an array in Postgres? How do you merge arrays without repetitions? How do you remove duplicate elements while keeping the order of their appearance in the original array?&lt;br /&gt;
&lt;br /&gt;
If you are working with integer based arrays, there is quite an old and effective module &lt;a href=&quot;http://www.postgresql.org/docs/8.3/interactive/intarray.html&quot; target=&quot;_blank&quot;&gt;&lt;code&gt;intarray&lt;/code&gt;&lt;/a&gt; that allows us to perform some basic set operations on them.&lt;br /&gt;
&lt;br /&gt;
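For example, with the &lt;code&gt;intarray&lt;/code&gt; module installed, the basic set operations come as ready-made operators and functions (a small sketch):&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;-- requires the intarray contrib module
select ARRAY[1,2,3] &amp;amp; ARRAY[2,3,4];       -- intersection: {2,3}
select ARRAY[1,2,3] | ARRAY[2,3,4];       -- union: {1,2,3,4}
select uniq( sort( ARRAY[3,1,2,3,1] ) );  -- sorted, duplicates removed: {1,2,3}
&lt;/pre&gt;
&lt;br /&gt;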
But what do we do with arrays that are not integer based? Or if we want to do some relatively complicated operation on an integer array? SQL is a set manipulation language, which means we can use SQL&#39;s own machinery to manipulate, merge and sort arrays. But for that we first have to be able to convert an array into a kind of table, so that we can apply the usual SQL manipulations to its elements. &lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://www.postgresql.org/docs/8.0/interactive/functions-srf.html&quot;&gt;&lt;code&gt;generate_series()&lt;/code&gt;&lt;/a&gt;, &lt;a href=&quot;http://www.postgresql.org/docs/8.3/interactive/functions-srf.html&quot;&gt;&lt;code&gt;generate_subscripts()&lt;/code&gt;&lt;/a&gt; and &lt;a href=&quot;http://www.postgresql.org/docs/8.4/interactive/functions-array.html&quot;&gt;&lt;code&gt;unnest()&lt;/code&gt;&lt;/a&gt; are the most important functions in our case. Unfortunately &lt;code&gt;generate_subscripts()&lt;/code&gt; and &lt;code&gt;unnest()&lt;/code&gt; are only available starting from PostgreSQL 8.4, but one can easily create a small helper &lt;code&gt;unnest()&lt;/code&gt; function using &lt;code&gt;generate_series()&lt;/code&gt;; it will not be as efficient as the built-in one, but it is still quite a good trade-off. &lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;CREATE OR REPLACE FUNCTION unnest(a anyarray)
  RETURNS SETOF anyelement AS
$BODY$
/**
 *  Unnests given array into a table
 */
 /* -- testing
     select unnest(ARRAY[1,2,3,4,5]) as i
     select * from unnest(ARRAY[1,2,3,4,5]) as u(i)
  */
select ($1)[s.i] 
  from generate_series( array_lower($1, 1), array_upper($1, 1 ) ) as s(i);
$BODY$
  LANGUAGE &#39;sql&#39; IMMUTABLE STRICT;
&lt;/pre&gt;&lt;br /&gt;
A good thing about an &lt;code&gt;unnest()&lt;/code&gt; written in SQL rather than PL/pgSQL is that such a set-returning function can be used directly in the &lt;code&gt;SELECT&lt;/code&gt; list, without necessarily pushing it into the list of used tables after the &lt;code&gt;FROM&lt;/code&gt; clause. The reason was explained by Tom Lane some time back in one of the newsgroup threads and is related to implementation drawbacks of PL/pgSQL. &lt;br /&gt;
&lt;br /&gt;
Ok, so now I will simply give several examples of how to use these functions.&lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Merge 2 or more text arrays excluding duplicates: &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select unnest(ARRAY[ &#39;a&#39;, &#39;b&#39;, &#39;c&#39; ])
   union
  select unnest(ARRAY[ &#39;c&#39;, &#39;d&#39;, &#39;e&#39; ])
)&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;If we use &lt;code&gt;UNION ALL&lt;/code&gt; instead, we are doing just the same as &lt;code&gt;array_cat()&lt;/code&gt;, but slower :)&lt;br /&gt;
&lt;br /&gt;
Note that &lt;code&gt;UNION&lt;/code&gt; actually sorts the elements of the result set first in order to eliminate duplicates, so the resulting array will usually be sorted. But one cannot rely on that behavior, and if you really need a sorted array on the output, you have to use &lt;code&gt;ORDER BY&lt;/code&gt; explicitly.&lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Merge 2 or more text arrays excluding duplicates and sorting elements: &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select unnest(ARRAY[ &#39;a&#39;, &#39;b&#39;, &#39;c&#39; ]) as e
   union
  select unnest(ARRAY[ &#39;c&#39;, &#39;d&#39;, &#39;e&#39; ]) as e
   order by e
)&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;Here we are forcing PostgreSQL to sort the elements alphabetically. If you check the execution plan for this query, you will see that it is exactly the same as for the previous one. But here we can actually reverse the order, or sort by element length, or first by element length and then alphabetically, and so on. This makes the approach quite flexible, while the whole construct remains very readable and understandable. But what if we want to trim the text of each element, lower its case, or even rewrite each element using a regular expression replace?&lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Trim or rewrite text array elements: &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select btrim(lower(regexp_replace(unnest(ARRAY[&#39;a a&#39;,&#39;B b&#39;,&#39;c  C  &#39;]), 
                                    $$\s+$$, 
                                    &#39; &#39;, 
                                    &#39;g&#39;)
                     )
               ) as e
)&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;We can use this approach in all our previous queries as well, so we can actually merge several arrays, collapsing all repeated space characters into one, trimming and lowercasing each element first, and removing duplicates from the resulting array, all in one simple and readable SQL statement. &lt;br /&gt;
&lt;br /&gt;
So far, though, with this syntax we cannot filter elements out of the original array.&lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Filter elements from an array depending on element properties: &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select a.e 
    from unnest(ARRAY[0,4,2,5,4,1,6,9,8,7]) as a(e)
   where a.e % 2 = 0
)
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;This query rewrites an integer array so that all the odd elements are dropped from it. We have to move &lt;code&gt;unnest()&lt;/code&gt; into the &lt;code&gt;FROM&lt;/code&gt; list to be able to name the virtual table that &lt;code&gt;unnest()&lt;/code&gt; creates, as well as the element column, so that we can reference it in the &lt;code&gt;WHERE&lt;/code&gt; clause. Again, there is no guarantee that the resulting array will contain the elements in the same order as the original one. &lt;br /&gt;
&lt;br /&gt;
The problem is that none of these queries knows anything about the position of the elements in the original array. To solve this we need the element indexes to be visible inside the query. One can do it using the plain old &lt;code&gt;generate_series()&lt;/code&gt; or the not so old &lt;code&gt;generate_subscripts()&lt;/code&gt; (prefer the latter on PostgreSQL 8.4+) to unnest the original array. You cannot use &lt;code&gt;unnest()&lt;/code&gt; directly, as there is no way to determine the position or index other than using window functions like &lt;code&gt;row_number()&lt;/code&gt;, which is much slower in practice. &lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Filtering elements from an array depending on element position (index): &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select (ARRAY[0,4,2,5,4,1,6,9,8,7])[s.i]
    from generate_series(array_lower(ARRAY[0,4,2,5,4,1,6,9,8,7], 1),
                         array_upper(ARRAY[0,4,2,5,4,1,6,9,8,7], 1) ) as s(i)
   where s.i % 2 = 0
   order by s.i
)
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;This query drops all the elements of the original array that are located at odd positions and ensures that the elements in the resulting array are ordered the same way as in the original one. Forcing the ordering here is probably not the best thing to do for performance reasons, but I want to demonstrate that it is possible and is, strictly speaking, the proper way to do it. &lt;br /&gt;
&lt;br /&gt;
But using &lt;code&gt;generate_series()&lt;/code&gt; or &lt;code&gt;generate_subscripts()&lt;/code&gt; exposes a small problem: we have to repeat the original array at least twice in the query. Inside PL/pgSQL code the arrays are usually replaced with variable names, so the queries do not look so bulky there, but we can also create a helper function that unnests an array and returns not only each element but also its index in the original array:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; gutter: false&quot;&gt;CREATE OR REPLACE FUNCTION array_enumerate(IN a anyarray, 
  OUT i integer, OUT e anyelement)
  RETURNS SETOF record AS
$BODY$
/** 
 *  Unnests array into a table together with element indexes
 */
 /* -- testing
     select * from array_enumerate(ARRAY[&#39;a&#39;, &#39;b&#39;, &#39;c&#39;]) as u
  */
select s.i, ($1)[s.i] from generate_series( array_lower($1, 1), array_upper($1, 1 ) ) as s(i)
$BODY$
  LANGUAGE &#39;sql&#39; IMMUTABLE STRICT;
&lt;/pre&gt;On PostgreSQL 8.4 and later one can write it using &lt;code&gt;generate_subscripts()&lt;/code&gt; like this:&lt;br /&gt;
&lt;pre class=&quot;brush: sql; gutter: false&quot;&gt;CREATE OR REPLACE FUNCTION array_enumerate(IN a anyarray, 
  OUT i integer, OUT e anyelement)
  RETURNS SETOF record AS
$BODY$
/** 
 *  Unnests array into a table together with element indexes
 */
 /* -- testing
     select * from array_enumerate(ARRAY[&#39;a&#39;, &#39;b&#39;, &#39;c&#39;]) as u
  */
select s.i, ($1)[s.i] from generate_subscripts( $1, 1 ) as s(i)
$BODY$
  LANGUAGE &#39;sql&#39; IMMUTABLE STRICT;
&lt;/pre&gt;Using &lt;code&gt;generate_subscripts()&lt;/code&gt; is preferable, as array subscripts do not have to start at 1. It should also be a little bit faster than &lt;code&gt;generate_series()&lt;/code&gt;.&lt;br /&gt;
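As an illustration (the array literal below explicitly declares a lower bound of 3), generate_subscripts() follows the declared bounds, while a hand-written generate_series(1, ...) would miss them:

```sql
select s.i, ('[3:5]={10,20,30}'::int[])[s.i] as e
  from generate_subscripts('[3:5]={10,20,30}'::int[], 1) as s(i)
-- i runs over 3, 4, 5
```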
&lt;br /&gt;
This gives us the possibility to rewrite the previous query like this: &lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;td&gt;&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select s.e
    from utils.array_enumerate(ARRAY[0,4,2,5,4,1,6,9,8,7]) as s(i,e)
   where s.i % 2 = 0
   order by s.i
)
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;Now we can even do something practical with this approach, for example: &lt;br /&gt;
&lt;br /&gt;
&lt;table border=0 cellspacing=0 cellpadding=5 width=100%&gt;&lt;tr&gt;&lt;th&gt;Remove duplicate elements from array keeping the element appearance order: &lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;select ARRAY(
  select s.e
    from utils.array_enumerate(ARRAY[0,4,2,5,4,1,6,9,8,7]) as s(i,e)
   group by s.e
   order by min(s.i)
)
&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;This makes it possible to process lists of words using the &lt;code&gt;string_to_array()&lt;/code&gt; and &lt;code&gt;array_to_string()&lt;/code&gt; functions or, on newer versions of PostgreSQL, &lt;code&gt;regexp_split_to_array()&lt;/code&gt;, which lets you collapse repeated space or punctuation characters directly while splitting.&lt;br /&gt;
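For example, removing repeated words from a string while keeping their first-appearance order could be sketched like this (assuming the array_enumerate() helper defined above is on the search path):

```sql
select array_to_string(ARRAY(
  select s.e
    from array_enumerate(regexp_split_to_array('red green red  blue green', '\s+')) as s(i,e)
   group by s.e
   order by min(s.i)
), ' ')
-- red green blue
```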
&lt;br /&gt;
Practically everywhere in these queries one could use &lt;code&gt;array_agg()&lt;/code&gt; to build the resulting arrays or, in the coming 9.0, the very efficient &lt;code&gt;string_agg()&lt;/code&gt; to build strings directly, without first creating an array and then calling &lt;code&gt;array_to_string()&lt;/code&gt;.&lt;br /&gt;
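For comparison, the duplicate-removing query above can be rewritten with array_agg(); note that the ORDER BY inside the aggregate itself needs PostgreSQL 9.0:

```sql
select array_agg(t.e order by t.first_i)
  from (select s.e, min(s.i) as first_i
          from array_enumerate(ARRAY[0,4,2,5,4,1,6,9,8,7]) as s(i,e)
         group by s.e) as t
-- {0,4,2,5,1,6,9,8,7}
```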
&lt;br /&gt;
On my tests comparing the performance of the &lt;code&gt;ARRAY(select)&lt;/code&gt; and &lt;code&gt;array_agg()&lt;/code&gt; I could not find any significant performance difference.</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/4552412525300318101/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/4552412525300318101' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/4552412525300318101'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/4552412525300318101'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html' title='Merging and manipulating arrays in PostgreSQL'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-2022146044550224479</id><published>2010-02-24T18:04:00.002+01:00</published><updated>2011-02-17T16:23:49.119+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><category scheme="http://www.blogger.com/atom/ns#" term="python"/><title type='text'>re-assigning values to the parameters in plpythonu</title><content type='html'>As discussed in Postgres &lt;a href=&quot;http://groups.google.com/group/pgsql.bugs/browse_frm/thread/3ba9d2a25ad48a66/6d4d903b87ef33d1?tvc=1&amp;q=BUG+%235232#6d4d903b87ef33d1&quot;&gt;BUG #5232&lt;/a&gt; assigning values to the parameters fails with the following error:&lt;br /&gt;
&lt;pre class=&quot;brush: text; light: true;&quot;&gt;Detail: &amp;lt;type &#39;exceptions.UnboundLocalError&#39;&amp;gt;: local variable &#39;src&#39; 
referenced before assignment&lt;/pre&gt;&lt;br /&gt;
If assignment to the parameter is still needed, one can work around the issue by adding a &lt;code&gt;global&lt;/code&gt; declaration for that parameter. &lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true; highlight: [4]&quot;&gt;CREATE OR REPLACE FUNCTION pyreplace(src text, s text) 
  RETURNS text AS 
$BODY$ 
global src
src=src.replace(s,&#39;&#39;) 
return src 
$BODY$ 
  LANGUAGE &#39;plpythonu&#39; VOLATILE 
  COST 100; 
&lt;/pre&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/2022146044550224479/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/2022146044550224479' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/2022146044550224479'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/2022146044550224479'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2010/02/re-assigning-values-to-parameters-in.html' title='re-assigning values to the parameters in plpythonu'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-7262315206946753563</id><published>2009-02-27T10:10:00.021+01:00</published><updated>2011-02-17T16:23:49.120+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="java"/><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Getting arrays from PostgreSQL database using JDBC in Java</title><content type='html'>&lt;code&gt;getArray()&lt;/code&gt; standard JDBC methods are working for basic types in PostgreSQL as documented in JDBC specifications.&lt;br /&gt;
&lt;br /&gt;
As PostgreSQL 8.2 does not support arrays of composite types, and the PostgreSQL JDBC driver does not support fetching arrays of composite types even from PostgreSQL 8.3+, we can pass such structures as text arrays whose elements are postgres records (not types) serialized with the &lt;code&gt;textin(record_out(ROW(a,&amp;nbsp;b,&amp;nbsp;c)))::text&lt;/code&gt; approach (for PostgreSQL 8.2 this is the only way to serialize a record; in 8.3+ it is possible to simply cast the record to text like &lt;code&gt;ROW(a,&amp;nbsp;b,&amp;nbsp;c)::text&lt;/code&gt;). The received text array data can be deserialized with the Java method shown below:&lt;br /&gt;
&lt;pre class=&quot;brush: java&quot;&gt;public class Utils {
...

/**
* Method parses a postgres Row into a List of Strings.
* &amp;lt;p&amp;gt;
* The postgres row is represented by a String and consists of one or more columns, that are separated by a comma.
* The row must begin with an open bracket and must end with a closing bracket.
* Each column must begin with a letter or a quote. If a column begins with a quote, the column must end with a quote.
* Inside quotation a quote is represented by a double quote or by backslash and quote, a backslash is represented by double backslash.
*
* @param value
* @return List of Strings
* @throws JBackendParserException
*/
public static List&amp;lt;String&amp;gt; postgresROW2StringList(String value) throws JBackendParserException
{
    return postgresROW2StringList(value, 128);
}

/**
* Method parses a postgres Row into a List of Strings.
* &amp;lt;p&amp;gt;
* The postgres row is represented by a String and consists of one or more columns, that are separated by a comma.
* The row must begin with an open bracket and must end with a closing bracket.
* Each column must begin with a letter or a quote. If a column begins with a quote, the column must end with a quote.
* Inside quotation a quote is represented by a double quote or by backslash and quote, a backslash is represented by double backslash.
* &amp;lt;p&amp;gt;
* The appendStringSize is the Size for StringBuilder.
*
* @param value, the postgres Row
* @param appendStringSize
* @return List of Strings
* @throws JBackendParserException
*/
public static List&amp;lt;String&amp;gt; postgresROW2StringList(String value, int appendStringSize)
throws JBackendParserException
{
    if (!(value.startsWith(&quot;(&quot;) &amp;&amp; value.endsWith(&quot;)&quot;)))
    throw new ParseException(&quot;postgresROW2StringList() ROW must begin with &#39;(&#39; and end with &#39;)&#39;: &quot; + value);

    List&amp;lt;String&amp;gt; result = new ArrayList&amp;lt;String&amp;gt;();

    char[] c = value.toCharArray();

    StringBuilder element = new StringBuilder(appendStringSize);
    int i = 1;
    while (c[i] != &#39;)&#39;)
    {
        if (c[i] == &#39;,&#39;)
        {
            if (c[i+1] == &#39;,&#39;)
            {
                result.add(&quot;&quot;);
            } else if (c[i+1] == &#39;)&#39;)
            {
                result.add(&quot;&quot;);
            }
            i++;
        } else if (c[i] == &#39;\&quot;&#39;)
        {
            i++;
            boolean insideQuote = true;
            while(insideQuote)
            {
                char nextChar = c[i + 1];
                if(c[i] == &#39;\&quot;&#39;)
                {
                    if (nextChar == &#39;,&#39; || nextChar == &#39;)&#39;)
                    {
                        result.add(element.toString());
                        element = new StringBuilder(appendStringSize);
                        insideQuote = false;
                    } else if(nextChar == &#39;\&quot;&#39;)
                    {
                        i++;
                        element.append(c[i]);
                    } else
                    {
                        throw new ParseException(&quot;postgresROW2StringList() char after \&quot; is not valid&quot;);
                    }
                } else if (c[i] == &#39;\\&#39;)
                {
                    if(nextChar == &#39;\\&#39; || nextChar == &#39;\&quot;&#39;)
                    {
                        i++;
                        element.append(c[i]);
                    } else
                    {
                        throw new ParseException(&quot;postgresROW2StringList() char after \\ is not valid&quot;);
                    }
                } else
                {
                    element.append(c[i]);
                }
                i++;
            }
        }else
        {
            while(!(c[i] == &#39;,&#39; || c[i] == &#39;)&#39;))
            {
                element.append(c[i]);
                i++;
            }
            result.add(element.toString());
            element = new StringBuilder(appendStringSize); // we always waste the last builder here, but it is easier than checking a flag before every append (we definitely lose some performance here)
        }
    }
return result;
}
...
}
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;
We can use the following example SQL statement to demonstrate how to pack the needed data structures into serialized text arrays:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql&quot;&gt;select s.i as id,
&#39;row &#39; || s.i as text_data,
ARRAY( select textin(record_out( ROW( 100 * s.i + a.i,
&#39;element &#39; || 100 * s.i + a.i,
&#39;constant text with some &quot;quoting&quot;&#39; ) ))::text
from generate_series( 1, 5 ) as a(i) ) as serialized_row_array
from generate_series(1, 10) as s(i)
&lt;/pre&gt;&lt;br /&gt;
The result of the execution of this query is:&lt;br /&gt;
&lt;br /&gt;
&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLJBBvco0cqdKrzMLCKr-L1IyBgqpdDbkYDp-54GobAcuMDNbZXBV6BZl7kBDAkDdrVYmHoikOJJWYTp7zx0uS6bDqMxN3eyeMTMeI5qu3UXwF5d-RGFm8EUkLjBZXVie7BdVbPqWvv7g/s1600-h/array_of_serialized_rows.PNG&quot;&gt;&lt;img style=&quot;cursor:pointer; cursor:hand;width: 320px; height: 193px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLJBBvco0cqdKrzMLCKr-L1IyBgqpdDbkYDp-54GobAcuMDNbZXBV6BZl7kBDAkDdrVYmHoikOJJWYTp7zx0uS6bDqMxN3eyeMTMeI5qu3UXwF5d-RGFm8EUkLjBZXVie7BdVbPqWvv7g/s320/array_of_serialized_rows.PNG&quot; border=&quot;0&quot; alt=&quot;&quot;id=&quot;BLOGGER_PHOTO_ID_5307402305156267074&quot; /&gt;&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
We can then read these text arrays using the following Java code in a &lt;br /&gt;
&lt;a href=&quot;http://static.springframework.org/spring/docs/2.5.x/reference/jdbc.html&quot;&gt;springframework row mapper&lt;/a&gt; (as this example actually works with a ResultSet, it also shows how to read the data directly from a ResultSet):&lt;br /&gt;
&lt;pre class=&quot;brush: java&quot;&gt;public class ArrayRowMapper&amp;lt;ITEM&amp;gt; implements ParameterizedRowMapper&amp;lt;ITEM&amp;gt; {

...
private ITEM createEmptyItem() {
...
}
private Element createEmptyElement() {
...
}

public final ITEM mapRow(ResultSet rs, int rowNum) throws SQLException {
ITEM item = createEmptyItem();
item.setId( rs.getInt(&quot;id&quot;) );
item.setTextData( rs.getString(&quot;text_data&quot;) );
Array sqlArray = rs.getArray(&quot;serialized_row_array&quot;);
if ( sqlArray == null ) {
item.setElements(null);
} else {
String[] textArray = (String[])sqlArray.getArray();
List&amp;lt;Element&amp;gt; elements = new ArrayList&amp;lt;Element&amp;gt;(textArray.length);

for(int i = 0; i &amp;lt; textArray.length; i++)
{
try
{
List&amp;lt;String&amp;gt; stringResultList = Utils.postgresROW2StringList(textArray[i]);

Element element = createEmptyElement();
element.setId(Integer.parseInt(stringResultList.get(0)));
element.setTextData(stringResultList.get(1));
element.setConstantTextData(stringResultList.get(2));
elements.add(element);
}catch (JBackendParserException pe) {
logger.error(&quot;Problem parsing received ROW value [&quot; + textArray[i] + &quot;]: &quot; + pe.getMessage(), pe);
}catch (Exception e) {
logger.error(&quot;Problem setting values to Element object from received ROW value [&quot; + textArray[i] + &quot;] : &quot; + e.getMessage(), e);
}
}
item.setElements(elements);
}
return item;
}
}&lt;/pre&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/7262315206946753563/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/7262315206946753563' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7262315206946753563'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7262315206946753563'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2009/02/getting-arrays-from-postgresql-database.html' title='Getting arrays from PostgreSQL database using JDBC in Java'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLJBBvco0cqdKrzMLCKr-L1IyBgqpdDbkYDp-54GobAcuMDNbZXBV6BZl7kBDAkDdrVYmHoikOJJWYTp7zx0uS6bDqMxN3eyeMTMeI5qu3UXwF5d-RGFm8EUkLjBZXVie7BdVbPqWvv7g/s72-c/array_of_serialized_rows.PNG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-7455016673513941274</id><published>2009-02-26T17:47:00.014+01:00</published><updated>2011-10-13T00:33:18.240+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="java"/><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Passing arrays to PostgreSQL database from java (JDBC)</title><content type='html'>Normally JDBC driver needs to know, how to serialize some database type so, that the database can accept it. 
In the case of the PostgreSQL JDBC driver we use two implementations, one for passing integer arrays and one for passing text arrays.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;to pass an integer array to PostgreSQL database the following java.sql.Array implementation can be used:&lt;/li&gt;

&lt;pre class=&quot;brush: java&quot;&gt;import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.Map;

/**
 * This class provides the {@link java.sql.Array} interface for a PostgreSQL &amp;lt;code&amp;gt;int4&amp;lt;/code&amp;gt; array.
 *
 * @author Valentine Gogichashvili
 *
 */

public class PostgreSQLInt4Array implements java.sql.Array {

    private final int[] intArray;
    private final String stringValue;

    public PostgreSQLInt4Array(int[] intArray) {
        this.intArray = intArray;
        this.stringValue = intArrayToPostgreSQLInt4ArrayString(intArray);
    }

    public String toString() {
        return stringValue;
    }

    /**
     * This static method can be used to convert an integer array to string representation of PostgreSQL integer array.
     * @param a source integer array
     * @return string representation of a given integer array
     */
    public static String intArrayToPostgreSQLInt4ArrayString(int[] a) {
        if ( a == null ) {
            return &quot;NULL&quot;;
        }
        final int al = a.length;
        if ( al == 0 ) {
            return &quot;{}&quot;;
        }
        StringBuilder sb = new StringBuilder( 2 + al * 7 ); // as we usually operate with 6 digit numbers + 1 symbol for a delimiting comma
        sb.append(&#39;{&#39;);
        for (int i = 0; i &amp;lt; al; i++) {
            if ( i &amp;gt; 0 ) sb.append(&#39;,&#39;);
            sb.append(a[i]);
        }
        sb.append(&#39;}&#39;);
        return sb.toString();
    }


    public static String intArrayToCommaSeparatedString(int[] a) {
        if ( a == null ) {
            return &quot;NULL&quot;;
        }
        final int al = a.length;
        if ( al == 0 ) {
            return &quot;&quot;;
        }
        StringBuilder sb = new StringBuilder( al * 7 ); // as we usually operate with 6 digit numbers + 1 symbol for a delimiting comma
        for (int i = 0; i &amp;lt; al; i++) {
            if ( i &amp;gt; 0 ) sb.append(&#39;,&#39;);
            sb.append(a[i]);
        }
        return sb.toString();
    }

    public Object getArray() throws SQLException {
        return intArray == null ? null : Arrays.copyOf(intArray, intArray.length);
    }

    public Object getArray(Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        return getArray();
    }

    public Object getArray(long index, int count) throws SQLException {
        return intArray == null ? null : Arrays.copyOfRange(intArray, (int)index, (int)index + count );
    }

    public Object getArray(long index, int count, Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        return getArray(index, count);
    }

    public int getBaseType() throws SQLException {
        return java.sql.Types.INTEGER;
    }

    public String getBaseTypeName() throws SQLException {
        return &quot;int4&quot;;
    }

    public ResultSet getResultSet() throws SQLException {
        throw new UnsupportedOperationException();
    }

    public ResultSet getResultSet(Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        throw new UnsupportedOperationException();
    }

    public ResultSet getResultSet(long index, int count) throws SQLException {
        throw new UnsupportedOperationException();
    }

    public ResultSet getResultSet(long index, int count, Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        throw new UnsupportedOperationException();
    }

    public void free() throws SQLException {
    }

}&lt;/pre&gt;

&lt;li&gt;the same way we can create a class to pass a string array to PostgreSQL database:&lt;/li&gt;

&lt;pre class=&quot;brush: java&quot;&gt;import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.Map;

/**
 * This class provides the {@link java.sql.Array} interface for a PostgreSQL &amp;lt;code&amp;gt;text&amp;lt;/code&amp;gt; array.
 *
 * @author Valentine Gogichashvili
 *
 */

public class PostgreSQLTextArray implements java.sql.Array {

    private final String[] stringArray;
    private final String stringValue;

    /**
     * Initializing constructor
     * @param stringArray
     */
    public PostgreSQLTextArray(String[] stringArray) {
        this.stringArray = stringArray;
        this.stringValue = stringArrayToPostgreSQLTextArray(this.stringArray);

    }

    @Override
    public String toString() {
        return stringValue;
    }

    private static final String NULL = &quot;NULL&quot;;

    /**
     * This static method can be used to convert a String array to the string representation of a PostgreSQL text array.
     * @param stringArray source String array
     * @return string representation of the given text array
     */
    public static String stringArrayToPostgreSQLTextArray(String[] stringArray) {
        final int arrayLength;
        if ( stringArray == null ) {
            return NULL;
        } else if ( ( arrayLength = stringArray.length ) == 0 ) {
            return &quot;{}&quot;;
        }
        // count the string length and if need to quote
        int neededBufferLentgh = 2; // count the beginning &#39;{&#39; and the ending &#39;}&#39; brackets
        boolean[] shouldQuoteArray = new boolean[stringArray.length];
        for (int si = 0; si &amp;lt; arrayLength; si++) {
            // count the comma after the first element
            if ( si &amp;gt; 0 )  neededBufferLentgh++;

            boolean shouldQuote;
            final String s = stringArray[si];
            if ( s == null ) {
                neededBufferLentgh += 4;
                shouldQuote = false;
            } else {
                final int l = s.length();
                neededBufferLentgh += l;
                if ( l == 0 || s.equalsIgnoreCase(NULL) ) {
                    shouldQuote = true;
                } else {
                    shouldQuote = false;
                    // scan for commas and quotes
                    for (int i = 0; i &amp;lt; l; i++) {
                        final char ch = s.charAt(i);
                        switch(ch) {
                            case &#39;&quot;&#39;:
                            case &#39;\\&#39;:
                                shouldQuote = true;
                                // we will escape these characters
                                neededBufferLentgh++;
                                break;
                            case &#39;,&#39;:
                            case &#39;\&#39;&#39;:
                            case &#39;{&#39;:
                            case &#39;}&#39;:
                                shouldQuote = true;
                                break;
                            default:
                                if ( Character.isWhitespace(ch) ) {
                                    shouldQuote = true;
                                }
                                break;
                        }
                    }
                }
                // count the quotes
                if ( shouldQuote ) neededBufferLentgh += 2;
            }
            shouldQuoteArray[si] = shouldQuote;
        }

        // construct the String
        final StringBuilder sb = new StringBuilder(neededBufferLentgh);
        sb.append(&#39;{&#39;);
        for (int si = 0; si &amp;lt; arrayLength; si++) {
            final String s = stringArray[si];
            if ( si &amp;gt; 0 ) sb.append(&#39;,&#39;);
            if ( s == null ) {
                sb.append(NULL);
            } else {
                final boolean shouldQuote = shouldQuoteArray[si];
                if ( shouldQuote ) sb.append(&#39;&quot;&#39;);
                for (int i = 0, l = s.length(); i &amp;lt; l; i++) {
                    final char ch = s.charAt(i);
                    if ( ch == &#39;&quot;&#39; || ch == &#39;\\&#39; ) sb.append(&#39;\\&#39;);
                    sb.append(ch);
                }
                if ( shouldQuote ) sb.append(&#39;&quot;&#39;);
            }
        }
        sb.append(&#39;}&#39;);
        assert sb.length() == neededBufferLentgh;
        return sb.toString();
    }


    @Override
    public Object getArray() throws SQLException {
        return stringArray == null ? null : Arrays.copyOf(stringArray, stringArray.length);
    }

    @Override
    public Object getArray(Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        return getArray();
    }

    @Override
    public Object getArray(long index, int count) throws SQLException {
        return stringArray == null ? null : Arrays.copyOfRange(stringArray, (int)index, (int)index + count);
    }

    @Override
    public Object getArray(long index, int count, Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        return getArray(index, count);
    }

    @Override
    public int getBaseType() throws SQLException {
        return java.sql.Types.VARCHAR;
    }

    @Override
    public String getBaseTypeName() throws SQLException {
        return &quot;text&quot;;
    }

    @Override
    public ResultSet getResultSet() throws SQLException {
        throw new UnsupportedOperationException();
    }

    @Override
    public ResultSet getResultSet(Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        throw new UnsupportedOperationException();
    }

    @Override
    public ResultSet getResultSet(long index, int count) throws SQLException {
        throw new UnsupportedOperationException();
    }

    @Override
    public ResultSet getResultSet(long index, int count, Map&amp;lt;String, Class&amp;lt;?&amp;gt;&amp;gt; map) throws SQLException {
        throw new UnsupportedOperationException();
    }

    @Override
    public void free() throws SQLException {
    }

//  public static void main(String[] args) {
//      // test the method
//      String[][] stringArrayArray = new String[][] {
//              { &quot;shm\taliko&quot;, &quot;&quot;, null, &quot;kluku&quot; },
//              { &quot;&quot;, &quot;NULL&quot;, &quot;NuLL&quot;, &quot;\&quot;kuku\&quot;&quot;, &quot;valiko, shmaliko&quot; },
//              { &quot;&quot;, &quot;NULL&quot;, &quot;NuLL&quot;, &quot;\&quot;ku\\ku\&quot;&quot;, &quot;valiko, shm\taliko&quot;, &quot;shm\taliko&quot; },
//              { }
//      };
//
//      for( String[] stringArray : stringArrayArray ) {
//          PostgreSQLTextArray a = new PostgreSQLTextArray(stringArray);
//          String s = a.toString();
//          System.out.println(s);
//      }
//  }

}&lt;/pre&gt;&lt;/ul&gt;&lt;br /&gt;
It is certainly possible to merge these two classes so that one wrapper is used instead of two, and more known database types can be added to such a wrapper. This implementation relies on the fact that PostgreSQL type names are fixed and that the serialization technique does not change much from type to type, so all the numeric types can be serialized using the approach shown in the first class.&lt;br /&gt;
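The exact text format such a wrapper must produce can always be checked against the server itself; for example, casting an array to text shows how quoting, escaping, and NULLs are rendered:

```sql
select ARRAY['plain', 'needs  quoting', 'has "quotes"', NULL]::text
-- {plain,"needs  quoting","has \"quotes\"",NULL}
```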
&lt;br /&gt;
In the &lt;a href=&quot;http://static.springframework.org/spring/docs/2.5.x/reference/jdbc.html&quot;&gt;springframework database abstraction&lt;/a&gt; model these classes can be used like this:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: java&quot;&gt;...
String sql = &quot;select * from test.array_accepting_procedure( :text_array_param, :int_array_param)&quot;;

MapSqlParameterSource namedParameters = new MapSqlParameterSource();
namedParameters.addValue(&quot;text_array_param&quot;, new PostgreSQLTextArray(dto.getTextArray()),    java.sql.Types.ARRAY );
namedParameters.addValue(&quot;int_array_param&quot;,  new PostgreSQLInt4Array(dto.getIntegerArray()), java.sql.Types.ARRAY );

resultList = getSimpleJdbcTemplate().query( sql, mapper, namedParameters );
...
&lt;/pre&gt;
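Under the hood, both wrappers boil down to rendering a Java array as a PostgreSQL array literal. The snippet below is a minimal, self-contained sketch of that escaping logic, not the original classes; the class name `PgArrayLiteral` and its method name are illustrative. Elements are double-quoted with backslashes and embedded quotes escaped, while SQL NULL elements stay unquoted.

```java
// Minimal sketch of the array-literal serialization such a wrapper
// performs; class and method names are illustrative only.
public final class PgArrayLiteral {

    // Render a String[] as a PostgreSQL text[] literal, e.g. {"a","b"}
    public static String toPostgresArrayLiteral(String[] items) {
        if (items == null) {
            return "NULL";
        }
        StringBuilder sb = new StringBuilder("{");
        int i = 0;
        for (String item : items) {
            if (i++ > 0) {
                sb.append(',');
            }
            if (item == null) {
                sb.append("NULL"); // SQL NULL elements are emitted unquoted
            } else {
                // escape backslashes first, then embedded double quotes
                sb.append('"')
                  .append(item.replace("\\", "\\\\").replace("\"", "\\\""))
                  .append('"');
            }
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(toPostgresArrayLiteral(new String[]{"a", null, "say \"hi\""}));
        // prints {"a",NULL,"say \"hi\""}
    }
}
```

Numeric types need no quoting at all, which is why the first class can serialize all of them with plain string concatenation.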

&lt;a rel=&quot;license&quot; href=&quot;http://creativecommons.org/licenses/by/3.0/&quot;&gt;&lt;img alt=&quot;Creative Commons License&quot; style=&quot;border-width:0&quot; src=&quot;http://i.creativecommons.org/l/by/3.0/80x15.png&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;span xmlns:dct=&quot;http://purl.org/dc/terms/&quot; href=&quot;http://purl.org/dc/dcmitype/Text&quot; property=&quot;dct:title&quot; rel=&quot;dct:type&quot;&gt;Passing arrays to PostgreSQL database from java (JDBC)&lt;/span&gt; by &lt;a xmlns:cc=&quot;http://creativecommons.org/ns#&quot; href=&quot;http://tech.valgog.com/2009/02/passing-arrays-to-postgresql-database.html&quot; property=&quot;cc:attributionName&quot; rel=&quot;cc:attributionURL&quot;&gt;Valentine Gogichashvili&lt;/a&gt; is licensed under a &lt;a rel=&quot;license&quot; href=&quot;http://creativecommons.org/licenses/by/3.0/&quot;&gt;Creative Commons Attribution 3.0 Unported License&lt;/a&gt;.</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/7455016673513941274/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/7455016673513941274' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7455016673513941274'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/7455016673513941274'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2009/02/passing-arrays-to-postgresql-database.html' title='Passing arrays to PostgreSQL database from java (JDBC)'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>6</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-3740242501444651922</id><published>2008-04-10T14:20:00.001+02:00</published><updated>2011-02-24T00:04:36.010+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Table partitioning automation triggers in PostgreSQL</title><content type='html'>Table partitioning is described in the &lt;a href=&quot;http://www.postgresql.org/docs/8.3/interactive/ddl-partitioning.html&quot;&gt;Partitioning&lt;/a&gt; chapter of the Postgres documentation. Unfortunately, until partition data distribution is automated in some future version of Postgres, we need triggers to handle partitioning ourselves. &lt;br /&gt;
&lt;br /&gt;
Here is one such example trigger script that can be useful as a starting point when developing a tailor-made one:&lt;br /&gt;
&lt;pre class=&quot;brush: sql&quot;&gt;CREATE OR REPLACE FUNCTION myschema.ruled_indexed_partition_multiplexer_by_view_day()
RETURNS TRIGGER AS 
$BODY$
-- $Header: $
/**
* This is a common trigger function that can be used to partition any table 
* that has a VIEW_DAY partitioning column.
* This function will only work on BEFORE INSERT row level triggers.
* If the first parameter is specified, it can only be &#39;week&#39; or &#39;month&#39; 
* to indicate the needed partitioning schedule.
*/
DECLARE
  schema_name_prefix CONSTANT text := quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39;;
  table_name_prefix CONSTANT text := TG_TABLE_NAME || &#39;_&#39;;
  needed_month_table_name text;
  partitioning_interval CONSTANT text := coalesce( TG_ARGV[0], &#39;week&#39; );
  s text;
BEGIN
  if not ( TG_WHEN = &#39;BEFORE&#39; and TG_LEVEL = &#39;ROW&#39; and TG_OP = &#39;INSERT&#39; ) then 
    raise exception &#39;This trigger function can only be used with BEFORE INSERT row level triggers!&#39;;
  end if;
  -- raise info &#39;starting partition_multiplexer for %.%&#39;, TG_TABLE_SCHEMA, TG_TABLE_NAME;
  if new.view_day is null then 
    raise exception &#39;partitioning column &quot;view_day&quot; cannot be NULL&#39;;
  end if;

  needed_month_table_name := myschema_partitions.need_ruled_indexed_partition_table(
      TG_TABLE_SCHEMA,
      &quot;name&quot; &#39;myschema_partitions&#39;,
      TG_TABLE_NAME,
      &quot;name&quot; &#39;view_day&#39;,
      new.view_day,
      partitioning_interval );

  -- raise info &#39;needed_month_table_name is %&#39;, needed_month_table_name;
  select new into s;
  s := $$INSERT INTO myschema_partitions.$$ || needed_month_table_name || 
  $$ SELECT ($$ || quote_literal( s ) || $$::$$ || 
  schema_name_prefix || TG_TABLE_NAME || $$).*  $$;
  -- raise info &#39;executing statement [%]&#39;, s;
  EXECUTE s;
  RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;&lt;/pre&gt;&lt;br /&gt;
The trigger function uses a &lt;code&gt;VIEW_DAY&lt;/code&gt; table column of type &lt;code&gt;DATE&lt;/code&gt; and can be assigned to a root table with the following command:&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true&quot;&gt;CREATE TRIGGER mytable_multiplexer_trigger
BEFORE INSERT
ON myschema.mytable
FOR EACH ROW
EXECUTE PROCEDURE myschema.ruled_indexed_partition_multiplexer_by_view_day(&#39;week&#39;);&lt;/pre&gt;&lt;br /&gt;
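With the trigger attached, clients simply keep inserting into the root table. A hypothetical session might look like this (the &lt;code&gt;payload&lt;/code&gt; column is illustrative, following the naming conventions above):

```sql
-- Hypothetical usage: names follow the conventions above.
INSERT INTO myschema.mytable (view_day, payload)
VALUES (DATE '2008-04-07', 'some row data');
-- On the first insert of that week the helper creates
-- myschema_partitions.mytable_20080407_week, attaches an INSTEAD
-- rule to myschema.mytable, and later inserts for the same week
-- are rewritten directly into the partition without firing the trigger.
```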
The trigger relies on an additional helper function that creates the needed partition table on demand and attaches an &lt;code&gt;INSTEAD&lt;/code&gt; &lt;code&gt;INSERT&lt;/code&gt; rule to the root table, so that once the partition exists the system no longer calls this trigger (which performs at least one catalog look-up for every inserted record). The rules should probably be dropped by hand for older partitions, so that the planner does not have to check too many rule conditions when rewriting an insert statement against the root table. (I assume here that we actively insert only into some recent partitions; if the rule does not exist and we still have to insert something into an old table, the trigger will still work and choose the needed one.)&lt;br /&gt;
&lt;br /&gt;
Here is the helper function:&lt;br /&gt;
&lt;pre class=&quot;brush: sql&quot;&gt;CREATE OR REPLACE FUNCTION myschema_partitions.need_ruled_indexed_partition_table
(TG_TABLE_SCHEMA &quot;name&quot;, 
TG_ARCHIVE_SCHEMA &quot;name&quot;, 
TG_TABLE_NAME &quot;name&quot;, 
partitioning_column_name &quot;name&quot;, 
needed_partitioning_date date, 
partitioning_interval text)
RETURNS &quot;name&quot; AS 
$BODY$
/**
* This stored procedure checks if the needed partitioning table exists and,
* if not, creates it.
* It also creates all the indexes that exist on the parent table, renaming them
* according to the new partition table name.
* 
* Be careful about the maximum length of the object name. 
* 
* It is usually called from a trigger function such as
* myschema.ruled_indexed_partition_multiplexer_by_view_day()
*
* @param TG_TABLE_SCHEMA - the source (shallow) table schema name
* @param TG_ARCHIVE_SCHEMA - name of the schema, where the partitioning table should be created
* @param TG_TABLE_NAME - the source (shallow) table name
* @param partitioning_column_name - the name of the column, that is used to perform the partitioning (this column should exist in the source table) 
* @param needed_partitioning_date - the value of the partitioning column, this value is used to determine the name of the needed partitioning table
* @param partitioning_interval - partitioning interval. can be &#39;week&#39; or &#39;month&#39;
*
* @author Valentine Gogichashvili
*/
DECLARE
partition_beginning_date CONSTANT date := date_trunc( partitioning_interval, needed_partitioning_date )::date;
needed_partition_table_name &quot;name&quot;;
BEGIN
-- raise info &#39;starting partition_multiplexer for %.%, needed table is %, partitioning date is %&#39;, TG_TABLE_SCHEMA, TG_TABLE_NAME, needed_partition_table_name, needed_partitioning_date;
-- calculate the name of the needed table
-- we start with the beginning of the week (week partitioning)
needed_partition_table_name := TG_TABLE_NAME || 
to_char( partition_beginning_date, &#39;_YYYYMMDD_&#39;) || partitioning_interval;

-- check that the needed table exists on the database
perform 1 
from pg_class, pg_namespace
where relnamespace = pg_namespace.oid 
and relkind = &#39;r&#39;::&quot;char&quot;
and relname = needed_partition_table_name
and nspname = TG_ARCHIVE_SCHEMA;

if not found then 
DECLARE
archive_schema_name_prefix CONSTANT text := quote_ident( TG_ARCHIVE_SCHEMA ) || &#39;.&#39;;
base_schema_name_prefix CONSTANT text := quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39;;
base_table_name CONSTANT text := base_schema_name_prefix || quote_ident( TG_TABLE_NAME );
quoted_column_name CONSTANT text := quote_ident( partitioning_column_name );
partition_beginning_date CONSTANT date := date_trunc( partitioning_interval, needed_partitioning_date )::date;
next_partition_beginning_date date := date_trunc( partitioning_interval, needed_partitioning_date + ( &#39;1 &#39; || partitioning_interval )::interval )::date;
quoted_needed_table_name CONSTANT text := archive_schema_name_prefix || quote_ident ( needed_partition_table_name );
quoted_rule_name CONSTANT text := quote_ident( &#39;rule_&#39; || TG_TABLE_NAME || to_char( partition_beginning_date, &#39;_YYYYMMDD&#39;) );
base_table_owner name;
s text;
a text;
parent_index_name text;
parent_index_has_valid_name boolean;
BEGIN
SET search_path = myschema_partitions, myschema, public;
-- we have to create a needed table now
-- check if the partitioning date has been passed correctly
if needed_partitioning_date is null then 
raise exception &#39;partitioning_date should not be NULL&#39;;
end if;
-- check if the partitioning interval is correct
-- we check it here and not in the trigger function to improve the performance
if partitioning_interval not in ( &#39;week&#39;, &#39;month&#39; ) then 
raise exception $$partitioning_interval is set to [%] and should be &#39;week&#39; or &#39;month&#39;$$, partitioning_interval;
end if;
-- check for the base table and extract the table owner
select pg_roles.rolname into base_table_owner
from pg_class, pg_namespace, pg_roles
where relnamespace = pg_namespace.oid 
and relkind = &#39;r&#39;::&quot;char&quot;
and relowner = pg_roles.oid
and relname = TG_TABLE_NAME
and nspname = TG_TABLE_SCHEMA;
if not found then 
raise exception &#39;cannot find base table %.%&#39;, TG_TABLE_SCHEMA, TG_TABLE_NAME;
end if;
-- now check that the base table contains the partitioning column
perform 1 from information_schema.columns where table_schema = TG_TABLE_SCHEMA and table_name = TG_TABLE_NAME and column_name = partitioning_column_name;
if not found then 
raise exception &#39;cannot find partitioning column % in the table %.%&#39;, quoted_column_name, TG_TABLE_SCHEMA, TG_TABLE_NAME;
end if;

s := $$
CREATE TABLE $$ || quoted_needed_table_name || $$ (
CHECK ( $$ || quoted_column_name || $$ &gt;= DATE $$ || quote_literal( partition_beginning_date ) || $$ AND 
$$ || quoted_column_name || $$ &lt; DATE $$ || quote_literal( next_partition_beginning_date ) || $$ )
) INHERITS ( $$ || base_table_name || $$ ); $$;
raise notice &#39;creating table as [%]&#39;, s;
EXECUTE s;

if coalesce(length(base_table_owner), 0) = 0 then 
raise exception &#39;base_table_owner is unknown&#39;;
end if;
s := $$
ALTER TABLE $$ || quoted_needed_table_name || 
$$ OWNER TO $$ || base_table_owner;
raise notice &#39;changing owner as [%]&#39;, s;
EXECUTE s;

-- extract all the indexes existing on the parent table and apply them to the newly created partition
for a, s, parent_index_name, parent_index_has_valid_name
in  SELECT CASE indisclustered WHEN TRUE THEN &#39;ALTER TABLE &#39; || needed_partition_table_name::text || &#39; CLUSTER ON &#39; || replace( i.relname, c.relname, needed_partition_table_name::text ) ELSE NULL END as clusterdef,
replace( pg_get_indexdef(i.oid), TG_TABLE_NAME::text, needed_partition_table_name::text ),
i.relname,
strpos( i.relname, TG_TABLE_NAME::text ) &gt; 0
FROM pg_index x
JOIN pg_class c ON c.oid = x.indrelid
JOIN pg_class i ON i.oid = x.indexrelid
LEFT JOIN pg_namespace n ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = i.reltablespace
WHERE c.relkind = &#39;r&#39;::&quot;char&quot; 
AND i.relkind = &#39;i&#39;::&quot;char&quot;
AND n.nspname = TG_TABLE_SCHEMA
AND c.relname = TG_TABLE_NAME
loop
if parent_index_has_valid_name then 
if strpos( s, quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39; ) &gt; 0 then 
raise info &#39;create index statement contains original schema name, removing it&#39;;
s := replace( s, quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39;, &#39;&#39; );
end if;
raise notice &#39;creating index as [%]&#39;, s;
EXECUTE s;
if a is not null then 
if strpos( a, quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39; ) &gt; 0 then 
raise info &#39;alter index statement contains original schema name, removing it&#39;;
a := replace( a, quote_ident( TG_TABLE_SCHEMA ) || &#39;.&#39;, &#39;&#39; );
end if;
raise notice &#39;setting clustering as [%]&#39;, a;
EXECUTE a;
end if;
else 
raise exception &#39;parent index name [%] should contain the name of the parent table [%]&#39;, parent_index_name, TG_TABLE_NAME;
end if;
end loop;

-- now we create a rule, that will be assigned to the original table
s := $$
CREATE RULE $$ || quoted_rule_name || $$ AS
ON INSERT TO $$ || base_table_name || $$ 
WHERE ( $$ || quoted_column_name || $$ &gt;= DATE $$ || quote_literal( partition_beginning_date ) || $$ AND 
$$ || quoted_column_name || $$ &lt; DATE $$ || quote_literal( next_partition_beginning_date ) || $$ )
DO INSTEAD
INSERT INTO $$ || quoted_needed_table_name || $$ VALUES (NEW.*);$$;
-- raise notice &#39;creating a rule as [%]&#39;, s;
EXECUTE s;
END;
end if;
return needed_partition_table_name;
END;
$BODY$
LANGUAGE plpgsql strict volatile;&lt;/pre&gt;

Note: when using inherited tables, to really benefit from setting &lt;code&gt;constraint_exclusion&lt;/code&gt; to &lt;code&gt;on&lt;/code&gt;, we actually have to use constant values in the partition criteria checks. In practice that means we always have to construct SQL statements as strings (not forgetting to use &lt;code&gt;quote_ident()&lt;/code&gt; and &lt;code&gt;quote_literal()&lt;/code&gt;) and then &lt;code&gt;EXECUTE&lt;/code&gt; them (when writing PL/pgSQL code, of course).&lt;br /&gt;
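For example, inside a PL/pgSQL function the date has to be baked into the statement as a constant before execution. A sketch under the table and column names used above (the function name and variables are illustrative):

```sql
-- Sketch (illustrative names): bake the date in as a constant and
-- EXECUTE, so constraint_exclusion can prune partitions at plan time.
CREATE OR REPLACE FUNCTION myschema.count_for_day(target_day date)
RETURNS bigint AS
$BODY$
DECLARE
  s text;
  row_count bigint;
BEGIN
  s := 'SELECT count(1) FROM myschema.mytable WHERE view_day = DATE '
       || quote_literal( target_day );
  EXECUTE s INTO row_count;
  RETURN row_count;
END;
$BODY$
LANGUAGE plpgsql;
```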
&lt;br /&gt;
P.S.: Creating the partitions on the fly will cause parallel transactions to fail at the moment the tables are created, but this happens only at that moment, and the client should be ready to retry the attempt in case of failure...</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/3740242501444651922/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/3740242501444651922' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/3740242501444651922'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/3740242501444651922'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2008/04/table-partitioning-automation-triggers.html' title='Table partitioning automation triggers in PostgreSQL'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-8647584433566561635</id><published>2008-04-04T18:12:00.012+02:00</published><updated>2011-03-01T12:36:15.145+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>PostgreSQL array aggregate</title><content type='html'>Interestingly enough, I have only now found this declaration in the &lt;a href=&quot;http://www.postgresql.org/docs/8.3/static/xaggr.html&quot; target=_blank&gt;User-Defined Aggregates&lt;/a&gt; chapter of the Postgres documentation:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;CREATE AGGREGATE array_accum (anyelement)
(
sfunc = array_append,
stype = anyarray,
initcond = &#39;{}&#39;
);&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;
This array aggregate function is very useful when working with arrays in PostgreSQL, but it is not included in the default installation (starting from version 8.4, the &lt;code&gt;array_agg()&lt;/code&gt; function is available). It can be used as the reverse of the &lt;code&gt;ARRAY(&lt;i&gt;query&lt;/i&gt;)&lt;/code&gt; construct, and sometimes together with the &lt;code&gt;generate_series()&lt;/code&gt; result set generating function.&lt;br /&gt;
&lt;br /&gt;
Another, sometimes quite important, aggregate function to aggregate text is &lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;CREATE AGGREGATE text_accum (text)
(
sfunc = textcat,
stype = text,
initcond = &#39;&#39;
);&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;
but since it does not allow inserting delimiters into the accumulated text, its usage is quite limited.&lt;br /&gt;
&lt;br /&gt;
To accumulate texts using, say, a comma as a delimiter, the &lt;code&gt;array_to_string(array_accum(TEXT_COLUMN_TO_AGGREGATE), &#39;, &#39;)&lt;/code&gt; construct can be used (starting from version 9.0, a fast &lt;code&gt;string_agg()&lt;/code&gt; is available to do that).&lt;br /&gt;
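For example, with a hypothetical &lt;code&gt;orders&lt;/code&gt; table (the table and column names are illustrative, the aggregates are defined as above):

```sql
-- Hypothetical table; array_accum is the aggregate defined above.
SELECT customer_id,
       array_accum(order_id)                    AS order_ids,
       array_to_string(array_accum(note), ', ') AS notes
  FROM orders
 GROUP BY customer_id;
```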
&lt;br /&gt;
To concatenate several arrays into one aggregated array, a very simple aggregate can be used:&lt;br /&gt;
&lt;br /&gt;
&lt;pre class=&quot;brush: sql; light: true;&quot;&gt;CREATE AGGREGATE array_accum_cat(anyarray) (
SFUNC=array_cat,
STYPE=anyarray,
INITCOND=&#39;{}&#39;
);&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;
This makes it possible to merge several arrays into a single one-dimensional array.&lt;br /&gt;
&lt;br /&gt;
One can find more related information in my other post &lt;a href=http://tech.valgog.com/2010/05/merging-and-manipulating-arrays-in.html&gt;Merging and Manipulating Arrays in PostgreSQL&lt;/a&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/8647584433566561635/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/8647584433566561635' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/8647584433566561635'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/8647584433566561635'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2008/04/postgresql-array-aggregate.html' title='PostgreSQL array aggregate'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-524507139159224397</id><published>2008-01-15T10:38:00.000+01:00</published><updated>2011-02-17T16:23:49.121+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>Merlin&#39;s stuff: Advisory Locks Part 2</title><content type='html'>&lt;a href=&quot;http://merlinmoncure.blogspot.com/2006/12/advisory-locks-part-2.html&quot;&gt;Merlin&amp;#39;s stuff: Advisory Locks Part 2&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;Quite an interesting article about the possibility to use PostgreSQL Advisory Locks</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/524507139159224397/comments/default' title='Post Comments'/><link rel='replies' 
type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/524507139159224397' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/524507139159224397'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/524507139159224397'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2008/01/merlins-stuff-advisory-locks-part-2.html' title='Merlin&#39;s stuff: Advisory Locks Part 2'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-6018614372585319965.post-4263912752070992858</id><published>2007-12-12T10:16:00.000+01:00</published><updated>2011-02-17T16:23:49.122+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="postgresql"/><title type='text'>PostgreSQL 8.3 vs. 8.2 - a simple benchmark</title><content type='html'>&lt;a href=&quot;http://www.kaltenbrunner.cc/blog/index.php?/archives/21-8.3-vs.-8.2-a-simple-benchmark.html&quot;&gt;Quite an interesting benchmark results for PostgreSQL 8.3 vs. 
8.2&lt;/a&gt;</content><link rel='replies' type='application/atom+xml' href='http://tech.valgog.com/feeds/4263912752070992858/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/6018614372585319965/4263912752070992858' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/4263912752070992858'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/6018614372585319965/posts/default/4263912752070992858'/><link rel='alternate' type='text/html' href='http://tech.valgog.com/2007/12/postgresql-83-vs-82-simple-benchmark.html' title='PostgreSQL 8.3 vs. 8.2 - a simple benchmark'/><author><name>valgog</name><uri>http://www.blogger.com/profile/02521879454303479150</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry></feed>