libpq stands for…

I have always wondered why the PostgreSQL client library is named “libpq”. Of course I guessed that “lib” means “library”, but that still leaves two more characters. I supposed that the “p” was somehow connected with “Postgres”, but I had no idea why a “q” appeared in the library name.

Now I know the answer. Thanks to Bruce Momjian:

Libpq is called ‘libpq’ because of the original use of the QUEL query language, i.e. lib Post-QUEL. We have a proud backward-compatibility history with this library.

Poll: Do we need SET type in PostgreSQL?

This topic comes up again and again with a good friend of mine, a true MySQL sectarian. 🙂 And I must confess I agree with him on this particular point.

For me it is strange that such a powerful database as PostgreSQL, which has so many bells and whistles inside, doesn’t support a SET type. Especially now that we have ENUM support!

Of course one may say that this functionality can be implemented using Bit String Types. But where is the joy of power? Where is the crystal clarity of the SQL script? 🙂
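For the curious, here is a minimal sketch (plain Python, with made-up label names) of the bitmask idea behind the Bit String workaround: each member of the would-be SET gets one bit, so a whole set of labels is stored as a single value and membership tests become bitwise operations.

```python
# Emulating a SET column with a bit mask -- the same idea as the
# PostgreSQL bit string workaround. The labels here are hypothetical.
LABELS = ["news", "sports", "music"]            # the would-be SET members
BIT = {name: 1 << i for i, name in enumerate(LABELS)}

def to_mask(members):
    """Encode a set of labels as one integer bit mask."""
    mask = 0
    for name in members:
        mask |= BIT[name]
    return mask

def from_mask(mask):
    """Decode a bit mask back into the set of labels."""
    return {name for name, bit in BIT.items() if mask & bit}

row = to_mask({"news", "music"})                # stored as a single integer
print(from_mask(row))                           # {'news', 'music'}
print(bool(row & BIT["sports"]))                # False: 'sports' is not in the set
```

It works, but you can see why it lacks the crystal clarity a real SET type would give: the mapping between bits and labels lives outside the schema.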

What’s your opinion, postgresmen?

PS Don’t hesitate to share your thoughts in the comments.

My first poll… BLOB storing

PS I just decided to check the poll functionality.

The desktop database similarity illusion


It is no secret that most development tools that grew up from desktop database technologies now try to provide developers with “native” desktop database metaphors (PostgresDAC is one of them): tables, indices, searching and positioning within an index, etc. It is a huge temptation to treat a task as if it were a simple table where you can search by the first letters of a title…

But we should not forget that these so-called tables (in the user-interface sense) and indices are implemented through SQL calls anyway, and on opening such a table something like SELECT * FROM table ORDER BY index will most likely be executed. And even this simple query will provoke different reactions from different servers.

Real life example

There is a tragic story from the ’90s described by A. Akopyantc (in Russian). One bank had a system developed in Clipper (who even remembers what that is now 😉 ) running on a Novell network. The system was running out of breath under a huge amount of data.

They bought a Sun server and Oracle for a number with six zeros at the end. The application was quickly rewritten for Oracle and launched, and then they found out it could handle no more than 5 users. To make a long story short, the new client program simply simulated the old Clipper ideology and opened every table it needed with the query shown above.

The mighty Oracle swapped these huge tables into memory for each operator (so-called buffering) and successfully filled all of it. Then, after every update made by an operator, the server started refreshing the buffers at 100% CPU load.

Another common mistake is an attempt to organize data processing on the client. One must be aware that viewing table data on the client will work two to three times slower than in the old file-server application, and will also hugely increase network traffic.
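The contrast can be sketched with any SQL engine; here is a minimal example using Python’s built-in sqlite3 as a stand-in for a remote server (the table and column names are made up). The file-server habit drags the whole table to the client and searches there; the client-server way lets the server filter, so only the matching rows travel over the wire:

```python
import sqlite3

# In-memory database standing in for a remote SQL server.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE clients (id INTEGER, name TEXT)")
con.executemany("INSERT INTO clients VALUES (?, ?)",
                [(i, f"Client {i}") for i in range(10000)])

# File-server habit: fetch everything, then search on the client.
all_rows = con.execute("SELECT * FROM clients ORDER BY name").fetchall()
hits_client = [r for r in all_rows if r[1].startswith("Client 99")]

# Client-server way: the server does the filtering; only hits travel.
hits_server = con.execute(
    "SELECT * FROM clients WHERE name LIKE 'Client 99%' ORDER BY name"
).fetchall()

print(len(all_rows), "rows dragged to the client vs",
      len(hits_server), "rows actually needed")
```

Both approaches find the same rows, but the first one moves all 10,000 of them across the network just to keep a handful, which is exactly the habit that drowned the bank’s Oracle server above.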


“Why is there a TPSQLTable component in your component suite then?” one may ask.

“Because man is weak” would be my answer. 🙂

You have no idea how many people cannot imagine the development process without table components (TPSQLTable, TTable, etc.). It was easier to give in than to refuse. But believe me, this messy thingy will be eliminated.

Once I asked Sergey Vostrikov (DevRace, FIBPlus) how they deal with this kind of lamentation.

“We politely send them off to study client-server ideology and explain that TTable was designed for single-user desktop systems, where it is an excellent solution with direct access to files. A file-server approach with a SQL server is harmful,” answered Sergey.