How can we measure PostgreSQL adoption in a country or region? The level of support for an event seems like a good indicator!
In June 2008 I suggested the idea of organizing a PGDay here, with little to no response. I have suggested the same idea every year since then with a similar result, but this year the response was very good: a university agreed to lend us its facilities, and we got 10 talks without counting my own. We organized it in just a few months, with few resources and only local people. It looks like a good start.
I'm planning to talk about community work, how people can help, and how much people trust PostgreSQL. Other people will talk about 9.1's features, PL/Python, clustering and replication tools, and other things; so we went from one person wanting to talk to several people covering a few areas...
Yeah! We are growing! I hope next year we can invite people from abroad; I didn't do that this time for lack of money and time (but if someone can cover the expenses and can move things around in their agenda, I can schedule it ;)
For now, I'm happy to announce that PGDay Ecuador will be held on October 8th, 2011.
For information see: http://wiki.postgresql.org/wiki/PGDay_Ecuador_2011
Jaime Casanova's Blog
Friday, September 16, 2011
Tuesday, July 26, 2011
Auditing table changes
How often are you asked to log changes in tables? AFAIR, every single customer I have had has asked me for that... Until now I have been using table_log (http://pgfoundry.org/projects/tablelog) for that, but it is less than ideal for several reasons.
What table_log does is create a new table for every table we track, with the same columns as the original table plus some extra columns appended at the end. This has a few problems:
- if we add new columns to the original table, we need to contort things to add the same columns in the correct place in the log table if we don't want to lose history.
- if we drop a column, we have to do the same on the log table or we won't be able to insert new records; in that case the solution is to lose some history.
- it inserts 1 record per INSERT, 1 record per DELETE, and 2 records per UPDATE (both the old and new versions of the record are stored), so on heavily updated tables the log can be double the size of the real table.
- there is no support for logging TRUNCATE actions.
- there is no simple way to tell which are the old and new versions of a single tuple; you need to match new versions with old versions using the time of the change, the user, and the mode (UPDATE).
- some information is missing, at least the IP of the client.
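To make the shape of the problem concrete, here is a hypothetical illustration of the one-log-table-per-tracked-table approach (the table and the audit column names are my own, not table_log's actual ones):

```sql
-- A hypothetical tracked table.
CREATE TABLE accounts (
    id      integer PRIMARY KEY,
    balance numeric NOT NULL
);

-- The log table repeats every column of the original, plus audit columns
-- appended at the end. This coupling is why any ALTER TABLE on accounts
-- must be replayed here, in exactly the same position, or history is lost.
CREATE TABLE accounts_log (
    id          integer,
    balance     numeric,
    changed_at  timestamptz NOT NULL DEFAULT now(),
    changed_by  text        NOT NULL DEFAULT current_user,
    change_mode text        NOT NULL   -- e.g. INSERT / UPDATE old / UPDATE new / DELETE
);
```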
What I have done here is:
- use hstore, and log only the values that have changed; because of that, if columns are added or removed we don't need to do anything to the logger table.
- it inserts only 1 record per action, so it uses less space than table_log.
- it logs TRUNCATE actions (it needs an additional trigger for that, though).
- old and new values are in the same record, so it's obvious what has changed in the tuple.
- I added two fields that table_log doesn't have: current_user and client_addr (I'm not sure how useful current_user is, but the IP can surely be useful).
- I used only one table for all tracked tables; that way, if we want separate tables for some of them, we can partition by relid.
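A minimal sketch of how such an hstore-based audit trigger could look. The table and function names here are assumptions for illustration, not necessarily the ones I used, and the `accounts` table is hypothetical; `hstore(OLD) - hstore(NEW)` uses hstore's difference operator to keep only the pairs that actually changed:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

-- One audit table shared by every tracked table.
CREATE TABLE logged_actions (
    relid       oid         NOT NULL,                      -- which table changed
    action      text        NOT NULL,                      -- 'I', 'U' or 'D'
    changed_at  timestamptz NOT NULL DEFAULT now(),
    changed_by  text        NOT NULL DEFAULT current_user,
    client_addr inet                 DEFAULT inet_client_addr(),
    old_values  hstore,                                    -- previous values of changed columns
    new_values  hstore                                     -- new values of changed columns
);

CREATE OR REPLACE FUNCTION log_changes() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'UPDATE' THEN
        -- store only the columns whose values actually changed
        INSERT INTO logged_actions (relid, action, old_values, new_values)
        VALUES (TG_RELID, 'U',
                hstore(OLD) - hstore(NEW),
                hstore(NEW) - hstore(OLD));
        RETURN NEW;
    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO logged_actions (relid, action, new_values)
        VALUES (TG_RELID, 'I', hstore(NEW));
        RETURN NEW;
    ELSE  -- DELETE
        INSERT INTO logged_actions (relid, action, old_values)
        VALUES (TG_RELID, 'D', hstore(OLD));
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

-- Hypothetical tracked table, just to show the trigger attachment.
CREATE TABLE accounts (id integer PRIMARY KEY, balance numeric NOT NULL);

CREATE TRIGGER audit_trg
AFTER INSERT OR UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE PROCEDURE log_changes();
```

TRUNCATE can't fire a row-level trigger, which is why logging it needs a separate statement-level `AFTER TRUNCATE` trigger, as noted above.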