From: "Kevin Grittner" on 28 Jun 2010 11:17

The thread on Admission Control caused me to realize that one policy I'd been considering as a possibly useful second tier (execution) admission control policy was actually more appropriate as a first tier (simple) admission control policy. The difference (as defined in the Hellerstein, Stonebraker & Hamilton paper) is that second tier policies are enforced after initial query planning, so that they can make use of information acquired during the planning phase; first tier policies are enforced before that.

Now, some may not realize it, but PostgreSQL already has a first tier policy, consisting of the max_connections and superuser_reserved_connections GUCs, and maybe others. What I'm looking at could be considered an extension of that first tier or, if you tilt your head right, could be considered a built-in connection pool. The mechanism would be not much more complex than max_connections.

My experience with PostgreSQL benchmarks and production use has indicated that performance falls off when you have more active connections than required to saturate available resources. It has also indicated that we get best results by keeping a relatively small number of database transactions busy until they complete, queuing up requests for new transactions so that at saturation, a new transaction begins as soon as a previous one ends. "Relatively small", in my experience, is typically somewhere near (2 * core_count) + effective_spindle_count.
I think this could be achieved with the following:

When preparing to deal with a new statement:
- acquire lockX
- if not a superuser
- and not in an active transaction
- and countX >= max_active_transactions
- place current process at tail of waitX queue, and block
- (lockX would be released while blocked)
- increment countX
- release lockX

When preparing to complete (commit or rollback) a transaction:
- acquire lockX
- decrement countX
- if countX < max_active_transactions
- and waitX queue is not empty
- remove process at head of queue and unblock it
- release lockX

It's possible this code could be placed inside some block of code which is already locked around transaction creation and completion; otherwise, I suspect that a spinlock would suffice, since there's not a lot going on inside a block of code protected by lockX.

It seems theoretically possible that user locks which span transactions, if used in a certain way, could cause this scheme to deadlock. I'm skeptical that anyone is using them in such a way. If we're worried about that, however, there would need to be some sort of deadlock detection.

Another issue would be how to show this to users. If we could show the transaction in pg_locks as having its virtualxid exclusive lock not granted, with waiting showing as true in pg_stat_activity, that would make sense to me.

Does this sound at all sane?

-Kevin

--
Sent via pgsql-hackers mailing list (pgsql-hackers(a)postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers