From: Gokulakannan Somasundaram on 26 Feb 2010 16:48

> No, what generally happens is it fails to find a matching index entry at
> all, because the search algorithm concludes there can be no match based
> on the limited set of comparisons it's done. Transitivity failures lead
> to searching the wrong subset of the index.

Actually, Tom, I am not able to understand that completely. What you are saying is that in the current scenario, with an index built on a broken data type, the scan will return no results, but it will never return wrong results, so an update will never corrupt the heap data. I take it as you say (please correct me if I am wrong).

But even returning no results might lead to failures in unique checks. While inserting, I check whether a particular value is already present; if the scan returns no results, the insert goes ahead on the assumption that the unique check has passed, while in reality it has failed.

Wait a minute. Bingo! For unique checks we are already going from the heap to the index. That is the same thing I am doing with the thick index. So if we can trust our current unique checks, then we should trust the thick index.

Thanks, Tom, for having this good conversation. I think this broken-data-type / volatile-function issue has to be resolved for the current index if we advocate stopping the thick index. WOW!

Thanks,
Gokul.
From: Gokulakannan Somasundaram on 26 Feb 2010 18:36

> Wait a minute. Bingo! For unique checks we are already going from the
> heap to the index. That is the same thing I am doing with the thick
> index. So if we can trust our current unique checks, then we should
> trust the thick index.

I think this opens up a lot of opportunities for improvement in Postgres:

a) HOT can now extend its reach beyond page boundaries.
b) If a heap has three indexes and an update affects only one of them, we need not update the other two. HOT can take a cleaner, fresher approach.

If we have both the normal index (without snapshot) and the thick index, Postgres can boast a very rich index family: index structures for update/delete-intensive workloads (the normal index) and the thick index for select-heavy workloads. Marketing folks can easily advertise the product... :)

Gokul.
From: Gokulakannan Somasundaram on 27 Feb 2010 00:42

> Wait a minute. Bingo! For unique checks we are already going from the
> heap to the index. That is the same thing I am doing with the thick
> index. So if we can trust our current unique checks, then we should
> trust the thick index.
>
> I think this broken-data-type / volatile-function issue has to be
> resolved for the current index if we advocate stopping the thick index.

Can I get feedback from Tom / Heikki regarding my observation?

Regards,
Gokul.
From: Gokulakannan Somasundaram on 28 Feb 2010 01:02

If I got over-excited in the previous update, please ignore that.

a) We already go from the table to the index to do unique checks. That is the same traversal we would do to go and update the snapshot in the indexes.

b) The way it should work is to check whether the operator class is broken / the function is volatile, and put the onus on the user to make sure those are defined correctly.

c) In the ItemId, instead of removing the size field completely, we can store the size as size/4 (since it is MAXALIGNed). That saves us 2 bits. In an index we only need 13 bits to store the complete tuple size, but the iid uses 15 bits, so we can save two more bits there. That is sufficient to express the hint fields in an index. I think Karl's way of expressing it requires only one bit, which looks very efficient. So we can check the hint bits from the iid itself.

So with an addition of just 8 bytes per tuple, we can have the snapshot stored with the index. Can someone please comment on this?

Thanks,
Gokul.
From: Greg Stark on 28 Feb 2010 09:02
On Sun, Feb 28, 2010 at 6:02 AM, Gokulakannan Somasundaram <gokul007(a)gmail.com> wrote:
> So with an addition of just 8 bytes per tuple, we can have the snapshot
> stored with the index. Can someone please comment on this?

The transaction information on tuples takes 18 bytes plus several info bits. It's possible that storing just a subset of that would be useful, but it's unclear. And I think it would complicate the code if it sometimes had to fetch the heap tuple to get the rest and sometimes didn't.

I think you should take up a simpler project as a first project. This is a major overhaul of transaction information, and it depends on understanding how a lot of different areas work -- all of which are very complex, tricky areas to understand.

--
greg