From: Gregory Stark on
"Jonah H. Harris" <jonah.harris(a)gmail.com> writes:

> On 2/17/07, Joshua D. Drake <jd(a)commandprompt.com> wrote:
>> My understanding is that the main difference is that rollbacks are
>> inexpensive for us, but expensive for Oracle.
>
> Yes, Oracle is optimized for COMMIT, we're optimized for ROLLBACK :)

I used to say that too but I've since realized it's not really true. It's more
like Oracle is optimized for data that's committed long in the past and we're
optimized for data that's been recently updated.

In Oracle the data that's been committed long in the past requires no
transactional overhead but the data that's been recently updated requires lots
of work to fetch the right version.

In Postgres it's the other way around: data whose update or deletion was
committed long ago requires extra work to clean up, but data that's been
recently changed requires little additional work to see the correct version.

In a sense then it's the opposite of what we usually say. Oracle is optimized
for mostly static data. Postgres is optimized for changing data.
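
As a minimal psql sketch of the cleanup cost being described (assuming a
throwaway table t): every UPDATE in PostgreSQL leaves the old row version
behind, and reclaiming it is deferred work for VACUUM.

    CREATE TABLE t (id int PRIMARY KEY, val text);
    INSERT INTO t VALUES (1, 'a');
    UPDATE t SET val = 'b' WHERE id = 1;  -- old row version stays on disk as a dead tuple
    SELECT ctid, xmin, xmax, * FROM t;    -- the live version now sits at a new ctid
    VACUUM VERBOSE t;                     -- reports and reclaims the dead version left behind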

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com


From: RPK on

I agree that TimeStamp logging creates overhead, but I just want to know: if an
accidental update happened to a table and the incident was only traced three
days later, what facility does PGSQL provide to bring the table back to its
original condition? You can't sit there regretting that you did not run
ROLLBACK before the COMMIT. (Correct me if I'm wrong; I am only a user.)

As for Oracle's technology creating overhead, it is true: Oracle's database is
not for ordinary machines. You can't expect good performance from Oracle on a
normal 256 MB machine. Still, the more recovery options there are, the better
for mission-critical environments.

A feature for enabling/disabling TimeStamp logging would be acceptable. A user
must be able to decide whether a FlashBack-type option is needed or not. In
Oracle 10g we can switch off the "FlashBack" feature if we are low on
resources. If PGSQL is to be used in a mission-critical situation, no company
will rely on low-end machines; for that type of situation the best environment
is chosen, and I think PGSQL must have this type of recovery option. The PGSQL
installer could ask the user during setup to enable/disable TimeStamp logging.
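
For reference, this is roughly what the requested facility looks like on the
Oracle 10g side (the table name is only an example; both statements can reach
back only as far as the retained undo allows):

    SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '3' DAY);  -- Flashback Query
    ALTER TABLE emp ENABLE ROW MOVEMENT;
    FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '3' DAY);   -- Flashback Table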

Restoring the database from a backup file created three days ago is not
feasible: the changes in other tables and everything created since would have
to be done again, all for the sake of undoing the last update on one
particular table.
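
Probably the nearest built-in facility PGSQL offers today is point-in-time
recovery from a base backup plus archived WAL; a minimal recovery.conf sketch
(the archive path and target time here are made up) would be:

    restore_command = 'cp /mnt/wal_archive/%f %p'
    recovery_target_time = '2007-02-17 12:00:00'

Note that this rewinds the whole cluster to that moment, not a single table,
which is exactly the limitation described above.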


Warren Turkal-5 wrote:
>
> On Saturday 17 February 2007 07:49, RPK wrote:
>> PostgreSQL, already a mature database, needs to have more options for
>> recovery as compared to proprietary databases. I just worked with Oracle's
>> FlashBack query feature in Oracle 9i and FlashBack Table feature in 10g.
>>
>> Future versions of PostgreSQL must have similar features which enable users
>> to bring Table(s) and/or Database(s) to a desired Time Stamp.
>
> Check out my proposal[1] for Temporal extensions. Ultimately, creating valid
> time and transaction time tables would be possible through my proposal.
> Please check it out.
>
> [1] http://archives.postgresql.org/pgsql-hackers/2007-02/msg00540.php
>
> wt
> --
> Warren Turkal (w00t)
>




From: "Andrew Dunstan" on
RPK wrote:
>
> I agree that TimeStamp logging creates overhead, but I just want to know: if
> an accidental update happened to a table and the incident was only traced
> three days later, what facility does PGSQL provide to bring the table back to
> its original condition? You can't sit there regretting that you did not run
> ROLLBACK before the COMMIT. (Correct me if I'm wrong; I am only a user.)
>

Why the heck can't you create a reversing transaction? That's what
ordinary mortals do. Demanding unlimited undo at some time that is
arbitrarily distant in the future strikes me as wholly unreasonable.

What do you mean by "accidental update"? What you really appear to mean is
that a program or a human operator has made an error, and incorrectly told
the database to commit a transaction. The answer surely is to correct the
behaviour of the program or human, rather than wanting the database to
provide an undo facility. Alternatively, this should be handled at the
application layer, using something like table_log.
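
A "reversing transaction" here is nothing more exotic than an ordinary
transaction that re-applies the old values; a sketch with made-up table and
values:

    BEGIN;
    -- restore the pre-mistake values, taken from a backup, a log table,
    -- or the application's own records
    UPDATE accounts SET balance = 100.00 WHERE id = 42;
    COMMIT;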

Some things just don't work well with this sort of facility. Just ask your
bookie if you can undo a bet that you "accidentally" placed with him and
which, three days later, you discover (after the race) was a mistake.


cheers

andrew





From: "Jonah H. Harris" on
On 2/20/07, Gregory Stark <stark(a)enterprisedb.com> wrote:
> I used to say that too but I've since realized it's not really true.

Heh, take a joke man... I was following up on Drake's email :)

But, since you want to discuss your view of the systems openly... I'll
gladly reply :)

> It's more like Oracle is optimized for data that's committed
> long in the past and we're optimized for data that's
> been recently updated.

Wrong. When Oracle says it's committed, it's committed. No
difference between when, where, and how. In Oracle, the committed
version is *always* the first presented to the user... it takes time
to go back and look at older versions, but why shouldn't that be a bit
slower? It isn't common practice anyway. Same with rollbacks... why
should they optimize for them when 97% of transactions commit?

> In Oracle the data that's been committed long in the past requires no
> transactional overhead but the data that's been recently updated requires lots
> of work to fetch the right version.

Wrong. The same transactional overhead applies to *all* data in
Oracle, no matter when it was committed. Similarly, the only
overhead required occurs when someone is querying in serializable
isolation or on read-committed data before or during a commit. On
short OLTP-type transactions, Oracle has the most optimized solution.

> In Postgres it's the other way around. data that's been committed deleted long
> ago requires extra work to clean up but data that's been recently changed
> requires little additional work to see the correct version.

PostgreSQL has little additional work? Like checking the validity of
every tuple? Oracle checks visibility at the block level, so there's
*much* less overhead. Take most of the benchmarks, where a block can
hold ~200 tuples and the tables run to 100+ million rows.
On a sequential scan, Oracle would perform 500K checks; PostgreSQL
would perform *all* 100M checks (not counting dead versions due to row
updates and the like). On an index scan, Oracle not only has a
smaller index and less to check, but also knows the tuple is
committed and will, in most cases, not have to perform additional
physical I/O to find the latest version of a row.
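
The per-tuple cost being referred to is visible directly in PostgreSQL's
system columns; a small sketch against a hypothetical table t:

    -- every row carries xmin/xmax transaction IDs that a scan must test
    -- against its snapshot; at ~200 tuples per block, a 100M-row scan means
    -- ~100M such tests versus ~500K block-level checks in the scheme
    -- described above
    SELECT xmin, xmax, ctid, id FROM t LIMIT 5;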

Of course, Oracle's design is much more complicated in its ability to
build read-committed versions of the blocks at runtime; something the
simplicity of PostgreSQL's MVCC design eliminates.

> In a sense then it's the opposite of what we usually say. Oracle is optimized
> for mostly static data. Postgres is optimized for changing data.

Care to share an example to prove it?

Like always, there are pros and cons with both designs, but denying
facts gets us nowhere. We're off-topic now... so we should either
move this off line or to another thread. I personally don't see much
of a reason to continue discussing MVCC designs anymore as Oracle's is
patented and PostgreSQL's is highly unlikely to change drastically.

As always, I'd suggest discussing improvements, not the status quo.
Likewise, it is quite obvious to anyone familiar with Oracle when
someone discusses its design, drawbacks, and limitations without
having used it extensively. Don't get me wrong, it's fine to prefer
one design to another, but pushing discussion items comparing Oracle
to PostgreSQL because of things you've heard or read somewhere isn't
the same as understanding them because you've used them.

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
33 Wood Ave S, 3rd Floor | jharris(a)enterprisedb.com
Iselin, New Jersey 08830 | http://www.enterprisedb.com/


From: Rod Taylor on
>
> Wrong. When Oracle says it's committed, it's committed. No
> difference between when, where, and how. In Oracle, the committed
> version is *always* the first presented to the user... it takes time
> to go back and look at older versions, but why shouldn't that be a bit
> slower? It isn't common practice anyway. Same with rollbacks... why
> should they optimize for them when 97% of transactions commit?

Do 97% of transactions commit because Oracle has slow rollbacks and
developers are working around that performance issue, or because they
really commit?

I have watched several developers who would rather issue numerous
SELECTs in the application to verify things like foreign keys, in
order to avoid a rollback.
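
The pattern in question looks something like this (tables and values are
made up):

    SELECT 1 FROM customer WHERE id = 42;     -- application verifies the parent row first
    INSERT INTO orders (customer_id, amount)
        VALUES (42, 100.00);                  -- so this never hits an FK violation
    -- rather than simply attempting the INSERT and letting a foreign-key
    -- error abort (and roll back) the transaction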

Anyway, I don't have experience with big Oracle applications but I'm
not so sure that 97% of transactions would commit if rollbacks were
cheaper.



