From: tshad on

"--CELKO--" <jcelko212(a)earthlink.net> wrote in message
news:506ec56a-e829-447c-9896-ee1613b35a6e(a)g30g2000yqc.googlegroups.com...
> In the 1960's we would write records (not rows) to a scratch tape and
> share that tape among several procedures. It saved us the trouble of
> dismounting the tape. One of the fundamental principles of software
> engineering is that loosely coupled modules are better than what you
> have. Look up "data coupling", "functional coupling" and "procedural
> coupling" in any book on software engineering. You seem to have one or
> more of these design problems.
>
> For no more than 1000 rows (not records), why not re-write the code to
> use a derived table in each procedure? The procedures would become
> independent, loosely coupled modules. You will probably have to do
> some more work than just that, but it is a start.

I'm not sure why they would be loosely coupled modules.

But in some cases the table is filled with a set of records and then used
(JOINed) in multiple SELECT statements, so I don't think a derived table
would be the best bet here.
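
A rough sketch of the pattern I mean (table and column names invented for
illustration):

```sql
-- Fill the working set once...
CREATE TABLE #Candidates (EmployeeID int NOT NULL PRIMARY KEY);

INSERT INTO #Candidates (EmployeeID)
SELECT e.EmployeeID
FROM dbo.Employees AS e
WHERE e.Status = 'Active';

-- ...then reuse it in several SELECTs.
SELECT p.*
FROM dbo.Payroll AS p
JOIN #Candidates AS c ON c.EmployeeID = p.EmployeeID;

SELECT b.*
FROM dbo.Benefits AS b
JOIN #Candidates AS c ON c.EmployeeID = b.EmployeeID;
```

With a derived table, the filtering subquery would have to be repeated
inside each SELECT instead of being run once.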

Tom


From: Tony Rogerson on
Unless you've lots of concurrent connections CREATING temporary tables you
won't get the blocking problem he talks about - you really need to be going
some in order to see it.
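
To put that concretely: the blocking in question is contention on tempdb's
allocation pages, and it takes a very large number of connections creating
temp tables simultaneously to provoke it. One common way to keep it at bay
(procedure and column names invented here) is to create the #table inside a
stored procedure with no DDL on it afterwards, so SQL Server can cache the
table between calls:

```sql
-- Hypothetical procedure: the temp table is created once, with no
-- ALTER or CREATE INDEX afterwards, so SQL Server can cache it between
-- calls and skip most of the repeated tempdb allocation work.
CREATE PROCEDURE dbo.usp_ProcessBatch
AS
BEGIN
    CREATE TABLE #work
    (
        ItemID int NOT NULL PRIMARY KEY,
        Qty    int NOT NULL
    );

    -- ... populate and use #work here ...

END;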

Tony

"tshad" <tfs(a)dslextreme.com> wrote in message
news:eyotilb1KHA.5972(a)TK2MSFTNGP06.phx.gbl...
>
> "Tony Rogerson" <tonyrogerson(a)torver.net> wrote in message
> news:5E21021A-E28E-480F-87D1-6EC16FF7FFA3(a)microsoft.com...
>>> In this case Tony, I think that this is an unnecessarily harsh response:
>>
>> Given --celko--'s attitude over the past decade on this forum any harsh
>> response is reasonable - in fact, justified.
>>
>>> and records, his advice to the OP is sound: for a tiny table (up to 1000
>>> rows), a temp table or a table variable will not only perform as well or
>>> better, but will also be much easier to maintain than a 'queue' table.
>>> It should perform even better still if it is possible to find these rows
>>> with a CTE rather than materializing them at all, but the OP hasn't
>>> given enough information to decide if that is possible.
>>>
>>
>> That's not my experience; for a table expression - derived table or
>> CTE - the optimiser in most cases (given real-world SQL) has no real idea
>> how many rows will be returned at the intermediate steps, so you get a
>> generic plan.
>>
>> Table expressions are literally inline macros; the SQL is expanded into
>> the main query, so there are no encapsulation benefits in using table
>> expressions.
>>
>> # tables have statistics - and it is my experience, across a hell of a
>> lot of different industries, situations, scales and schema scenarios,
>> that a # table is the right choice 80% of the time because of those
>> statistics. CPU is cheap, so a plan compile costs milliseconds - probably
>> not even measurable - whereas getting the plan wrong can cost significant
>> amounts of CPU and disk IO because of the additional data that has to be
>> processed.
>>
>>> In my experience, the best performance improvements come from changing
>>> applications not to _need_ databases to do things that they aren't good
>>> at; and the second best from fixing schema. The worst 'improvements'
>>> come from doing things like avoiding the table creation overhead by
>>> using a persistent table for my temp storage - something surprising to
>>> many programmers who are used to CPU bound tasks that _always_ run
>>> faster if I do less work here ;)
>>
>> Using a permanent table to act as a temporary table just causes blocking,
>> fragmentation and additional logging, because people usually locate it in
>> an application database rather than tempdb.
>>
> In another response, John Bell mentioned that the problem with temp
> tables, which is what I was thinking of doing, was that it would cause a
> bottleneck in TempDB.
>
> If that were the case, why use it?
>
> In my case, not everyone would be using the table at once, typically. But
> there could be multiple users hitting the table at once.
>
> But you would have the same issue with the static table, where everyone
> hits the table at once.
>
> Tom
>
>> In my experience the best performance improvements come from teaching
>> people to think in Sets; often people think the database isn't the right
>> tool for the job because they don't understand Relational Theory, so they
>> just pooh-pooh the relational database.
>>
>> Tony.
>>
>>
>> "m" <m(a)b.c> wrote in message news:uTLCuyR1KHA.224(a)TK2MSFTNGP06.phx.gbl...
>>> In this case Tony, I think that this is an unnecessarily harsh response:
>>> While Celko is reminding us all _again_ about the difference between
>>> rows and records, his advice to the OP is sound: for a tiny table (up to
>>> 1000 rows), a temp table or a table variable will not only perform as
>>> well or better, but will also be much easier to maintain than a 'queue'
>>> table. It should perform even better still if it is possible to find
>>> these rows with a CTE rather than materializing them at all, but the OP
>>> hasn't given enough information to decide if that is possible.
>>>
>>> In my experience, the best performance improvements come from changing
>>> applications not to _need_ databases to do things that they aren't good
>>> at; and the second best from fixing schema. The worst 'improvements'
>>> come from doing things like avoiding the table creation overhead by
>>> using a persistent table for my temp storage - something surprising to
>>> many programmers who are used to CPU bound tasks that _always_ run
>>> faster if I do less work here ;)
>>>
>>>
>>> "Tony Rogerson" <tonyrogerson(a)torver.net> wrote in message
>>> news:#Jsd7rP1KHA.264(a)TK2MSFTNGP05.phx.gbl...
>>>>> For no more than 1000 rows (not records), why not re-write the code to
>>>>> use a derived table in each procedure? The procedures would become
>>>>> independent, loosely coupled modules. You will probably have to do
>>>>> some more work than just that, but it is a start.
>>>>
>>>> 1000 rows? From which planet did you pluck that figure?
>>>>
>>>> No statistics are held on table expressions like a derived table, so
>>>> performance can be horrendous.
>>>>
>>>> Temporary tables (akin to a relvar) are widely used in SQL Server, and
>>>> rightly so; statistics are held on a temporary table, so you get the
>>>> best plan available for what you are doing.
>>>>
>>>> Temporary tables are in fact real tables and conform to Codd's rules.
>>>>
>>>> What's your problem other than the syntax we use in SQL Server doesn't
>>>> explicitly follow ISO SQL?
>>>>
>>>> --ROGGIE--
>>>>
>>>>
>>>> [snip]
>>>>
>
>
From: m on
Re Celko, you have a point ;) As for the rest, as usual, it depends! I
have seen cases where temp tables excel, and others where they are just
wasteful - it depends on the query pattern and the distribution of data.
For example, when selecting all the rows from a small table, a CTE can be
more efficient; but when the rows are voluminous and hard to find (e.g. from
multiple sources or scattered through a huge table), realizing them in a
temp table often produces orders of magnitude better performance.
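
To make the trade-off concrete, a sketch of both shapes (schema invented
for illustration):

```sql
-- Small, cheap-to-find set: a CTE avoids materializing anything.
WITH Recent AS
(
    SELECT OrderID, CustomerID
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(day, -7, GETDATE())
)
SELECT r.OrderID, c.Name
FROM Recent AS r
JOIN dbo.Customers AS c ON c.CustomerID = r.CustomerID;

-- Large, expensive-to-find set reused several times: realize it once,
-- index it, and let later queries join the temp table instead of
-- re-running the search.
SELECT OrderID, CustomerID
INTO #Expensive
FROM dbo.Orders
WHERE Status = 'Backordered';   -- stand-in for some costly predicate

CREATE CLUSTERED INDEX ix_cust ON #Expensive (CustomerID);
```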

For the OP's problem however, we can't know what will work best, but we do
know that nearly anything will work better than a single shared table to
store inherently independent sets of data.

I have often seen designs where 'queue' or 'scratch' tables are treated by
applications as arrays or linked lists would be. I have even seen a case
where, to iterate the contents of such a table, the application selected the
ONE row that corresponded to the current index and looped - I nearly died
when the lead on that team tried to explain to me that this was a very good
design. Notwithstanding sets, there are some things that IMHO an RDBMS is
not good at doing. Often this is more because of the MS than the RDB part -
in HPC, the overhead of storing anything except the final result is usually
monstrous, yet most HPC applications use relational data structures
internally.

"Tony Rogerson" <tonyrogerson(a)torver.net> wrote in message
news:5E21021A-E28E-480F-87D1-6EC16FF7FFA3(a)microsoft.com...
>> In this case Tony, I think that this is an unnecessarily harsh response:
>
> Given --celko--'s attitude over the past decade on this forum any harsh
> response is reasonable - in fact, justified.
>
>> and records, his advice to the OP is sound: for a tiny table (up to 1000
>> rows), a temp table or a table variable will not only perform as well or
>> better, but will also be much easier to maintain than a 'queue' table.
>> It should perform even better still if it is possible to find these rows
>> with a CTE rather than materializing them at all, but the OP hasn't given
>> enough information to decide if that is possible.
>>
>
> That's not my experience; for a table expression - derived table or
> CTE - the optimiser in most cases (given real-world SQL) has no real idea
> how many rows will be returned at the intermediate steps, so you get a
> generic plan.
>
> Table expressions are literally inline macros; the SQL is expanded into
> the main query, so there are no encapsulation benefits in using table
> expressions.
>
> # tables have statistics - and it is my experience, across a hell of a
> lot of different industries, situations, scales and schema scenarios,
> that a # table is the right choice 80% of the time because of those
> statistics. CPU is cheap, so a plan compile costs milliseconds - probably
> not even measurable - whereas getting the plan wrong can cost significant
> amounts of CPU and disk IO because of the additional data that has to be
> processed.
>
> [snip]
From: tshad on

"Tony Rogerson" <tonyrogerson(a)torver.net> wrote in message
news:F822536B-F0BC-4D7C-BA12-04D0A1C15945(a)microsoft.com...
> Unless you've lots of concurrent connections CREATING temporary tables you
> won't get the blocking problem he talks about - you really need to be
> going some in order to see it.
>
And that would be a temporary block, I would assume, since creation of the
table shouldn't be too slow, especially when you have only 3 columns.
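
For what it's worth, the creation being discussed is something on this
order (column names invented), and its cost can be checked with SET
STATISTICS TIME:

```sql
SET STATISTICS TIME ON;

-- A minimal three-column temp table like the one in question.
CREATE TABLE #t
(
    ID     int          NOT NULL,
    Name   varchar(50)  NOT NULL,
    Amount decimal(9,2) NULL
);

DROP TABLE #t;

SET STATISTICS TIME OFF;
```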

Thanks,

Tom

> Tony
>
> [snip]


From: Tony Rogerson on
Won't even be measurable.

If you are interested, rather than taking our word for it, Google around for
tempdb blocking and research the problem.
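
A starting point for that research: the contention shows up as PAGELATCH
waits on tempdb's allocation pages, which you can watch for with the
waiting-tasks DMV:

```sql
-- Sessions currently waiting on latches for tempdb pages.
-- resource_description is database:file:page; tempdb is database 2.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```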

Many thanks,
Tony.

"tshad" <tfs(a)dslextreme.com> wrote in message
news:u185b1d1KHA.224(a)TK2MSFTNGP06.phx.gbl...
>
> "Tony Rogerson" <tonyrogerson(a)torver.net> wrote in message
> news:F822536B-F0BC-4D7C-BA12-04D0A1C15945(a)microsoft.com...
>> Unless you've lots of concurrent connections CREATING temporary tables
>> you won't get the blocking problem he talks about - you really need to be
>> going some in order to see it.
>>
> And that would be a temporary block, I would assume, since creation of the
> table shouldn't be too slow, especially when you have only 3 columns.
>
> Thanks,
>
> Tom
>
>> Tony
>>
>> [snip]