From: Ian Boyd on
> There are no "fields" in SQL - just rows, columns and values.
You channeling Celko? :)

> SELECT 'Hello, world!' AS MyLovelyWelcomeMessage
> The question is what the column name will be good for.
> If you get just one row with one value, you usually don't
> need to name it. And if you have more, you usually have
> an application dealing with the data.

If there are multiple fields (a.k.a. columns) in the returned
records (a.k.a. rows), and the client accesses
fields (a.k.a. columns) by name, then each
field (a.k.a. column) will need a name. Even if there is only one
field (a.k.a. column) in the returned records (a.k.a. rows), and
the client can only access fields (a.k.a. columns) by name, that
field (a.k.a. column) will still need a name.
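For what it's worth, the same aliasing seems to work in DB2; a sketch
(SYSIBM.SYSDUMMY1 is DB2's standard one-row dummy table):

```sql
-- name the single column explicitly
SELECT 'Hello, world!' AS MyLovelyWelcomeMessage
FROM SYSIBM.SYSDUMMY1

-- the VALUES form returns the same row, but with a generated column name
VALUES 'Hello, world!'
```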

> Yes, to rename a column you have to have a SELECT statement.
> 2. Declaring Variables
>
>> BEGIN
>> DECLARE SaverUserID INT;
>> END
>
> BEGIN ATOMIC
> DECLARE i INT;
> END@

Ah-hah! "BEGIN ATOMIC"

>> i thought that the field type in DB2-SQL was INTEGER. No?
i see it now in the online help. INT is an alias for the INTEGER type.
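So declaring and using a variable would look something like this; a hedged
sketch, assuming the CLP statement terminator has been changed to @ first:

```sql
--#SET TERMINATOR @
BEGIN ATOMIC
  DECLARE SaverUserID INT;   -- INT and INTEGER are interchangeable
  SET SaverUserID = 42;
END@
```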

3. Returning a value
>> SQL Server:
>>>> SELECT @@spid AS ProcessID
>>>> or
>>>> DECLARE @ProcessID int
>>>> SET @ProcessID = @@spid
>>>> SELECT @ProcessID
>>>
>>> Again, use VALUES.
>> Again, i assume that VALUES is a bad thing:
> How so?

Because rather than using VALUES:
> SET ApplicationID = VALUES Application_ID();
i don't use VALUES:
> SET ApplicationID = Application_ID();

Keep in mind that you're responding to a post i made talking to the other
guy, before it was explained that VALUES constructs a virtual table.

> Have a look at the DB2 special registers in the manual: USER, CURRENT
> DATE,
> CURRENT TIME, CURRENT TIMESTAMP.

Excellent. Thank you.
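For reference, the registers can be read with a plain VALUES statement; a
sketch (should work from any DB2 client):

```sql
-- one row, four columns: who and when
VALUES (USER, CURRENT DATE, CURRENT TIME, CURRENT TIMESTAMP)
```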


So now i'm getting closer to a compiling trigger. Next question:

5. Setting a field (a.k.a column) to NULL

SQL Server
> UPDATE MyTable SET SomeField = NULL
> WHERE SomeOtherField IN (SELECT KeyField FROM MyTable2 WHERE Field = 3)

or

> INSERT INTO MyTable (Firstname, Lastname, Address, Child, Phone)
> VALUES ('Ian', 'Boyd', '728 Helena', NULL, '911-426-3184')

or

> INSERT INTO MyTable (Firstname, Lastname, Address, Child, Phone)
> SELECT fname, lname, addr1, NULL, NULL FROM legacy_system

In my variant of the 3rd case in DB2, it complains that "NULL is not valid
in the context where it is used."



From: Ian Boyd on
> In a FOR EACH ROW trigger the new and old transition variables are already
> matched.
>
> Also a FOR EACH ROW trigger does not necessarily have to produce any
> inserted and deleted temporary tables. Instead it can pipeline.
> A straight forward audit trigger has a cost which is virtually identical
> to the cost of the individual inserts.
>
> It is ironic that in most OLTP systems the number of rows changed by one
> statement is 1 anyway, so a statement trigger would execute for only one
> row.
>
So a statement like
INSERT INTO NewTable
SELECT * FROM OldTable

can be faster if DB2 has to perform logic on every row in the insert, rather
than one set-based operation?

What about my soon-to-be-headache-for-tomorrow, an update trigger:

UPDATE MyTable
SET AMoneyField = AMoneyField * 1.10

Wouldn't DB2 prefer, when doing the trigger:

INSERT INTO AuditLog
SELECT fields
FROM OldTable
INNER JOIN NewTable
ON OldTable.RowID = NewTable.RowID
WHERE OldTable.AMoneyField <> NewTable.AMoneyField

rather than doing

Row#1
if OldRow.AMoneyField <> NewRow.AMoneyField then
INSERT INTO AuditLog
SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField

Row#2
if OldRow.AMoneyField <> NewRow.AMoneyField then
INSERT INTO AuditLog
SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField

Row#3
if OldRow.AMoneyField <> NewRow.AMoneyField then
INSERT INTO AuditLog
SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField

Row#4
if OldRow.AMoneyField <> NewRow.AMoneyField then
INSERT INTO AuditLog
SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField

....

Row#984,648,321
if OldRow.AMoneyField <> NewRow.AMoneyField then
INSERT INTO AuditLog
SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField

Set-based always performs better than row-by-row logic. What am i missing?


> talking of triggers, you may also want to familiarize yourself with
> BEFORE triggers. Very powerful and much more lightweight than patching up
> the rows after the update/insert.
> (they come at the cost of the evaluation of the expression)
>
> CREATE TRIGGER trg1 BEFORE UPDATE ON T
> FOR EACH ROW REFERENCING NEW AS n OLD AS o
> WHEN (n.c1 <= o.c1)
> SIGNAL SQLSTATE '78000' SET MESSAGE_TEXT = 'C1 must increase!'

Can you translate my original insert trigger into a db2 before insert
trigger?
i've seen that example of a before trigger somewhere, but it doesn't really
help me.
What would i want the evaluation expression to be:

CREATE TRIGGER trg1 BEFORE UPDATE ON T
FOR EACH ROW REFERENCING NEW AS n OLD AS o
WHEN (1=1)
BEGIN ATOMIC And_Then_What_Goes_Here;
END;@



From: Serge Rielau on
Ian Boyd wrote:
>> In a FOR EACH ROW trigger the new and old transition variables are already
>> matched.
>>
>> Also a FOR EACH ROW trigger does not necessarily have to produce any
>> inserted and deleted temporary tables. Instead it can pipeline.
>> A straight forward audit trigger has a cost which is virtually identical
>> to the cost of the individual inserts.
>>
>> It is ironic that in most OLTP systems the number of rows changed by one
>> statement is 1 anyway, so a statement trigger would execute for only one
>> row.
>>
> So a statement like
> INSERT INTO NewTable
> SELECT * FROM OldTable
>
> can be faster if DB2 has to perform logic on every row in the insert, rather
> than one set-based operation?
>
> What about my soon-to-be-headache-for-tomorrow, an update trigger:
>
> UPDATE MyTable
> SET AMoneyField = AMoneyField * 1.10
>
> Wouldn't DB2 prefer, when doing the trigger:
>
> INSERT INTO AuditLog
> SELECT fields
> FROM OldTable
> INNER JOIN NewTable
> ON OldTable.RowID = NewTable.RowID
> WHERE OldTable.AMoneyField <> NewTable.AMoneyField
>
> rather than doing
>
> Row#1
> if OldRow.AMoneyField <> NewRow.AMoneyField then
> INSERT INTO AuditLog
> SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField
>
> Row#2
> if OldRow.AMoneyField <> NewRow.AMoneyField then
> INSERT INTO AuditLog
> SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField
>
> Row#3
> if OldRow.AMoneyField <> NewRow.AMoneyField then
> INSERT INTO AuditLog
> SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField
>
> Row#4
> if OldRow.AMoneyField <> NewRow.AMoneyField then
> INSERT INTO AuditLog
> SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField
>
> ...
>
> Row#984,648,321
> if OldRow.AMoneyField <> NewRow.AMoneyField then
> INSERT INTO AuditLog
> SELECT fields, OldTableRow.AMoneyField, NewTableRow.AMoneyField
>
> Set-based always performs better than row-by-row logic. What am i missing?
You are missing the fact that this join of yours is doing all the same
work, in addition to having to match the rows on OldTable.RowID =
NewTable.RowID. Likely the join will be a nested loop, meaning you're
scanning one of the temp tables (which have to be created, of course)
984,648,321 times (unless you can use a hash join, which is still far
from free).
DB2's triggers are inline. There is no invocation cost.
In SQL Server terms, compare to T-SQL table functions, which can be inlined.
(Sometimes I hate that I don't own my patents.. Could be rich charging
MS for that stuff)
DB2 will run the following "SQL":
SELECT COUNT(1) -- Ignore the count, artistic freedom...
FROM (INSERT INTO newtable SELECT * FROM OldTable) AS newtablerow,
(INSERT INTO AuditLog
VALUES fields, NULL, NewTableRow.AMoneyField)

(What was oldtable row meant to be...?)
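Spelled out as an actual row-level audit trigger, the case being discussed
might look like this; a sketch only, and the AuditLog column names are
invented, not from the thread:

```sql
CREATE TRIGGER trg_audit_money
AFTER UPDATE OF AMoneyField ON MyTable
REFERENCING OLD AS o NEW AS n
FOR EACH ROW
WHEN (o.AMoneyField <> n.AMoneyField)
  -- one inline insert per changed row; no temp tables, no invocation cost
  INSERT INTO AuditLog (RowID, OldValue, NewValue)
  VALUES (n.RowID, o.AMoneyField, n.AMoneyField)
```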

>> talking of triggers, you may also want to familiarize yourself with
>> BEFORE triggers. Very powerful and much more lightweight than patching up
>> the rows after the update/insert.
>> (they come at the cost of the evaluation of the expression)
>>
>> CREATE TRIGGER trg1 BEFORE UPDATE ON T
>> FOR EACH ROW REFERENCING NEW AS n OLD AS o
>> WHEN (n.c1 <= o.c1)
>> SIGNAL SQLSTATE '78000' SET MESSAGE_TEXT = 'C1 must increase!'
>
> Can you translate my original insert trigger into a db2 before insert
> trigger?
> i've seen that example of a before trigger somewhere, but it doesn't really
> help me.
> What would i want the evaluation expression to be:
>
Your trigger can't be a BEFORE trigger; it changes the state of the
database. BEFORE triggers are used to:
* Modify the "INSERTED" table BEFORE doing the INSERT
(e.g. to generate complex default expressions)
* Do error checking not placed in a check constraint or RI for some
reason or other.

The WHEN clause is not mandatory, btw.. just omit it if you want the
trigger to fire always.
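The first use might look like this; a sketch only (NO CASCADE is the keyword
DB2 expects for BEFORE triggers, and the column and default are invented):

```sql
CREATE TRIGGER trg_default_child
NO CASCADE BEFORE INSERT ON MyTable
REFERENCING NEW AS n
FOR EACH ROW
WHEN (n.Child IS NULL)
  -- fill a default into the incoming row before it is inserted
  SET n.Child = 'none'
```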

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
From: Serge Rielau on
Ian Boyd wrote:
>> There are no "fields" in SQL - just rows, columns and values.
> You channeling Celko? :)
*chuckle* I had the same thought.
Seriously though it doesn't hurt to use the correct language.
Just like it doesn't hurt to speak proper English outside the pub ;-)

>> INSERT INTO MyTable (Firstname, Lastname, Address, Child, Phone)
>> SELECT fname, lname, addr1, NULL, NULL FROM legacy_system
>
> In my variant of the 3rd case in DB2, it complains that "NULL is not valid
> in the context where it is used."
DB2 uses strong typing. An untyped NULL (or ?) is only allowed in
specific places where DB2 can immediately deduce the datatype.
Those are UPDATE ... SET, the SET statement, and INSERT ... VALUES.
In all other cases CAST(NULL AS <type>) will do the job.
That's the way the standard is defined. No technical reason, really.
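So the failing third case from earlier presumably becomes something like
this; a sketch, with the VARCHAR lengths guessed:

```sql
INSERT INTO MyTable (Firstname, Lastname, Address, Child, Phone)
SELECT fname, lname, addr1,
       CAST(NULL AS VARCHAR(50)),  -- typed NULLs, so DB2 can deduce the type
       CAST(NULL AS VARCHAR(20))
FROM legacy_system
```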

Cheers
Serge
--
Serge Rielau
DB2 Solutions Development
IBM Toronto Lab
From: Ian Boyd on
> DB2 supports variables in the context of stored procedures, functions,
> methods, triggers and "dynamic compound statements".
> DB2 does not support "global variables" (and I thought SQL Server doesn't
> either, so I'm surprised you have an issue).

i don't think SQL Server has global variables either - at least i've never
seen them.

> I think the name SYSIBM.SYSDUMMY1 is rooted in the fact that it returns 1
> row.

That's an interesting insight!

> The VALUES clause is actually very powerful when embedded in the from
> clause.
> E.g. instead of doing:
> T-SQL:
> SELECT * FROM
> (SELECT 5
> UNION
> SELECT 6) AS X
> you can do:
> SELECT * FROM (VALUES (5), (6)) AS X.

i've never used anything similar to
(SELECT 5 UNION SELECT 6) AS X

i've never had a problem where i needed to join to such a virtual table. Like
i said before, i'll have to figure out where VALUES would be useful to me.
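One place it does come in handy is as an inline lookup table; a sketch with
invented table and column names:

```sql
-- translate a numeric code without creating a real lookup table
SELECT t.Priority, p.Label
FROM MyTable t
JOIN (VALUES (1, 'low'), (2, 'medium'), (3, 'high')) AS p (Priority, Label)
  ON t.Priority = p.Priority
```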

> This T-SQL: SELECT @@spid AS ProcessID
> is NOT column aliasing.

If i run this query, i get a record set with one row and one column. And
that column has no name. If i want to give that column a name (or a
different name) i have to use:

> SELECT @@spid AS ProcessID

So the column gets renamed from "" to "ProcessID." Whereas the following SET
operation
> SET ProcessID = spid;

returns no rows.

> It is important to note that whenever you write a statement that contains
> ';' due to "substatements" you need to make sure the statement delimiter
> (what you know as "go" I think) is set to a value other than ';' (imagine
> placing 'go' after each line in a T-SQL procedure.. same confusion).

i've never had to use a "go" in T-SQL. All the text i submit is one batch.
There is no ; needed at the end of every statement.

So now having to have not only a statement delimiter but also a batch
delimiter is painful.

> Again I'm surprised you try this since AFAIK SQL Server supports no such
> thing. all T-SQL must be in a procedure.

No. i can send T-SQL to SQL Server and it runs it. i don't need it to be
inside a transaction or any such "BEGIN ATOMIC". It runs the SQL it is
given. For example:

>DECLARE @a int
>SET @a = 5
>WHILE @A < 5 BEGIN
> SET @a = @a + 5
> EXECUTE SomeStoredProcedure
>END

is perfectly valid. i can send the T-SQL to SQL Server using its own query
tool, or using 3rd-party tools, or using ADO. i give it those characters, no
semicolons, no go, no begin/end/atomic, and it just runs.

> So let's move into a procedure example:

Here's an interesting question. The following runs fine, exactly as is:

CREATE PROCEDURE dostuff(IN a INTEGER, INOUT b INTEGER, OUT c INTEGER)
BEGIN
DECLARE d INTEGER;
DECLARE cur CURSOR WITH RETURN FOR
SELECT TABNAME FROM SYSCAT.COLUMNS FETCH FIRST 2 ROWS ONLY;
SET d = a + b;
SET c = d * a;
SET b = 7;
OPEN cur;
END

So now:

> CALL dostuff(5, 7, ?)
Wrong number of parameters

> DECLARE c integer; call dostuff(5, 7, ?);
An unexpected token "integer" was found following "DECLARE c ".

>BEGIN
> DECLARE c integer; call dostuff(5, 7, c);
>END
An unexpected token "DECLARE" was found following "begin "

>BEGIN ATOMIC
> DECLARE c integer; call dostuff(5, 7, c);
>END
The parameter mode (IN, OUT, or INOUT) is not valid for a parameter in
procedure "DOSTUFF"

Why is it that the create procedure doesn't need atomic begins? It doesn't
need the keyword ATOMIC after the BEGIN, and it doesn't need to be wrapped
wholly in a begin/end such as:

BEGIN ATOMIC
CREATE PROCEDURE dostuff(...)
BEGIN
END
END

But my ad-hoc sql does need atomic begins? By the way, this is as far as i
can get. i don't know how to fix the call to the stored procedure.

> PS: One more thing..... the SQL/PSM standard (which is the foundation for
> DB2's SQL procedures) uses exception handlers for error handling.
> Do NOT overload them to emulate old style T-SQL. Use them just like in SQL
> Server 2005 Microsoft encourages you to use exception handlers.

A welcome addition.
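A minimal sketch of such a handler, for later (SQLSTATE '23505' is the
duplicate-key state; the procedure and table names here are invented):

```sql
CREATE PROCEDURE safe_insert (IN p_name VARCHAR(50))
BEGIN
  DECLARE duplicate INT DEFAULT 0;
  -- handler fires instead of raising the error, then execution continues
  DECLARE CONTINUE HANDLER FOR SQLSTATE '23505'
    SET duplicate = 1;
  INSERT INTO MyTable (Firstname) VALUES (p_name);
END
```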

> OK I think that covers it. Let us know how it goes.
Time to go home for the day. Day 9, nothing working yet.

