From: --CELKO-- on

Let me give you a "cut and paste" I use in the SQL Server groups:

1) The dangerous, slow kludge is to use dynamic SQL and admit that any
random future user is a better programmer than you are. It is used by
Newbies who do not understand SQL or even what a compiled language
is. A string is a string; it is a scalar value like any other
parameter; it is not code. Again, this is not just an SQL problem;
this is a basic misunderstanding of programming principles.

2) Passing a list of parameters to a stored procedure can be done by
putting them into a string with a separator. I like to use the
traditional comma. Let's assume that you have a whole table full of
such parameter lists:

CREATE TABLE InputStrings
(keycol CHAR(10) NOT NULL PRIMARY KEY,
input_string VARCHAR(255) NOT NULL);

INSERT INTO InputStrings VALUES ('first', '12,34,567,896');
INSERT INTO InputStrings VALUES ('second', '312,534,997,896');
etc.

This is the input table; the goal is output in the form of the
original key column and one parameter per row.

It makes life easier if the lists in the input strings start and end
with a comma. You will need a table of sequential numbers -- a
standard SQL programming trick. Now, the query:
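If you do not already have one, here is a minimal sketch of such a
table (the name Sequence and the column seq match the queries in this
post; populate it past the longest padded string you expect):

```sql
-- A minimal sketch of the auxiliary Sequence table; the name and
-- column match the queries in this post. Populate it at least up to
-- the longest input string plus the two bracketing commas.
CREATE TABLE Sequence
(seq INTEGER NOT NULL PRIMARY KEY
 CHECK (seq > 0));

INSERT INTO Sequence VALUES (1);
INSERT INTO Sequence VALUES (2);
INSERT INTO Sequence VALUES (3);
-- ... and so on, up to 257 for a VARCHAR(255) input_string.
```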

CREATE VIEW ParmList (keycol, place, parm)
AS
SELECT I1.keycol,
       COUNT(S2.seq), -- reverse order
       CAST (SUBSTRING (I1.input_string
              FROM S1.seq
              FOR MIN(S2.seq) - S1.seq - 1)
        AS INTEGER)
  FROM InputStrings AS I1, Sequence AS S1, Sequence AS S2
 WHERE SUBSTRING (',' + I1.input_string + ',', S1.seq, 1) = ','
   AND SUBSTRING (',' + I1.input_string + ',', S2.seq, 1) = ','
   AND S1.seq < S2.seq
 GROUP BY I1.keycol, I1.input_string, S1.seq;

The S1 and S2 copies of Sequence are used to locate bracketing pairs
of commas, and the entire set of substrings located between them is
extracted and cast as integers in one non-procedural step. The trick
is to be sure that the right hand comma of the bracketing pair is the
closest one to the first comma. The relative position of each element
in the list is given by the value of "place", though it counts down
rather than up; you can still use it to plan horizontal placement in
columns.
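For example, given the 'first' row above and a Sequence table covering
positions 1 through 15, the view should yield:

```sql
SELECT keycol, place, parm
  FROM ParmList
 WHERE keycol = 'first';

-- Expected result, with place counting down:
-- keycol    place  parm
-- 'first'       4    12
-- 'first'       3    34
-- 'first'       2   567
-- 'first'       1   896
```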

This might be faster now:

WITH Commas (keycol, comma_seq, comma_place)
AS
(SELECT I1.keycol, S1.seq,
        ROW_NUMBER() OVER (PARTITION BY I1.keycol ORDER BY S1.seq)
   FROM InputStrings AS I1, Sequence AS S1
  WHERE SUBSTRING (',' || I1.input_string || ','
         FROM S1.seq
         FOR 1) = ','
    AND S1.seq <= CHAR_LENGTH (I1.input_string) + 2)

SELECT C1.keycol,
       SUBSTRING (',' || I1.input_string || ','
        FROM C1.comma_seq + 1
        FOR C2.comma_seq - C1.comma_seq - 1)
  FROM InputStrings AS I1, Commas AS C1, Commas AS C2
 WHERE C2.comma_place = C1.comma_place + 1
   AND C1.keycol = C2.keycol
   AND I1.keycol = C1.keycol;

The idea is to get the positions of all the commas in the CTE,
numbered in order, and then use adjacent (n, n+1) pairs of commas to
locate the substrings between them. The hope is that the ROW_NUMBER()
is faster than the GROUP BY in the first attempt. Since the CTE is
materialized before the body of the query (in theory), there are
opportunities for parallelism, indexing, and other things to speed up
the works.
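To make that concrete, for the 'first' row the padded string
',12,34,567,896,' has commas at positions 1, 4, 7, 11, and 15, so the
CTE is meant to hold:

```sql
-- Intended contents of Commas for keycol = 'first':
-- keycol    comma_seq  comma_place
-- 'first'           1            1
-- 'first'           4            2
-- 'first'           7            3
-- 'first'          11            4
-- 'first'          15            5
--
-- Adjacent pairs of commas then bracket '12', '34', '567' and '896'.
```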

Hey, I can write kludges with the best of them, but I don't. At the
very least, you need to write a routine to clean out blanks, handle
double commas and non-numerics in the strings, take care of floating
point and decimal notation, etc. Basically, you must write part of a
compiler in SQL. Yeeeech! Or decide that you do not want to have
data integrity, which is what most Newbies do in practice, although
they do not know it.

A procedural loop is even worse. You have no error checking, no
ability to pass local variables or expressions, etc.

CREATE PROCEDURE HomemadeParser (@input_string VARCHAR(8000))
AS
BEGIN
DECLARE @comma_position INTEGER;

CREATE TABLE #Slices
(slice_value INTEGER);

SET @input_string = @input_string + ','; -- add sentinel comma
SET @comma_position = CHARINDEX(',', @input_string);

WHILE @comma_position > 1
BEGIN
INSERT INTO #Slices (slice_value)
VALUES (CAST(LEFT(@input_string, @comma_position - 1) AS INTEGER));
SET @input_string = RIGHT(@input_string,
        LEN(@input_string) - @comma_position);
SET @comma_position = CHARINDEX(',', @input_string);
END;

SELECT slice_value FROM #Slices; -- return the slices
END;

Better answer:

Do this with a long parameter list. T-SQL allows up to 2,100
parameters per procedure, which is more than you will probably ever
need.
The compiler will do all that error checking that the query version
and the procedural code simply do not have unless you write a full
parser with the standard error codes. You can now pass local
variables to your procedure; you can pass other data types and get
automatic conversions, etc. In short, this is just good software
engineering.
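As a sketch (the procedure name, the Personnel lookup, and the
parameter count here are illustrative; extend the list as far as you
need):

```sql
-- Long parameter list: unused parameters simply default to NULL,
-- and NULLs never match in the IN () predicate.
CREATE PROCEDURE GetEmployees
(@emp_1 INTEGER = NULL,
 @emp_2 INTEGER = NULL,
 @emp_3 INTEGER = NULL,
 @emp_4 INTEGER = NULL) -- ... continue the pattern as needed
AS
SELECT emp_id, emp_name
  FROM Personnel
 WHERE emp_id IN (@emp_1, @emp_2, @emp_3, @emp_4);
```

A call like EXEC GetEmployees 12, 34, 567; gets the type checking for
free; a non-numeric argument such as 'Fred' fails conversion before
the body ever touches the data.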

Here is another version, with a recursive CTE:

CREATE PROCEDURE GetEmployeeList (@emp_list VARCHAR(200))
AS
WITH RECURSIVE EmpCte (emp_id, emp_left, depth)
AS
(SELECT @emp_list, @emp_list, 0
 UNION ALL
 SELECT LEFT(emp_left, POSITION(',' IN emp_left) - 1),
        RIGHT(emp_left, CHAR_LENGTH(emp_left) - POSITION(',' IN emp_left)),
        depth + 1
   FROM EmpCte
  WHERE emp_left LIKE '%,%'
 UNION ALL
 SELECT emp_left, NULL, depth + 1
   FROM EmpCte
  WHERE emp_left NOT LIKE '%,%')

SELECT E.emp_name
  FROM Personnel AS E, EmpCte AS C
 WHERE E.emp_id = C.emp_id;
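For what it is worth, a hypothetical call EXEC GetEmployeeList
'12,34,567'; would build the CTE one element per level of recursion:

```sql
-- Intended contents of EmpCte for @emp_list = '12,34,567':
-- emp_id        emp_left      depth
-- '12,34,567'   '12,34,567'       0  -- anchor; filtered out by the join
-- '12'          '34,567'          1
-- '34'          '567'             2
-- '567'         NULL              3
```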