passing an array to a stored procedure?



Thread: passing an array to a stored procedure?

  1. #1
    Michael Shutt Guest

    passing an array to a stored procedure?

    I am developing a database (on SQL Server 7) to be accessed via ADO and have a
    small dilemma. Ideally, I would execute all queries using stored
    procedures, but I can't figure out how to do the following.

    Say I've got two tables. The first table is named 'PRODUCTS' and has a PK
    column named 'PRODUCT_ID'. The second table is named 'SALES', which
    contains a 'PRODUCT_ID' column (which is an FK to PRODUCTS.PRODUCT_ID). I
    need to execute a series of database queries against the SALES table, all of
    which will have the same WHERE clause using a list of PRODUCT_IDs.

    For example, to get the list of product ids I may first use this query

    SELECT PRODUCT_ID FROM PRODUCTS WHERE COLOR = 'red'

    From the results of that query, I would have a list of PRODUCT_IDs (e.g., 1,
    2 & 3) which would be used to execute the next queries

    SELECT * FROM SALES
    WHERE PRODUCT_ID IN (1,2,3)

    SELECT * FROM SALES
    WHERE PRODUCT_ID IN (1,2,3) AND ORDER_DATE = '1/1/2000'

    SELECT * FROM SALES
    WHERE PRODUCT_ID IN (1,2,3) AND SALESMAN = 'Tom'

    What I would like to do is have 4 stored procedures to execute these
    queries, something like:

    spGetProductsForColor @Color varchar(10)
    spGetAllSalesForProducts @ProductIDs ???
    spGetSalesForProductsAndDate @ProductIDs ???, @Date datetime
    spGetSalesForProductsAndSalesman @ProductIDs ???, @Salesman varchar(20)

    But I don't know how to pass a list of delimited values to be used for an IN
    clause in the stored procedure. Or maybe there is a whole different
    approach to solving this problem? I would like to avoid running a query
    against the PRODUCTS table each time I execute the searches against the
    SALES table, so nesting queries is not the solution I am looking for. Any
    ideas would be appreciated.

    MJS
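    One workaround that fits the procedure signatures above, sketched here only
    as an illustration: since SQL Server 7 has no array parameters or
    user-defined functions, pass the IDs as a single comma-delimited varchar
    and split it inside the procedure into a temp table. The procedure and
    parameter names come from the post; the splitting logic itself is an
    assumption, not tested code.

    CREATE PROCEDURE spGetAllSalesForProducts
        @ProductIDs varchar(8000)          -- e.g. '1,2,3'
    AS
    BEGIN
        SET NOCOUNT ON

        -- Parse the delimited list into a temp table.
        CREATE TABLE #Ids (PRODUCT_ID int NOT NULL)

        DECLARE @pos int, @item varchar(20)
        WHILE LEN(@ProductIDs) > 0
        BEGIN
            SET @pos = CHARINDEX(',', @ProductIDs)
            IF @pos = 0
            BEGIN
                SET @item = @ProductIDs
                SET @ProductIDs = ''
            END
            ELSE
            BEGIN
                SET @item = LEFT(@ProductIDs, @pos - 1)
                SET @ProductIDs = SUBSTRING(@ProductIDs, @pos + 1, 8000)
            END
            INSERT INTO #Ids (PRODUCT_ID) VALUES (CONVERT(int, @item))
        END

        -- Equivalent to WHERE PRODUCT_ID IN (<list>).
        SELECT S.*
        FROM SALES S
            JOIN #Ids I ON I.PRODUCT_ID = S.PRODUCT_ID
    END
    GO

    -- Usage: EXEC spGetAllSalesForProducts '1,2,3'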




  2. #2
    David Satz Guest

    Re: passing an array to a stored procedure?

    does this help: http://www.mssqlserver.com/faq/ ?

    Dave

    "Michael Shutt" <mshutt@advectis.com> wrote in message
    news:3d63879f$1@10.1.10.29...
    <snip>



  3. #3
    Q*bert Guest

    Re: passing an array to a stored procedure?


    Not sure why you have to get the array results from one and pass them to another.
    You could do one of the following:

    1) Execute an SP that creates a temp table where the results are stored;
    using this table, you would have your array for the other SPs (see the
    sketch after this list).

    2) Execute the same query in each SP, something like
    SELECT * FROM SALES
    WHERE PRODUCT_ID IN (SELECT PRODUCT_ID FROM PRODUCTS WHERE COLOR = 'red')
    The only value you'd be passing in is the 'red', which you already have.

    3) Use a cursor over the results of the initial query, and concatenate a
    string consisting of the results.

    Each of these has benefits over the others, and all have their own shortfalls.
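    Roughly what option 1 could look like, with assumed object names; note that
    a #temp table created in the calling batch is visible to every procedure
    executed on that same connection, while one created inside a procedure is
    dropped as soon as the procedure returns:

    -- Calling batch: build the cached list once per connection.
    CREATE TABLE #ProductList (PRODUCT_ID int NOT NULL PRIMARY KEY)

    INSERT INTO #ProductList (PRODUCT_ID)
    SELECT PRODUCT_ID FROM PRODUCTS WHERE COLOR = 'red'

    -- Any procedure run later on this connection can join against the list:
    SELECT S.*
    FROM SALES S
        JOIN #ProductList P ON P.PRODUCT_ID = S.PRODUCT_ID
    WHERE S.SALESMAN = 'Tom'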

    Hope this helped

    Q*bert
    *!@(^



    "Michael Shutt" <mshutt@advectis.com> wrote:
    <snip>



  4. #4
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Well, it confirms that there aren't any options that I wasn't already
    considering. Thanks for the link.

    "David Satz" <davidNOSPAMsatz@yahoo.NOSPAM.com> wrote in message
    news:3d638fe0$1@10.1.10.29...
    > does this help: http://www.mssqlserver.com/faq/ ?
    >
    > Dave
    >
    > "Michael Shutt" <mshutt@advectis.com> wrote in message
    > news:3d63879f$1@10.1.10.29...
    > > I am developing a database (on SQL Server 7) to be access via ADO and

    have
    > a
    > > small dilemna. Ideally, I could to execute all queries using stored
    > > procedures, but I can't figure out how to do the following.
    > >
    > > Say I've got two tables. The first table is named 'PRODUCTS' as has a

    PK
    > > column named 'PRODUCT_ID'. The second table is named 'SALES', which
    > > contains a 'PRODUCT_ID' column (which is a FK of PRODUCTS.PRODUCT_ID).

    I
    > > need to execute a series of database queries against the SALES table,

    all
    > of
    > > which will have the same WHERE clause using a list of PRODUCT_IDs.
    > >
    > > For example, to get the list of product ids I may first use this query
    > >
    > > SELECT PRODUCT_ID FROM PRODUCTS WHERE COLOR = 'red'
    > >
    > > From the results of that query, I would have a list of PRODUCT_IDs

    (i.e.,
    > 1,
    > > 2 & 3) which would be used to execute the next queries
    > >
    > > SELECT * FROM SALES
    > > WHERE PRODUCT_ID IN (1,2,3)
    > >
    > > SELECT * FROM SALES
    > > WHERE PRODUCT_ID IN (1,2,3) AND ORDER_DATE = '1/1/2000'
    > >
    > > SELECT * FROM SALES
    > > WHERE PRODUCT_ID IN (1,2,3) AND SALESMAN = 'Tom'
    > >
    > > What I would like to do is have 4 stored procedures to execute these
    > > queries, something like:
    > >
    > > spGetProductsForColor @Color varchar(10)
    > > spGetAllSalesForProducts @ProductIDs ???
    > > spGetSalesForProductsAndDate @ProductIDs ???, @Date datetime
    > > spGetSalesForProductsAndSalesman @ProductIDs ???, @Salesman varchar(20)
    > >
    > > But I don't know how to pass a list of delimited values to be used for

    an
    > IN
    > > clause in the stored procedure. Or maybe there is a whole different
    > > approach to solving this problem? I would like to avoid running a query
    > > against the PRODUCTS table each time I execute the searches against the
    > > SALES table, so nesting queries is not the solution I am looking for.

    Any
    > > ideas would be appreciated.
    > >
    > > MJS
    > >
    > >
    > >

    >
    >




  5. #5
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Thanks for the info. I have oversimplified the description of my particular
    problem. I guess I was hoping for a magic bullet, but it doesn't sound like
    there is one.

    I'm leery of the scalability of the temporary table approach since a "cached
    list" of PRODUCT_ID values may need to be available for each user and for
    extended periods of time (e.g., an hour). Also, I may not know exactly how
    long the list needs to be available, so I won't necessarily know when I can
    drop a temp table to free up the resources. This is probably worth some more
    research on my part though.

    Since the query to get the list of PRODUCT_IDs can be quite expensive,
    option 2 is what I am trying to avoid. I can essentially get the list one
    time for each user session, but it needs to be refreshed each time a user
    begins a session.

    A variation of option 3 is what I am currently using (generation of dynamic
    SQL statements), but I lose out on the ability of SQL Server to precompile
    and optimize the queries.
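    For reference, a rough sketch of that dynamic-SQL variation (procedure and
    parameter names assumed): the cached ID list arrives as a string and is
    concatenated straight into the statement, which is why each execution is
    compiled from scratch.

    CREATE PROCEDURE spGetSalesForProductsAndSalesman
        @ProductIDs varchar(8000),   -- cached list, e.g. '1,2,3'
        @Salesman   varchar(20)
    AS
    BEGIN
        DECLARE @sql varchar(8000)
        -- Double any embedded quotes in the text parameter before concatenating.
        SET @sql = 'SELECT * FROM SALES'
                 + ' WHERE PRODUCT_ID IN (' + @ProductIDs + ')'
                 + ' AND SALESMAN = ''' + REPLACE(@Salesman, '''', '''''') + ''''
        EXEC (@sql)
    END
    GO

    Using sp_executesql with @Salesman as a real parameter would at least keep
    that part of the statement parameterized, though the IN list itself still
    varies per user.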

    Damned if you do, damned if you don't...

    "Q*bert" <luke_Davis_76@hotmail.com> wrote in message
    news:3d63a166$1@10.1.10.29...
    > <snip>




  6. #6
    Q*bert Guest

    Re: passing an array to a stored procedure?


    OK, so why not run a time-based trigger to create the temp table and have
    all the queries pull from it? I'm not sure how "real time" the data needs
    to be, but you could have the trigger regenerate/update the table every so
    often and have the data pulled from it...

    Within the temp table you could store each userid and the time at which
    their data was last refreshed. The trigger would be set to run when data
    needed to be refreshed for each user.

    Or perhaps a combination of features: use the time-based trigger to create
    the temp table and have a view pull back all the results, then use
    the view to pull back the appropriate data where COLOR = 'red' and so on...

    But again, I digress without really understanding it all; I'm shootin' in
    the dark.

    I'm pretty good at thinking outside the proverbial box and can offer a
    variety of options; which is best would be left up to someone else.

    Box? What Box?

    Hope this helps,
    Q*bert
    @#*$!
    luke_davis_76 AT Hotmail.com
    (in case you want some additional ideas and don't want to post them here)
    Making Simple Solutions More Complex


    > I'm leery of the scalability of the temporary table approach since a "cached
    > list" of PRODUCT_ID values may need to be available for each user and for
    > extended periods of time (e.g., an hour). Also, I may not know exactly how
    > long the list needs to be available, so I won't necessarily know when I can
    > drop a temp table to free up the resources. This is probably worth some
    > more research on my part though.
    >
    > Damned if you do, damned if you don't...





  7. #7
    Chris Hylton Guest

    Re: passing an array to a stored procedure?


    Even w/ scalability concerns, I think I'd still try the temp table deal over
    the current solution you mentioned (building dynamic SQL) because it's cleaner
    than building the dynamic statements (IMHO). You lose out on the precompile
    capabilities w/ that option...but, now that I think about it...you might
    even lose out on the precompiled capabilities w/ a temp table because SQL
    really can't evaluate the statement until the table is available...

    If having TONS of temp tables concerns you...I'd create a global (##) temp
    table with USERID as a field in the table...then you only have a single temp
    object and you'd just have to add USERID to the subquery in the IN clause.

    You might want to do some testing to confirm suspicions about query optimization
    and the benefits of precompiled SPs with temp tables, since it might not
    be any better than the dynamic SQL deal. If you find out you lose that,
    then I'd just create a normal table, with userid, indexed on userid and product
    id, and just populate/empty that table by user as they connect to/disconnect
    from the database (see the sketch below).
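    A rough sketch of that per-user cache, with assumed names; the same pattern
    works whether the cache is a global ##temp table or a permanent table, and
    a permanent table survives dropped connections:

    CREATE TABLE ProductCache (
        USERID     int NOT NULL,
        PRODUCT_ID int NOT NULL,
        CONSTRAINT PK_ProductCache PRIMARY KEY (USERID, PRODUCT_ID)
    )
    GO

    -- Refresh the cached list when the user starts a session.
    CREATE PROCEDURE spCacheProductsForColor
        @UserID int,
        @Color  varchar(10)
    AS
    BEGIN
        DELETE FROM ProductCache WHERE USERID = @UserID
        INSERT INTO ProductCache (USERID, PRODUCT_ID)
        SELECT @UserID, PRODUCT_ID FROM PRODUCTS WHERE COLOR = @Color
    END
    GO

    -- Later procedures just add USERID to the subquery in the IN clause.
    CREATE PROCEDURE spGetSalesForProductsAndDate
        @UserID int,
        @Date   datetime
    AS
        SELECT S.*
        FROM SALES S
        WHERE S.PRODUCT_ID IN (SELECT PRODUCT_ID
                               FROM ProductCache
                               WHERE USERID = @UserID)
          AND S.ORDER_DATE = @Date
    GO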

    On the temp table thing...a temp table will go away on its own, not sure
    I understand your statement about knowing when you can drop them...when the
    connection drops (or the last connection in the case of a global temp table)
    the temp table will automatically drop. SQL Server isn't like Oracle, temp
    tables are temp tables, there isn't a 'definition' sitting out there like
    in Oracle.

    Just some thoughts...
    Chris

    "Michael Shutt" <mshutt@advectis.com> wrote:
    <snip>



  8. #8
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Thanks for the response Chris. A little more explanation may help
    illuminate things.

    The theoretical PRODUCTS table is actually a group of 4 tables, and each time
    the application needs to look up a list of valid PRODUCT_ID values for a
    user, all 4 tables have to be joined. That is why I am trying to cache the
    returned list for each user.

    I have thought about using a view or temporary table to turn the relevant
    info from those four tables into a "flat" table for faster access. A view
    would be nice since the database maintains it, but a temporary table would
    be nice since I could apply indexes to it for better performance.

    My application is completely stateless (i.e., web-based), which means that
    truly temporary tables won't work, since I need them to stick around between
    individual web-page hits by a single user (the ADO connection is dropped
    each time).

    I think that the best approach as far as performance is concerned is to
    generate a simple list of PRODUCT_IDs when the user first connects and store
    it for use in subsequent queries. Although that means I am generating
    dynamic SQL for a lot of queries, I think that is worth the tradeoff since
    SQL only has to parse the values from the IN clause when performing the
    queries against the SALES table.

    All of that being said, I need to do some testing to see if there is an
    obvious performance benefit one way or another. I just wanted to be sure
    that I was considering all available scenarios in my testing.

    "Chris Hylton" <cchylton@hotmail.com> wrote in message
    news:3d63d34b$1@10.1.10.29...
    > <snip>



  9. #9
    Chris Hylton Guest

    Re: passing an array to a stored procedure?


    Yea, the fact you are dealing with the web kills the global or user temp table
    possibility...but I'm no expert on web-based apps and techniques for maintaining
    state; there may be a way to maintain that connection across pages, dunno...

    I don't know how your data is structured in these 4 tables, i.e. what drives
    the data being in table 1 vs table 2...BUT...

    If you have SQL 2000, and since you have 4 tables for your product data...you
    might try an INDEXED VIEW. It's probably documented under something like
    'Data Partitioning' in SQL BOL. This technique is designed specifically
    for <view> performance issues with multiple tables containing the same (or
    similar) data structures. The concept is to move data into tables based
    on index ranges (not sure if you can do this or not). Then, presumably, the
    indexed view runs much faster. It may not offer you any benefit if your tables
    are broken up on some other methodology rather than on one of the
    indexes you need.

    Anyway...that might be something else to consider.

    Chris

    "Michael Shutt" <mshutt@advectis.com> wrote:
    <snip>



  10. #10
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Thanks again for the info.

    "Chris Hylton" <cchylton@hotmail.com> wrote in message
    news:3d63f458$1@10.1.10.29...
    <snip>



  11. #11
    Rune Bivrin Guest

    Re: passing an array to a stored procedure?

    I think you are confusing indexed views with distributed partitioned
    views.
    An indexed view is a view with a unique clustered index on it, and
    optionally other indexes. They function as caches of SELECT statements
    (see the sketch below). This would probably be quite functional for the
    problem at hand.

    A distributed partitioned view serves to (as the name suggests) partition
    data for a table on more than one server, depending on part of the
    primary key, to improve performance. I'd guess this has little to do with
    the problem Michael is facing.
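    A bare-bones sketch of an indexed view (SQL Server 2000 only, as noted
    later in the thread), with placeholder table and column names standing in
    for the product tables: the view must be created WITH SCHEMABINDING, a
    handful of SET options (ANSI_NULLS, QUOTED_IDENTIFIER, etc.) must be ON,
    and the unique clustered index is what materializes the result.

    CREATE VIEW dbo.vProductLookup
    WITH SCHEMABINDING
    AS
        SELECT P.PRODUCT_ID, P.COLOR, D.CATEGORY
        FROM dbo.PRODUCTS P
            JOIN dbo.PRODUCT_DETAILS D ON D.PRODUCT_ID = P.PRODUCT_ID
    GO

    -- The unique clustered index turns the view into a stored result set
    -- that SQL Server keeps up to date as the base tables change.
    CREATE UNIQUE CLUSTERED INDEX IX_vProductLookup
        ON dbo.vProductLookup (PRODUCT_ID)
    GO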

    Rune Bivrin


    "Chris Hylton" <cchylton@hotmail.com> wrote in
    news:3d63f458$1@10.1.10.29:

    <snip>



  12. #12
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Well, it looks like I need to get a little more up to speed on views.
    I wasn't aware you could apply indexes to them. That may be the
    solution I was looking for.

    Thanks Rune

    "Rune Bivrin" <rune@bivrin.com> wrote in message
    news:Xns92726C54AF7D8runebivrincom@209.1.14.29...
    <snip>



  13. #13
    Michael Shutt Guest

    Re: passing an array to a stored procedure?

    Looks like indexed views are only available in SQL Server 2000. As I
    stated in the original post, I am currently using SQL Server 7. Maybe
    it's time to upgrade.

    "Michael Shutt" <mshutt@advectis.com> wrote in message
    news:3d64bdab@10.1.10.29...
    <snip>



  14. #14
    Chris Hylton Guest

    Re: passing an array to a stored procedure?


    Yes, I got my terminology mixed up...I was definitely referring to partitioned
    data...didn't even really think about the TRUE indexed view solution...but
    either could address your problem. However, both are only in SQL Server 2000;
    if you are using 7.0, I think your old solutions are the only options...

    Chris

    "Michael Shutt" <mshutt@advectis.com> wrote:
    >Looks like indexed views are only available in SQL Server 2000. As I
    >stated in the orginal post, I am currently using SQL Server 7. Maybe
    >its time to upgrade.
    >
    >"Michael Shutt" <mshutt@advectis.com> wrote in message
    >news:3d64bdab@10.1.10.29...
    >> Well, it looks like I need to get a little more up to speed on

    >views.
    >> I wasn't aware you could apply indexes to them. That may be the
    >> solution I was looking for.
    >>
    >> Thanks Rune
    >>
    >> "Rune Bivrin" <rune@bivrin.com> wrote in message
    >> news:Xns92726C54AF7D8runebivrincom@209.1.14.29...
    >> > I think you are confusing indexed views with distributed

    >partitioned
    >> > views.
    >> > An indexed view is a view with a unique clustered index on it, and
    >> > optionally other indexes. They function as caches of select

    >> statements.
    >> > This would probably be quite functional for the problem at hand.
    >> >
    >> > A distributed partitioned view serves to (as the name suggests)

    >> partition
    >> > data for a table on more than one server, depending on part of the
    >> > primary key, to improve performance. I'd guess this has little to

    >do
    >> with
    >> > the problem Michael is facing.
    >> >
    >> > Rune Bivrin
    >> >
    >> >
    >> > "Chris Hylton" <cchylton@hotmail.com> wrote in
    >> > news:3d63f458$1@10.1.10.29:
    >> >
    >> > >
    >> > > Yea, the fact you are dealing with the web kills the global or

    >> user
    >> > > temp table possibility...but I'm no expert on web based apps and
    >> > > techniques for maintain state, there may be a way to maintain

    >that
    >> > > connection across pages, dunno...
    >> > >
    >> > > I don't know how your data is structured in these 4 tables, i.e.

    >> what
    >> > > drives the data being in table 1 vs table 2...BUT...
    >> > >
    >> > > If you have SQL 2000, and since you have 4 tables for your

    >product
    >> > > data...you might try an INDEXED VIEW. It's probably documented

    >> under
    >> > > something like 'Data Partitioning' in SQL BOL. This technique

    >is
    >> > > designed specifically for <view> performance issues with

    >multiple
    >> > > tables containing the same (or similar) data structures. The

    >> concept
    >> > > is to move data into tables based on index ranges (not sure if

    >you
    >> can
    >> > > do this or not). Then, presumably the indexed view runs much

    >> faster.
    >> > > May not offer you any benefit if your tables are broken up on
    >> > > something other methodology rather than on the one of the

    >indexes
    >> you
    >> > > need.
    >> > >
    >> > > Anyway...that might be something else to consider.
    >> > >
    >> > > Chris
    >> > >
    >> > > "Michael Shutt" <mshutt@advectis.com> wrote:
    >> > >>Thanks for the response Chris. A little more explanation may

    >help
    >> > >>illuminate things.
    >> > >>
    >> > >>The theoretical PRODUCTS table is actually a group of 4 tables

    >and
    >> > >>each
    >> > > time
    >> > >>the application needs to lookup a list of valid PRODUCT_ID

    >values
    >> for
    >> > >>a user, all 4 tables have to be joined. That is why I am trying

    >> to
    >> > >>cache
    >> > > the
    >> > >>returned list for each user.
    >> > >>
    >> > >>I have thought about using a view or temporary table to turn the
    >> > >>relevant info from those four tables into a "flat" table for

    >> faster
    >> > >>access. A view would be nice since the database maintains it,

    >but
    >> a
    >> > >>temporary table would be nice since I could apply indeces to it

    >> for
    >> > >>better performance.
    >> > >>
    >> > >>My application is completely stateless (i.e., web-based), which

    >> means
    >> > >>that truly temp tables won't work since I need them to stick

    >> around
    >> > >>between individual web-page hits by a single user (the ado

    >> connection
    >> > >>is dropped each time).
    >> > >>
    >> > >>I think that the best approach as far as performance is

    >concerned
    >> is
    >> > >>to generate a simple list of PRODUCT_IDs when the user first

    >> connects
    >> > >>and store it for use in subsequent queries. Although that means

    >I
    >> am
    >> > >>generating dynamic sql for a lot of queries, I think that is

    >worth
    >> the
    >> > >>tradeoff since SQL only has to parse the values from the IN

    >clause
    >> > >>when performing the queries against the SALES table.
    >> > >>
    >> > >>All of that being said, I need to do some testing to see if

    >there
    >> is
    >> > >>an obvious peformance benefit one way or another. I just wanted

    >> to be
    >> > >>sure that I was using all available scenarios in my testing.
    >> > >>
    >> > >>"Chris Hylton" <cchylton@hotmail.com> wrote in message
    >> > >>news:3d63d34b$1@10.1.10.29...
    >> > >>>
    >> > >>> Even w/ scalability concerns, I think I'd still try the temp

    >> table
    >> > >>> deal
    >> > >>over
    >> > >>> the current solution you mentioned (building dynamic SQL)

    >> because
    >> > >>> it's
    >> > >>cleaner
    >> > >>> than building the dynamic statements (IMHO). You loose out on

    >> the
    >> > >>precompile
    >> > >>> capabilities w/ that option...but, now that I think about

    >> it...you
    >> > >>> might even loose out on the precompiled capabilities w/ a temp

    >> table
    >> > >>> because
    >> > > SQL
    >> > >>> really can't evaluate the statement until the table is

    >> available...
    >> > >>>
    >> > >>> If having TONS of temp tables concern you...I'd create a

    >global
    >> (##)
    >> > >>> temp table with USERID as a field in the table...then you only

    >> have
    >> > >>> a single
    >> > >>temp
    >> > >>> object and you'd just have to add USERID to the subquery in

    >the
    >> IN
    >> > >>> clause.
    >> > >>>
    >> > >>> You might want to do some testing to confirm suspicions about

    >> query
    >> > >>optimization
    >> > >>> and the benefits of precompiled SPs with temp tables, since it

    >> might
    >> > >>> not be any better than the dynamic SQL deal. If you find out

    >> you
    >> > >>> loose that, then I'd just create a normal table, with userid,
    >> > >>> indexed on userid and
    >> > >>product
    >> > >>> id, and just populate/empty that table by user as they
    >> > >>> connect/disconnect to the database.
    >> > >>>
    >> > >>> On the temp table thing...a temp table will go away on its

    >own,
    >> not
    >> > >>> sure I understand your statement about knowing when you can

    >drop
    >> > >>> them...when
    >> > >>the
    >> > >>> connection drops (or the last connection in the case of a

    >global
    >> > >>> temp
    >> > >>table)
    >> > >>> the temp table will automatically drop. SQL Server isn't like
    >> > >>> Oracle,
    >> > >>temp
    >> > >>> tables are temp tables, there isn't a 'definition' sitting out

    >> there
    >> > >>> like in Oracle.
    >> > >>>
    >> > >>> Just some thoughts...
    >> > >>> Chris
    >> > >>>
    >> > >>
    >> > >><snip>
    >> > >>
    >> > >>
    >> > >
    >> > >
    >> >

    >>

    >
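
    As a rough illustration of the global (##) temp table idea quoted
    above (the table name and the sample user id are assumptions; note
    that, as discussed, a ## table disappears when the last connection
    referencing it closes, which is the catch for a stateless web app):

    -- One shared global temp table instead of one temp table per user
    CREATE TABLE ##USER_PRODUCTS (
        USERID     int NOT NULL,
        PRODUCT_ID int NOT NULL
    )

    -- Each user's product list is stored under their USERID
    INSERT INTO ##USER_PRODUCTS (USERID, PRODUCT_ID)
    SELECT 42, PRODUCT_ID FROM PRODUCTS WHERE COLOR = 'red'   -- 42 = hypothetical user

    -- The IN clause becomes a subquery filtered by USERID
    SELECT * FROM SALES
    WHERE PRODUCT_ID IN (SELECT PRODUCT_ID FROM ##USER_PRODUCTS WHERE USERID = 42)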



  15. #15
    Q*bert Guest

    Re: passing an array to a stored procedure?


    I'm not sure what tools you have available for web-based programming,
    but this sounds like an ideal time to be using XML. Use code to gather
    the data out of the database and write it to a flat XML file for use
    during the session. Response time for XML will not be an issue, and
    the file can be regenerated as needed. Since each user will have a
    session variable you can manage, you can easily give the file a unique
    name based on that session variable. The value of generating the XML
    file is that it can remain intact for the duration of the session with
    the website, and you can delete the generated XML files after the
    session is dropped. Since XML can be queried with its own methods, you
    can get at the data when you need it, where you need it. You may have
    to battle a slight learning curve with XML, but it is not as difficult
    as you may think.

    If you're using Cold Fusion (heaven help us), there is a built-in
    feature in 5.0 (I think) that lets you store your results locally in a
    session variable, which can then be referenced and queried just as
    though it were its own table.

    Finally, .NET offers a totally different approach to all of this in
    some of the local scripting it provides. I'm not up to date on how it
    works, but I've seen some pretty snazzy setups similar to what you
    seem to be doing.

    <SNIP>
    </SNIP>
    Q*bert
    &^$&%(
