Hello VB ADO Gurus!

This is a bit of a long one, so I'd appreciate it if you could read the
whole posting...

Problem:
========

We need to extract over 500,000 rows of data from SQL Server 7, manipulate
the data, update the database to audit which data has been extracted, and
generate a delimited file.


The following solutions were all written as a VB EXE running on a PC that
calls several stored procedures on a SQL Server across the network.

1st Solution:
=============
To start off with, we were advised to use the ADO SHAPE command to extract
ALL the data in one hit and then loop through the data in VB to generate an
output file. The SHAPE command would call 20+ SQL stored procedures and put
the results in a hierarchical recordset. While working through this
recordset, the VB app would call the audit stored procedures.

This worked well and was very fast against smaller test volumes of data.
Unfortunately, we kept getting an error when working with larger volumes,
and to this day we still don't know exactly what the following error
message means:

Error Number: -2147467259
Error Description: Data provider or other service returned an E_FAIL status.
Error Source: Microsoft Cursor Engine

We just put it down to 'it couldn't handle the volume'. Does anyone know
what this error actually means?
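
For reference, here is roughly what the SHAPE call in solution 1 looks like.
The server, database, stored procedure and column names below are simplified
placeholders, and only two of the 20+ child procedures are shown:

Dim cn As ADODB.Connection
Dim rsAccounts As ADODB.Recordset
Dim rsAddresses As ADODB.Recordset
Dim strShape As String

' The MSDataShape provider sits on top of SQLOLEDB and builds the
' hierarchy in the client-side cursor engine
Set cn = New ADODB.Connection
cn.Open "Provider=MSDataShape;Data Provider=SQLOLEDB;" & _
        "Data Source=MyServer;Initial Catalog=MyDB;Integrated Security=SSPI"

' Parent proc plus child procs, related client-side on AccountID
strShape = "SHAPE {EXEC usp_GetAccounts} " & _
    "APPEND ({EXEC usp_GetAddresses} RELATE AccountID TO AccountID) AS rsAddresses, " & _
    "({EXEC usp_GetBalances} RELATE AccountID TO AccountID) AS rsBalances"

Set rsAccounts = New ADODB.Recordset
rsAccounts.Open strShape, cn, adOpenStatic, adLockReadOnly

Do While Not rsAccounts.EOF
    ' Each chapter column returns the child recordset for the current account
    Set rsAddresses = rsAccounts("rsAddresses").Value
    ' ... manipulate the data, write the delimited records, call the audit procs ...
    rsAccounts.MoveNext
Loop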


2nd Solution:
=============

Instead of calling ALL the stored procedures in one hit using the ADO SHAPE
command, each stored procedure was called separately. This slowed the app
down considerably: instead of taking minutes it took hours, obviously due
to the increased number of network trips to the database (350,000+!).
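
For illustration, this is roughly what each of the 20+ calls inside the
per-account loop looked like (lAccountID and cn stand in for the account
being processed and the open connection; procedure and parameter names are
placeholders again):

Dim cmd As ADODB.Command
Dim rsAddresses As ADODB.Recordset

Set cmd = New ADODB.Command
With cmd
    .ActiveConnection = cn
    .CommandType = adCmdStoredProc
    .CommandText = "usp_GetAddresses"
    .Parameters.Append .CreateParameter("@AccountID", adInteger, adParamInput, , lAccountID)
End With

' One network round trip here, repeated 20+ times for every account
Set rsAddresses = cmd.Execute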


3rd Solution:
=============

This tried to get the best of both worlds from solutions 1 and 2. One stored
procedure gets the data for the main structure. We loop through each account
in this recordset and, instead of calling the 20+ stored procedures
separately, use the ADO SHAPE command to call those 20+ stored procedures
in one hit per account. Due to time and resource constraints we had to go
with solution 3. This got the run down to 3 hours, which was acceptable for
our first run, but in future we will be dealing with much larger volumes
and the data needs to be processed more quickly.
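
A simplified sketch of solution 3, again with placeholder names and only two
of the child procedures shown (cn is a plain SQLOLEDB connection and cnShape
is a second connection opened through MSDataShape as in the first sketch):

Dim rsAccounts As ADODB.Recordset
Dim rsAccount As ADODB.Recordset
Dim strShape As String

' One trip to get the main structure (all accounts), held in a client cursor
Set rsAccounts = New ADODB.Recordset
rsAccounts.CursorLocation = adUseClient
rsAccounts.Open "EXEC usp_GetAccounts", cn, adOpenStatic, adLockReadOnly

Do While Not rsAccounts.EOF
    ' One SHAPE call per account fetches the 20+ child procs in a single trip
    strShape = "SHAPE {EXEC usp_GetAccountHeader " & rsAccounts("AccountID") & "} " & _
        "APPEND ({EXEC usp_GetAddresses " & rsAccounts("AccountID") & "} " & _
        "RELATE AccountID TO AccountID) AS rsAddresses, " & _
        "({EXEC usp_GetBalances " & rsAccounts("AccountID") & "} " & _
        "RELATE AccountID TO AccountID) AS rsBalances"

    Set rsAccount = New ADODB.Recordset
    rsAccount.Open strShape, cnShape, adOpenStatic, adLockReadOnly
    ' ... manipulate the data, write the delimited records, call the audit procs ...
    rsAccount.Close

    rsAccounts.MoveNext
Loop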


Conclusion:
===========
Solution 1 would be ideal if it could handle the large volumes, because the
whole app makes just one network trip to the database... but it doesn't
work. We need a solution that reduces the number of network trips; the
actual data manipulation and writing to the file have already been tweaked
and optimized as much as possible.

Does ANYONE have ANY ideas, or is anyone able to help in ANY way?

Thanks for getting this far down this posting!!

Richard Lloyd