Need to reduce network trips and speed up ADO VB code!!
Hello VB ADO Gurus!
This is a bit of a long one, so I'd appreciate it if you could read all of it.
We need to extract over 500,000 rows of data from SQL Server 7, manipulate the
data, update the database to audit what data has been extracted, and generate
a delimited file.
The following solutions were all written with a VB EXE running on a PC that
calls several stored procedures on a SQL Server across the network.
Solution 1: To start off with we were advised to use the ADO SHAPE command to
extract ALL the data in one hit, then loop through the data in VB and generate
an output file. The ADO SHAPE command would call 20+ SQL stored procedures
and put the results in a hierarchical recordset. While working through this
recordset the VB app would call audit stored procedures.
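For anyone unfamiliar with the technique, the one-hit approach looked roughly
like this. The server, database, procedure and column names below are made up
for illustration; our real SHAPE string appended 20+ child commands, not one:

```vb
' Sketch only: requires a reference to Microsoft ActiveX Data Objects (ADODB).
' usp_GetAccounts / usp_GetTransactions / AccountID are placeholder names.
Dim cn As ADODB.Connection
Dim rs As ADODB.Recordset
Dim rsChild As ADODB.Recordset
Dim sShape As String

Set cn = New ADODB.Connection
' SHAPE requires the MSDataShape provider wrapping the real OLE DB provider
cn.Open "Provider=MSDataShape;Data Provider=SQLOLEDB;" & _
        "Data Source=MyServer;Initial Catalog=MyDB;Integrated Security=SSPI;"

' One network trip: parent rows plus child rows, related on AccountID
sShape = "SHAPE {EXEC usp_GetAccounts} " & _
         "APPEND ({EXEC usp_GetTransactions} " & _
         "RELATE AccountID TO AccountID) AS rsTransactions"

Set rs = New ADODB.Recordset
rs.Open sShape, cn, adOpenStatic, adLockReadOnly

Do Until rs.EOF
    ' Each chapter column holds a child Recordset for the current parent row
    Set rsChild = rs.Fields("rsTransactions").Value
    Do Until rsChild.EOF
        ' write delimited output line, call audit procs, etc.
        rsChild.MoveNext
    Loop
    rs.MoveNext
Loop
```

Note the shaped recordset is built by the client-side cursor engine, which is
exactly the component named in the error source below.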
This worked well and was very fast against smaller test volumes of data, but
unfortunately we kept getting an error when working with larger volumes.
To this day we still don't know exactly what the following error means:
Error Number: -2147467259
Error Description: Data provider or other service returned an E_FAIL status.
Error Source: Microsoft Cursor Engine
We just put it down to 'it couldn't handle the volume'. Does anyone know what
it actually means??!?!!
Solution 2: Instead of calling ALL the stored procedures in one hit using the
ADO SHAPE command, each stored procedure was called separately. This slowed
the app down considerably: instead of taking minutes it took hours. This was
obviously due to the increased number of network trips to the database
(350,000+!!).
Solution 3: We tried to get the best of both worlds from solutions 1 and 2.
We used one stored procedure to get the data for the main structure, then
looped through each account in that recordset, and instead of calling the 20+
stored procedures separately, used the ADO SHAPE command to call them in one
hit per account.
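A sketch of that hybrid loop, again with made-up procedure and column names.
The SHAPE string here is built per account by concatenation just to keep the
example short; our real code passes the account key into each child procedure:

```vb
' Sketch only: cn is an MSDataShape connection opened as in Solution 1.
' usp_GetMainStructure / usp_GetAcctHeader / usp_GetAcctDetail are placeholders.
Dim rsMain As ADODB.Recordset
Dim rsAcct As ADODB.Recordset
Dim sShape As String

Set rsMain = New ADODB.Recordset
' One trip for the main structure...
rsMain.Open "EXEC usp_GetMainStructure", cn, adOpenForwardOnly, adLockReadOnly

Do Until rsMain.EOF
    ' ...then one shaped trip per account instead of 20+ separate calls
    sShape = "SHAPE {EXEC usp_GetAcctHeader " & rsMain("AccountID") & "} " & _
             "APPEND ({EXEC usp_GetAcctDetail " & rsMain("AccountID") & "} " & _
             "RELATE AccountID TO AccountID) AS rsDetail"
    Set rsAcct = New ADODB.Recordset
    rsAcct.Open sShape, cn, adOpenStatic, adLockReadOnly
    ' ...process rsAcct, write output, call audit procs...
    rsAcct.Close
    rsMain.MoveNext
Loop
```

This keeps each shaped recordset small enough for the cursor engine, at the
cost of one round trip per account.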
Due to time and resource constraints, bla, bla, bla, we had to go for solution
3. This got the run down to 3 hours, which was acceptable for our first run,
but in the future we will be dealing with much larger volumes that will need
to be processed faster.
Solution 1 would be ideal if it could handle the large volumes, because the
whole app makes just 1 network trip to the database.... but it doesn't work.
We need a solution that reduces the number of network trips; the actual data
manipulation and writing to the file have already been tweaked and optimized
as much as possible.
Does ANYONE have ANY ideas or able to help in ANY way???
Thanks for getting this far down this posting!!