kacalapy
Starting Member
5 Posts
Posted - 21 December 2004 : 12:17:09
Hi all,
I am getting strange results on my SQL Server database. It's not super large, but it does have a table with 65 million rows (banner history) and another table with a few million rows as well; the rest of the tables are much smaller...
My site is db-driven and gets a lot of timeouts because the db is slow to respond to requests. I read a little on Profiler and wanted to know what to do and what to look for - basically, a pointer in the right direction.
Any help or suggested resources would be great!!!
Any books, links, ... on SQL performance tuning and db improvements would be nice.

Edited by - ruirib on 21 December 2004 12:32:57
pdrg
Support Moderator
United Kingdom
2897 Posts
Posted - 21 December 2004 : 12:58:45
OK, so Profiler will help you identify long-running queries and see the traffic flowing between your app and the db. Start by setting up a trace that will let you capture useful data (see BOL for how to get going). Do this by experiment - you can't break anything. You want to make sure you know when sprocs are run, etc., then see if you can make your db sweat a little, and look at the results of the trace.
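If you'd rather script it than click through Profiler, here's a minimal server-side trace sketch using the SQL Server 2000 trace procedures - the file path, file size and event/column choices are just placeholders:

    -- Create a trace writing to a file (path is a placeholder; the file must not already exist)
    DECLARE @traceid int, @maxsize bigint, @on bit
    SET @maxsize = 50          -- max trace file size, in MB
    SET @on = 1
    EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\traces\mytrace', @maxsize

    -- Capture SQL:BatchCompleted (event 12) with TextData (1), Duration (13), StartTime (14)
    EXEC sp_trace_setevent @traceid, 12, 1, @on
    EXEC sp_trace_setevent @traceid, 12, 13, @on
    EXEC sp_trace_setevent @traceid, 12, 14, @on

    -- Start it (status 1 = start, 0 = stop, 2 = close and delete the definition)
    EXEC sp_trace_setstatus @traceid, 1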
Things to consider: indexing - if your 65M-row table is indexed (especially if heavily indexed) and you're inserting new rows, your db has to update all those indexes as well - a big performance hit. Consider breaking your app up into 'web-site-serving' and 'reporting' dbs, and archive off your 65M table so any touches will not affect your site-serving db.
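For the archiving, one common pattern is copy-then-delete in small batches, so you don't hold one huge lock on the live table. A rough sketch - the table and column names (banner_history, banner_history_archive, entry_date) and the cutoff date are all hypothetical:

    -- Copy everything older than the cutoff into the archive table
    INSERT INTO banner_history_archive
    SELECT * FROM banner_history WHERE entry_date < '20040101'

    -- Then delete the same rows in batches of 10000 to keep locks short
    SET ROWCOUNT 10000
    DECLARE @rows int
    SET @rows = 1
    WHILE @rows > 0
    BEGIN
        DELETE FROM banner_history WHERE entry_date < '20040101'
        SET @rows = @@ROWCOUNT
    END
    SET ROWCOUNT 0             -- reset, or every later statement stays capped

Run it in a quiet window - the initial copy is still one big read.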
Long-running queries - your definition of long-running may vary, but they're the obvious place to start. Can you index better? (There's an index tuning wizard in SQL Server - feed it a trace file from a few hours and see what it suggests.)
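Once you have a trace file on disk, you can also query it directly and sort by duration to find your candidates - a sketch, with the path as a placeholder (on SQL Server 2000 the Duration column is in milliseconds):

    -- Read the trace file as a table and pull out the 20 slowest statements
    SELECT TOP 20 TextData, Duration, StartTime
    FROM ::fn_trace_gettable(N'C:\traces\mytrace.trc', default)
    WHERE Duration IS NOT NULL
    ORDER BY Duration DESC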
Don't worry about the db itself - it is pretty much self-tuning (unlike Oracle, say) - but the estimated execution plans for some of your queries/sprocs will be interesting. Are you seeing a lot of table scans, for instance? They're slow, and indicate you need a well-placed index.
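To check a suspect query without actually running it, turn SHOWPLAN on in Query Analyzer (Ctrl+L shows the graphical version; the text version looks like this - the query itself is just a placeholder):

    -- Show the estimated plan instead of executing the statement
    SET SHOWPLAN_TEXT ON
    GO
    SELECT * FROM banner_history WHERE banner_id = 42   -- your query here
    GO
    SET SHOWPLAN_TEXT OFF
    GO
    -- 'Table Scan' operators in the output are the red flag; a well-placed
    -- index usually turns them into Index Seeks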
Hey, glad you want to look into this stuff - you can get *really* deep, but you can usually get some good results in a couple of hours' determined playtime! Or get a contractor in to help you (I'm free from Jan ;-) )
hth
kacalapy
Starting Member
5 Posts
Posted - 21 December 2004 : 16:29:00
I did a fast scan and saw that the things that took the longest were green "Audit Logout" events. What is that? It's a real killer!!!
Also, I guess I would be looking at the duration of line items. What is considered a long duration? Is 1000 ms (1 sec) long... I think so?
Also, does the scan make the DB really slow? I think I noticed that today. Not good!!! Users were timing out and clients were pissed.
pdrg
Support Moderator
United Kingdom
2897 Posts
Posted - 22 December 2004 : 05:27:19
The term you're after is trace, not scan - just to clarify!
A trace is another db overhead, so no, it won't improve performance, but it is a valuable way of collecting diagnostic info. If your users are timing out, you've got big problems anyway. Try reducing your trace output - do you *really* need every field you're saving?
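One way to cut the output down is to filter server-side so only the expensive events get written at all - a sketch using sp_trace_setfilter on the Duration column (the 1000 ms threshold is just an example, and check BOL for the exact value type your version expects):

    -- Keep only events with Duration >= 1000 ms
    -- (13 = Duration column, 0 = AND, 4 = 'greater than or equal')
    DECLARE @traceid int, @dur bigint
    SET @traceid = 1           -- the id returned by sp_trace_create
    SET @dur = 1000
    EXEC sp_trace_setfilter @traceid, 13, 0, 4, @dur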
What's a long time? It depends. Maybe you've got some locking/deadlocking going on as one table is involved in 2 conflicting updates? I can't tell without getting my hands in there. Profiler will get you a load of useful data, but you have to interpret it into information. Sadly there are no hard & fast rules about 'right and wrong', just what works best for your design.
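A quick way to check for blocking while the timeouts are actually happening - look at the blocked column in sysprocesses (or just run sp_who2):

    -- Any non-zero 'blocked' value means that spid is waiting on another spid
    SELECT spid, blocked, waittime, lastwaittype, cmd
    FROM master..sysprocesses
    WHERE blocked <> 0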
What's the score with the index tuning wizard - what did it say?
Or the 'estimated execution plans' - do they show a lot of table scans? Lots of disk i/o?
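For the disk i/o side, STATISTICS IO reports per-table read counts for each statement you run (the query here is a placeholder):

    -- Report logical/physical reads per table for each statement
    SET STATISTICS IO ON
    GO
    SELECT COUNT(*) FROM banner_history
    GO
    SET STATISTICS IO OFF
    GO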
This is really rather tricky to explain in posts, but BOL is your friend - also try http://experts-exchange.com and look for some similar questions there. Some excellent quality answers available.
hth