Friday, August 26, 2011

Detecting Mobile Device and Redirecting


Detect All Mobile Devices. Provide Optimised Content

Project Description

This project, called "Foundation", is just one of the 51Degrees.mobi components for mobile web development. It's provided as a .NET open source class library that detects mobile devices and browsers, enhancing the information available to .NET programmers. Accurate screen sizes, input methods, and manufacturer and model information are all available. Mobile handsets can optionally be redirected to content designed for mobile devices. Smart phones, tablets and feature phones are all supported.

Mobile Optimized Web Sites

Foundation detects the presence of a mobile device and enables the web request to be directed to web pages designed for mobile. 51Degrees.mobi's other products help ASP.NET developers rapidly create fast mobile web pages supporting tablet devices, high-end handsets, and basic feature phones. Learn More & Free Trial.

How does it work?

HTTP requests are intercepted by an additional HttpModule before the page handler starts to process the page. The first task of the module is to detect the device making the request and enhance the default properties provided by Microsoft. For example, the Request.Browser.ScreenPixelsWidth property will return the precise value for the mobile device. Finally, the module determines whether the request should be sent to a mobile home page and performs the redirection. The web.config file can be used to control how detection and redirection operate for your specific web site. For example, some sites may wish to redirect only the very first request to a mobile home page, enabling the user to navigate back to the traditional home page.
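For illustration, registering such a module follows the standard ASP.NET pattern shown below; the module and assembly names here are placeholders rather than the Foundation's actual type names, which are documented with the library:

    <configuration>
      <system.web>
        <!-- Register the detection module so it runs before the page handler. -->
        <httpModules>
          <add name="MobileDetector"
               type="Example.Mobile.DetectorModule, Example.Mobile" />
        </httpModules>
      </system.web>
    </configuration>
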
To find out more, try downloading our detection example web site or reading the operational summary.

Guides and Documentation

Example Projects

Get in Touch

Find out more: http://51degrees.codeplex.com/

Thursday, August 25, 2011

Finding out the culprit behind “# of Exceps Thrown / Sec” using ProcDump and WinDbg

I’ll show you how to find the culprit exceptions behind a high .NET “# of Exceps Thrown / Sec” value.
We’ll do this with the WinDbg and ProcDump tools.
Let’s start by reviewing the documentation for this performance counter.
  "Exception Performance Counters"

# of Exceps Thrown / Sec
  Displays the number of exceptions thrown per second. This includes both .NET exceptions and unmanaged exceptions that are converted into .NET exceptions.
  For example, an HRESULT returned from unmanaged code is converted to an exception in managed code.
  This counter includes both handled and unhandled exceptions. It is not an average over time; it displays the difference between the values observed in the
  last two samples divided by the duration of the sample interval. This counter is an indicator of potential performance problems if a large (>100s) number of exceptions are thrown.

So we know from here that a large number of exceptions could cause performance issues.
The ‘problem’ here is that this counter includes handled exceptions, which means that there will be no crash because, well, the exceptions are handled.
In other words, there will be nothing in the event logs and no stack trace to look at.
And it will not be easy to set a dump trigger on a particular exception, since we do not know what the exact exception is.
It could be anything, and this post is about finding out which exception is being thrown this frequently (and is possibly causing the performance issue).

All we know is that the application is performing slowly, there are no crashes, and the .NET “# of Exceps Thrown / Sec” counter hits a high number (> 100).
So how do we go on from here? I’ll stick to my way of demonstrating from scratch. So the first thing is to create an application that will show this behavior.

Create a new .Net C# console application with the following code:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Press any key to start!");
            Console.ReadLine();

            for (int i = 1; i < 4; i++)
            {
                ThrowHighAmountOfExceptions();
                Console.WriteLine("Iteration {0} done.", i);
            }
        }

        private static void ThrowHighAmountOfExceptions()
        {
            int waitTime = 5000;
            bool run = true;
            Stopwatch sw = new Stopwatch();

            // Idle for 5 seconds so the counter shows a clear baseline.
            Thread.Sleep(waitTime);
            sw.Start();

            // Throw (and handle) exceptions as fast as possible for 5 seconds.
            while (run)
            {
                int x = 0;
                try
                {
                    int y = 1 / x;
                }
                catch (Exception)
                {
                    // Exception is handled, so nothing crashes and nothing is logged.
                }

                if (sw.ElapsedMilliseconds >= waitTime)
                {
                    run = false;
                }
            }
            sw.Stop();
        }
    }


This application sleeps for 5 seconds, then throws a high volume of handled exceptions for 5 seconds, and repeats this for 3 iterations.

Run the application but do not press any key yet.
Start Performance Monitor (Start – Run – PerfMon).
Add the “.NET CLR Exceptions\# of Exceps Thrown / sec” counter.


Then run the application. In the PerfMon graph you should see the exception counter spike during each iteration, with % Processor Time (in green) following the same pattern.


So we can see that a lot of .NET exceptions are being thrown. So now the question is: what exceptions?
(In this case we know (DivideByZeroException), but in real life you probably don’t. If you did, you would have fixed it already and not be reading this :) )

First, download and extract ProcDump from here:
  “ProcDump”

Start the application again (do not hit any key yet) and find the PID of the application (using Task Manager, for example), or use the process name.
Then navigate to the directory where you extracted ProcDump and run the following:

C:\ProcDump>procdump 6140 -ma -s 3 -p "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec" 100

Here we are saying that when the process with PID 6140 exceeds 100 exceptions thrown per second, and keeps doing so for at least 3 seconds, ProcDump should write a full dump (-ma) of the process.
So run this and start the application. This should give an output like this:

C:\ProcDump>procdump 6140 -ma -s 3 -p "\.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec" 100

ProcDump v3.04 - Writes process dump files
Copyright (C) 2009-2011 Mark Russinovich
Sysinternals - www.sysinternals.com

Process:               HighExcepPerSec.exe (6140)
CPU threshold:         n/a
Performance counter:   \.NET CLR Exceptions(_Global_)\# of Exceps Thrown / sec
Performance threshold: 100
Commit threshold:      n/a
Threshold seconds:     3
Number of dumps:       1
Hung window check:     Disabled
Exception monitor:     Disabled
Terminate monitor:     Disabled
Dump file:             C:\ProcDump\HighExcepPerSec.dmp

[13:56.09] Counter:    17580  1s
[13:56.10] Counter:    33398  2s
[13:56.11] Counter:    33835  3s

Process has hit performance counter spike threshold.
Writing dump file C:\ProcDump\HighExcepPerSec_110623_135611.dmp ...
Dump written.

Dump count reached.

So now you have a dump of the process.
The next step is to download and install the Debugging Tools for Windows, found here:
  "Download and Install Debugging Tools for Windows"

Start WinDbg, select Open Crash Dump (Ctrl+D), navigate to the dump just created, and open it.
Then load the sos.dll found in your .NET installation directory:

0:000> .load C:\Windows\Microsoft.NET\Framework\v2.0.50727\sos.dll

and then dump out all objects of type exception:

!dumpheap -stat -type Exception

In my case this gives the following output:

0:000> !dumpheap -stat -type Exception
total 10093 objects
Statistics:
      MT    Count    TotalSize Class Name
6c2f3fbc        1           12 System.Text.DecoderExceptionFallback
6c2f3f78        1           12 System.Text.EncoderExceptionFallback
6c2f0e2c        1           72 System.ExecutionEngineException
6c2f0d9c        1           72 System.StackOverflowException
6c2f0d0c        1           72 System.OutOfMemoryException
6c2f0ebc        2          144 System.Threading.ThreadAbortException
6c90b7a0    10086       726192 System.DivideByZeroException
Total 10093 objects

So this clearly shows that 10086 DivideByZeroException objects have been created, which is most likely the cause of the high “# of Exceps Thrown / Sec” value and the possible performance issues.
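
To narrow things down further, you can inspect one of the exception objects themselves. The commands below are a sketch: the method table value (6c90b7a0) comes from the statistics above, and <address> stands for any object address that !dumpheap lists (both !dumpheap -mt and !pe are standard SOS commands):

0:000> !dumpheap -mt 6c90b7a0
(lists the addresses of all DivideByZeroException objects)

0:000> !pe <address>
(prints the exception, including its stack trace if one was captured)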

Now all we have to do is find where in our code that particular exception could be thrown. That's it!
Author: Michael Aspengren

Tuning IIS 6.0 to Improve ASP.NET Performance

In the Patterns and Practices Group's "Improving .NET Application Performance and Scalability", which is available in full text online and as a PDF download from the above link, as well as in softcover through MSPress and major booksellers, there are over 1000 pages and appendixes of detailed information about how to improve .NET application performance and scalability, written by the top experts in the business. One area that is both little understood and potentially confusing is the tuning of Internet Information Services 6.0.

Formula for Reducing Contention

The formula for reducing contention can give you a good empirical start for tuning the ASP.NET thread pool. Consider using the Microsoft product group-recommended settings that are shown in Table 6.1 if the following conditions are true:
  • You have available CPU.
  • Your application performs I/O bound operations such as calling a Web method or accessing the file system.
  • The ASP.NET Applications/Requests In Application Queue performance counter indicates that you have queued requests.
Table 6.1: Recommended Threading Settings for Reducing Contention
Configuration setting          Default value (.NET Framework 1.1)   Recommended value
maxconnection                  2                                    12 * #CPUs
maxIoThreads                   20                                   100
maxWorkerThreads               20                                   100
minFreeThreads                 8                                    88 * #CPUs
minLocalRequestFreeThreads     4                                    76 * #CPUs
To address this issue, you need to configure the following items in the Machine.config file. Apply the recommended changes described below across all of the settings, not in isolation; a sample machine.config fragment follows the list. For a detailed description of each of these settings, see "Thread Pool Attributes" in Chapter 17, "Tuning .NET Application Performance."
  • Set maxconnection to 12 * # of CPUs. This setting controls the maximum number of outgoing HTTP connections that you can initiate from a client. In this case, ASP.NET is the client.
  • Set maxIoThreads to 100. This setting controls the maximum number of I/O threads in the .NET thread pool. This number is automatically multiplied by the number of available CPUs.
  • Set maxWorkerThreads to 100. This setting controls the maximum number of worker threads in the thread pool. This number is also automatically multiplied by the number of available CPUs.
  • Set minFreeThreads to 88 * # of CPUs. This setting is used by the worker process to queue all incoming requests if the number of available threads in the thread pool falls below this value. It effectively limits the number of concurrently executing requests to (maxWorkerThreads * #CPUs) minus minFreeThreads, which works out to 12 per CPU with the recommended values.
  • Set minLocalRequestFreeThreads to 76 * # of CPUs. This setting is used by the worker process to queue requests from localhost (where a Web application sends requests to a local Web service) if the number of available threads falls below this number. It is similar to minFreeThreads, but it only applies to requests from the local computer.
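For a concrete picture, here is roughly how those recommendations look in machine.config for a 2-CPU server. This is a sketch: maxWorkerThreads and maxIoThreads are per-CPU values, while maxconnection, minFreeThreads, and minLocalRequestFreeThreads take absolute numbers, so the CPU multiplications are done by hand below:

    <system.net>
      <connectionManagement>
        <!-- maxconnection = 12 * 2 CPUs -->
        <add address="*" maxconnection="24" />
      </connectionManagement>
    </system.net>

    <system.web>
      <!-- per-CPU values; the runtime multiplies by the number of CPUs -->
      <processModel maxWorkerThreads="100" maxIoThreads="100" />
      <!-- 88 * 2 CPUs and 76 * 2 CPUs -->
      <httpRuntime minFreeThreads="176" minLocalRequestFreeThreads="152" />
    </system.web>
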
Discussion: The proviso above indicates that these settings should be used when your application has I/O-bound operations and the Applications/Requests In Application Queue perfcounter indicates you have queued requests. However, I have found that settings approaching those indicated can improve performance on ASP.NET apps that do not exhibit these conditions. I recommend using the "Homer" web stress tool from at least one remote machine (and preferably more than one, with the supplied ASP controller page), or the .NET Application Center Test (ACT) application, to throw a good solid load at your app, and carefully measuring the performance statistics under both the default and the recommended settings. In particular, pay close attention to the Requests per second and time to last byte readings. This baseline testing scenario provides the basis for any further tuning, and it doesn't take long at all. You can only improve something if you have metrics, and the way you get metrics is to take the time to gather them! You can easily script all kinds of "user paths" through your ASP.NET application with testing software such as that mentioned here, and get the important baseline metrics you need. One more thing: rule number 1 of software testing and debugging:
"When you are going to change something, ONLY CHANGE ONE THING AT A TIME!" Test it, get the metrics, and only then proceed.

Kernel Mode Caching

If you deploy your application on Windows Server 2003, ASP.NET pages automatically benefit from the IIS 6.0 kernel cache. The kernel cache is managed by the HTTP.sys kernel-mode device driver. This driver handles all HTTP requests. Kernel mode caching may produce significant performance gains because requests for cached responses are served without switching to user mode.
The following default setting in the Machine.config file ensures that dynamically generated ASP.NET pages can use kernel mode caching, subject to the requirements listed below.
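
    <httpRuntime enableKernelOutputCache="true" />
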
Dynamically generated ASP.NET pages are automatically cached subject to the following restrictions:
  • Pages must be retrieved by using HTTP GET requests. Responses to HTTP POST requests are not cached in the kernel.
  • Query strings are ignored when responses are cached. For example, if a request for http://contoso.com/myapp.aspx?id=1234 is cached in the kernel, all subsequent requests for http://contoso.com/myapp.aspx are served from the cache, regardless of the query string.
  • Pages must have an expiration policy. In other words, the pages must have an Expires header.
  • Pages must not have VaryByParams .
  • Pages must not have VaryByHeaders .
  • The page must not have security restrictions. In other words, the request must be anonymous and not require authentication. The HTTP.sys driver only caches anonymous responses.
  • There must be no ISAPI filters configured for the w3wp.exe instance that are unaware of the kernel cache.
Discussion: The enableKernelOutputCache="true" setting IS NOT present in the default machine.config <httpRuntime> element. Since it is not present, we should be able to expect that the default setting of "true" applies automatically. Personally, I feel better explicitly putting the attribute in there and setting it to "true". As an aside, I have found that it is ALWAYS a good idea to KEEP A BACKUP COPY of your machine.config stored somewhere safe.

Tuning the Thread Pool for Burst Load Scenarios

If your application experiences unusually high loads of users in small bursts (for example, 1000 clients all logging in at 9 A.M.), your system may be unable to handle the burst load. Consider setting minWorkerThreads and minIOThreads as specified in Knowledge Base article 810259, "FIX: SetMinThreads and GetMinThreads API Added to Common Language Runtime ThreadPool Class," at http://support.microsoft.com/default.aspx?scid=kb;en-us;810259.
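If you would rather experiment from code on a runtime that includes that fix, the ThreadPool API named in the article can be exercised directly; a minimal sketch with illustrative values:

    // e.g. in Application_Start in global.asax: raise the thread pool floor so a
    // burst of logins does not wait on the pool's slow thread-injection rate.
    int worker, io;
    System.Threading.ThreadPool.GetMinThreads(out worker, out io);  // read current minimums
    System.Threading.ThreadPool.SetMinThreads(40, 30);              // illustrative values; tune via load tests
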
Discussion: The .NET Threadpool is somewhat limited in its flexibility and is specifically limited in terms of how many instances you may have per process, since it is static. If you have ASP.NET applications that specifically need to run background thread processing, you may wish to investigate using a custom threadpool class. I have used Ami Bar's SmartThreadPool with great success, and have even modified it to provide a ThreadPriority overload. You can have more than one instance of this pool, and each can be custom configured. This type of approach provides maximum flexibility while simultaneously permitting individual threadpool tuning of critical resources.

Tuning the Thread Pool When Calling COM Objects

ASP.NET Web pages that call single-threaded apartment (STA) COM objects should use the ASPCOMPAT attribute. The use of this attribute ensures that the call is executed using a thread from the STA thread pool. However, all calls to an individual COM object must be executed on the same thread. As a result, the thread count for the process can increase during periods of high load. You can monitor the number of active threads used in the ASP.NET worker process by viewing the Process:Thread Count (aspnet_wp instance) performance counter.
The thread count value is higher for an application that uses the ASPCOMPAT attribute than for one that does not. When tuning the thread pool for scenarios where your application extensively uses STA COM components and the ASPCOMPAT attribute, you should ensure that the total thread count for the worker process does not exceed the following value.
75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)
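For example, with the recommended values of 100 for both maxWorkerThreads and maxIoThreads on a 2-CPU server, that ceiling works out to 75 + ((100 + 100) * 2 * 2) = 875 threads.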

Evaluating the Change

To determine whether the formula for reducing contention has worked, look for improved throughput. Specifically, look for the following improvements:
  • CPU utilization increases.
  • Throughput increases according to the ASP.NET Applications\Requests/Sec performance counter.
  • Requests in the application queue decrease according to the ASP.NET Applications\Requests In Application Queue performance counter.
If this change does not improve your scenario, you may have a CPU-bound scenario. In a CPU-bound scenario, adding more threads may increase thread context switching, further degrading performance.
When tuning the thread pool, monitor the Process\Thread Count (aspnet_wp) performance counter. This value should not be more than the following:

75 + ((maxWorkerThreads + maxIoThreads) * #CPUs)

If you are using AspCompat, then this value should not be more than the following:

75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2)

Values beyond this maximum tend to increase processor context switching.
Discussion: There is a long list of attention items that revolve around, and are tightly woven into, the IIS tuning issue for ASP.NET application tuning and scalability. These include, but are not limited to, the following:
  • Improving page response times.
  • Designing scalable Web applications.
  • Using server controls efficiently.
  • Using efficient caching strategies.
  • Analyzing and applying appropriate state management techniques.
  • Minimizing view state impact.
  • Improving performance without impacting security.
  • Minimizing COM interop scalability issues.
  • Optimizing threading.
  • Optimizing resource management.
  • Avoiding common data binding mistakes.
  • Using security settings to reduce server load.
  • Avoiding common deployment mistakes.
Author: Peter A. Bromberg

ASP.NET Performance Monitoring, and When to Alert Administrators

Monitoring Performance Counters

There are many performance counters available for monitoring applications. Choosing which ones to include in performance logs can be tricky, and learning how to interpret them is an art. This article should help you feel more comfortable with both of these tasks.
At a minimum, the following performance counters should be monitored for Microsoft® ASP.NET applications:
  • Processor(_Total)\% Processor Time
  • Process(aspnet_wp)\% Processor Time
  • Process(aspnet_wp)\Private Bytes
  • Process(aspnet_wp)\Virtual Bytes
  • Process(aspnet_wp)\Handle Count
  • Microsoft® .NET CLR Exceptions\# Exceps thrown / sec
  • ASP.NET\Application Restarts
  • ASP.NET\Requests Rejected
  • ASP.NET\Worker Process Restarts (not applicable to IIS 6.0)
  • Memory\Available Mbytes
  • Web Service\Current Connections
  • Web Service\ISAPI Extension Requests/sec
Below is a larger list of performance counters that are useful for monitoring performance. It's always better to have too much performance data than not enough, especially when you experience a problem that is not easily reproduced. The list omits several performance counters that are generally not needed. For example, the session state and transactions performance counters are only necessary when those features are used.
A few thresholds are recommended based upon my experience with debugging and testing ASP.NET applications. You can search this article for "Threshold" to jump right to them. Administrators should determine whether to raise alerts when these thresholds are exceeded based upon their experience. In most cases, alerts are appropriate, especially if the threshold is exceeded for extended periods of time.

Monitoring the Event Log

It is critical to monitor the event log for messages from ASP.NET and Microsoft® Internet Information Server (IIS). ASP.NET writes messages to the application log, for example, each time the aspnet_wp worker process terminates. IIS 6.0 writes messages to the application and/or system logs, for example, each time the w3wp worker process reports itself unhealthy or crashes. It is quite easy to write a .NET application that reads the application log, filters out messages from ASP.NET and IIS, and fires an alert (sends e-mail or dials a pager) if necessary.
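
A minimal sketch of such a watcher follows; the source-name filters are assumptions to verify against your own application log, since the exact names vary by ASP.NET and IIS version:

    using System;
    using System.Diagnostics;

    class EventLogWatcher
    {
        static void Main()
        {
            // Open the application log on the local machine.
            EventLog appLog = new EventLog("Application", ".");

            foreach (EventLogEntry entry in appLog.Entries)
            {
                // Keep only errors and warnings from ASP.NET and IIS sources; exact
                // source names differ per version (e.g. "ASP.NET 1.1.4322.0", "W3SVC").
                bool interesting = entry.Source.StartsWith("ASP.NET") || entry.Source == "W3SVC";
                bool severe = entry.EntryType == EventLogEntryType.Error ||
                              entry.EntryType == EventLogEntryType.Warning;

                if (interesting && severe)
                {
                    Console.WriteLine("{0} [{1}] {2}",
                        entry.TimeGenerated, entry.Source, entry.Message);
                    // Fire an alert here (send e-mail or dial a pager) if necessary.
                }
            }
        }
    }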

Monitoring the W3C and HTTPERR Logs

First, enable W3C logging for IIS 5.0 and IIS 6.0 through the Internet Information Services (IIS) Manager. This log can be configured to include various data about the requests, such as the URI, status code, and so on. Scan the log for error codes such as 404 Not Found, and take action to correct links, if necessary. On IIS 6.0, the substatus code is included in the log and is useful for debugging. IIS uses substatus codes to identify specific problems. For example, 404.2 indicates that the ISAPI extension handling the request is locked down. A list of status and substatus codes can be found in the About Custom Error Messages topic.
New for IIS 6.0, malformed or bad requests and requests that fail to be served by an Application Pool are logged to the HTTPERR log by HTTP.SYS, the kernel-mode driver for handling HTTP requests. Each entry includes the URL and a brief description of the error.
Check the HTTPERR log for rejected requests. Requests are rejected by HTTP.SYS when the kernel request queue is exceeded, and when the application is taken offline by the Rapid Fail Protection feature. When the first issue occurs, the URL is logged with the message QueueFull, and when the second occurs, the message is AppOffline. By default, the kernel request queue is set to 1,000, and can be configured on the Application Pool Properties page in IIS Manager. I recommend increasing this to 5,000 for a busy site, since the kernel request queue could easily surpass 1,000 if an Application Pool crashes while a site is under a very high load.
Check the HTTPERR log for requests lost due to a worker process crash or hang. When this occurs the URL will be logged with the message, Connection_Abandoned_By_AppPool, for each in-flight request. An in-flight request is one that was sent to a worker process for processing, but did not complete before the crash or hang.
Details on the HTTPERR Log can be found in Microsoft Knowledge Base Article 820729: INFO: Error Logging in HTTP API.

Other Resources Used to Monitor ASP.NET

Performance counters and the event logs do not catch all errors that occur, and therefore are not entirely sufficient for monitoring ASP.NET. I recommend writing a simple application that sends an HTTP request for one or more pages and expects a certain response. This tool should monitor the time to last byte (TTLB) to ensure that pages are served in a timely manner. It should also record any errors that occur, as this information will be needed to analyze the problem.
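A bare-bones version of such a client might look like this; the URL and the alert threshold are placeholders for illustration:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Net;

    class TtlbProbe
    {
        static void Main()
        {
            string url = "http://localhost/myapp/default.aspx"; // placeholder URL
            Stopwatch sw = Stopwatch.StartNew();

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (WebResponse response = request.GetResponse())
            using (Stream body = response.GetResponseStream())
            {
                // Drain the response so the timer stops at the last byte (TTLB).
                byte[] buffer = new byte[4096];
                while (body.Read(buffer, 0, buffer.Length) > 0) { }
            }

            sw.Stop();
            Console.WriteLine("TTLB: {0} ms", sw.ElapsedMilliseconds);

            if (sw.ElapsedMilliseconds > 2000) // placeholder threshold
            {
                // Record the slow response; this is the data needed to analyze the problem.
            }
        }
    }
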
The IIS 6.0 Resource Kit includes Log Parser 2.1, a tool for parsing log files (W3C Log, HTTPERR Log, Event Logs) and storing the results in a file or database. The resource kit can be installed on Microsoft® Windows® XP and Microsoft® Windows Server™ 2003.
You might also write an application that collects performance data, filters the event log, and records key data in a Microsoft® SQL Server database. It is amazingly easy to do this using the System.Diagnostics namespace. You can even monitor worker process restarts using the System.Diagnostics.Process class.
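As a sketch of how little code this takes (the counter and process names are the ones discussed in this article; confirm the instance names on your server):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PerfSampler
    {
        static void Main()
        {
            // Sample an ASP.NET counter once per second.
            PerformanceCounter restarts = new PerformanceCounter("ASP.NET", "Application Restarts");

            for (int i = 0; i < 10; i++)
            {
                Console.WriteLine("Application Restarts: {0}", restarts.NextValue());

                // A change in the set of aspnet_wp process IDs indicates a process restart.
                foreach (Process p in Process.GetProcessesByName("aspnet_wp"))
                {
                    Console.WriteLine("  aspnet_wp PID {0}, started {1}", p.Id, p.StartTime);
                }

                Thread.Sleep(1000);
            }
        }
    }
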
To help you get started, use the link at the top of this article to download sample code for several useful tools:
  1. Source code for snap.exe, a command-line tool for logging performance data for processes. The file Snap.cs contains a brief description and explains how to compile the tool.
  2. Source code for HttpClient.exe, a simple client that records time to last byte (TTLB) for HTTP requests. The file HttpClient.cs contains a brief description and explains how to compile the tool.
  3. Source code for qqq.exe, a command-line tool for stress testing an ASP.NET application. When used in combination with a stress client, such as Microsoft® Application Center Test (ACT), this tool will attach debuggers to the worker process and monitor certain performance counters. It can be tuned to break into the debuggers when performance degrades. The file qqq.cs contains a brief description and explains how to compile the tool.
  4. The pminfo.aspx page uses the System.Web.ProcessModelInfo class to display information about process restarts of aspnet_wp. The history is maintained until the w3svc service is stopped.
  5. Source code for ErrorHandler.dll. This is an IHttpModule that you can add to the HTTP pipeline to log unhandled exceptions to the event log. It is better to log errors to a SQL Server database, but the example uses the event log for simplicity.
Another simple step is implementing Application_Error. You can add the following text to global.asax and immediately start logging most unhandled errors to the application log:
<%@ import namespace="System.Diagnostics" %>
<%@ import namespace="System.Text" %>

<script language="C#" runat="server">

const string sourceName      = ".NET Runtime";
const string serverName      = ".";
const string logName         = "Application";
const string uriFormat       = "\r\n\r\nURI: {0}\r\n\r\n";
const string exceptionFormat = "{0}: \"{1}\"\r\n{2}\r\n\r\n";

void Application_Error(Object sender, EventArgs ea) {
    StringBuilder message = new StringBuilder();
    
    if (Request != null) {
        message.AppendFormat(uriFormat, Request.Path);
    }
  
    if (Server != null) {
        Exception e;
        for (e = Server.GetLastError(); e != null; e = e.InnerException) {
            message.AppendFormat(exceptionFormat, 
                                 e.GetType().Name, 
                                 e.Message,
                                 e.StackTrace);
        }
    }

    if (!EventLog.SourceExists(sourceName)) {
        EventLog.CreateEventSource(sourceName, logName);
    }

    EventLog Log = new EventLog(logName, serverName, sourceName);
    Log.WriteEntry(message.ToString(), EventLogEntryType.Error);

    //Server.ClearError(); // uncomment this to cancel the error
}

</script>

Application_Error will catch parser, compilation, and run-time errors within pages. It will not catch configuration issues, nor will it catch errors that occur within inetinfo while aspnet_isapi processes the request. Also, when using impersonation, the impersonated token must have permission to write to this event source. You may avoid the issue with impersonation by logging errors to a SQL Server database.
Last but not least, the Microsoft® Debugging Tools for Windows are very useful for debugging problems on a production Web server. These tools can be downloaded from http://www.microsoft.com/whdc/ddk/debugging/installx86.mspx. There is a debugger extension named sos.dll that you can load into the debugger windbg.exe or cdb.exe to debug managed code. It can dump contents of the garbage collection (GC) heap, show managed stack traces, aid investigation of contention for managed locks, display thread pool statistics, and much, much more. This can be downloaded as part of the Debugging Toolset mentioned in Production Debugging for .NET Framework Applications.

Understanding the Performance Counters

The following is a brief description of important performance counters and how to use them.

.NET CLR Exceptions Counter

The _Global_ counter instance should not be used with this counter, because it is updated by all managed processes. Instead, use the aspnet_wp instance.
  • #Exceps thrown / sec. The total number of managed exceptions thrown per second. As this number increases, performance degrades. Exceptions should not be thrown as part of normal processing. Note, however, that Response.Redirect, Server.Transfer, and Response.End all cause a ThreadAbortException to be thrown multiple times, and a site that relies heavily upon these methods will incur a performance penalty. If you must use Response.Redirect, call Response.Redirect(url, false), which does not call Response.End, and hence does not throw. The downside is that the user code that follows the call to Response.Redirect(url, false) will execute. It is also possible to use a static HTML page to redirect. Microsoft Knowledge Base Article 312629 provides further detail. In addition to monitoring this very useful performance counter, the Application_Error event should be used in order to alert administrators to problems.
    Threshold: 5% of RPS. Values greater than this should be investigated, and a new threshold should be set as necessary.
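
To make the Response.Redirect advice above concrete, here is the difference in miniature (url is a placeholder):

    // Calls Response.End internally, which throws a ThreadAbortException;
    // expensive if redirects happen on a hot path.
    Response.Redirect(url);

    // Does not call Response.End, so no exception is thrown. Note that the
    // code following this call continues to execute, so return explicitly.
    Response.Redirect(url, false);
    return;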

.NET CLR Loading Counters

The _Global_ counter instance should not be used with these performance counters, because it is updated by all managed processes. Instead, use the aspnet_wp instance.
  • Current AppDomains. The current number of AppDomains loaded in the process. The value of this counter should be the same as the number of Web applications plus 1. The additional AppDomain is the default domain.
  • Current Assemblies. The current number of assemblies loaded in the process. By default, ASPX and ASCX files in a directory are "batch" compiled. This typically yields one to three assemblies, depending upon dependencies. For example, if there are ASPX pages with parse-time dependencies on ASCX files, two assemblies will typically be generated. One will contain the ASPX files, the other ASCX files. Parse-time dependencies include those created by the <%@ import %>, <%@ reference %>, and <%@ register %> directives. A control loaded through the LoadControl method does not create a parse-time dependency. Note that the global.asax is compiled to an assembly by itself. Occasionally, excessive memory consumption is caused by an unusually large number of loaded assemblies. For example, a site that displays news articles will perform better using a small set of ASPX files that obtain the news from a database than it would were a single ASPX file used for each article. Site designers should attempt to generate content dynamically, make use of caching, and reduce the number of ASPX and ASCX pages.
    Assemblies cannot be unloaded from an AppDomain. To prevent excessive memory consumption, the AppDomain is unloaded when the number of re-compilations (ASPX, ASCX, ASAX) exceeds the limit specified by <compilation numRecompilesBeforeAppRestart=/>. Note that if the <%@ page debug=%> attribute is set to true, or if <compilation debug=/> is set to true, batch compilation is disabled.
  • Bytes in Loader Heap. The number of bytes committed by the class loader across all AppDomains. This counter should reach a steady state. If this counter is continuously increasing, monitor the "Current Assemblies" counter. There may be too many assemblies loaded per AppDomain.

.NET CLR Memory Counters

The _Global_ counter instance should not be used with these performance counters, because it is updated by all managed processes. Instead, use the aspnet_wp instance.
  • # Bytes in all Heaps. The number of bytes committed by managed objects. This is the sum of the large object heap and the generation 0, 1, and 2 heaps. These regions of memory are of type MEM_COMMIT (see Platform SDK documentation for VirtualAlloc). The value of this counter will always be less than the value of Process\Private Bytes, which counts all MEM_COMMIT regions for the process. Private Bytes minus # Bytes in all Heaps is the number of bytes committed by unmanaged objects. The first step in the investigation of excessive memory usage is to determine whether it is being used by managed or unmanaged objects. To investigate excessive managed memory usage, I recommend WINDBG.EXE and SOS.DLL, which you can read about in Production Debugging for .NET Framework Applications. SOS.DLL has a "!dumpheap -stat" command that lists the count, size, and type of objects in the managed heap. You can use "!dumpheap -mt" to obtain the address of an object, and "!gcroot" to see its roots. The command "!eeheap" presents memory usage statistics for the managed heaps.
    Another useful tool for diagnosing memory usage is the CLR Profiler, discussed in more detail below.
    Excessive managed memory usage is commonly caused by:
    1. Reading large data sets into memory.
    2. Creating excessive cache entries.
    3. Uploading or downloading large files.
    4. Excessive use of regular expressions or strings while parsing files.
    5. Excessive ViewState.
    6. Too much data in session state or too many sessions.
  • # Gen 0 Collections. The number of times generation 0 objects have been garbage collected. Objects that survive are promoted to generation 1. A collection is performed when room is needed to allocate an object, or when someone forces a collection by calling System.GC.Collect. Collections that involve higher generations take longer, since they are preceded by collections of lower generations. Attempt to minimize the percentage of generation 2 collections. As a rule of thumb, the number of generation 0 collections should be 10 times larger than the number of generation 1 collections, and similarly for generation 1 and 2. The # Gen N Collections counters and the % Time in GC counter are the best for identifying performance issues caused by excessive allocations. See the description for % Time in GC for steps you can take to improve performance.
  • # Gen 1 Collections. The number of times generation 1 objects have been garbage collected. Objects that survive are promoted to generation 2. Threshold: one-tenth the value of # Gen 0 Collections. Applications that perform well follow the rule of thumb mentioned in the description for the # Gen 0 Collections counter.
  • # Gen 2 Collections. The number of times generation 2 objects have been garbage collected. Generation 2 is the highest, thus objects that survive collection remain in generation 2. Gen 2 collections can be very expensive, especially if the size of the Gen 2 heap is excessive. Threshold: one-tenth the value of # Gen 1 Collections. Applications that perform well follow the rule of thumb mentioned in the description for the # Gen 0 Collections counter.
  • % Time in GC. The percentage of time spent performing the last garbage collection. An average value of 5% or less would be considered healthy, but spikes larger than this are not uncommon. Note that all threads are suspended during a garbage collection. The most common cause of a high % Time in GC is making too many allocations on a per request basis. The second most common cause is inserting a large amount of data into the ASP.NET cache, removing it, regenerating it, and reinserting it into the cache every few minutes. There are often small changes that can be made to greatly reduce allocations. For example, string concatenation can be expensive on a per request basis, since the concatenated strings need to be garbage collected. StringBuilder, with an appropriate initial capacity, performs better than string concatenation. However, StringBuilder also needs to be garbage collected, and if used improperly, can result in more allocations than expected as the internal buffers are resized. Calling Response.Write multiple times on each string performs even better than combining them with StringBuilder, so if you can avoid StringBuilder altogether, please do.
    Applications often store data temporarily in a StringBuilder or MemoryStream while generating a response. Instead of recreating this temporary storage on each request, consider implementing a reusable buffer pool of character or byte arrays (a minimal sketch appears after this list of counters). A buffer pool is an object with a GetBuffer and a ReturnBuffer routine. The GetBuffer routine attempts to return a buffer from an internal store of buffers, but creates a new buffer if the store is empty. The ReturnBuffer routine returns the buffer to the store if the maximum number of stored buffers has not yet been reached, but otherwise frees it. The downside to this buffer pool implementation is that it requires locking for thread-safety. Alternatively, you can avoid the performance impact of locking by using HttpContext.ApplicationInstance to access an instance field defined in global.asax. There is one instance of global.asax for each pipeline instance, and thus the field is accessible from only one request at a time, making it a great place to store a reusable character or byte buffer.
    To reduce % Time in GC, you need to know your allocation pattern. Use the CLR Profiler to profile either a single request or a light stress for at most a couple of minutes. (These tools are invasive and are not meant to be used in production.) The Allocation Graph view displays the total memory allocated for each object type, and the call stack that performed the allocation. Use this to trim down excessive allocations. The Histogram by Size view (select Histogram Allocated Types from the View menu) summarizes the size of the allocated objects. Avoid allocating short-lived objects larger than 85,000 bytes. These objects are allocated in the large object heap, and are more expensive to collect. In the Histogram by Size view, you can select objects with your mouse and right-click to see who allocated them. Reducing allocations is an iterative process of code modifications and profiling.
    Threshold: an average of 5% or less; short-lived spikes larger than this are common. Average values greater than this should be investigated. A new threshold should be set as necessary.
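
The buffer pool mentioned under % Time in GC can be quite small; a minimal lock-based sketch (sizes and caps are illustrative):

    using System.Collections;

    public class BufferPool
    {
        private const int BufferSize = 4096;       // illustrative buffer size
        private const int MaxStoredBuffers = 64;   // illustrative cap
        private readonly Stack store = new Stack();

        // Return a pooled buffer, or create one if the store is empty.
        public char[] GetBuffer()
        {
            lock (store)
            {
                if (store.Count > 0)
                    return (char[])store.Pop();
            }
            return new char[BufferSize];
        }

        // Put the buffer back for reuse, unless the store is already full.
        public void ReturnBuffer(char[] buffer)
        {
            lock (store)
            {
                if (store.Count < MaxStoredBuffers)
                    store.Push(buffer);
                // Otherwise drop it and let the GC collect it.
            }
        }
    }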

ASP.NET Counters

Performance counters in this category are only reset to 0 when the w3svc service is started.
  • Application Restarts. The number of application restarts. Recreating the application domain and recompiling pages takes time, therefore unforeseen application restarts should be investigated. The application domain is unloaded when one of the following occurs:
    • Modification of machine.config, web.config, or global.asax.
    • Modification of the application's bin directory or its contents.
    • When the number of compilations (ASPX, ASCX, or ASAX) exceeds the limit specified by <compilation numRecompilesBeforeAppRestart=/>.
    • Modification of the physical path of a virtual directory.
    • Modification of the code-access security policy.
    • The Web service is restarted.
    For Web farms in production, it is recommended that a server be removed from rotation prior to updating content for best performance and reliability. For a single Web server in production, content can be updated while the server is under load. The hotfix described in Knowledge Base Article 810281 is of interest to anyone experiencing errors after an application restarts, such as sharing violations with an error similar to "Cannot access file because it is being used by another process."
    An issue involving anti-virus software and application restarts is fixed in Knowledge Base Article 820746: FIX: Some Antivirus Programs May Cause Web Applications to Restart Unexpectedly, for v1.0, and in Knowledge Base Article 821438 for v1.1.
    Threshold: 0. In a perfect world, the application domain will survive for the life of the process. Excessive values should be investigated, and a new threshold should be set as necessary.
  • Applications Running. The number of applications running.
  • Requests Current. The number of requests currently handled by the ASP.NET ISAPI. This includes those that are queued, executing, or waiting to be written to the client. This performance counter was added to v1.0 of ASP.NET in the pre-SP3 hotfix described in Knowledge Base Article 329959. ASP.NET will begin to reject requests when this counter exceeds the requestQueueLimit defined in the processModel configuration section. Note that requestQueueLimit applies to ASP.NET on IIS 5.0 when running in aspnet_wp, but perhaps surprisingly, it also applies on IIS 6.0 when running in w3wp. It is not well known that several processModel configuration settings still apply when running in IIS 6.0. These include requestQueueLimit, responseDeadlockInterval, maxWorkerThreads, maxIoThreads, minWorkerThreads, and minIoThreads. A bug in v1.1 of the Framework, fixed in ASP.NET 1.1 June 2003 Hotfix Rollup Package, allowed ASP.NET to handle an infinite number of requests when running in IIS 6.0. The fix causes ASP.NET to reject requests when Requests Current exceeds the requestQueueLimit.
    For classic ASP applications, Requests Queued provides a warning for when requests will be rejected. For ASP.NET, Requests Current, together with Requests in Application Queue, provide this functionality. This counter is also used by the ASP.NET deadlock detection mechanism. If Requests Current is greater than 0 and no responses have been received from the worker process for a duration greater than the limit specified by <processModel responseDeadlockInterval=/>, the process is terminated and a new process is started. In the pre-SP3 hotfix described in Knowledge Base Article 328267, the algorithm was modified so that Requests Current must be greater than the sum of maxWorkerThreads plus maxIoThreads, specified in the <processModel/> configuration section. Note that by default the request execution timeout is 90 seconds, and is intentionally less than responseDeadlockInterval. The request execution timeout can be modified through the <httpRuntime executionTimeout=/> configuration setting or the Server.ScriptTimeout property, but it should always be made less than responseDeadlockInterval.
  • Request Execution Time. The number of milliseconds taken to execute the last request. In version 1.0 of the Framework, the execution time begins when the worker process receives the request, and stops when the ASP.NET ISAPI sends HSE_REQ_DONE_WITH_SESSION to IIS. For IIS version 5, this includes the time taken to write the response to the client, but for IIS version 6, the response buffers are sent asynchronously, and so the time taken to write the response to the client is not included. Thus on IIS version 5, a client with a slow network connection will increase the value of this counter considerably. In version 1.1 of the Framework, execution time begins when the HttpContext for the request is created, and stops before the response is sent to IIS. Assuming that user code does not call HttpResponse.Flush, this implies that execution time stops before sending any bytes to IIS, or to the client for that matter.
    ASP.NET\Request Execution Time is an instance counter, and very volatile. On the other hand, time to last byte (TTLB) can be easily averaged and used to calculate a better estimate of performance over a period of time. TTLB can be calculated through a simple HTTP client written in managed code, or you can use one of the many HTTP clients available that calculate TTLB, such as Application Center Test (ACT).
    Note that if <compilation debug=/> is set to TRUE, then batch compilation will be disabled and the <httpRuntime executionTimeout=/> configuration setting as well as calls to Server.ScriptTimeout will be ignored. This can cause problems if the <processModel responseDeadlockInterval=/> setting is not set to Infinite, since requests for debug pages can theoretically run forever.
    Threshold: N.A. The value of this counter should be stable. Experience will help set a threshold for a particular site. When the process model is enabled, the request execution time includes the time required to write the response to the client, and therefore depends upon the bandwidth of the client's connection.
  • Requests Queued. The number of requests currently queued. When running on IIS 5.0, there is a queue between inetinfo and aspnet_wp, and there is one queue for each virtual directory. When running on IIS 6.0, there is a queue where requests are posted to the managed ThreadPool from native code, and a queue for each virtual directory. This counter includes requests in all queues. The queue between inetinfo and aspnet_wp is a named pipe through which the request is sent from one process to the other. The number of requests in this queue increases if there is a shortage of available I/O threads in the aspnet_wp process. On IIS 6.0 it increases when there are incoming requests and a shortage of worker threads. Note that requests are rejected when the Requests Current counter exceeds the requestQueueLimit. Many people think this occurs when the Requests Queued counter exceeds requestQueueLimit, but this is not the case. When this limit is exceeded, requests will be rejected with a 503 status code and the message "Server is too busy." If a request is rejected for this reason, it will never reach managed code, and error handlers will not be notified. Normally this is only an issue when the server is under a very heavy load, although a "burst" load every hour might also cause this. For the unusual burst load scenario, you might be interested in the hotfix described in Knowledge Base Article 810259, which will allow you to increase the minimum number of I/O threads from the default of 1 per CPU.
    Each virtual directory has a queue that it uses to maintain the availability of worker and I/O threads. The number of requests in this queue increases if the number of available worker threads or available I/O threads falls below the limit specified by <httpRuntime minFreeThreads=/>. When the limit specified by <httpRuntime appRequestQueueLimit=/> is exceeded, the request is rejected with a 503 status code and the client receives an HttpException with the message "Server too busy."
    By itself, this counter is not a clear indicator of performance issues, nor can it be used to determine when requests will be rejected. In Knowledge Base Article 329959, two new performance counters were introduced to address this problem: Requests Current and Requests In Application Queue. Please see the descriptions for these two counters, as well as for Requests Rejected.
  • Requests Rejected. The number of rejected requests. Requests are rejected when one of the queue limits is exceeded (see description of Requests Queued). Requests can be rejected for a number of reasons. Backend latency, such as that caused by a slow SQL server, is often preceded by a sudden increase in the number of pipeline instances and a decrease in CPU utilization and Requests/sec. A server may be overwhelmed during times of heavy load due to processor or memory constraints that ultimately result in the rejection of requests. An application's design may result in excessive request execution time. For example, batch compilation is a feature in which all the pages in a directory are compiled into a single assembly when the first request for a page is received. If there are several hundred pages in a directory, the first request into this directory may take a long time to execute. If <compilation batchTimeout=/> is exceeded, the batch compilation will continue on a background thread and the requested page will be compiled individually. If the batch compilation succeeds, the assembly will be preserved to disk and can be reused after an application restart. However, if the global.asax, web.config, machine.config, or an assembly in the application's bin folder is modified, the batch compilation process will execute again due to the dependency change.
    Careful design of a large site can have a significant impact upon performance. In this case, it is better to have only a few pages that vary behavior based upon query string data. In general, you need to minimize request execution time, which is best monitored by averaging time to last byte (TTLB) using an HTTP client that requests one or more pages from the Web site.
    The following performance counters are best suited toward discovering the cause of rejected requests:
    • Process\% Processor Time
    • Process\Private Bytes
    • Process\Thread Count
    • Web Service\ISAPI Extension Requests/sec
    • ASP.NET\Requests Current
    • ASP.NET\Requests Queued
    • ASP.NET\Request Wait Time
    • ASP.NET Applications\Pipeline Instance Count
    • ASP.NET Applications\Requests in Application Queue
    Threshold: 0. The value of this counter should be 0. Values greater than this should be investigated.
  • Request Wait Time. The number of milliseconds that the most recent request spent waiting in the queue, or named pipe, that exists between inetinfo and aspnet_wp (see description of Requests Queued). This does not include any time spent waiting in the application queues. Threshold: 1000. The average request should spend 0 milliseconds waiting in the queue.
  • Worker Processes Running. The current number of aspnet_wp worker processes. For a short period of time, a new worker process and the worker process that is being replaced may coexist. Although rare, sometimes processes deadlock while they are exiting. If you suspect this, consider using a tool to monitor the number of instances of the worker process. Alternatively, the Memory\Available Mbytes performance counter can be used, since these hanging processes will consume memory. Threshold: 2. During shutdown of the previous worker process, there may be two instances. If webGarden is enabled, the threshold should be #CPUs plus 1. Values greater than this may indicate excessive process restarts within a very short period of time.
  • Worker Process Restarts. The number of aspnet_wp process restarts. Threshold: 1. Process restarts are expensive and undesirable. Values are dependent upon the process model configuration settings, as well as unforeseen access violations, memory leaks, and deadlocks. If aspnet_wp restarts, an Application Event Log entry will indicate why. Requests will be lost if an access violation or deadlock occurs. If process model settings are used to preemptively recycle the process, it will be necessary to set an appropriate threshold.

ASP.NET Applications Counters

The performance counters in this category are reset to 0 when either the application domain or Web service is restarted.
  • Cache Total Entries. The current number of entries in the cache (both User and Internal). Internally, ASP.NET uses the cache to store objects that are expensive to create, including configuration objects, preserved assembly entries, paths mapped by the MapPath method, and in-process session state objects.
    Note   The "Cache Total" family of performance counters is useful for diagnosing issues with in-process session state. Storing too many objects in the cache is often the cause of memory leaks.
  • Cache Total Hit Ratio. The total hit-to-miss ratio of all cache requests (both user and internal).
  • Cache Total Turnover Rate. The number of additions and removals to the cache per second (both user and internal). A high turnover rate indicates that items are being quickly added and removed, which can be expensive.
  • Cache API Entries. The number of entries currently in the user cache.
  • Cache API Hit Ratio. The total hit-to-miss ratio of User Cache requests.
  • Cache API Turnover Rate. The number of additions and removals to the user cache per second. A high turnover rate indicates that items are being quickly added and removed, which can be expensive.
  • Output Cache Entries. The number of entries currently in the Output Cache.
  • Output Cache Hit Ratio. The total hit-to-miss ratio of Output Cache requests.
  • Output Cache Turnover Rate. The number of additions and removals to the output cache per second. A high turnover rate indicates that items are being quickly added and removed, which can be expensive.
  • Pipeline Instance Count. The number of active pipeline instances. Only one thread of execution can be running within a pipeline instance, so this number gives the maximum number of concurrent requests that are being processed for a given application. The number of pipeline instances should be steady. Sudden increases are indicative of backend latency (see the description of Requests Rejected above).
  • Compilations Total. The number of ASAX, ASCX, ASHX, ASPX, or ASMX files that have been compiled. This is the number of files compiled, not the number of generated assemblies. Assemblies are preserved to disk and reused until either the create time, last write time, or length of a file dependency changes. The dependencies of an ASPX page include global.asax, web.config, machine.config, dependent assemblies in the bin folder, and ASCX files referenced by the page. If you restart the application without modifying any of the file dependencies, the preserved assembly will be reloaded without requiring any compilation. This performance counter will increment only when a file is initially parsed and compiled into an assembly. By default, batch compilation is enabled, however, this counter will increment once for each file that is parsed and compiled into an assembly, regardless of how many assemblies are created.
    If compilation fails, the counter will not be incremented.
  • Errors During Preprocessing. The total number of configuration and parsing errors. This counter is incremented each time a configuration error or parsing error occurs. Even though configuration errors are cached, the counter increments each time the error occurs.
    Note    Do not rely solely upon the "Errors" performance counters to determine whether the server is healthy. They are reset to zero when the AppDomain is unloaded. They can, however, be used to dig deeper into a specific issue. In general, use the Application_Error event in order to alert administrators to problems.
  • Errors During Compilation. The total number of compilation errors. The response is cached, and this counter increments only once until recompilation is forced by a file change. Implement custom error handling to raise an event.
  • Errors During Execution. The total number of run-time errors.
  • Errors Unhandled During Execution. The total number of unhandled exceptions at run time. This does not include the following:
    1. Errors cleared by an event handler (for example, by Page_Error or Application_Error).
    2. Errors handled by a redirect page.
    3. Errors that occur within a try/catch block.
  • Errors Unhandled During Execution/sec. The total number of unhandled exceptions per second at run time.
  • Errors Total. The sum of Errors During Preprocessing, Errors During Compilation, and Errors During Execution.
  • Errors Total/sec. The total of Errors During Preprocessing, Errors During Compilation, and Errors During Execution per second.
  • Requests Executing. The number of requests currently executing. This counter is incremented when the HttpRuntime begins to process the request and is decremented after the HttpRuntime finishes the request. In v1.1 of the Framework, there is a bug in this performance counter that is fixed in the ASP.NET 1.1 June 2003 Hotfix Rollup Package. Unfortunately the bug is not described in the Knowledge Base Article. Prior to the fix, the counter included the time taken to write the response to the client.
  • Requests In Application Queue. The number of requests in the application request queue (see description of Requests Queued above). In addition to Requests Current, Requests in Application Queue provides a warning for when requests will be rejected. If there are only a couple virtual directories, increasing the default appRequestQueueLimit to 200 or 300 may be suitable, especially for slow applications under heavy load.
  • Requests Not Found. The number of requests for resources not found.
  • Requests Not Authorized. The number of requests that failed due to unauthorized access.
  • Requests Timed Out. The number of requests that have timed out.
  • Requests Succeeded. The number of requests that have executed successfully.
  • Requests Total. The number of requests since the application was started.
  • Requests/Sec. The number of requests executed per second. I prefer "Web Service\ISAPI Extension Requests/sec" because it is not affected by application restarts.
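
The note above recommends the Application_Error event for alerting administrators. Here is a minimal sketch of such a handler in Global.asax.cs; the "MyApp" event source name is an assumption for illustration, and creating an event source requires administrative rights the first time it runs.

using System;
using System.Diagnostics;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        // Server.GetLastError() returns the exception for the current
        // request, typically wrapped in an HttpUnhandledException.
        Exception ex = Server.GetLastError();
        if (ex == null)
            return;

        // "MyApp" is a hypothetical event source; creating it requires
        // administrative rights, so register it at install time if possible.
        if (!EventLog.SourceExists("MyApp"))
            EventLog.CreateEventSource("MyApp", "Application");

        EventLog.WriteEntry("MyApp",
            "Unhandled exception in " + Request.Url + ":\r\n" + ex,
            EventLogEntryType.Error);

        // Calling Server.ClearError() here would mark the error as handled
        // and keep it out of the "Errors Unhandled During Execution" counter.
    }
}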

Process Counters

With these counters, the processes of interest are aspnet_wp and inetinfo.
  • % Processor Time. The percentage of time the threads of this process spend using the processors. Threshold: 70%. Values greater than this for extended periods of time indicate a need to purchase hardware or optimize your application.
  • Handle Count. Threshold: 10,000. A handle count of 2,000 in aspnet_wp is suspicious, and 10,000 is far beyond acceptable limits. Noticeable performance degradation will occur if the total handle count for all processes exceeds approximately 40,000, which is entirely achievable during a denial-of-service attack against IIS.
  • Private Bytes. The current size, in bytes, of the committed memory owned by this process. Memory leaks are identified by a consistent and prolonged increase in Private Bytes. This is the best performance counter for detecting memory leaks. When running on IIS 5.0, a memory limit for Private Bytes should be set with the memoryLimit attribute of the <processModel> configuration section. When running on IIS 6.0, the memory limit should be set in IIS Manager. Open Properties for the application pool, and on the Recycling tab, specify a limit for Maximum used memory (in megabytes). This limit corresponds to Private Bytes. Private Bytes for the worker process is compared with the memory limit to determine when to recycle the process. System.Web.Caching.Cache also uses Private Bytes and the memory limit to determine when to expunge items from the cache, and thus avoid recycling the process. A memory limit of 60% of physical RAM is recommended to avoid paging, especially when a new process replaces the old one due to excessive memory consumption. Note that Knowledge Base Article 330469 resolves a problem with ASP.NET in which it fails to monitor Private Bytes on servers with a large number of running processes. This hotfix also enables the cache memory manager to function properly when there are a large number of running processes.
    It is important to adjust the memory limit on machines with large amounts of physical RAM, so that the cache memory manager and process recycling function properly. For example, assume you have a server with 4 gigabytes (GB) of physical RAM that is using the default memory limit. This is a problem. Sixty percent of physical RAM is 2.4 GB, which is larger than the default virtual address space of 2 GB. So what should the memory limit be set to?
    There are a couple of things to consider: First, the likelihood of experiencing an OutOfMemoryException begins to increase dramatically when "Process\Virtual Bytes" is within 600 MB of the virtual address space limit (generally 2 GB), and second, tests have shown that "Process\Virtual Bytes" is often larger than "Process\Private Bytes" by no more than 600 MB. This difference is due in part to the MEM_RESERVE regions maintained by the GC, allowing it to quickly commit more memory when needed. Taken together, this implies that when "Process\Private Bytes" exceeds 800 MB, the likelihood of experiencing an OutOfMemoryException increases. In this example the machine has 4 GB of physical RAM, so you need to set the memory limit to 20% (800 MB ÷ 4 GB) to avoid out-of-memory conditions. You might experiment with these numbers to maximize the usage of memory on a machine, but if you want to play it safe, the numbers in the example will work.
    To summarize, set the memory limit to the smaller of 60% of physical RAM or 800 MB. Since v1.1 supports 3 GB virtual address space, if you add /3GB to boot.ini, you can safely use 1,800 MB instead of 800 MB as an upper bound.
    Note that when running tests, if you would like to force a GC and stabilize managed memory, you can call System.GC.GetTotalMemory(true) once. This method will call GC.Collect and WaitForPendingFinalizers() repeatedly until the memory stabilizes within 5%.
    Threshold: the smaller of 60% of physical RAM and 800 MB. Values greater than 60% of total physical RAM begin to have an impact upon performance, especially during application and process restarts. The likelihood of an OutOfMemoryException greatly increases when Private Bytes exceeds 800 MB in a process with a virtual address space limit of 2 GB. A short sketch of this rule follows the list.
  • Thread Count. The number of threads active in this process. Thread count often increases when the load is too high. Threshold: 75 + ((maxWorkerThreads + maxIoThreads) * #CPUs). For example, with the v1.x defaults of maxWorkerThreads=20 and maxIoThreads=20 on a 2-CPU machine, the threshold is 75 + (40 * 2) = 155. The threshold should be increased if aspcompat mode is used: 75 + ((maxWorkerThreads + maxIoThreads) * #CPUs * 2).
  • Virtual Bytes. The current size, in bytes, of the virtual address space for this process. The virtual address space limit of a user mode process is 2 GB, unless 3 GB address space is enabled by using the /3GB switch in boot.ini. Performance degrades as this limit is approached, and exceeding it typically results in a process or system crash. The address space becomes fragmented as the 2 GB or 3 GB limit is approached, so I recommend a conservative threshold of 1.4 or 2.4 GB, respectively. If you're running into issues here, you will see System.OutOfMemoryException being thrown, and this may or may not crash the process.
    When running on IIS 6.0, a virtual memory limit can be set in IIS Manager. However, setting this improperly can cause problems for ASP.NET. ASP.NET expunges items from the cache to avoid exceeding the Private Bytes limit, but the algorithm uses Private Bytes and the Private Bytes limit in this determination. It does not monitor Virtual Bytes or the Virtual Bytes limit. Given that the difference between Virtual Bytes and Private Bytes is typically no more than 600 MB, you could set the Virtual Bytes limit to a value 600 MB larger than the Private Bytes limit if you are concerned about the possibility of virtual memory leaks or fragmentation. If this is desirable, set a limit for Maximum virtual memory (in megabytes), found on the Recycling tab for the Properties of the application pool.
    Version 1.0 of the Framework does not support 3 GB address space in the worker process or the state service. However, see Knowledge Base Article 320353 for instructions to enable 3 GB address space within inetinfo.exe. Version 1.1 fully supports 3 GB address space for the worker process and state service.
    Threshold: 600 MB less than the size of the virtual address space; either 1.4 or 2.4 GB.
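
To make the Private Bytes guidance concrete, here is a minimal C# sketch of the rule summarized above: take the smaller of 60% of physical RAM and 800 MB (1,800 MB when /3GB is enabled on v1.1). The 4 GB RAM figure is an assumption for illustration; the sketch also demonstrates the GC.GetTotalMemory(true) call mentioned above for stabilizing managed memory during tests.

using System;

class MemoryLimitSketch
{
    static void Main()
    {
        // Assumptions for illustration: 4 GB of physical RAM, no /3GB switch.
        long physicalRamBytes = 4L * 1024 * 1024 * 1024;
        bool threeGbEnabled = false;

        // The rule above: min(60% of RAM, 800 MB), or min(60% of RAM,
        // 1800 MB) when /3GB is enabled on v1.1.
        long upperBoundMb = threeGbEnabled ? 1800 : 800;
        long sixtyPercentMb = (long)(physicalRamBytes * 0.60 / (1024 * 1024));
        long limitMb = Math.Min(sixtyPercentMb, upperBoundMb);

        double percentOfRam = 100.0 * limitMb * 1024 * 1024 / physicalRamBytes;
        // Prints "800 MB (20% of RAM)" for this 4 GB example.
        Console.WriteLine("Recommended memory limit: {0} MB ({1:F0}% of RAM)",
            limitMb, percentOfRam);

        // When load testing, force a full collection so that managed memory
        // stabilizes before you sample Private Bytes.
        long managedBytes = GC.GetTotalMemory(true);
        Console.WriteLine("Managed heap after collection: {0} bytes", managedBytes);
    }
}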

Processor Counter

  • % Processor Time. The percentage of time all threads spend using the processors. Threshold: 70%. Values greater than this for extended periods of time indicate a need to purchase hardware or optimize your application.

Memory Counter

  • Available Mbytes. The amount of physical RAM available. Threshold: 20% of physical RAM. Values less than this should be investigated and may indicate a need to purchase hardware.

System Counter

  • Context Switches/sec. The rate at which the processors switch thread contexts. A high number may indicate high lock contention or transitions between user and kernel mode. Context Switches/sec should increase linearly with throughput, load, and the number of CPUs. If it increases exponentially, there is a problem. A profiler should be used for further investigation.

Web Service Counters

  • Current Connections. A threshold for this counter depends upon many variables, such as the type of requests (ISAPI, CGI, static HTML, and so on) and CPU utilization. A threshold should be developed through experience.
  • Total Method Requests/sec. Used primarily as a metric for diagnosing performance issues. It can be interesting to compare this with "ASP.NET Applications\Requests/sec" and "Web Service\ISAPI Extension Requests/sec" in order to see the percentage of static pages served versus pages rendered by aspnet_isapi.dll.
  • ISAPI Extension Requests/sec. Used primarily as a metric for diagnosing performance issues. It can be interesting to compare this with "ASP.NET Applications\Requests/sec" and "Web Service\Total Method Requests/sec." Note that this includes requests to all ISAPI extensions, not just aspnet_isapi.dll.
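
As a concrete sketch of the comparison suggested above, the following snippet samples "ASP.NET Applications\Requests/Sec" alongside "Web Service\ISAPI Extension Requests/sec" using System.Diagnostics.PerformanceCounter. The "__Total__" and "_Total" instance names aggregate across all instances and are an assumption here; on a real server you would typically target a specific application or site instance.

using System;
using System.Diagnostics;
using System.Threading;

class CounterComparison
{
    static void Main()
    {
        var aspnetRps = new PerformanceCounter(
            "ASP.NET Applications", "Requests/Sec", "__Total__");
        var isapiRps = new PerformanceCounter(
            "Web Service", "ISAPI Extension Requests/sec", "_Total");

        // Rate counters need two samples: the first NextValue() call
        // only establishes a baseline and returns zero.
        aspnetRps.NextValue();
        isapiRps.NextValue();
        Thread.Sleep(1000);

        float aspnet = aspnetRps.NextValue();
        float isapi = isapiRps.NextValue();
        Console.WriteLine("ASP.NET Requests/Sec:         {0:F1}", aspnet);
        Console.WriteLine("ISAPI Extension Requests/sec: {0:F1}", isapi);
        if (isapi > 0)
            Console.WriteLine("ASP.NET share of ISAPI traffic: {0:P0}",
                aspnet / isapi);
    }
}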

Conclusion

Careful stress and performance testing of an application before going live can prevent a major headache. There seem to be two major stumbling blocks that many people encounter:
  1. You need to use an HTTP client capable of simulating the traffic and load that you expect the Web site to experience.
  2. You need to test the entire application in an environment nearly identical to the production environment.
It's not easy to simulate real Web site traffic, but I can honestly say that most of the applications that experienced trouble were never adequately stress tested. This article should help you understand performance counters and create some useful tools for monitoring performance. To apply the load, I recommend Microsoft Application Center Test (ACT), which is included with Microsoft® Visual Studio® .NET. You can read more about this stress tool at Microsoft Application Center Test 1.0, Visual Studio .NET Edition. I also recommend the Microsoft® Web Application Stress Tool (WAST), which can be downloaded for free from TechNet. If your application uses ViewState, you'll need to use ACT, since WAST cannot dynamically parse the response.
I don't know what it is about production environments, but there is definitely something special about them. I cannot count the times I've heard the statement, "The problem only occurs on our production site." Typically the difference is the application itself. There is often some part of the application that cannot be simulated in the lab. For example, the ad server was omitted from testing, or the database used to simulate the real database is substantially different. Sometimes network or DNS differences are the cause, and sometimes it's a difference in the hardware on which the servers run.
I've been debugging and monitoring the performance of ASP.NET applications for several years, yet there are still times when I need help. If you find yourself in this position, the forums on the ASP.NET Web site are a good place to go for answers. But if you're really in a bind, don't hesitate to contact Microsoft Product Support using the contact information supplied on that site. Note that if a problem is determined by Microsoft to be the result of a defect in a Microsoft product, you will not be charged for that incident.
Hopefully this document has equipped you with the tools and information that you need to ensure the reliability and performance of your application. If you have any questions, post them in the forums on the ASP.NET Web site and I'll do my best to answer them. Good luck!

About the Author

Thomas Marquardt is a developer on the ASP.NET team at Microsoft. He's been debugging and investigating performance issues with ASP.NET applications since the winter of 2000. Thomas would like to thank Dmitry Robsman, the Development Manager for ASP.NET at Microsoft, for hours and hours of help and guidance over the years.