Quicksteps to Profiling Entity Framework Queries in SQL Server

When creating an application with an ORM framework, such as Entity Framework, it’s a good idea to look at the SQL that’s actually being sent to the server. This is a critical step that can be easily overlooked.

Why is it important to look at the SQL? If we don't look at it, we can end up with queries that join ten different tables and return rows with more than 100 columns. Unless the server has unlimited resources, we should strive to keep our queries resource-friendly.

In this session, we'll see how to quickly use SQL Profiler to see what queries your application is generating. Notice that I said quickly because we just want to see the SQL being generated. In a later session, we'll look at how you can identify queries that may need to be refactored by looking at attributes of the query such as reads, writes, row counts, CPU, and duration.

Step 1:  Name Your Application

If you share a database with other applications, it's very useful to name your application, because this lets you filter the trace down to just your application's queries.

Naming your application is as easy as adding Application Name to your connection string.


<add name="MyCoolApplicationConnection"
     connectionString="Server=MyServerName; Database=MyDatabaseName; Application Name=MyCoolApplicationName; Integrated Security=SSPI" />
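To confirm that the name is being applied, one option (a sketch; assumes SQL Server 2005 or later) is to check the sys.dm_exec_sessions DMV while the application is connected:

```sql
-- List current sessions from the named application.
-- 'MyCoolApplicationName' matches the Application Name
-- set in the connection string above.
SELECT session_id, login_name, host_name, program_name
FROM sys.dm_exec_sessions
WHERE program_name = 'MyCoolApplicationName';
```

If rows come back with your application name in program_name, the connection string change took effect and the Profiler filter will work.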

Step 2:  Start the Profiler

Now let’s start SQL Server Profiler and get it ready to listen for queries.

  1. Open SQL Server Profiler. (Profiler is installed with the SQL Server Management Tools, an optional component of SQL Server setup, so be sure to select it in the Features section during installation.)
  2. Select File => New Trace…
  3. Select the server that contains the database that will be profiled and click Connect.
  4. After connecting to a database, the Trace Properites window will open. Follows is a description of some suggested values:
    1. Trace name: Enter something meaningful to you, or leave it as the default if you’re only running a quick, throw-away session.
    2. Use the template: You can use one of the predefined templates or create your own custom template. (A description of the predefined templates can be found at SQL Server Profiler Templates.) You'll probably want to create your own personalized template; a description of the different events can be found at Describing Events by Using Data Columns. Note: I've also created a nice starting template that you can download here. Instructions for importing a custom template can be found at Import a Trace Template (SQL Server Profiler). The custom template includes the following columns:
      TextData: Depends on the event; for query events, this contains the SQL text.
      Duration: How long the event took, in microseconds.
      CPU: How much CPU was used, in milliseconds.
      Reads: Number of logical disk reads.
      Writes: Number of physical disk writes.
      RowCounts: Number of rows affected by the event.
      StartTime: When the event started.
      EndTime: When the event ended.
      IsSystem: 1 if the event occurred on a system process; 0 for a user process.
      ApplicationName: The client application name (from step one).
      SPID: Server process ID.
      DatabaseID: ID of the database specified in the USE statement, or the default database if no USE statement was issued.
      DatabaseName: Name of the database.
      ObjectType: Corresponds to the type column in sysobjects.
      LoginName: The user's login name.
      HostName: The client's computer name.
  5. Save to file: You can save to a file if you want to store the results for future analysis or to compare a future run against this run.
  6. Save to table: You can also choose a database table where the results should be saved. SQL Server will automatically create the columns based on the events that you selected. Note: If you can't select Save to Table here because you don't have write access to a database, you can save to a file instead and load the results into a table later. Note II: To find queries that are executed a very high number of times, you'll eventually need to store the results in a table so that you can query them. These queries are candidates for caching if their results are fairly static.
  7. Enable trace stop time: This option is useful if you want to analyze a database for a specific time period.
  8. At this point, the Trace Properties window might look something like this:
    Trace Properties
  9. Click Run when you are ready to begin profiling.
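As a sketch of the save-to-file workflow above: if you saved the trace to a file, you can load it into a table later with the built-in sys.fn_trace_gettable function. The file path and table name below are hypothetical examples.

```sql
-- Load a saved trace file into a new table for later querying.
-- 'C:\Traces\MyTrace.trc' and dbo.ProfilerResults are placeholders;
-- DEFAULT tells SQL Server to read all rollover files in the set.
SELECT *
INTO dbo.ProfilerResults
FROM sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT);
```

This gives you the same queryable table you would have gotten from Save to Table, without needing write access to a database while the trace is running.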

Step 3:  Execute Your Query (or Queries)

Now you need to execute the query or queries that you want to profile. This may be as simple as executing a single query in SQL Server Management Studio, or as involved as running a full load test.

Keep in mind that a query may have a short duration when run in isolation but perform much more slowly when the server is under load. For example, a query with 10,000 reads might run quickly when it's the only query executing, yet cause performance problems when many queries run at the same time.

Step 4:  Stop the Profiling Session

When you wish to stop the profiling session, simply click the Stop Selected Trace button, as shown below:

Stop Selected Trace

Note: After stopping the session, you can save the data either to a file or a database table if you didn’t previously choose to do so.

Step 5:  Analyze the Queries

Now it’s time to review your queries.

Query Analysis

The most costly queries will have high reads, writes, CPU, or row counts, or some combination of these values.

Find Individually Expensive Queries

A query with a large number of logical reads may be missing an index, or it may simply be requesting too many rows.

You can group the queries by the number of reads by going to Organize Columns and moving the Reads column into the Groups section.
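If you loaded the trace into a table instead, an equivalent query-based approach is possible. This is a sketch; the table name dbo.ProfilerResults is a hypothetical example.

```sql
-- Top 10 statements by logical reads from a saved trace table.
-- dbo.ProfilerResults is a placeholder for wherever the trace was saved.
SELECT TOP 10 TextData, Reads, CPU, Duration, RowCounts
FROM dbo.ProfilerResults
WHERE Reads IS NOT NULL
ORDER BY Reads DESC;
```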

Find Slow Queries

In SQL Profiler, group the queries by the Duration column.

Common causes of a slow query include an inefficient query plan, missing indexes, resource constraints, and blocking.

Find Individually Cheap but Aggregately Expensive Queries

It's important to look for cheap queries that execute a large number of times. These queries are like a shopper who makes many small purchases at the local retail outlet, then realizes that those small purchases have added up to more than their budget allowed.

You can find queries that execute a large number of times with one of the following options:

  • Group the trace output in Profiler on the following columns: EventClass, TextData, and Reads. (You can do this by going to File => Properties => Events Selection => Organize Columns, then moving the column(s) of interest up into the Groups section.)
  • Query the data collected by the Profiler (if it’s stored in a database table), by grouping by the EventClass, then TextData. There is a good example of this query in the book titled SQL Server 2008 Query Performance Tuning Distilled by Sajal Dam and Grant Fritchey.
  • Query the sys.dm_exec_query_stats Dynamic Management View (DMV). An example query can be found at Uncover Hidden Data to Optimize Application Performance.
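For the second option above, a minimal sketch of the grouping query follows, assuming the trace was saved to a hypothetical table named dbo.ProfilerResults. TextData is stored as ntext in trace tables, so it must be cast before it can be grouped.

```sql
-- Aggregate trace rows by statement text to find statements that are
-- individually cheap but expensive in total.
-- dbo.ProfilerResults is a placeholder table name.
SELECT CAST(TextData AS nvarchar(max)) AS QueryText,
       COUNT(*)      AS Executions,
       SUM(Reads)    AS TotalReads,
       SUM(CPU)      AS TotalCpu,
       SUM(Duration) AS TotalDuration
FROM dbo.ProfilerResults
GROUP BY CAST(TextData AS nvarchar(max))
ORDER BY TotalReads DESC;
```

Statements near the top with a high execution count but modest per-execution reads are the caching candidates described above.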