Tuesday, June 22, 2010

What Does “width: 100%” Do in CSS?

It seems like this should be one of the easiest things to understand in CSS. If you want a block-level element to fill any remaining space inside its parent, then it’s simple: just add width: 100% in your CSS declaration for that element, and your problem is solved.

Not so fast. It’s not quite that easy. I’m sure CSS developers of all skill levels have attempted something similar to what I’ve just described, with bizarre results ultimately leading to head scratching and, finally, a shrug and a resort to experimenting with absolute widths until we find just the right fit. This is just one of those things in CSS that seems easy to understand (and really, it should be), but sometimes isn’t, because of the way that percentages work in CSS.

First Things First: Blocks Don’t Need 100% Width

When we understand the difference between block-level elements and inline elements, we know that a block-level element (such as a <div>, <p>, or <ul>, to name a few) will, by default, expand to fit the width of its containing, or parent, element (minus any margins it has or padding its parent has).

A Block-Level Element Expands Naturally

Most CSS developers understand this concept pretty well, but I thought it would be useful to point it out here as an introduction to explaining how percentages work when used on the width property.

What it Really Means

When you give an element a width of 100% in CSS, you’re basically saying “Make this element’s content area exactly equal to the explicit width of its parent — but only if its parent has an explicit width.” So, if you have a parent container that’s 400px wide, a child element given a width of 100% will also be 400px wide, and will still be subject to margins, paddings, and borders — on top of the 100% width setting. The image below attempts to illustrate this:

The child's content area will equal the parent's

And of course, the exact same rule would apply to any percentage value. The content area will be the percentage of the parent’s explicitly-set width setting.

NOTE ADDED: The width of the child’s content area will equal the width of the parent’s content area, but will not move outside the parent unless its own padding or margins are affecting it. Thanks to the comment by Fabio Brendolin for clarifying this.
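For example, here is a quick sketch (the class names are made up for illustration): a parent with an explicit 400px width, and a child set to width: 100% that also carries its own padding.

.parent {
    width: 400px;        /* explicit width on the parent */
}

.child {
    width: 100%;         /* content area becomes 400px wide */
    padding: 0 10px;     /* ...so the child's total width is now 420px, overflowing the parent */
}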

Should it Ever Be Used?

In 99% of cases, applying width: 100% to a block-level element is either unnecessary or will bring undesirable results. The only time I can think of when you need to give an element a width of 100% is when the element is inheriting a set width value that you want to override. But even in that case, the better option is usually “width: auto”, which is the default for block elements.

Height Works the Same Way (in All Browsers!)

And this little lesson provides a reminder of one of the frustrating things about CSS layouts — that you can’t give an element a height that fills its parent, unless the parent is given (you guessed it) an explicit height setting. The only difference is that in this case “auto” will not work, but instead “height: 100%” is required.

And the great part is that all browsers (even IE6 and IE7) treat percentage heights the same way (IE bugs excluded), so as long as you explicitly set a height on the parent, the child element will fill 100% of that height as expected.

On a Side Note: “Remainder” should be a value

As a side note to all of this, there are some cases where a group of floated child elements will have varying widths, and it would be great to be able to just tell the last element “fill the remaining space”. On my personal CSS wish list would be a property/value setting as follows:

#element {
    width: remainder; /* not valid, but I wish it was! */
}

This would be nice, and would certainly make things easier in some circumstances.

What do you think? Have you ever misinterpreted percentage widths, and abandoned using them because they didn’t work as you expected? I know I’ve done this, and it took me quite some time to get it right and understand them better.

Thursday, June 17, 2010

web application development best practices

Top 10 Best Practices for Production ASP.NET Applications

12 Feb, 2008.

In no particular order, here are the top ten things I've learned to pay attention to when dealing with production ASP.NET applications.  Hopefully they will help save you some time and headaches.  As always, your thoughts and additions are welcome.

1.  Generate new encryption keys

When moving an application to production for the first time it is a good idea to generate new encryption keys.  This includes the machine validation key and decryption key as well as any other custom keys your application may be using.  There is an article on CodeProject that talks about generating machineKeys specifically that should be helpful with this.
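If you would rather generate the key material yourself than rely on an online generator, a minimal C# sketch along these lines should work (the key lengths shown are just common choices; adjust them for the algorithms you use):

using System;
using System.Security.Cryptography;
using System.Text;

class MachineKeyGenerator
{
    // Returns a random hex string of the requested byte length,
    // suitable for pasting into the machineKey element.
    static string CreateKey(int byteLength)
    {
        byte[] buffer = new byte[byteLength];
        new RNGCryptoServiceProvider().GetBytes(buffer);

        StringBuilder hex = new StringBuilder(byteLength * 2);
        foreach (byte b in buffer)
        {
            hex.AppendFormat("{0:X2}", b);
        }
        return hex.ToString();
    }

    static void Main()
    {
        Console.WriteLine("validationKey=\"{0}\"", CreateKey(64));  // e.g. for the validation key
        Console.WriteLine("decryptionKey=\"{0}\"", CreateKey(32));  // e.g. for AES decryption
    }
}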

2.  Encrypt sensitive sections of your web.config

This includes both the connection string and machine key sections.  See Scott Guthrie's post for some good references.  Note that if your application runs in a clustered environment you will need to share a custom key using the RSA provider as described in an MSDN article.

3.  Use trusted SQL connections

Both Barry Dorrans and Alex Chang have articles which discuss this in detail.

4.  Set retail="true" in your machine.config

<configuration>
<system.web>
<deployment retail="true"/>
</system.web>
</configuration>

This kills three birds with one stone: it forces the 'debug' flag in web.config to false, it disables page output tracing, and it forces the custom error page to be shown to remote users rather than the actual exception or error message.  For more information you can read Scott Guthrie's post or the MSDN reference.

5.  Create a new application pool for your site

When setting up your new site for the first time do not share an existing application pool.  Create a new application pool which will be used only by the new web application.

6.  Set the memory limit for your application pool

When creating the application pool, specifically set the memory limit rather than the time limit, which is set by default.  ASP.NET has a good whitepaper which explains the value of this:

By default IIS 6.0 does not set a limit on the amount of memory that IIS is allowed to use. ASP.NET’s Cache feature relies on a limitation of memory so the Cache can proactively remove unused items from memory.

It is recommended that you configure the memory recycling feature of IIS 6.0.

7.  Create and appropriately use an app_Offline.htm file

There are many benefits to using this file.  It provides an easy way to take your application offline in a somewhat user friendly way (you can at least have a pretty explanation) while fixing critical issues or pushing a major update.  It also forces an application restart in case you forget to do this for a deployment.  Once again, ScottGu is the best source for more information on this.

8.  Develop a repeatable deployment process and automate it

It is way too easy to make mistakes when deploying any type of software.  This is especially the case with software that uses configuration files that may be different between the development, staging, or production environments.  I would argue that the process you come up with is not nearly as important as it being easily repeatable and automated.  You can fine tune the process as needed, but you don't want a simple typo to bring a site down.

9.  Build and reference release versions of all assemblies

In addition to making sure ASP.NET is not configured in debug mode, also make sure that your assemblies are not debug assemblies.  There are of course exceptions if you are trying to solve a unique issue in your production environment ... but in most cases you should always deploy with release builds for all assemblies.

10.  Load test

This goes without saying.  Inevitably, good load testing will uncover threading and memory issues not otherwise considered.

 

 

Introduction
Performance tuning can be tricky. It's especially tough in Internet-related projects with lots of components running around, like HTML client, HTTP network, Web server, middle-tier components, database components, resource-management components, TCP/IP networks, and database servers. Performance tuning depends on a lot of parameters and sometimes, by changing a single parameter, performance can increase drastically.

 

This document lists some tips for optimizing ASP.NET Web applications and discusses a number of common traps and pitfalls:


Tips For Web Application


1) Turn off Tracing unless it is required
Tracing is a wonderful feature that lets us track the application's trace output and execution sequence. However, it is useful only to developers, so set it to "false" unless you need to monitor trace logging.
How it affects performance:
Enabling tracing adds performance overhead and might expose private information, so it should be enabled only while an application is being actively analyzed.
Solution:
When not needed, tracing can be turned off using
<trace enabled="false" requestLimit="10" pageOutput="false" traceMode="SortByTime" localOnly="true" />


2) Turn off Session State, if not required
One extremely powerful feature of ASP.NET is its ability to store session state for users, such as a shopping cart on an e-commerce site or a browser history.
How it affects performance:
Since ASP.NET manages session state by default, you pay the cost in memory even if you don't use it. That is, whether you store your data in-process, on a state server, or in a SQL Server database, session state consumes memory, and storing and retrieving data from it takes time.
Solution:
You may not require session state when your pages are static or when you do not need to store information captured in the page.
In such cases where you need not use session state, disable it on your web form using the directive,
<%@ Page EnableSessionState="false" %>
In case you use the session state only to retrieve data from it and not to update it, make the session state read only by using the directive,
<%@ Page EnableSessionState="ReadOnly" %>

3) Disable View State of a Page if possible
View state is a fancy name for ASP.NET storing some state data in a hidden input field inside the generated page. When the page is posted back to the server, the server can parse, validate, and apply this view state data back to the page's tree of controls.
View state is a very powerful capability since it allows state to be persisted with the client and it requires no cookies or server memory to save this state. Many ASP.NET server controls use view state to persist settings made during interactions with elements on the page, for example, saving the current page that is being displayed when paging through data.
How it affects performance:
There are a number of drawbacks to the use of view state, however:
• It increases the total payload of the page, both when served and when requested. There is also additional overhead incurred when serializing or deserializing view state data that is posted back to the server.
• View state increases the memory allocations on the server. Several server controls, the most well known of which is the DataGrid, tend to make excessive use of view state, even in cases where it is not needed.
Solution:
Pages that do not have any server postback events can have the view state turned off.
The default behavior of the ViewState property is enabled, but if you don't need it, you can turn it off at the control or page level. Within a control, simply set the EnableViewState property to false, or set it globally within the page using this setting:
<%@ Page EnableViewState="false" %>
If you turn view state off for a page or control, make sure you thoroughly test your pages to verify that they continue to function correctly.


4) Set debug=false in web.config
When you create the application, by default this attribute is set to "true" which is very useful while developing. However, when you are deploying your application, always set it to "false".
How it affects performance:
Setting it to "true" requires the pdb information to be inserted into the file and this results in a comparatively larger file and hence processing will be slow.
Solution:
Therefore, always set debug="false" before deployment.

5) Avoid Response.Redirect
The Response.Redirect() method simply tells the browser to visit another page.
How it affects performance:
Redirects are also very chatty. They should only be used when you are transferring people to another physical web server.
Solution:
For any transfers within your server, use Server.Transfer. You will save a lot of needless HTTP requests. Instead of telling the browser to redirect, it simply changes the "focus" on the Web server and transfers the request. This means you don't get quite as many HTTP requests coming through, which eases the pressure on your Web server and makes your applications run faster.

Tradeoffs:
• The Server.Transfer process works only for pages on the same server; you cannot use it to send the user to an external site. Only Response.Redirect can do that.
• Server.Transfer maintains the original URL in the browser. This can really help streamline data entry techniques, although it may make for confusion when debugging.
5. A) To reduce the CLR exception count, use Response.Redirect(".aspx", false) instead of Response.Redirect(".aspx").
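To make the difference concrete, here is a small C# sketch (the page names and button handlers are hypothetical):

protected void CheckoutButton_Click(object sender, EventArgs e)
{
    // Same application: hand the request to another page on the server
    // without sending an extra round trip to the browser.
    Server.Transfer("Checkout.aspx");
}

protected void PartnerLink_Click(object sender, EventArgs e)
{
    // Different site or server: a redirect is the only option.
    // Passing false avoids the ThreadAbortException raised by the
    // single-argument overload, as noted in 5. A) above.
    Response.Redirect("http://www.example.com/", false);
}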

6) Use the String builder to concatenate string
How it affects performance:
String is evil when you want to append and concatenate text to it. Each such operation leaves behind a separate string object in memory, which should be avoided as much as possible.
That is, when a string is modified, the runtime will create a new string and return it, leaving the original to be garbage collected. Most of the time this is a fast and simple way to do it, but when a string is being modified repeatedly it begins to be a burden on performance: all of those allocations eventually get expensive.
Solution:
Use StringBuilder whenever string concatenation is needed; it appends into an internal buffer instead of creating a new string object for every operation.
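A short C# illustration of the same idea (a fuller VB comparison with timings appears later in this article):

using System.Text;

// Builds "0,1,2,..." without allocating a new string on every pass.
static string BuildNumberList(int count)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < count; i++)
    {
        sb.Append(i);     // appends into StringBuilder's internal buffer
        sb.Append(',');
    }
    return sb.ToString(); // one final string is created here
}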


7) Avoid throwing exceptions
How it affects performance:
Exceptions are probably one of the heaviest resource hogs and causes of slowdowns you will ever see in web applications, as well as windows applications.
Solution:
You can use as many try/catch blocks as you want. Using exceptions gratuitously is where you lose performance. For example, you should stay away from things like using exceptions for control flow.
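For example (C#, with a hypothetical quantityTextBox control), testing the condition is far cheaper than catching the exception when bad input is common:

int quantity;

// Exception-based control flow: expensive whenever the input is invalid.
try
{
    quantity = int.Parse(quantityTextBox.Text);
}
catch (FormatException)
{
    quantity = 0;
}

// Condition-based alternative: no exception is ever thrown.
if (!int.TryParse(quantityTextBox.Text, out quantity))
{
    quantity = 0;
}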

8) Use Finally Method to kill resources
• The finally block gets executed regardless of the outcome of the try block.
• Always use the finally block to release resources, such as closing database connections and files, so that the cleanup runs whether the code in the try block succeeded or control passed to the catch block.
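A minimal C# sketch (connectionString is assumed to be defined elsewhere); note that the using statement compiles down to the same try/finally pattern:

SqlConnection connection = new SqlConnection(connectionString);
try
{
    connection.Open();
    // ... execute commands ...
}
finally
{
    connection.Close();   // runs whether or not the code above threw
}

// Equivalent, and usually preferred:
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    // ... execute commands ...
}   // Dispose (which closes the connection) is called here automatically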

9) Use Client Side Scripts for validations
User input is evil and must be thoroughly validated before processing, both to avoid overhead and to prevent possible injection attacks on your application.
How it improves performance:
Client-side validation can help reduce the round trips required to process a user's request. In ASP.NET you can also use the validation controls, which emit client-side script, to validate user input. However, always perform the same checks on the server side too, to cover the infamous JavaScript-disabled scenarios.

10) Avoid unnecessary round trips to the server
How it affects performance:
Round trips significantly affect performance. They are subject to network latency and to downstream server latency. Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
Solution:
• Keep round trips to an absolute minimum
• Implement Ajax UI whenever possible. The idea is to avoid full page refresh and only update the portion of the page that needs to be changed

11) Use Page.IsPostBack
Make sure you don't execute code needlessly. Use the Page.IsPostBack property to ensure that you only perform page initialization logic when a page is loaded for the first time and not in response to client postbacks.
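For example, in a C# code-behind (BindProductGrid is a hypothetical helper):

protected void Page_Load(object sender, EventArgs e)
{
    if (!Page.IsPostBack)
    {
        // Runs only on the first request for the page,
        // not again on every postback.
        BindProductGrid();
    }
}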

12) Include Return Statements with in the Function/Method
How it improves performance
Explicitly using return allows the JIT to perform slightly more optimizations. Without a return statement, each function/method is given several local variables on stack to transparently support returning values without the keyword. Keeping these around makes it harder for the JIT to optimize, and can impact the performance of your code. Look through your functions/methods and insert return as needed. It doesn't change the semantics of the code at all, and it can help you get more speed from your application.

13) Use Foreach loop instead of For loop for String Iteration
Foreach is far more readable, and in the future it will become as fast as a For loop for special cases like strings. Unless string manipulation is a real performance hog for you, the slightly messier code may not be worth it.
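A small C# example of iterating a string with foreach:

// Counts the spaces in a string; foreach yields each character in turn.
static int CountSpaces(string text)
{
    int spaces = 0;
    foreach (char c in text)
    {
        if (c == ' ')
        {
            spaces++;
        }
    }
    return spaces;
}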

14) Avoid Unnecessary Indirection
How it affects performance:
When you use ByRef, you pass a pointer to the variable instead of the actual value.
Many times this makes sense (side-effecting functions, for example), but you don't always need it. Passing pointers results in more indirection, which is slower than accessing a value that is on the stack.
Solution:
When you don't need to go through the heap, it is best to avoid it there by avoiding indirection.

15) Use "ArrayLists" in place of arrays
How it improves performance
An ArrayList has everything that is good about an array plus automatic sizing, Add, Insert, Remove, Sort, and BinarySearch. All these great helper methods are added when implementing the IList interface.
Tradeoffs:
The downside of an ArrayList is the need to cast objects upon retrieval.
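A quick C# sketch showing both the convenience and the casting tradeoff:

using System.Collections;

ArrayList names = new ArrayList();
names.Add("Ada");
names.Add("Grace");
names.Insert(0, "Alan");   // grows automatically, no manual resizing
names.Sort();

// The tradeoff: items come back out as object and must be cast on retrieval.
string first = (string)names[0];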

16) Always check Page.IsValid when using Validator Controls
Always make sure you check Page.IsValid before processing your forms when using Validator Controls.

17) Use Paging
Take advantage of paging's simplicity in .net. Only show small subsets of data at a time, allowing the page to load faster.
Tradeoffs:
Just be careful when you mix in caching. Don't cache all the data in the grid.

18) Store your content by using caching
How it improves performance:
ASP.NET allows you to cache entire pages, fragments of pages, or controls. You can also cache variable data by specifying the parameters that the data depends on. By using caching you help the ASP.NET engine return data for repeated requests for the same page much faster.
When and Why Use Caching:
Proper use and fine tuning of your caching approach will result in better performance and scalability for your site. However, improper use of caching will actually slow things down and consume a lot of your server's memory and processing.
Good candidates for caching are data that changes infrequently and static content of a web page.
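As one example of the data-caching side, here is a hedged C# sketch (the cache key, duration, and LoadProductsFromDatabase helper are made up for illustration):

using System;
using System.Data;
using System.Web;
using System.Web.Caching;

// Look in the cache first; hit the database only on a cache miss.
DataTable products = (DataTable)HttpContext.Current.Cache["ProductList"];
if (products == null)
{
    products = LoadProductsFromDatabase();           // hypothetical helper
    HttpContext.Current.Cache.Insert(
        "ProductList",
        products,
        null,                                        // no cache dependency
        DateTime.Now.AddMinutes(10),                 // absolute expiration
        Cache.NoSlidingExpiration);
}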

19) Use low cost authentication
Authentication can also have an impact on the performance of your application. For example, Passport authentication is slower than forms-based authentication, which in turn is slower than Windows authentication.

20) Minimize the number of web server controls
How it affects performance:
The use of web server controls increases the response time of your application because they need to be processed on the server side before they are rendered on the client side.
Solution:
One way to minimize the number of web server controls is to use plain HTML elements where they are suited, for example when you just want to display static text.

21) Avoid using unmanaged code
How it affects performance:
Calls to unmanaged code involve a costly marshaling operation.
Solution:
Try to reduce the number of calls between managed and unmanaged code. Consider doing more work in each call rather than making frequent calls to do small tasks.

22) Avoid making frequent calls across processes
If you are working with distributed applications, this involves additional overhead negotiating network and application level protocols. In this case network speed can also be a bottleneck. Try to do as much work as possible in fewer calls over the network.

23) Cleaning Up Style Sheets and Script Files
• A quick and easy way to improve your web application's performance is by going back and cleaning up your CSS style sheets and script files of unnecessary code or old styles and functions. It is common for old styles and functions to still exist in your style sheets and script files during development cycles and when improvements are made to a website.
• Many websites use a single CSS style sheet or script file for the entire website. Sometimes, just going through these files and cleaning them up can improve the performance of your site by reducing the page size. If you are referencing images in your style sheet that are no longer used on your website, it's a waste of performance to leave them in there and have them loaded each time the style sheet is loaded.
• Run a web page analyzer against pages in your website so that you can see exactly what is being loaded and what takes the most time to load.

24) Design with ValueTypes
Use simple structs when you can, and when you don't do a lot of boxing and unboxing.
Tradeoffs:
ValueTypes are far less flexible than Objects, and end up hurting performance if used incorrectly. You need to be very careful about when you treat them like objects. This adds extra boxing and unboxing overhead to your program, and can end up costing you more than it would if you had stuck with objects.

25) Minimize assemblies
Minimize the number of assemblies you use to keep your working set small. If you load an entire assembly just to use one method, you're paying a tremendous cost for very little benefit. See if you can duplicate that method's functionality using code that you already have loaded.

26) Encode Using ASCII When You Don't Need UTF
By default, ASP.NET comes configured to encode requests and responses as UTF-8.
If ASCII is all your application needs, eliminating the UTF-8 overhead can give you back a few cycles. Note that this can only be done on a per-application basis.

27) Avoid Recursive Functions / Nested Loops
These are general points that apply in any programming language; both can consume a lot of memory. To improve performance, avoid nested loops and recursive functions wherever a simpler alternative exists.

28) Minimize the Use of Format ()
When you can, use ToString() instead of String.Format(). In most cases, it will provide you with the functionality you need with much less overhead.

29) Place StyleSheets into the Header
Web developers who care about performance want the browser to load whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users with slow Internet connections. When the browser loads the page progressively, the header, the logo, and the navigation components serve as visual feedback for the user.
When style sheets are placed near the bottom of the HTML, many browsers delay rendering to avoid having to redraw elements of the page if their styles change, which hurts the perceived performance of the page. So, always place StyleSheets into the header.

30) Put Scripts to the end of Document
Unlike StyleSheets, scripts are better placed at the end of the document. Progressive rendering is blocked until all StyleSheets have been downloaded, and scripts cause progressive rendering to stop for all content below the script until it is fully loaded. Moreover, while downloading a script, the browser does not start any other component downloads, even on different hostnames.
So, always put scripts at the end of the document.

31) Make JavaScript and CSS External
Using external files generally produces faster pages because the JavaScript and CSS files are cached by the browser. Inline JavaScript and CSS increases the HTML document size but reduces the number of HTTP requests. With cached external files, the size of the HTML is kept small without increasing the number of HTTP requests thus improving the performance.


Tips For Database Operations


1) Return Multiple Resultsets
If your database code has request paths that go to the database more than once, these round trips decrease the number of requests per second your application can serve.
Solution:
Return multiple resultsets in a single database request, so that you can cut the total time spent communicating with the database. You'll be making your system more scalable, too, as you'll cut down on the work the database server is doing managing requests.
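Here is a C# sketch of the idea (the SQL text and connectionString are placeholders); SqlDataReader.NextResult moves to the next resultset returned by the same batch:

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
           "SELECT * FROM Customers; SELECT * FROM Orders;", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process customer rows
        }

        reader.NextResult();      // advance to the second resultset

        while (reader.Read())
        {
            // process order rows
        }
    }
}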

2) Connection Pooling and Object Pooling
Connection pooling is a useful way to reuse connections for multiple requests, rather than paying the overhead of opening and closing a connection for each request. It's done implicitly, but you get one pool per unique connection string. Make sure you call Close or Dispose on a connection as soon as possible. When pooling is enabled, calling Close or Dispose returns the connection to the pool instead of closing the underlying database connection.
Account for the following issues when pooling is a part of your design (a short sketch follows the list):
• Share connections
• Avoid per-user logons to the database
• Do not vary connection strings
• Do not cache connections
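A minimal C# sketch of the "open late, close early" pattern that keeps the pool effective (connectionString is assumed to be a single, consistent string):

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();   // served from the pool if a connection is available

    using (SqlCommand command =
               new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
    {
        int orderCount = (int)command.ExecuteScalar();
    }
}   // Dispose returns the connection to the pool rather than closing it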

3) Use SqlDataReader Instead of Dataset wherever it is possible
If you are reading a table sequentially, use a DataReader rather than a DataSet. A DataReader object creates a read-only, forward-only stream of data that will increase your application's performance because only one row is in memory at a time.

4) Keep Your Datasets Lean
Remember that the dataset stores all of its data in memory, and that the more data you request, the longer it will take to transmit across the wire.
Therefore, only put the records you need into the dataset.

5) Avoid Inefficient queries
How it affects performance:
Queries that process and then return more columns or rows than necessary waste processing cycles that could better be used for servicing other requests.

Cause of Inefficient queries:
• Too much data in your results is usually the result of inefficient queries.
• The SELECT * query often causes this problem. You do not usually need to return all the columns in a row. Also, analyze the WHERE clause in your queries to ensure that you are not returning too many rows. Try to make the WHERE clause as specific as possible to ensure that the least number of rows are returned.
• Queries that do not take advantage of indexes may also cause poor performance.

6) Unnecessary round trips
How it affects performance:
Round trips significantly affect performance. They are subject to network latency and to downstream server latency. Many data-driven Web sites heavily access the database for every user request. While connection pooling helps, the increased network traffic and processing load on the database server can adversely affect performance.
Solution:
Keep round trips to an absolute minimum.

7) Too many open connections
Connections are an expensive and scarce resource, which should be shared between callers by using connection pooling. Opening a connection for each caller limits scalability.
Solution:
To ensure the efficient use of connection pooling, avoid keeping connections open and avoid varying connection strings.

8) Avoid Transaction misuse
How it affects performance:
If you select the wrong type of transaction management, you may add latency to each operation. Additionally, if you keep transactions active for long periods of time, the active transactions may cause resource pressure.
Solution:
Transactions are necessary to ensure the integrity of your data, but you need to ensure that you use the appropriate type of transaction for the shortest duration possible and only where necessary.

9) Avoid over-normalized tables
Over-normalized tables may require excessive joins for simple operations. These additional steps may significantly affect the performance and scalability of your application, especially as the number of users and requests increases.

10) Reduce Serialization
Dataset serialization is more efficiently implemented in .NET Framework version 1.1 than in version 1.0. However, Dataset serialization often introduces performance bottlenecks.
You can reduce the performance impact in a number of ways:
• Use column name aliasing
• Avoid serializing multiple versions of the same data
• Reduce the number of DataTable objects that are serialized

11) Do Not Use CommandBuilder at Run Time
How it affects performance:
CommandBuilder objects such as SqlCommandBuilder and OleDbCommandBuilder are useful when you are designing and prototyping your application. However, you should not use them in production applications. The processing required to generate the commands affects performance.
Solution:
Manually create stored procedures for your commands, or use the Visual Studio® .NET design-time wizard and customize them later if necessary.

12) Use Stored Procedures Whenever Possible
• Stored procedures are highly optimized tools that result in excellent performance when used effectively.
• Set up stored procedures to handle inserts, updates, and deletes with the data adapter.
• Stored procedures do not have to be interpreted, compiled, or even transmitted from the client, and they cut down on both network traffic and server overhead.
• Be sure to use CommandType.StoredProcedure instead of CommandType.Text (see the sketch after this list).
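A C# sketch of calling a stored procedure this way (the GetCustomerOrders procedure and its parameter are hypothetical):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("GetCustomerOrders", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.Add("@CustomerID", SqlDbType.Int).Value = customerId;

    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process rows
        }
    }
}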

13) Avoid Auto-Generated Commands
When using a data adapter, avoid auto-generated commands. These require additional trips to the server to retrieve meta data, and give you a lower level of interaction control. While using auto-generated commands is convenient, it's worth the effort to do it yourself in performance-critical applications.

14) Use Sequential Access as Often as Possible
With a data reader, use CommandBehavior.SequentialAccess. This is essential for dealing with blob data types since it allows data to be read off the wire in small chunks. While you can only work with one piece of the data at a time, the latency for loading a large data type disappears. If you don't need to work with the whole object at once, SequentialAccess will give you much better performance.
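A hedged C# sketch of streaming a blob column in chunks (the table, column, and buffer size are arbitrary examples):

using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand(
           "SELECT Photo FROM Employees WHERE EmployeeID = @id", connection))
{
    command.Parameters.Add("@id", SqlDbType.Int).Value = employeeId;
    connection.Open();

    using (SqlDataReader reader =
               command.ExecuteReader(CommandBehavior.SequentialAccess))
    {
        if (reader.Read())
        {
            byte[] buffer = new byte[8192];
            long offset = 0;
            long bytesRead;

            // Read the blob off the wire in 8 KB chunks instead of
            // materializing the whole value in memory at once.
            while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
            {
                // write the chunk to a file, stream, or the response here
                offset += bytesRead;
            }
        }
    }
}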


Tips for Asp.Net applications developed using VB


1) Enable Option Strict and Option Explicit for your pages
With Option Strict on, you protect yourself from inadvertent late binding and enforce a higher level of coding discipline.

2) Use early binding in Visual Basic or JScript code
Visual Basic 6 does a lot of work under the hood to support casting of objects, and many programmers aren't even aware of it. In Visual Basic 7 this is an area out of which you can squeeze a lot of performance.
Solution:
When you compile, use early binding. This tells the compiler to insert type coercions only where they are explicitly requested.
This has two major effects:
• Strange errors become easier to track down.
• Unneeded coercions are eliminated, leading to substantial performance improvements.
When you use an object as if it were of a different type, Visual Basic will coerce the object for you if you don't specify the conversion. This is handy, since the programmer has less code to worry about.

3) Put Concatenations in One Expression
If you have multiple concatenations on multiple lines, try to stick them all on one expression. The compiler can optimize by modifying the string in place, providing a speed and memory boost. If the statements are split into multiple lines, the Visual Basic compiler will not generate the Microsoft Intermediate Language (MSIL) to allow in-place concatenation.

Summary
When we talk about ASP.NET performance, there are many factors in play.
The points discussed above are the most critical of the speed improvements you can make in ASP.NET, and they will have a dramatic impact on the user experience of your web application. If you like this article, subscribe to our RSS Feed. You can also subscribe via email to our Interview Questions, Codes and Forums section.

 

 

Introduction

ASP.NET is much more powerful than classic ASP; however, it is important to understand how to use that power to build highly efficient, reliable and robust applications. In this article, I have tried to highlight the key tips you can use to maximize the performance of your ASP.NET pages. The list could be much longer; I am only emphasizing the most important ones.

1. Plan and research before you develop

Research and investigate how .NET can really benefit you. .NET offers a variety of solutions on each level of application design and development. It is imperative that you understand your situation and pros and cons of each approach supported by this rich development environment. Visual Studio is a comprehensive development package and offers many options to implement the same logic. It is really important that you examine each option and find the best optimal solution suited for the task at hand. Use layering to logically partition your application logic into presentation, business, and data access layers. It will not only help you create maintainable code, but also permits you to monitor and optimize the performance of each layer separately. A clear logical separation also offers more choices for scaling your application. Try to reduce the amount of code in your code-behind files to improve maintenance and scalability.

2. String concatenation

If not handled properly, String Concatenation can really decrease the performance of your application. You can concatenate strings in two ways.

  • First, by using the String class and adding the new string to an existing string. However, this operation is really expensive (especially if you are concatenating the string within a loop). When you add a string to an existing string, the Framework copies both the existing and new data to a new memory location and leaves the old string for the garbage collector. This operation can be very time consuming and costly in lengthy string concatenation operations.
  • The second and better way to concatenate strings is using the StringBuilder Class. Below is an example of both approaches. If you are considering doing any type of String Concatenation, please do yourself a favor and test both routines separately. You may be surprised at the results.


'Concatenation using String Class 
 
Response.Write("<b>String Class</b>")
Dim str As String = ""
Dim startTime As DateTime = DateTime.Now
Response.Write(("<br>Start time:" + startTime.ToString()))
Dim i As Integer
For i = 0 To 99999
    str += i.ToString()
Next i
Dim EndTime As DateTime = DateTime.Now
Response.Write(("<br>End time:" + EndTime.ToString()))
Response.Write(("<br># of time Concatenated: " + i.ToString))

Results: Took 4 minutes and 23 seconds to complete 100,000 concatenations.

String Class

    • Start time: 2/15/2006 10:21:24 AM
    • End time: 2/15/2006 10:25:47 AM
    • # of time Concatenated: 100000


'Concatenation using StringBuilder
 
 Response.Write("<b>StringBuilder Class</b>")
 Dim strbuilder As New StringBuilder()
 Dim startTime As DateTime = DateTime.Now
 Response.Write(("<br>Start time:" + startTime.ToString()))
 Dim i As Integer
 For i = 0 To 99999
     strbuilder.Append(i.ToString())
 Next i
 Dim EndTime As DateTime = DateTime.Now
 Response.Write(("<br>Stop time:" + EndTime.ToString()))
 Response.Write(("<br># of time Concatenated: " + i.ToString))

Results: Took less than a second to complete 100,000 concatenations.

StringBuilder Class

    • Start time: 2/15/2006 10:31:22 AM
    • Stop time:2/15/2006 10:31:22 AM
    • # of time Concatenated: 100000

This is one of the many situations in which ASP.NET provides extremely high performance benefits over classic ASP.

3. Avoid round trips to the server

You can avoid needless round trips to the Web Server using the following tips:

  • Implement Ajax UI whenever possible. The idea is to avoid full page refresh and only update the portion of the page that needs to be changed. I think Scott's article gave great information on how to implement Ajax Atlas and <atlas:updatepanel> control.
  • Use Client Side Scripts. Client-side validation can help reduce the round trips that are required to process a user's request. In ASP.NET you can also use client-side controls to validate user input.
  • Use the Page.IsPostBack property to ensure that you only perform page initialization logic when a page is loaded the first time and not in response to client postbacks.


If Not IsPostBack Then
    LoadJScripts()
End If
  • In some situations performing postback event handling is unnecessary. You can use client callbacks to read data from the server instead of performing a full round trip. Click here for details.

4. Save viewstate only when necessary

ViewState is used primarily by Server controls to retain state only on pages that post data back to themselves. The information is passed to the client and read back in a hidden variable. ViewState is an unnecessary overhead for pages that do not need it. As the ViewState grows larger, it affects the performance of garbage collection. You can optimize the way your application uses ViewState by following these tips:

Situation when you don't need ViewState

ViewState is turned on in ASP.NET by default. You might not need ViewState because your page is output-only or because you explicitly reload data for each request. You do not need ViewState in the following situations:

  • Your page does not post back. If the page does not post information back to itself, if the page is only used for output, and if the page does not rely on response processing, you do not need ViewState.
  • You do not handle server control events. If your server controls do not handle events, and if your server controls have no dynamic or data bound property values, or they are set in code on every request, you do not need ViewState.
  • You repopulate controls with every page refresh. If you ignore old data, and if you repopulate the server control each time the page is refreshed, you do not need ViewState.

Disabling viewstate

There are several ways to disable ViewState at various levels:

  • To disable ViewState for a single control on a page, set the EnableViewState property of the control to false.
  • To disable ViewState for a single page, set the EnableViewState attribute in the @ Page directive to false. i.e.


<%@ Page EnableViewState="false" %> 
  • To disable ViewState for a specific application, use the following element in the Web.config file of the application:


<pages enableViewState="false" />
  • To disable ViewState for all applications on a Web server, configure the <pages> element in the Machine.config file as follows:


<pages enableViewState="false" />

Determine the size of your ViewState

By enabling tracing for the page, you can monitor the ViewState size for each control. You can use this information to determine the optimal size of the ViewState or if there are controls in which the ViewState can be disabled.

5. Use session variables carefully

Avoid storing too much data in session variables, and make sure your session timeout is reasonable; otherwise sessions can use a significant amount of server memory. Keep in mind that data stored in session variables can hang around long after the user closes the browser. Too many session variables can bring the server to its knees. Disable session state if you are not using session variables in a particular page or application.

  • To disable session state for a page, set the EnableSessionState attribute in the @ Page directive to false, i.e.


<%@ Page EnableSessionState="false" %> 
  • If a page requires access to session variables but will not create or modify them, set the EnableSessionState attribute in the @ Page directive to ReadOnly, i.e.


<%@ Page EnableSessionState="ReadOnly" %>   
  • To disable session state for a specific application, use the following element in the Web.config file of the application.


<sessionState mode='Off'/>
  • To disable session state for all applications on your Web server, use the following element in the Machine.config file:


<sessionState mode='Off'/>

6. Use Server.Transfer

Use the Server.Transfer method to redirect between pages in the same application. Using Server.Transfer avoids an unnecessary client-side redirection, so consider it instead of Response.Redirect. However, you cannot always simply replace Response.Redirect calls with Server.Transfer. If you need authentication and authorization checks during redirection, use Response.Redirect instead, because the two mechanisms are not equivalent. When you do use Response.Redirect, use the overloaded method that accepts a Boolean second parameter and pass a value of false to ensure an internal exception is not raised. Also note that you can only use Server.Transfer to transfer control to pages in the same application; to transfer to pages in other applications, you must use Response.Redirect.

7. Use server controls when appropriate and avoid creating deeply nested controls

The HTTP protocol is stateless; however, server controls provide a rich programming model that manages state between page requests by using ViewState. Nothing comes for free, though: server controls require a fixed amount of processing to establish the control and all of its child controls, which makes them relatively expensive compared to HTML controls or static text. When you do not need rich interaction, replace server controls with an inline representation of the user interface that you want to present. It is better to replace a server control if:

  • You do not need to retain state across postbacks
  • The data that appears in the control is static or control displays read-only data
  • You do not need programmatic access to the control on the server-side

Alternatives to server controls include simple rendering, HTML elements, inline Response.Write calls, and raw inline angle brackets (<% %>). It is essential to balance your tradeoffs. Avoid over optimization if the overhead is acceptable and if your application is within the limits of its performance objectives.

Deeply nested hierarchies of controls compound the cost of creating a server control and its child controls. Deeply nested hierarchies create extra processing that could be avoided by using a different design that uses inline controls, or by using a flatter hierarchy of server controls. This is especially important when you use controls such as Repeater, DataList, and DataGrid because they create additional child controls in the container.

8. Choose the data viewing control appropriate for your solution

Depending on how you choose to display data in a Web Forms page, there are often significant tradeoffs between convenience and performance. Always compare the pros and cons of controls before you use them in your application. For example, you can choose any of these three controls (DataGrid, DataList and Repeater) to display data, it's your job to find out which control will provide you maximum benefit. The DataGrid control can be a quick and easy way to display data, but it is frequently the most expensive in terms of performance. Rendering the data yourself by generating the appropriate HTML may work in some simple cases, but customization and browser targeting can quickly offset the extra work involved. A Repeater Web server control is a compromise between convenience and performance. It is efficient, customizable, and programmable.

9. Optimize code and exception handling

To optimize expensive loops, use For instead of ForEach in performance-critical code paths. Also, do not rely on exceptions in your code; write code that avoids them. Since exceptions cause performance to suffer significantly, you should never use them as a way to control normal program flow. If it is possible to detect in code a condition that would cause an exception, do so rather than catching the exception after the fact. Do not use exceptions to control logic. A database connection that fails to open is an exception, but a user who mistypes his password is simply a condition that needs to be handled. Common scenarios include checking for null, validating a String that will be parsed into a numeric value, or checking for specific values before applying math operations. The following example demonstrates code that could cause an exception and code that tests for a condition. Both produce the same result.


'Unnecessary use of exception
 
 Try
     value = 100 / number
Catch ex As Exception
    value = 0
End Try
 
' Recommended code
 
If Not number = 0 Then
    value = 100 / number
Else
    value = 0
End If

Check for null values. If it is possible for an object to be null, check to make sure it is not null rather than throwing an exception. This commonly occurs when you retrieve items from ViewState, session state, application state, or cache objects, as well as query string and form field variables. For example, do not use the following code to access session state information.


'Unnecessary use of exception
 
Try
    value = HttpContext.Current.Session("Value").ToString
Catch ex As Exception
    Response.Redirect("Main.aspx", False)
End Try
 
'Recommended code 
 
If Not HttpContext.Current.Session("Value") Is Nothing Then
    value = HttpContext.Current.Session("Value").ToString
Else
    Response.Redirect("Main.aspx", False)
End If

10. Use a DataReader for fast and efficient data binding

Use a DataReader object if you do not need to cache data, if you are displaying read-only data, and if you need to load data into a control as quickly as possible. The DataReader is the optimum choice for retrieving read-only data in a forward-only manner. Loading the data into a DataSet object and then binding the DataSet to the control moves the data twice. This method also incurs the relatively significant expense of constructing a DataSet. In addition, when you use the DataReader, you can use the specialized type-specific methods to retrieve the data for better performance.

11. Use paging efficiently

Allowing users to request and retrieve more data than they can consume puts an unnecessary strain on your application resources. This unnecessary strain causes increased CPU utilization, increased memory consumption, and decreased response times. This is especially true for clients that have a slow connection speed. From a usability standpoint, most users do not want to see thousands of rows presented as a single unit. Implement a paging solution that retrieves only the desired data from the database and reduces back-end work on the database. You should optimize the number of rows returned by the Database Server to the middle-tier web-server. For more information read this article to implement paging at the Database level. If you are using SQL Server 2000, please also look at this article.

12. Explicitly Dispose or Close all the resources

To guarantee resources are cleaned up when an exception occurs, use a try/finally block. Close the resources in the finally clause. Using a try/finally block ensures that resources are disposed even if an exception occurs. Open your connection just before needing it, and close it as soon as you're done with it. Your motto should always be "get in, get/save data, get out." If you use different objects, make sure you call the Dispose method of the object or the Close method if one is provided. Failing to call Close or Dispose prolongs the life of the object in memory long after the client stops using it. This defers the cleanup and can contribute to memory pressure. Database connection and files are examples of shared resources that should be explicitly closed.


Try
    _con.Open()
Catch ex As Exception
    Throw ex
Finally
    If Not _con Is Nothing Then
        _con.Close()
    End If
End Try

13. Disable tracing and debugging

Before you deploy your application, disable tracing and debugging. Both add overhead and are not recommended while your application is running in production. You can disable tracing and debugging in Machine.config and Web.config using the syntax below:


<configuration>
   <system.web>
      <trace enabled="false" pageOutput="false" />
      <compilation debug="false" />
   </system.web>
</configuration>

14. Precompile pages and disable AutoEventWireup

By precompiling pages, users do not have to experience the batch compilation of your ASP.NET files on the first request, which improves the performance your users will experience.

In addition, setting the AutoEventWireup attribute to false in the Machine.config file means that the page will not match method names to events and hook them up (for example, Page_Load). If page developers want to use these events, they will need to override the methods in the base class (for example, they will need to override Page.OnLoad for the page load event instead of using a Page_Load method). If you disable AutoEventWireup, your pages will get a slight performance boost by leaving the event wiring to the page author instead of performing it automatically.
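For example, with AutoEventWireup turned off, a C# page might look like this (the class and helper names are hypothetical):

using System;

public class ProductPage : System.Web.UI.Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Code that would normally live in Page_Load goes here once
        // automatic event wiring is disabled.
        if (!IsPostBack)
        {
            BindProductGrid();   // hypothetical first-request initialization
        }
    }
}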

15. Use stored procedures and indexes

In most cases you can get an additional performance boost by using compiled stored procedures instead of ad hoc queries.

Make sure you index your tables, and choose your indexes wisely. Try using Index Tuning Wizard and have it report to you what it thinks the best candidates for indexes would be. You don't have to follow all of its suggestions, but it may reveal things about your structure or data that will help you choose more appropriate indexes.

  • In SQL Server Management Studio (SQL Server 2005), highlight your query. Now from the Query menu, click Analyze Query in Database Engine Tuning Advisor.
  • You can do something similar in SQL Server 2000 to run the Index Tuning Wizard: in Query Analyzer, highlight your query, then from the Query menu, click Index Tuning Wizard.

 

ASP.NET 2.0 Security Best Practices - Must Read Article on MSDN

I printed out this fantastic article on MSDN, called Security Practices: ASP.NET 2.0 Security Practices at a Glance.  If you do nothing else this weekend, I recommend you check out the article here and see where you can improve the security of your applications.

Here are just a few of the items worth noting.  I hope to go into them all in more detail in future posts:

 

Use PrincipalPermission to Demand Role-Based Security

[PrincipalPermission(SecurityAction.Demand, Role="Admin")]
public class AdminOnlyPage : BasePage
{
   // ...
}

 

Securing a Particular Directory in ASP.NET for Specific Roles

<location path="Secure" >
  <system.web>
    <authorization>
      <deny users="?" />
    </authorization>
  </system.web>
</location>

 

Prevent SQL Injection by Using SqlParameters

using System.Data;
using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
  DataSet userDataset = new DataSet();
  SqlDataAdapter myCommand = new SqlDataAdapter(
             "LoginStoredProcedure", connection);
  myCommand.SelectCommand.CommandType = CommandType.StoredProcedure;
  myCommand.SelectCommand.Parameters.Add("@au_id", SqlDbType.VarChar, 11);
  myCommand.SelectCommand.Parameters["@au_id"].Value = SSN.Text;

  myCommand.Fill(userDataset);
}

 

Turn On Custom Errors To Keep Errors Private

<customErrors mode="On" defaultRedirect="YourErrorPage.htm" />


 

Create a Global Error Handler for Your ASP.NET Applications

<%@ Application Language="C#" %>
<%@ Import Namespace="System.Diagnostics" %>

<script language="C#" runat="server">
void Application_Error(object sender, EventArgs e)
{
   //get reference to the source of the exception chain
   Exception ex = Server.GetLastError().GetBaseException();

   //log the details of the exception and page state to the
   //Event Log
   EventLog.WriteEntry("My Web Application",
     "MESSAGE: " + ex.Message +
     "\nSOURCE: " + ex.Source +
     "\nFORM: " + Request.Form.ToString() +
     "\nQUERYSTRING: " + Request.QueryString.ToString() +
     "\nTARGETSITE: " + ex.TargetSite +
     "\nSTACKTRACE: " + ex.StackTrace,
     EventLogEntryType.Error);

   //Optional email or other notification here...
}
</script>


 

Prevent Cross-Site Scripting Using HtmlEncode and UrlEncode

Response.Write(HttpUtility.HtmlEncode(Request.Form["name"]));

Response.Write(HttpUtility.UrlEncode(urlString));

// Encode the string input from the HTML input text field
StringBuilder sb = new StringBuilder(HttpUtility.HtmlEncode(htmlInputTxt.Text));
// Selectively allow <b> and <i>
sb.Replace("&lt;b&gt;", "<b>");
sb.Replace("&lt;/b&gt;", "</b>");
sb.Replace("&lt;i&gt;", "<i>");
sb.Replace("&lt;/i&gt;", "</i>");


 

The article contains a lot of other great words of wisdom when securing your applications.  I recommend reading the article this weekend and implementing those that make sense in your applications.

 

 

Encrypting Custom Configuration Sections

The ASP.NET IIS Registration Tool (Aspnet_regiis.exe) can encrypt and decrypt sections of web.config. There is no special code required in an application, as ASP.NET 2.0 will magically decrypt sections at runtime.

The tool and runtime can also work together to encrypt and decrypt custom configuration sections. So if I have the following in web.config:

<configSections>
   <section
      name="MySecrets"
      type="System.Configuration.SingleTagSectionHandler"
   />
</configSections>

<MySecrets
   FavoriteMusic="Disco"
   FavoriteLanguage="COBOL"
   DreamJob="Dancing in the opening ceremonies of the Olympics"
/>

All I need to do from the command line, is:

aspnet_regiis -pef MySecrets .

It’s easier than a double pirouette…

 

 

Best Practices for Speeding Up Your Web Site

The Exceptional Performance team has identified a number of best practices for making web pages fast. The list includes 35 best practices divided into 7 categories.

The seven categories are Content, Server, Cookie, CSS, JavaScript, Images, and Mobile.

Minimize HTTP Requests

tag: content

80% of the end-user response time is spent on the front-end. Most of this time is tied up in downloading all the components in the page: images, stylesheets, scripts, Flash, etc. Reducing the number of components in turn reduces the number of HTTP requests required to render the page. This is the key to faster pages.

One way to reduce the number of components in the page is to simplify the page's design. But is there a way to build pages with richer content while also achieving fast response times? Here are some techniques for reducing the number of HTTP requests, while still supporting rich page designs.

Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

CSS Sprites are the preferred method for reducing the number of image requests. Combine your background images into a single image and use the CSS background-image and background-position properties to display the desired image segment.

Image maps combine multiple images into a single image. The overall size is about the same, but reducing the number of HTTP requests speeds up the page. Image maps only work if the images are contiguous in the page, such as a navigation bar. Defining the coordinates of image maps can be tedious and error prone, and using image maps for navigation is not accessible either, so it's not recommended.

Inline images use the data: URL scheme to embed the image data in the actual page. This can increase the size of your HTML document. Combining inline images into your (cached) stylesheets is a way to reduce HTTP requests and avoid increasing the size of your pages. Inline images are not yet supported across all major browsers.

Reducing the number of HTTP requests in your page is the place to start. This is the most important guideline for improving performance for first time visitors. As described in Tenni Theurer's blog post Browser Cache Usage - Exposed!, 40-60% of daily visitors to your site come in with an empty cache. Making your page fast for these first time visitors is key to a better user experience.


Use a Content Delivery Network

tag: server

The user's proximity to your web server has an impact on response times. Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective. But where should you start?

As a first step to implementing geographically dispersed content, don't attempt to redesign your web application to work in a distributed architecture. Depending on the application, changing the architecture could include daunting tasks such as synchronizing session state and replicating database transactions across server locations. Attempts to reduce the distance between users and your content could be delayed by, or never pass, this application architecture step.

Remember that 80-90% of the end-user response time is spent downloading all the components in the page: images, stylesheets, scripts, Flash, etc. This is the Performance Golden Rule. Rather than starting with the difficult task of redesigning your application architecture, it's better to first disperse your static content. This not only achieves a bigger reduction in response times, but it's easier thanks to content delivery networks.

A content delivery network (CDN) is a collection of web servers distributed across multiple locations to deliver content more efficiently to users. The server selected for delivering content to a specific user is typically based on a measure of network proximity. For example, the server with the fewest network hops or the server with the quickest response time is chosen.

Some large Internet companies own their own CDN, but it's cost-effective to use a CDN service provider, such as Akamai Technologies, Mirror Image Internet, or Limelight Networks. For start-up companies and private web sites, the cost of a CDN service can be prohibitive, but as your target audience grows larger and becomes more global, a CDN is necessary to achieve fast response times. At Yahoo!, properties that moved static content off their application web servers to a CDN improved end-user response times by 20% or more. Switching to a CDN is a relatively easy code change that will dramatically improve the speed of your web site.


Add an Expires or a Cache-Control Header

tag: server

There are two aspects to this rule:

  • For static components: implement "Never expire" policy by setting far future Expires header
  • For dynamic components: use an appropriate Cache-Control header to help the browser with conditional requests

 

Web page designs are getting richer and richer, which means more scripts, stylesheets, images, and Flash in the page. A first-time visitor to your page may have to make several HTTP requests, but by using the Expires header you make those components cacheable. This avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often used with images, but they should be used on all components including scripts, stylesheets, and Flash components.

Browsers (and proxies) use a cache to reduce the number and size of HTTP requests, making web pages load faster. A web server uses the Expires header in the HTTP response to tell the client how long a component can be cached. This is a far future Expires header, telling the browser that this response won't be stale until April 15, 2010.

      Expires: Thu, 15 Apr 2010 20:00:00 GMT

 

If your server is Apache, use the ExpiresDefault directive to set an expiration date relative to the current date. This example of the ExpiresDefault directive sets the Expires date 10 years out from the time of the request.

      ExpiresDefault "access plus 10 years"

 

Keep in mind, if you use a far future Expires header you have to change the component's filename whenever the component changes. At Yahoo! we often make this step part of the build process: a version number is embedded in the component's filename, for example, yahoo_2.0.6.js.

Using a far future Expires header affects page views only after a user has already visited your site. It has no effect on the number of HTTP requests when a user visits your site for the first time and the browser's cache is empty. Therefore the impact of this performance improvement depends on how often users hit your pages with a primed cache. (A "primed cache" already contains all of the components in the page.) We measured this at Yahoo! and found the number of page views with a primed cache is 75-85%. By using a far future Expires header, you increase the number of components that are cached by the browser and re-used on subsequent page views without sending a single byte over the user's Internet connection.
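
If you serve components from your own application code rather than letting the web server set these headers, the two cases above look roughly like this Node.js sketch (an illustration with assumed paths, not the article's configuration):

      // Hypothetical handler: far-future caching for versioned static files,
      // revalidation for dynamic HTML.
      var http = require('http');

      http.createServer(function (req, res) {
          if (req.url.indexOf('/static/') === 0) {
              // Static, versioned component (e.g. /static/yahoo_2.0.6.js):
              // safe to cache "forever" because the filename changes on every release.
              res.setHeader('Expires', new Date(Date.now() + 315360000000).toUTCString()); // ~10 years
              res.setHeader('Cache-Control', 'public, max-age=315360000');
              res.end('/* static component */');
          } else {
              // Dynamic HTML: let the browser revalidate with conditional requests.
              res.setHeader('Cache-Control', 'private, must-revalidate');
              res.end('<html><body>dynamic page</body></html>');
          }
      }).listen(8080);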


Gzip Components

tag: server

The time it takes to transfer an HTTP request and response across the network can be significantly reduced by decisions made by front-end engineers. It's true that the end-user's bandwidth speed, Internet service provider, proximity to peering exchange points, etc. are beyond the control of the development team. But there are other variables that affect response times. Compression reduces response times by reducing the size of the HTTP response.

Starting with HTTP/1.1, web clients indicate support for compression with the Accept-Encoding header in the HTTP request.

      Accept-Encoding: gzip, deflate

 

If the web server sees this header in the request, it may compress the response using one of the methods listed by the client. The web server notifies the web client of this via the Content-Encoding header in the response.

      Content-Encoding: gzip

 

Gzip is the most popular and effective compression method at this time. It was developed by the GNU project and standardized by RFC 1952. The only other compression format you're likely to see is deflate, but it's less effective and less popular.

Gzipping generally reduces the response size by about 70%. Approximately 90% of today's Internet traffic travels through browsers that claim to support gzip. If you use Apache, the module configuring gzip depends on your version: Apache 1.3 uses mod_gzip while Apache 2.x uses mod_deflate.

There are known issues with browsers and proxies that may cause a mismatch in what the browser expects and what it receives with regard to compressed content. Fortunately, these edge cases are dwindling as the use of older browsers drops off. The Apache modules help out by adding appropriate Vary response headers automatically.

Servers choose what to gzip based on file type, but are typically too limited in what they decide to compress. Most web sites gzip their HTML documents. It's also worthwhile to gzip your scripts and stylesheets, but many web sites miss this opportunity. In fact, it's worthwhile to compress any text response including XML and JSON. Image and PDF files should not be gzipped because they are already compressed. Trying to gzip them not only wastes CPU but can potentially increase file sizes.

Gzipping as many file types as possible is an easy way to reduce page weight and accelerate the user experience.
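
In Apache this is a one-line configuration change, but the underlying negotiation is easy to see in code. Here is a minimal Node.js sketch (an assumption-laden illustration using the built-in zlib module, not the article's setup) that honors Accept-Encoding and compresses a text response:

      // Sketch: gzip the response only when the client advertises support for it
      var http = require('http');
      var zlib = require('zlib');

      http.createServer(function (req, res) {
          var body = '<html><body>' + new Array(1000).join('hello ') + '</body></html>';
          var acceptEncoding = req.headers['accept-encoding'] || '';

          res.setHeader('Content-Type', 'text/html');
          if (acceptEncoding.indexOf('gzip') !== -1) {
              res.setHeader('Content-Encoding', 'gzip');
              res.setHeader('Vary', 'Accept-Encoding');   // keep proxies from serving gzip to clients that can't read it
              res.end(zlib.gzipSync(body));
          } else {
              res.end(body);                              // uncompressed fallback
          }
      }).listen(8080);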


Put Stylesheets at the Top

tag: css

While researching performance at Yahoo!, we discovered that moving stylesheets to the document HEAD makes pages appear to be loading faster. This is because putting stylesheets in the HEAD allows the page to render progressively.

Front-end engineers that care about performance want a page to load progressively; that is, we want the browser to display whatever content it has as soon as possible. This is especially important for pages with a lot of content and for users on slower Internet connections. The importance of giving users visual feedback, such as progress indicators, has been well researched and documented. In our case the HTML page is the progress indicator! When the browser loads the page progressively the header, the navigation bar, the logo at the top, etc. all serve as visual feedback for the user who is waiting for the page. This improves the overall user experience.

The problem with putting stylesheets near the bottom of the document is that it prohibits progressive rendering in many browsers, including Internet Explorer. These browsers block rendering to avoid having to redraw elements of the page if their styles change. The user is stuck viewing a blank white page.

The HTML specification clearly states that stylesheets are to be included in the HEAD of the page: "Unlike A, [LINK] may only appear in the HEAD section of a document, although it may appear any number of times." Neither of the alternatives, the blank white screen or flash of unstyled content, is worth the risk. The optimal solution is to follow the HTML specification and load your stylesheets in the document HEAD.


Put Scripts at the Bottom

tag: javascript

The problem caused by scripts is that they block parallel downloads. The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel. While a script is downloading, however, the browser won't start any other downloads, even on different hostnames.

In some situations it's not easy to move scripts to the bottom. If, for example, the script uses document.write to insert part of the page's content, it can't be moved lower in the page. There might also be scoping issues. In many cases, there are ways to work around these situations.

An alternative suggestion that often comes up is to use deferred scripts. The DEFER attribute indicates that the script does not contain document.write, and is a clue to browsers that they can continue rendering. Unfortunately, Firefox doesn't support the DEFER attribute. In Internet Explorer, the script may be deferred, but not as much as desired. If a script can be deferred, it can also be moved to the bottom of the page. That will make your web pages load faster.


Avoid CSS Expressions

tag: css

CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported in Internet Explorer starting with version 5, but were deprecated starting with IE8. As an example, the background color could be set to alternate every hour using CSS expressions:

      background-color: expression( (new Date()).getHours()%2 ? "#B8D4FF" : "#F08A00" );

 

As shown here, the expression method accepts a JavaScript expression. The CSS property is set to the result of evaluating the JavaScript expression. The expression method is ignored by other browsers, so it is useful for setting properties in Internet Explorer needed to create a consistent experience across browsers.

The problem with expressions is that they are evaluated more frequently than most people expect. Not only are they evaluated when the page is rendered and resized, but also when the page is scrolled and even when the user moves the mouse over the page. Adding a counter to the CSS expression allows us to keep track of when and how often a CSS expression is evaluated. Moving the mouse around the page can easily generate more than 10,000 evaluations.

One way to reduce the number of times your CSS expression is evaluated is to use one-time expressions, where the first time the expression is evaluated it sets the style property to an explicit value, which replaces the CSS expression. If the style property must be set dynamically throughout the life of the page, using event handlers instead of CSS expressions is an alternative approach. If you must use CSS expressions, remember that they may be evaluated thousands of times and could affect the performance of your page.
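
For example, the hourly background color from the expression above can be handled with an ordinary function and a timer instead, which runs once per hour rather than on every mouse move (a sketch, applied here to document.body):

      // Event-handler/timer alternative to the CSS expression shown above
      function updateBackground() {
          var color = (new Date()).getHours() % 2 ? '#B8D4FF' : '#F08A00';
          document.body.style.backgroundColor = color;
      }

      updateBackground();                              // set the initial value once
      setInterval(updateBackground, 60 * 60 * 1000);   // re-check every hour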


Make JavaScript and CSS External

tag: javascript, css

Many of these performance rules deal with how external components are managed. However, before these considerations arise you should ask a more basic question: Should JavaScript and CSS be contained in external files, or inlined in the page itself?

Using external files in the real world generally produces faster pages because the JavaScript and CSS files are cached by the browser. JavaScript and CSS that are inlined in HTML documents get downloaded every time the HTML document is requested. This reduces the number of HTTP requests that are needed, but increases the size of the HTML document. On the other hand, if the JavaScript and CSS are in external files cached by the browser, the size of the HTML document is reduced without increasing the number of HTTP requests.

The key factor, then, is the frequency with which external JavaScript and CSS components are cached relative to the number of HTML documents requested. This factor, although difficult to quantify, can be gauged using various metrics. If users on your site have multiple page views per session and many of your pages re-use the same scripts and stylesheets, there is a greater potential benefit from cached external files.

Many web sites fall in the middle of these metrics. For these sites, the best solution generally is to deploy the JavaScript and CSS as external files. The only exception where inlining is preferable is with home pages, such as Yahoo!'s front page and My Yahoo!. Home pages that have few (perhaps only one) page view per session may find that inlining JavaScript and CSS results in faster end-user response times.

For front pages that are typically the first of many page views, there are techniques that leverage the reduction of HTTP requests that inlining provides, as well as the caching benefits achieved through using external files. One such technique is to inline JavaScript and CSS in the front page, but dynamically download the external files after the page has finished loading. Subsequent pages would reference the external files that should already be in the browser's cache.
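
The "dynamically download the external files after the page has finished loading" step can be as simple as the following sketch (the versioned file names are placeholders, not real Yahoo! assets):

      // After the inlined front page has rendered, fetch the external copies
      // so they are sitting in the browser's cache for the next page view.
      window.onload = function () {
          var files = ['common_2.0.6.js', 'common_2.0.6.css'];   // hypothetical versioned names
          for (var i = 0; i < files.length; i++) {
              var name = files[i];
              var el;
              if (/\.js$/.test(name)) {
                  el = document.createElement('script');
                  el.src = name;
              } else {
                  el = document.createElement('link');
                  el.rel = 'stylesheet';
                  el.href = name;
              }
              document.getElementsByTagName('head')[0].appendChild(el);
          }
      };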


Reduce DNS Lookups

tag: content

The Domain Name System (DNS) maps hostnames to IP addresses, just as phonebooks map people's names to their phone numbers. When you type www.yahoo.com into your browser, a DNS resolver contacted by the browser returns that server's IP address. DNS has a cost. It typically takes 20-120 milliseconds for DNS to lookup the IP address for a given hostname. The browser can't download anything from this hostname until the DNS lookup is completed.

DNS lookups are cached for better performance. This caching can occur on a special caching server, maintained by the user's ISP or local area network, but there is also caching that occurs on the individual user's computer. The DNS information remains in the operating system's DNS cache (the "DNS Client service" on Microsoft Windows). Most browsers have their own caches, separate from the operating system's cache. As long as the browser keeps a DNS record in its own cache, it doesn't bother the operating system with a request for the record.

Internet Explorer caches DNS lookups for 30 minutes by default, as specified by the DnsCacheTimeout registry setting. Firefox caches DNS lookups for 1 minute, controlled by the network.dnsCacheExpiration configuration setting. (Fasterfox changes this to 1 hour.)

When the client's DNS cache is empty (for both the browser and the operating system), the number of DNS lookups is equal to the number of unique hostnames in the web page. This includes the hostnames used in the page's URL, images, script files, stylesheets, Flash objects, etc. Reducing the number of unique hostnames reduces the number of DNS lookups.

Reducing the number of unique hostnames has the potential to reduce the amount of parallel downloading that takes place in the page. Avoiding DNS lookups cuts response times, but reducing parallel downloads may increase response times. My guideline is to split these components across at least two but no more than four hostnames. This results in a good compromise between reducing DNS lookups and allowing a high degree of parallel downloads.


Minify JavaScript and CSS

tag: javascript, css

Minification is the practice of removing unnecessary characters from code to reduce its size thereby improving load times. When code is minified all comments are removed, as well as unneeded white space characters (space, newline, and tab). In the case of JavaScript, this improves response time performance because the size of the downloaded file is reduced. Two popular tools for minifying JavaScript code are JSMin and YUI Compressor. The YUI compressor can also minify CSS.

Obfuscation is an alternative optimization that can be applied to source code. It's more complex than minification and thus more likely to generate bugs as a result of the obfuscation step itself. In a survey of ten top U.S. web sites, minification achieved a 21% size reduction versus 25% for obfuscation. Although obfuscation has a higher size reduction, minifying JavaScript is less risky.

In addition to minifying external scripts and styles, inlined <script> and <style> blocks can and should also be minified. Even if you gzip your scripts and styles, minifying them will still reduce the size by 5% or more. As the use and size of JavaScript and CSS increases, so will the savings gained by minifying your code.


Avoid Redirects

tag: content

Redirects are accomplished using the 301 and 302 status codes. Here's an example of the HTTP headers in a 301 response:

      HTTP/1.1 301 Moved Permanently
      Location: http://example.com/newuri
      Content-Type: text/html

 

The browser automatically takes the user to the URL specified in the Location field. All the information necessary for a redirect is in the headers. The body of the response is typically empty. Despite their names, neither a 301 nor a 302 response is cached in practice unless additional headers, such as Expires or Cache-Control, indicate it should be. The meta refresh tag and JavaScript are other ways to direct users to a different URL, but if you must do a redirect, the preferred technique is to use the standard 3xx HTTP status codes, primarily to ensure the back button works correctly.

The main thing to remember is that redirects slow down the user experience. Inserting a redirect between the user and the HTML document delays everything in the page since nothing in the page can be rendered and no components can start being downloaded until the HTML document has arrived.

One of the most wasteful redirects happens frequently and web developers are generally not aware of it. It occurs when a trailing slash (/) is missing from a URL that should otherwise have one. For example, going to http://astrology.yahoo.com/astrology results in a 301 response containing a redirect to http://astrology.yahoo.com/astrology/ (notice the added trailing slash). This is fixed in Apache by using Alias or mod_rewrite, or the DirectorySlash directive if you're using Apache handlers.

Connecting an old web site to a new one is another common use for redirects. Others include connecting different parts of a website and directing the user based on certain conditions (type of browser, type of user account, etc.). Using a redirect to connect two web sites is simple and requires little additional coding. Although using redirects in these situations reduces the complexity for developers, it degrades the user experience. Alternatives for this use of redirects include using Alias and mod_rewrite if the two code paths are hosted on the same server. If a domain name change is the cause of using redirects, an alternative is to create a CNAME (a DNS record that creates an alias pointing from one domain name to another) in combination with Alias or mod_rewrite.


Remove Duplicate Scripts

tag: javascript

It hurts performance to include the same JavaScript file twice in one page. This isn't as unusual as you might think. A review of the ten top U.S. web sites shows that two of them contain a duplicated script. Two main factors increase the odds of a script being duplicated in a single web page: team size and number of scripts. When it does happen, duplicate scripts hurt performance by creating unnecessary HTTP requests and wasted JavaScript execution.

Unnecessary HTTP requests happen in Internet Explorer, but not in Firefox. In Internet Explorer, if an external script is included twice and is not cacheable, it generates two HTTP requests during page loading. Even if the script is cacheable, extra HTTP requests occur when the user reloads the page.

In addition to generating wasteful HTTP requests, time is wasted evaluating the script multiple times. This redundant JavaScript execution happens in both Firefox and Internet Explorer, regardless of whether the script is cacheable.

One way to avoid accidentally including the same script twice is to implement a script management module in your templating system. The typical way to include a script is to use the SCRIPT tag in your HTML page.

      <script type="text/javascript" src="menu_1.0.17.js"></script>

 

An alternative in PHP would be to create a function called insertScript.

      <?php insertScript("menu.js") ?>

 

In addition to preventing the same script from being inserted multiple times, this function could handle other issues with scripts, such as dependency checking and adding version numbers to script filenames to support far future Expires headers.
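
The same duplicate-guard idea can be sketched on the client side. This is not the server-side PHP module the article has in mind, just a minimal JavaScript illustration that refuses to insert a script whose src is already on the page:

      // Client-side sketch of a duplicate-aware script inserter
      function insertScriptOnce(src) {
          var scripts = document.getElementsByTagName('script');
          for (var i = 0; i < scripts.length; i++) {
              // compare on the file name so relative and absolute URLs still match
              if (scripts[i].src && scripts[i].src.indexOf(src) !== -1) {
                  return;   // already on the page - skip the extra HTTP request and re-execution
              }
          }
          var el = document.createElement('script');
          el.src = src;
          document.getElementsByTagName('head')[0].appendChild(el);
      }

      insertScriptOnce('menu_1.0.17.js');
      insertScriptOnce('menu_1.0.17.js');   // second call is ignored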


Configure ETags

tag: server

Entity tags (ETags) are a mechanism that web servers and browsers use to determine whether the component in the browser's cache matches the one on the origin server. (An "entity" is another word for a "component": images, scripts, stylesheets, etc.) ETags were added to provide a mechanism for validating entities that is more flexible than the last-modified date. An ETag is a string that uniquely identifies a specific version of a component. The only format constraint is that the string be quoted. The origin server specifies the component's ETag using the ETag response header.

      HTTP/1.1 200 OK
      Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
      ETag: "10c24bc-4ab-457e1c1f"
      Content-Length: 12195

 

Later, if the browser has to validate a component, it uses the If-None-Match header to pass the ETag back to the origin server. If the ETags match, a 304 status code is returned reducing the response by 12195 bytes for this example.

      GET /i/yahoo.gif HTTP/1.1
      Host: us.yimg.com
      If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
      If-None-Match: "10c24bc-4ab-457e1c1f"
      HTTP/1.1 304 Not Modified

 

The problem with ETags is that they typically are constructed using attributes that make them unique to a specific server hosting a site. ETags won't match when a browser gets the original component from one server and later tries to validate that component on a different server, a situation that is all too common on Web sites that use a cluster of servers to handle requests. By default, both Apache and IIS embed data in the ETag that dramatically reduces the odds of the validity test succeeding on web sites with multiple servers.

The ETag format for Apache 1.3 and 2.x is inode-size-timestamp. Although a given file may reside in the same directory across multiple servers, and have the same file size, permissions, timestamp, etc., its inode is different from one server to the next.

IIS 5.0 and 6.0 have a similar issue with ETags. The format for ETags on IIS is Filetimestamp:ChangeNumber. A ChangeNumber is a counter used to track configuration changes to IIS. It's unlikely that the ChangeNumber is the same across all IIS servers behind a web site.

The end result is ETags generated by Apache and IIS for the exact same component won't match from one server to another. If the ETags don't match, the user doesn't receive the small, fast 304 response that ETags were designed for; instead, they'll get a normal 200 response along with all the data for the component. If you host your web site on just one server, this isn't a problem. But if you have multiple servers hosting your web site, and you're using Apache or IIS with the default ETag configuration, your users are getting slower pages, your servers have a higher load, you're consuming greater bandwidth, and proxies aren't caching your content efficiently. Even if your components have a far future Expires header, a conditional GET request is still made whenever the user hits Reload or Refresh.

If you're not taking advantage of the flexible validation model that ETags provide, it's better to just remove the ETag altogether. The Last-Modified header validates based on the component's timestamp. And removing the ETag reduces the size of the HTTP headers in both the response and subsequent requests. This Microsoft Support article describes how to remove ETags. In Apache, this is done by simply adding the following line to your Apache configuration file:

      FileETag none

 


Make Ajax Cacheable

tag: content

One of the cited benefits of Ajax is that it provides instantaneous feedback to the user because it requests information asynchronously from the backend web server. However, using Ajax is no guarantee that the user won't be twiddling his thumbs waiting for those asynchronous JavaScript and XML responses to return. In many applications, whether or not the user is kept waiting depends on how Ajax is used. For example, in a web-based email client the user will be kept waiting for the results of an Ajax request to find all the email messages that match their search criteria. It's important to remember that "asynchronous" does not imply "instantaneous".

To improve performance, it's important to optimize these Ajax responses. The most important way to improve the performance of Ajax is to make the responses cacheable, as discussed in Add an Expires or a Cache-Control Header. Several of the other rules in this list, such as gzipping and minifying the responses, also apply to Ajax.

 

Let's look at an example. A Web 2.0 email client might use Ajax to download the user's address book for autocompletion. If the user hasn't modified her address book since the last time she used the email web app, the previous address book response could be read from cache if that Ajax response was made cacheable with a future Expires or Cache-Control header. The browser must be informed when to use a previously cached address book response versus requesting a new one. This could be done by adding a timestamp to the address book Ajax URL indicating the last time the user modified her address book, for example, &t=1190241612. If the address book hasn't been modified since the last download, the timestamp will be the same and the address book will be read from the browser's cache eliminating an extra HTTP roundtrip. If the user has modified her address book, the timestamp ensures the new URL doesn't match the cached response, and the browser will request the updated address book entries.
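
As a rough sketch, the request side of that address-book example could look like the following. The URL, the lastModified value, and the JSON response format are assumptions for illustration, not the article's actual API:

      // Fetch the address book with the user's last-modified timestamp in the URL.
      // If nothing changed, the URL is identical to last time and the cached
      // (far-future Expires) response is reused without an HTTP round trip.
      function loadAddressBook(lastModified, callback) {
          var url = '/ajax/addressbook?user=me&t=' + lastModified;   // e.g. t=1190241612
          var xhr = new XMLHttpRequest();
          xhr.open('GET', url, true);
          xhr.onreadystatechange = function () {
              if (xhr.readyState === 4 && xhr.status === 200) {
                  callback(JSON.parse(xhr.responseText));            // assumes a JSON response
              }
          };
          xhr.send(null);
      }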

Even though your Ajax responses are created dynamically, and might only be applicable to a single user, they can still be cached. Doing so will make your Web 2.0 apps faster.


Flush the Buffer Early

tag: server

When users request a page, it can take anywhere from 200 to 500ms for the backend server to stitch together the HTML page. During this time, the browser is idle as it waits for the data to arrive. In PHP you have the function flush(). It allows you to send your partially ready HTML response to the browser so that the browser can start fetching components while your backend is busy with the rest of the HTML page. The benefit is mainly seen on busy backends or light frontends.

A good place to consider flushing is right after the HEAD because the HTML for the head is usually easier to produce and it allows you to include any CSS and JavaScript files for the browser to start fetching in parallel while the backend is still processing.

Example:

      ... <!-- css, js -->
    </head>
    <?php flush(); ?>
    <body>
      ... <!-- content -->

 

Yahoo! search pioneered research and real user testing to prove the benefits of using this technique.


Use GET for AJAX Requests

tag: server

The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in the browsers as a two-step process: sending the headers first, then sending data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K data you might not be able to use GET.

An interesting side effect is that POST without actually posting any data behaves like GET. Based on the HTTP specs, GET is meant for retrieving information, so it makes sense (semantically) to use GET when you're only requesting data, as opposed to sending data to be stored server-side.
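
In practice that just means putting the parameters in the query string rather than in a request body, as in this small sketch (the URLs are placeholders):

      // GET: parameters travel in the URL, typically sent in a single step
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/search?query=' + encodeURIComponent('performance rules'), true);
      xhr.send(null);

      // POST: headers first, then the body - two steps in most browsers;
      // reserve it for requests that actually change state on the server.
      // var xhr2 = new XMLHttpRequest();
      // xhr2.open('POST', '/save', true);
      // xhr2.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      // xhr2.send('query=performance+rules');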


Post-load Components

tag: content

You can take a closer look at your page and ask yourself: "What's absolutely required in order to render the page initially?". The rest of the content and components can wait.

JavaScript is an ideal candidate for splitting before and after the onload event. For example if you have JavaScript code and libraries that do drag and drop and animations, those can wait, because dragging elements on the page comes after the initial rendering. Other places to look for candidates for post-loading include hidden content (content that appears after a user action) and images below the fold.

Tools to help you out in your effort: YUI Image Loader allows you to delay images below the fold and the YUI Get utility is an easy way to include JS and CSS on the fly. For an example in the wild take a look at Yahoo! Home Page with Firebug's Net Panel turned on.

It's good when the performance goals are in line with other web development best practices. In this case, the idea of progressive enhancement tells us that JavaScript, when supported, can improve the user experience but you have to make sure the page works even without JavaScript. So after you've made sure the page works fine, you can enhance it with some post-loaded scripts that give you more bells and whistles such as drag and drop and animations.
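
Without the YUI utilities, the basic pattern is simply "attach to onload and inject the extras". A minimal sketch, where the file name and the init function are hypothetical:

      // Post-load the drag-and-drop / animation code after the page has rendered
      window.onload = function () {
          var script = document.createElement('script');
          script.src = 'enhancements.js';          // hypothetical bells-and-whistles bundle
          script.onload = function () {
              // initEnhancements() is assumed to be defined by the post-loaded file
              if (window.initEnhancements) {
                  window.initEnhancements();
              }
          };
          document.getElementsByTagName('head')[0].appendChild(script);
      };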


Preload Components

tag: content

Preload may look like the opposite of post-load, but it actually has a different goal. By preloading components you can take advantage of the time the browser is idle and request components (like images, styles and scripts) you'll need in the future. This way when the user visits the next page, you could have most of the components already in the cache and your page will load much faster for the user.

There are actually several types of preloading:

  • Unconditional preload - as soon as onload fires, you go ahead and fetch some extra components (see the sketch after this list). Check google.com for an example of how a sprite image is requested onload. This sprite image is not needed on the google.com homepage, but it is needed on the subsequent search results page.
  • Conditional preload - based on a user action you make an educated guess where the user is headed next and preload accordingly. On search.yahoo.com you can see how some extra components are requested after you start typing in the input box.
  • Anticipated preload - preload in advance before launching a redesign. It often happens after a redesign that you hear: "The new site is cool, but it's slower than before". Part of the problem could be that the users were visiting your old site with a full cache, but the new one is always an empty cache experience. You can mitigate this side effect by preloading some components before you even launch the redesign. Your old site can use the time the browser is idle to request images and scripts that will be used by the new site.
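
A minimal sketch of the unconditional case from the first bullet above; the image URLs are placeholders for components the next page actually needs:

      // Unconditional preload: once this page is done, quietly warm the cache
      // with components the *next* page will need.
      window.onload = function () {
          var later = ['sprite_nextpage.png', 'results_logo.png'];   // hypothetical
          for (var i = 0; i < later.length; i++) {
              var img = new Image();
              img.src = later[i];
          }
      };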


Reduce the Number of DOM Elements

tag: content

A complex page means more bytes to download and it also means slower DOM access in JavaScript. It makes a difference if you loop through 500 or 5000 DOM elements on the page when you want to add an event handler for example.

A high number of DOM elements can be a symptom that there's something that should be improved with the markup of the page without necessarily removing content. Are you using nested tables for layout purposes? Are you throwing in more <div>s only to fix layout issues? Maybe there's a better and more semantically correct way to do your markup.

The YUI CSS utilities are a great help with layouts: grids.css can help you with the overall layout, while fonts.css and reset.css can help you strip away the browser's default formatting. This is a chance to start fresh and think about your markup; for example, use <div>s only when it makes sense semantically, and not because it renders a new line.

The number of DOM elements is easy to test, just type in Firebug's console:
document.getElementsByTagName('*').length

And how many DOM elements are too many? Check other similar pages that have good markup. For example the Yahoo! Home Page is a pretty busy page and still under 700 elements (HTML tags).


Split Components Across Domains

tag: content

Splitting components allows you to maximize parallel downloads. Make sure you're using no more than 2-4 domains because of the DNS lookup penalty. For example, you can host your HTML and dynamic content on www.example.org and split static components between static1.example.org and static2.example.org.

For more information check "Maximizing Parallel Downloads in the Carpool Lane" by Tenni Theurer and Patty Chi.


Minimize the Number of iframes

tag: content

Iframes allow an HTML document to be inserted in the parent document. It's important to understand how iframes work so they can be used effectively.

<iframe> pros:

  • Helps with slow third-party content like badges and ads
  • Security sandbox
  • Download scripts in parallel

<iframe> cons:

  • Costly even if blank
  • Blocks page onload
  • Non-semantic


No 404s

tag: content

HTTP requests are expensive so making an HTTP request and getting a useless response (i.e. 404 Not Found) is totally unnecessary and will slow down the user experience without any benefit.

Some sites have helpful 404 pages ("Did you mean X?"), which is great for the user experience but also wastes server resources (database lookups, etc.). Particularly bad is when the link to an external JavaScript file is wrong and the result is a 404. First, this download will block parallel downloads. Next, the browser may try to parse the 404 response body as if it were JavaScript code, trying to find something usable in it.


Reduce Cookie Size

tag: cookie

HTTP cookies are used for a variety of reasons such as authentication and personalization. Information about cookies is exchanged in the HTTP headers between web servers and browsers. It's important to keep the size of cookies as low as possible to minimize the impact on the user's response time.

For more information check "When the Cookie Crumbles" by Tenni Theurer and Patty Chi. The take-home of this research:

  • Eliminate unnecessary cookies
  • Keep cookie sizes as low as possible to minimize the impact on the user response time
  • Be mindful of setting cookies at the appropriate domain level so other sub-domains are not affected
  • Set an Expires date appropriately. An earlier Expires date or none removes the cookie sooner, improving the user response time


Use Cookie-free Domains for Components

tag: cookie

When the browser makes a request for a static image and sends cookies together with the request, the server doesn't have any use for those cookies. So they only create network traffic for no good reason. You should make sure static components are requested with cookie-free requests. Create a subdomain and host all your static components there.

If your domain is www.example.org, you can host your static components on static.example.org. However, if you've already set cookies on the top-level domain example.org as opposed to www.example.org, then all the requests to static.example.org will include those cookies. In this case, you can buy a whole new domain, host your static components there, and keep this domain cookie-free. Yahoo! uses yimg.com, YouTube uses ytimg.com, Amazon uses images-amazon.com and so on.

Another benefit of hosting static components on a cookie-free domain is that some proxies might refuse to cache the components that are requested with cookies. On a related note, if you wonder if you should use example.org or www.example.org for your home page, consider the cookie impact. Omitting www leaves you no choice but to write cookies to *.example.org, so for performance reasons it's best to use the www subdomain and write the cookies to that subdomain.


Minimize DOM Access

tag: javascript

Accessing DOM elements with JavaScript is slow, so in order to have a more responsive page you should (see the sketch after this list):

  • Cache references to accessed elements
  • Update nodes "offline" and then add them to the tree
  • Avoid fixing layout with JavaScript
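
A small sketch of the first two bullets: cache the reference, build the new nodes "offline" in a document fragment, then touch the live tree only once. The 'results' element id is hypothetical:

      // Build 100 list items off-document, then insert them into the page in one operation
      var list = document.getElementById('results');        // cached reference, looked up once
      var fragment = document.createDocumentFragment();     // "offline" staging area

      for (var i = 0; i < 100; i++) {
          var item = document.createElement('li');
          item.appendChild(document.createTextNode('Item ' + i));
          fragment.appendChild(item);
      }

      list.appendChild(fragment);   // single insertion into the live tree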

For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.


Develop Smart Event Handlers

tag: javascript

Sometimes pages feel less responsive because of too many event handlers attached to different elements of the DOM tree which are then executed too often. That's why using event delegation is a good approach. If you have 10 buttons inside a div, attach only one event handler to the div wrapper, instead of one handler for each button. Events bubble up so you'll be able to catch the event and figure out which button it originated from.
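
In plain JavaScript the button example looks roughly like this sketch (the element id is made up):

      // One handler on the wrapper instead of ten handlers on ten buttons
      var toolbar = document.getElementById('toolbar');     // hypothetical wrapper div

      toolbar.onclick = function (e) {
          e = e || window.event;                             // old-IE fallback
          var target = e.target || e.srcElement;
          if (target.tagName === 'BUTTON') {
              alert('You clicked ' + target.id);             // dispatch based on the originating button
          }
      };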

You also don't need to wait for the onload event in order to start doing something with the DOM tree. Often all you need is the element you want to access to be available in the tree. You don't have to wait for all images to be downloaded. DOMContentLoaded is the event you might consider using instead of onload, but until it's available in all browsers, you can use the YUI Event utility, which has an onAvailable method.

For more information check the YUI theatre's "High Performance Ajax Applications" by Julien Lecomte.


Choose <link> over @import

tag: css

One of the previous best practices states that CSS should be at the top in order to allow for progressive rendering.

In IE @import behaves the same as using <link> at the bottom of the page, so it's best not to use it.


Avoid Filters

tag: css

The IE-proprietary AlphaImageLoader filter aims to fix a problem with semi-transparent true color PNGs in IE versions < 7. The problem with this filter is that it blocks rendering and freezes the browser while the image is being downloaded. It also increases memory consumption and is applied per element, not per image, so the problem is multiplied.

The best approach is to avoid AlphaImageLoader completely and use gracefully degrading PNG8 images instead, which are fine in IE. If you absolutely need AlphaImageLoader, use the underscore hack (_filter) so as not to penalize your IE7+ users.


Optimize Images

tag: images

After a designer is done with creating the images for your web page, there are still some things you can try before you FTP those images to your web server.

  • You can check the GIFs and see if they are using a palette size corresponding to the number of colors in the image. Using imagemagick it's easy to check using
    identify -verbose image.gif
    When you see an image using 4 colors and 256 color "slots" in the palette, there is room for improvement.
  • Try converting GIFs to PNGs and see if there is a saving. More often than not, there is. Developers often hesitate to use PNGs due to the limited support in browsers, but this is now a thing of the past. The only real problem is alpha-transparency in true color PNGs, but then again, GIFs are not true color and don't support variable transparency either. So anything a GIF can do, a palette PNG (PNG8) can do too (except for animations). This simple imagemagick command results in totally safe-to-use PNGs:
    convert image.gif image.png
    "All we are saying is: Give PiNG a Chance!"
  • Run pngcrush (or any other PNG optimizer tool) on all your PNGs. Example:
    pngcrush image.png -rem alla -reduce -brute result.png
  • Run jpegtran on all your JPEGs. This tool does lossless JPEG operations such as rotation and can also be used to optimize and remove comments and other useless information (such as EXIF information) from your images.
    jpegtran -copy none -optimize -perfect src.jpg dest.jpg


Optimize CSS Sprites

tag: images

  • Arranging the images in the sprite horizontally as opposed to vertically usually results in a smaller file size.
  • Combining similar colors in a sprite helps you keep the color count low, ideally under 256 colors so to fit in a PNG8.
  • "Be mobile-friendly" and don't leave big gaps between the images in a sprite. This doesn't affect the file size as much but requires less memory for the user agent to decompress the image into a pixel map. 100x100 image is 10 thousand pixels, where 1000x1000 is 1 million pixels


Don't Scale Images in HTML

tag: images

Don't use a bigger image than you need just because you can set the width and height in HTML. If you need
<img width="100" height="100" src="mycat.jpg" alt="My Cat" />
then your image (mycat.jpg) should be 100x100px rather than a scaled down 500x500px image.


Make favicon.ico Small and Cacheable

tag: images

The favicon.ico is an image that stays in the root of your server. It's a necessary evil because even if you don't care about it the browser will still request it, so it's better not to respond with a 404 Not Found. Also since it's on the same server, cookies are sent every time it's requested. This image also interferes with the download sequence, for example in IE when you request extra components in the onload, the favicon will be downloaded before these extra components.

So to mitigate the drawbacks of having a favicon.ico make sure:

  • It's small, preferably under 1K.
  • Set an Expires header as far in the future as you feel comfortable with (since you cannot rename the file if you decide to change it). You can probably safely set the Expires header a few months in the future. You can check the last modified date of your current favicon.ico to make an informed decision.

ImageMagick can help you create small favicons.


Keep Components under 25K

tag: mobile

This restriction is related to the fact that iPhone won't cache components bigger than 25K. Note that this is the uncompressed size. This is where minification is important because gzip alone may not be sufficient.

For more information check "Performance Research, Part 5: iPhone Cacheability - Making it Stick" by Wayne Shea and Tenni Theurer.


Pack Components into a Multipart Document

tag: mobile

Packing components into a multipart document is like an email with attachments, it helps you fetch several components with one HTTP request (remember: HTTP requests are expensive). When you use this technique, first check if the user agent supports it (iPhone does not).

Avoid Empty Image src

tag: server

An image with an empty string src attribute occurs more often than one would expect. It appears in two forms:

  1. straight HTML

<img src="">

  2. JavaScript

var img = new Image();
img.src = "";

Both forms cause the same effect: the browser makes another request to your server.

  • Internet Explorer makes a request to the directory in which the page is located.
  • Safari and Chrome make a request to the actual page itself.
  • Firefox 3 and earlier versions behave the same as Safari and Chrome, but version 3.5 addressed this issue[bug 444931] and no longer sends a request.
  • Opera does not do anything when an empty image src is encountered.

 

Why is this behavior bad?

  1. Cripple your servers by sending a large amount of unexpected traffic, especially for pages that get millions of page views per day.
  2. Waste server computing cycles generating a page that will never be viewed.
  3. Possibly corrupt user data. If you are tracking state in the request, either by cookies or in another way, you have the possibility of destroying data. Even though the image request does not return an image, all of the headers are read and accepted by the browser, including all cookies. While the rest of the response is thrown away, the damage may already be done.

 

The root cause of this behavior is the way that URI resolution is performed in browsers. This behavior is defined in RFC 3986 - Uniform Resource Identifiers. When an empty string is encountered as a URI, it is considered a relative URI and is resolved according to the algorithm defined in section 5.2. This specific example, an empty string, is listed in section 5.4. Firefox, Safari, and Chrome are all resolving an empty string correctly per the specification, while Internet Explorer is resolving it incorrectly, apparently in line with an earlier version of the specification, RFC 2396 - Uniform Resource Identifiers (this was obsoleted by RFC 3986). So technically, the browsers are doing what they are supposed to do to resolve relative URIs. The problem is that in this context, the empty string is clearly unintentional.

HTML5 adds to the description of the <img> tag's src attribute, in section 4.8.2, to instruct browsers not to make an additional request:

The src attribute must be present, and must contain a valid URL referencing a non-interactive, optionally animated, image resource that is neither paged nor scripted. If the base URI of the element is the same as the document's address, then the src attribute's value must not be the empty string.

Hopefully, browsers will not have this problem in the future. Unfortunately, there is no such clause for <script src=""> and <link href="">. Maybe there is still time to make that adjustment to ensure browsers don't accidentally implement this behavior.

This rule was inspired by Yahoo!'s JavaScript guru Nicholas C. Zakas. For more information check out his article "Empty image src can destroy your site".
