As Google say, “Fast is better than slow”, and they’re right.
We developed the Avon and Somerset Constabulary website using Scrum, and our ‘definition of done’ included several points covering site performance.
Website performance impacts business success
The speed of our website is part of the user experience. Google and other search engines have been factoring page speed into their search algorithms for several years.
Search engines measure website performance in terms of page speed, ease of navigation, user experience and responsiveness, treating it as one part of Search Engine Optimisation (SEO).
In this post we are going to focus on the technical side of page speed.
How do webpages load?
When a device requests either a full webpage or a piece of data using AJAX, there are three main components that make up the time it takes for the request to complete:
- How quickly the data (request/response) can get between the client and server
- How quickly the server can prepare the data
- How quickly the client can process and display the data
Let’s look at the server processing to begin with.
User request speed
When a request comes in, the fastest way to respond is to have the information the user has requested packaged up and good to go. In short, by caching as much as possible.
If an unauthenticated request comes in for a specific page on the website, then everyone will see exactly the same information. This means we can easily cache the page which, in our case, we’ve done using MVC Cache Profiles, as their settings are easy to alter.
If a request requires personalised information then typically a database hit is needed to retrieve the required data to fulfil the request.
Server data retrieval
We ensure that our database has adequate indexing on the tables to respond to queries as fast as possible and, in some cases, we’ve denormalised to increase read performance.
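To illustrate why indexing matters (sketched here with SQLite for portability; the table and column names are invented, not our real schema), adding an index changes the query plan from a full table scan to a direct index lookup:

```python
import sqlite3

# Build a throwaway in-memory table and inspect the query plan
# before and after adding an index on the column we filter by.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, slug TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO pages (slug, body) VALUES (?, ?)",
    [(f"page-{i}", "body text") for i in range(1000)],
)

def query_plan(sql):
    # Each EXPLAIN QUERY PLAN row's last column describes one plan step.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = query_plan("SELECT body FROM pages WHERE slug = 'page-500'")
conn.execute("CREATE INDEX idx_pages_slug ON pages (slug)")
after = query_plan("SELECT body FROM pages WHERE slug = 'page-500'")

print(before)  # a scan over the whole table
print(after)   # a search using idx_pages_slug
```

The exact plan wording varies by SQLite version, but the shift from “SCAN” to “SEARCH … USING INDEX” is the point: the lookup no longer grows with the size of the table.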
Even then, it’s still possible to use an in-memory cache on the website to prevent identical, wasted database requests from occurring.
We use Red Gate profiling tools as part of our standard development procedure to ensure that there are no wasted repeat calls or other bottlenecks. Tools like this are invaluable, as they let a developer see exactly what’s happening, rather than guessing or perhaps stumbling into premature optimisation.
Of course, with caching, and especially when several layers are in use simultaneously, it’s vital to understand that what you gain in speed, you lose in freshness of information. In short, once caching has been set up it needs to be fine-tuned to ensure that users aren’t served out-of-date information.
Server to user speed
Once the server has gathered up the information ready to send to the client, we need to get it to them as quickly as possible. Ideally there will be a high-speed connection between the server and client, but that’s not always possible. For example, if a user is on a 3G connection, the more data that’s sent, the slower (and potentially more expensive) the request will be.
Shrinking Data
To shrink the data being served as much as possible, we make use of minification, bundling and compression on everything that’s sent.
- Minification: Reduces the size of a file by stripping unnecessary characters (whitespace, comments) and shortening identifiers
- Bundling: Reduces the number of requests needed to load a page by combining multiple files into one
- Compression: Shrinks the size of files, either losslessly (HTML/CSS/JS) or lossily (images)
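A toy Python sketch of the first and last steps, with invented page content and a deliberately crude minifier, just to show each stage shrinking the payload:

```python
import gzip
import re

# Invented page markup, repeated to mimic a real listing page.
html = "<html>\n  <body>\n" + "".join(
    f"    <p>News item {i}: latest updates from the neighbourhood team.</p>\n"
    for i in range(20)
) + "  </body>\n</html>\n"

# Crude minification: drop the whitespace between tags.
# (Real minifiers do far more, e.g. shortening identifiers in JS.)
minified = re.sub(r">\s+<", "><", html).strip()

# Lossless gzip compression of the minified markup.
compressed = gzip.compress(minified.encode("utf-8"))

print(len(html), len(minified), len(compressed))
```

Each stage produces a smaller payload than the last; on repetitive markup like this, gzip alone typically cuts the size by an order of magnitude.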
Thankfully, these days there are more tools and support than ever to make this fairly straightforward. With ASP.NET MVC 4, Microsoft made it extremely easy to both minify and bundle CSS and JavaScript.
Compression has been built into IIS for several versions now, although it’s much easier to enable and configure in IIS 7 than in previous versions.
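As a rough illustration (the exact settings vary by IIS version, and the compression modules must be installed on the server), enabling both static and dynamic compression in IIS 7 and later can be as simple as a web.config entry like this:

```xml
<!-- Illustrative web.config fragment; assumes the static and dynamic
     compression modules are available on the server. -->
<configuration>
  <system.webServer>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>
```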
How well optimised for speed is your website?
Two good online tools for checking how well optimised your website is are: