
How to measure your website's performance KPIs

I put together this quick overview on web “user experience” KPIs for a client, but I thought it was worth re-posting here for future reference… What they wanted to know was “how long did a given page request take on the website”, with the idea of defining certain KPIs, e.g. login must take less than 5 seconds, saving your user profile less than 10 seconds, etc.

“I had a look at some of the end-user KPIs that were requested for the web side of things, so I thought I would drop you a quick email as I have a fair bit of experience in this area (from when I used to run the www.totaljobs.com website).

There are basically three ways of doing this: external site monitoring, page tagging (JavaScript) and appliance-based monitoring.
External site monitoring is where a third party simulates a user page request every 5 or 10 minutes and records how long it takes; essentially they "GET" a web page URL and time the response. Companies such as Site Confidence (UK), Gomez or Axzona are leaders in this area.
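To make the idea concrete, here is a rough sketch of that kind of synthetic check in TypeScript (assuming Node 18+ for the built-in fetch); the target URL and 5-minute interval are placeholders, and the commercial services above also script multi-step journeys and run checks from many geographic locations.

```typescript
// A rough synthetic-monitoring sketch (assumes Node 18+, which ships a global fetch).
// TARGET_URL and INTERVAL_MS are illustrative placeholders, not from the post.
const TARGET_URL = "https://www.example.com/login";
const INTERVAL_MS = 5 * 60 * 1000; // check every 5 minutes

async function checkOnce(): Promise<void> {
  const start = Date.now();
  try {
    const res = await fetch(TARGET_URL);
    await res.text(); // include the time taken to download the response body
    console.log(`${new Date().toISOString()} ${res.status} ${Date.now() - start} ms`);
  } catch (err) {
    console.error(`${new Date().toISOString()} request failed:`, err);
  }
}

// Run one check immediately, then repeat on a fixed schedule.
checkOnce();
setInterval(checkOnce, INTERVAL_MS);
```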

"Page tagging" is where you insert some client-side javascript into the page that is sent to the client's browser. This then sends that information back to a collector server in the data centre that stores it in a database and you can generate reports etc. This is a much closer measure of REAL end-user experience for all of your clients and gives you more information to troubleshoot e.g. performance for users in Italy is slow, or users with IE7 on Vista is slow but Firefox is fine. The downside is that you generally have to make code level changes to insert the tags into the code. Solutions in this space include CA Wily Customer Experience Manager and Gomez's Actual Experience Manager XF. For .Net sites Avicode UX Monitor doesn't need application changes as it just hooks directly into the .Net Management framework.

Appliance-based solutions are devices (they look a bit like a router) that plug in to the main routers in the data centre and monitor the traffic on the way in and out to measure performance and the user experience. They can also insert "page tagging" type code "on the fly" and thus avoid application-level changes. The downside is that getting the networking set up so you can tap into a promiscuous port can be a pain. Leaders in this area are TeaLeaf and Coradiant."
It is worth noting that you also need your classic “web analytics” tools (Webtrends / Omniture / Hitbox, or even Google Analytics if you're on a zero budget) to give you the classic unique users, page impressions, etc.

There are obviously many more KPIs involved in running a website (customer service, system availability, email deliverability, SEO, marketing conversion, etc.) but at the end of the day I think that the “user experience” KPIs sit at the top of the KPI pyramid.

Well, except for the net profit KPI, of course :-)
