Application performance has always been an important topic in Information Technology, yet it often seems to be neglected. So, what is performance? A simple abstraction is "how much, in how long." This can be broken down into many technical metrics (efficiency, throughput, utilization and so on), but it is easy to forget the emotional measures: perceived performance, time to action, usability, number of clicks. "How much a user can achieve, in how long" is just as important as "how much a computer can achieve, in how long." The fastest solution is of little use if it takes the user an eternity to drive it. Think how much quicker you can use many applications once you learn the various shortcuts, e.g. CTRL+C instead of (move mouse) -> Edit -> Copy.

Over the years, there have been many views on when it is best to think about the performance of an application. A common approach has been to "test at the end and fix performance problems then, rather than architecting for performance." The core idea is that for the vast majority of applications, raw performance is not a critical factor for much of the functionality, so why spend effort addressing performance upfront where it has little impact? At the other end of the scale, raw performance can be a key success factor for the majority of the application logic, which makes it important to architect for performance and assess it regularly. One interesting formal realisation of this is 'Performance Driven Development.' Which approach is best? It all depends.

Many areas require thought when considering performance. It is easy to jump straight to individual algorithms and routines, but the higher-level technical and physical architecture requires careful consideration too.
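Of the architectural techniques discussed here, caching is one of the simplest to illustrate. The sketch below is illustrative only: the function name and the workload it stands in for are assumptions, not something from a particular application, and it uses Python's standard `functools.lru_cache` to memoize a slow lookup.

```python
import time
from functools import lru_cache

# Hypothetical expensive lookup; the name and data are purely
# illustrative. The cache means only the first call per key
# pays the full cost.
@lru_cache(maxsize=256)
def exchange_rate(currency: str) -> float:
    time.sleep(0.2)  # stand-in for a slow database or service call
    return {"EUR": 1.0, "USD": 1.08}.get(currency, 0.0)

exchange_rate("USD")  # slow: does the real work
exchange_rate("USD")  # fast: answered from the in-memory cache
```

The trade-off, as with any cache, is staleness: cached values must be invalidated or expired when the underlying data can change.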
Things such as caching, load on demand, batch processing, asynchronous and parallel processing, machine sizing, network architecture, component distribution and so on can all have a huge impact on application performance. On the user-perception side, error prevention, self-diagnosis, UI consistency and visibility of system status can have just as large an effect.

Take "visibility of system status": this can quickly improve the perceived performance of an application. One example is adding a progress bar. Many studies have shown that users consistently feel an application performs better when they can see its progress. Further, for a fixed amount of time, the rate at which a progress bar fills dramatically affects the perceived performance. Observations suggest that a bar which fills at a constant rate, or one that accelerates towards the end, produces the best user perception. A bad choice here can have the opposite effect and give the impression of worse performance.

In conclusion, performance is a wide topic that goes beyond simply writing the quickest algorithms. There are both technical and emotional factors to consider. Like much in Information Technology, it is important to remain pragmatic when weighing the many factors, and to find a balance between architecting for performance and tuning during testing.
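The "accelerates towards the end" idea can be sketched by mapping the true progress fraction through a power curve before rendering it. This is a minimal, illustrative sketch: the easing exponent, step count and timings are assumptions, not figures from any study.

```python
import time

def eased(progress: float, power: float = 2.0) -> float:
    """Map true progress (0..1) to displayed progress.

    A power > 1 yields a bar that starts slowly and accelerates
    towards the end, one of the fill patterns observations
    suggest users perceive favourably.
    """
    return progress ** power

def render_bar(displayed: float, width: int = 30) -> str:
    filled = int(displayed * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {displayed:4.0%}"

if __name__ == "__main__":
    steps = 10
    for i in range(1, steps + 1):
        time.sleep(0.1)  # stand-in for real work
        print(render_bar(eased(i / steps)), end="\r", flush=True)
    print()
```

The same total time elapses either way; only the displayed curve changes, which is exactly the point about perceived versus actual performance.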

2 Comments

  1. Hi Jas, this goes along with the advice (since we have had that command): place a "SHOW" in front of the exec trigger so the user at least sees something has changed and is much more willing to wait until the data are delivered. Hope we get more tools/means to please our end users with the 9.6. Uli
  2. Thanks Uli. Indeed, small changes can make a huge difference to perceptions and the user's experience. This is often very important for application acceptance. Do others have any quick tips to share please? Cheers, Jas.