Measuring and Monitoring Web 2.0 Applications

The Internet and the Web continue to evolve to deliver new customer experiences and increased self-service utility. The label “Web 2.0,” while imprecise, signifies the newest and best examples of this evolutionary process. Organizations are now adopting Web 2.0 technologies and design methods to create richer and more responsive interactions. The resulting applications, however, are significantly more complex than traditional Web sites, which complicates performance management and imposes new requirements on performance measurement tools.

The Internet and the Web have become the primary vehicle for business communications, evolving to subsume and replace older technologies. As software technologies exploit steady advances in the Internet hardware platform, the Web continues to evolve to deliver new user experiences and increased application utility. The most advanced example of the Web becoming a platform is the rich Internet application (RIA), which reflects the gradual transition of Web applications from the simple thin-client Web browser model to a richer distributed-function paradigm that behaves more like the desktop in a client/server model.

This architecture complicates performance measurement, whose goal is to understand the customer’s experience. In an RIA, the time to complete a Web page download may no longer correspond to something a user perceives as important, because (for example) the client engine may be prefetching some of the downloaded content for future use. Standard tools that measure the time for Web page downloads to complete can record misleading data for RIAs. To implement RIAs successfully, enterprises must re-evaluate their approach to performance management. Instead of relying on the definition of physical Web pages to drive the subdivision of application response times, RIA developers or tool users must break the application into logical pages. Measurement tools must recognize meaningful application milestones or markers that signal logical boundaries of interest for reporting, and subdivide the application’s response time accordingly.
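
One way to make such milestones concrete is for the RIA itself to emit named marks at its logical boundaries, using the browser’s standard User Timing API, so a measurement tool can subdivide response time by milestone rather than by physical page load. The following is a minimal sketch, assuming a browser environment; the milestone and logical-page names are invented for illustration.

    // Sketch: emit logical-page milestones via the standard User Timing API
    // (performance.mark / performance.measure) so monitoring tools can report
    // per-milestone response times instead of relying on page-load events.
    function markMilestone(name: string): void {
      performance.mark(name);
    }

    function measureLogicalPage(label: string, startMark: string, endMark: string): void {
      // Creates a named measure that RUM tooling can later collect
      // from the performance timeline (e.g. via a PerformanceObserver).
      performance.measure(label, startMark, endMark);
    }

    // Example usage inside an RIA view (names are illustrative only):
    markMilestone('search:start');                // user submits a search
    // ... client engine fetches and renders results asynchronously ...
    markMilestone('search:results-rendered');
    measureLogicalPage('logical-page:search', 'search:start', 'search:results-rendered');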

What does Web application performance mean to you? Most business executives would evaluate the success of a Web application by looking at business performance metrics such as revenue, costs, and customer satisfaction. Because an application may be created to serve customers, partners, members of an organization, or employees, the relative importance of those metrics may vary. For any Web application, effectiveness means simply fulfilling the planned design and delivery objectives, delivering online experiences that lead to satisfied customers, and so meeting the intended business performance goals.

The characteristics of Web 2.0 applications highlighted earlier (the network as a platform, collaborative environments, social networking, mashups, and rich media interfaces) create several additional challenges, of increasing complexity, for all measurement tools.

To sum up the importance of these issues, consider the popular saying coined by Tom DeMarco: you can’t control what you can’t measure. Measuring the wrong things, or basing key management decisions on reports that contain incomplete data, is as bad as, if not worse than, not measuring at all. So it is unwise to measure Web 2.0 applications using only the tools and approaches developed for traditional Web sites. Inaccurate data undermines the effectiveness of any program of systematic performance management and causes performance-tuning skills and resources to be applied suboptimally. It can also lead to unproductive interdepartmental conflicts, and to disputes over service-level agreements with internal or external service providers, when staff question the accuracy of the data or discover discrepancies between data from different sources.

Success in every one of these five performance management activities depends crucially on an organization’s ability to gather and report meaningful, timely, and accurate measurement data, focused on the right metrics. Since a key idea of Web 2.0 is enhancing the user’s experience, it is vital to measure the actual customer experience proactively.

On Web 2.0 sites, personalization options allow customers to tailor their experience of a site to their individual preferences, and sites are carefully designed to download and display content efficiently and successfully in all major browsers. Because customers’ experience depends on their Internet connectivity, sites may even adjust their content based on the browser’s connection speed. Measurement data must reflect this diversity.
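
One way to capture that diversity, sketched here purely as an illustration, is to tag each client-side timing report with the browser and connection context it was recorded under. The “/rum” endpoint and metric name below are hypothetical, and the Network Information API is non-standard and only available in some browsers.

    // Sketch: attach context (user agent, and connection quality where the
    // non-standard Network Information API is available) to a timing beacon,
    // so measurements can be segmented by browser and connection speed.
    interface TimingBeacon {
      metric: string;
      durationMs: number;
      userAgent: string;
      effectiveConnectionType?: string;  // e.g. "4g", "3g"; undefined if unsupported
    }

    function buildBeacon(metric: string, durationMs: number): TimingBeacon {
      const connection = (navigator as any).connection;  // Chromium-only API
      return {
        metric,
        durationMs,
        userAgent: navigator.userAgent,
        effectiveConnectionType: connection?.effectiveType,
      };
    }

    // "/rum" is a hypothetical collection endpoint.
    navigator.sendBeacon('/rum', JSON.stringify(buildBeacon('logical-page:search', 840)));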

In traditional Web applications, customers consume content, so all performance measurement efforts have focused on download times as the key metric. But as Web 2.0 applications add collaboration and social networking features, customers also supply content. To ensure the quality of a customer’s experience, it’s therefore necessary to measure and report upload performance as well.
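
A minimal sketch of what client-side upload timing could look like, assuming a simple form-style upload (the endpoint and field name are hypothetical):

    // Sketch: time an upload from the user's point of view, from the moment
    // the request starts until the server acknowledges it, and report both
    // the elapsed time and whether the upload succeeded.
    function measureUpload(url: string, file: File): Promise<{ ms: number; ok: boolean }> {
      return new Promise((resolve) => {
        const xhr = new XMLHttpRequest();
        const form = new FormData();
        form.append('file', file);

        const start = performance.now();
        xhr.open('POST', url);
        xhr.onload = () => resolve({ ms: performance.now() - start, ok: xhr.status < 400 });
        xhr.onerror = () => resolve({ ms: performance.now() - start, ok: false });
        xhr.send(form);
      });
    }

    // Usage ("/api/upload" is a hypothetical endpoint):
    // const result = await measureUpload('/api/upload', selectedFile);
    // Report upload time and success rate alongside the usual download metrics.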

For example, the user of a collaborative application might navigate to a product or member directory, complete a login or authentication dialog, search for a certain subject or interest area, browse the results, select a particular area of interest, proceed to an upload page or dialog, complete a browse dialog to select content to be uploaded from their laptop or desktop, enter some additional descriptive metadata as appropriate for the application, and click the Upload button. Throughout this interaction, download activity is minimal and the application may appear to respond rapidly. But if, after all this work, the upload stage is painfully slow, or fails altogether because of congestion at the server end, that customer may be lost forever.
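
A synthetic monitor can walk exactly this kind of flow and time the upload step on its own. The sketch below uses Playwright as one possible harness; the URL, selectors, credentials, and success indicator are all hypothetical placeholders for whatever a real application exposes.

    // Sketch of a synthetic transaction that exercises the flow above and
    // times the upload step separately. All selectors and URLs are placeholders.
    import { chromium } from 'playwright';

    async function checkUploadFlow(): Promise<void> {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      await page.goto('https://example.com/members');           // member directory
      await page.fill('#username', 'monitor-account');          // login dialog
      await page.fill('#password', process.env.MONITOR_PW ?? '');
      await page.click('#login');
      await page.fill('#search', 'photography');                // search an interest area
      await page.click('#search-submit');
      await page.click('text=Upload');                          // proceed to upload dialog
      await page.setInputFiles('input[type=file]', 'sample.jpg');
      await page.fill('#description', 'synthetic check');       // descriptive metadata

      const start = Date.now();
      await page.click('#upload-button');
      await page.waitForSelector('#upload-success');            // server acknowledgement
      console.log(`upload step took ${Date.now() - start} ms`);

      await browser.close();
    }

    checkUploadFlow().catch((err) => { console.error(err); process.exit(1); });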
