Tuesday, September 18, 2012

WWW Citation Timeline (2010)

Citation Timeline (2010):

Mohan Raj Rajamanickam, Russell MacKenzie, Billy Lam, and Tao Su. 2010. A task-focused approach to support sharing and interruption recovery in web browsers. In CHI '10 Extended Abstracts on Human Factors in Computing Systems (CHI EA '10). ACM, New York, NY, USA, 4345-4350.

In this paper, the authors developed a task-oriented mobile web browser on an iPhone. They rely on a previous study which states that most web users leave the browser without finishing their task. Thus, they argue that when a user returns to the browser, current techniques such as history and bookmarks do not provide good information about what the task was and what was done.
First, they conducted semi-structured interviews with 9 students (aged 18-29) with different browsing skill sets. They found that most of the participants do not use the browser's history and bookmarks, but they heavily use the autocomplete feature and multiple tabs. Participants often use some sort of annotation or text file to save their task state before leaving the computer. They use these techniques to help them when they return to their tasks.

As a result of their semi-structured interviews, the authors concluded that their prospective browser design should contain the following features: (a) group multiple webpages under one task, (b) enable an easy way to stop any task and to come back and continue it, (c) include artifacts about each task such as task history and task bookmarks, (d) let users annotate webpages while performing their tasks, and (e) let tasks be shared with other users. With that in mind, they developed a browser prototype called TabFour. Due to their short implementation time, they did not include all of their design requirements in this prototype. To validate their requirements and design, they conducted a study with 8 subjects to check whether their prototype was suitable. In their short conclusion, they indicated that the experiment participants reported that this type of browser was useful to them.

In relation to [ref], this paper cited [ref] as a previous study that comes in handy for studying web users. However, in this study they elicited design requirements for an existing problem from browser users, then used these requirements to build a prototype, which led to an experiment similar to the one in [ref] to validate their work.

WWW Citation Timeline (2004)


Citation Timeline (2004):

Marco A. Winckler, Philippe Palanque, and Carla M. D. S. Freitas. 2004. Tasks and scenario-based evaluation of information visualization techniques. In Proceedings of the 3rd annual conference on Task models and diagrams (TAMODIA '04). ACM, New York, NY, USA, 165-172.

This paper proposes a new task-based model to validate visualization techniques. It can also be used to compare two techniques to determine which one better supports user tasks. The authors start by building multiple user scenarios for each task using ConcurTaskTree (CTT) notation. They then apply all scenarios to each of the selected techniques to evaluate whether it provides the needed usability. For example, in their case study, they compared accessing and searching files using two techniques: Treemaps and the Hyperbolic browser. One of the techniques could not support all of the scenarios. The following paragraphs explain CTT and how it was used in this work.

ConcurTaskTree (CTT) notation [ref 12] is a task modeling technique which models four types of tasks as follows:
- Abstract: describes any complex form of actions performed by the user, the system, or both.
- User: describes tasks that are performed only by the user (no system involvement).
- Interactive: describes tasks that include operations by both user and system.
- Application: describes tasks that are entirely performed by the system without user involvement.

CTT is a hierarchical structure where tasks can be linked together if there is a relationship between them. The relationships are represented using Language Of Temporal Ordering Specification (LOTOS) operators. Examples of such operators are: choice ([]), enabling (>>), and task interruption ([>). [ref 10] provides a tool to edit and write tasks (called CTTE).
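As an illustrative sketch only (the task names and data structure here are my own assumptions, not the CTTE tool from [ref 10]), a CTT-like hierarchy with LOTOS-style operators could be represented like this:

```python
# Illustrative sketch of a CTT-like task tree (hypothetical example,
# not the CTTE tool): each node has one of the four CTT task types,
# and each parent records the LOTOS-style operator ordering its subtasks.

from dataclasses import dataclass, field

TASK_TYPES = {"abstract", "user", "interactive", "application"}

@dataclass
class Task:
    name: str
    kind: str                  # one of TASK_TYPES
    operator: str = ""         # LOTOS operator joining the children:
                               # "[]" choice, ">>" enabling, "[>" interruption
    children: list = field(default_factory=list)

    def pretty(self, depth=0):
        """Render the tree as indented lines, annotating each parent
        with the operator that orders its subtasks."""
        op = f"  [{self.operator}]" if self.operator else ""
        lines = ["  " * depth + f"{self.name} ({self.kind}){op}"]
        for child in self.children:
            lines.extend(child.pretty(depth + 1))
        return lines

# Example: searching a file is an abstract task; entering a query
# (interactive) enables (>>) the system showing results (application).
search = Task("SearchFile", "abstract", operator=">>", children=[
    Task("EnterQuery", "interactive"),
    Task("ShowResults", "application"),
])

print("\n".join(search.pretty()))
```

This only captures the tree shape and operator labels; CTT's actual semantics (how the operators constrain execution order) would need an interpreter on top.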




WWW Citation Timeline (2002)

Citation Timeline (2002):

Andrei Broder. 2002. A taxonomy of web search. SIGIR Forum 36, 2 (September 2002), 3-10.

The author of this paper discusses information retrieval (IR) in terms of how users search for information over the web. From an IR-HCI point of view, the author argues that since user tasks play a major role in retrieving information on the web, HCI should be involved in the classic IR model (figure 2). The paper classifies web search task needs under the following three-way taxonomy:

(a) navigational, 
(b) informational, 
and (c) transactional. 
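To make the three classes concrete, here is a toy keyword-based classifier; the heuristic rules are my own illustrative assumptions, not anything from the paper (which classified queries by survey and log analysis):

```python
# Toy illustration of Broder's three-way web-search taxonomy using
# simple keyword heuristics (hypothetical rules, not from the paper).

def classify_query(query: str) -> str:
    q = query.lower()
    # Transactional: the user wants to perform a web-mediated activity.
    if any(w in q for w in ("buy", "download", "order", "book a")):
        return "transactional"
    # Navigational: the user wants to reach a particular site.
    if any(w in q for w in ("homepage", "official site", ".com", "login")):
        return "navigational"
    # Informational: the user wants information assumed to be on the web.
    return "informational"

print(classify_query("buy cheap flights"))       # transactional
print(classify_query("greenpeace homepage"))     # navigational
print(classify_query("history of the web"))      # informational
```

A real classifier would need log analysis or user studies, since most real queries carry no such obvious markers.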

Then the author conducted a survey of users of the AltaVista search engine, which was famous and popular at the time of the study.

Although their work was similar to [ref], they didn't actually extend it or use it in their work. Another fact about this paper is that it is very statistical in nature, and surprisingly it has been cited more than 300 times! I think most citations were in the IR field, given that they applied task taxonomies to IR.

WWW Citation Timeline (2001)


Citation Timeline (2001):

Stuart K. Card, Peter Pirolli, Mija Van Der Wege, Julie B. Morrison, Robert W. Reeder, Pamela K. Schraedley, and Jenea Boshart. 2001. Information scent as a driver of Web behavior graphs: results of a protocol analysis method for Web usability. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI '01). ACM, New York, NY, USA, 498-505.

This paper provides a new protocol analysis methodology for studying users on the web and in other information-intensive tasks. Using their protocol, they conducted an experiment on WWW users while they were surfing the internet, with statistical analysis similar to [ref]. They emphasize, based on [ref], that users spend the majority of their time on the WWW finding and reading information. Thus, they used previous work on information foraging theory [refs 11 and 16 in the paper] to analyze user tasks for finding information. They used information patches, a representation of the information needed for user tasks, which the user has to navigate through; and information scent, which provides information about the cost and value of navigation.
   
Before conducting their experiment, they expanded the WWW experiment [ref] by generating a WWW task bank covering most of the tasks WWW users perform while browsing. Additionally, they used the task classification from previous work at Georgia Tech [ref 8] to classify their task bank under the following taxonomies: (a) Purpose, (b) Method, and (c) Content.

The experiment consisted of 14 students from Stanford University, with a mean age of 23. Participants were asked to browse the internet as they do in their daily lives.


Their protocol consisted of the following: the current URL, a screenshot of the current website, the event time (from the log file), and a transcript of the user's verbalizations (participants were asked to explain what they were doing while surfing the WWW). Every event has its own code name. Additionally, they combined recorded eye movements and mouse movements for each task with their recorded data.


(Discussion item)

WWW Citation Timeline (2000)

Citation Timeline (2000):

Melody Y. Ivory. 2000. Web TANGO: towards automated comparison of information-centric web site designs. In CHI '00 extended abstracts on Human factors in computing systems (CHI EA '00). ACM, New York, NY, USA, 329-330.

This paper discusses the usability of websites. The author argues that design recommendations and guidelines for websites are not enough; hence the need for usability studies. They focus their usability study on enhancing information architecture. They proposed a new automated methodology and tool called TANGO (Tool for Assessing Navigation and Organization) to help organize information flow in websites by supporting information-centric web site designs.

Web TANGO employs typical information retrieval techniques along with Monte Carlo simulation to simulate user behavior on a website. Web TANGO uses the taxonomy model (from the WWW paper) as its underlying model. The web page designer should provide the following information to the tool: page metadata, page complexity, links, and links out of the website. Additionally, the designer should provide information about user information tasks, along with some other data. After the designer fills in this information, the tool simulates the user's navigational behavior.
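The paper does not give implementation details, but the general idea of Monte Carlo simulation of navigation can be sketched as a random walk over a site's link graph. Everything below (the site graph, click limit, and uniform link choice) is my own illustrative assumption, not Web TANGO's actual model:

```python
# Minimal Monte Carlo sketch of simulated user navigation
# (hypothetical site graph and behavior model, not Web TANGO itself).

import random

# Hypothetical site link graph: page -> list of linked pages.
SITE = {
    "home":     ["products", "about", "search"],
    "products": ["item", "home"],
    "about":    ["home"],
    "search":   ["item", "home"],
    "item":     [],
}

def simulate_visit(start, target, max_clicks=10, rng=random):
    """One simulated user: follow random links until the target page
    is reached or the user gives up after max_clicks."""
    page = start
    for clicks in range(max_clicks + 1):
        if page == target:
            return clicks          # success: number of clicks used
        links = SITE.get(page, [])
        if not links:
            return None            # dead end: user gives up
        page = rng.choice(links)   # uniform choice over outgoing links
    return None                    # too many clicks: user gives up

def monte_carlo(start, target, trials=10_000, seed=42):
    """Estimate task success rate and mean clicks over many simulated users."""
    rng = random.Random(seed)
    outcomes = [simulate_visit(start, target, rng=rng) for _ in range(trials)]
    successes = [c for c in outcomes if c is not None]
    rate = len(successes) / trials
    mean_clicks = sum(successes) / len(successes) if successes else float("nan")
    return rate, mean_clicks

rate, clicks = monte_carlo("home", "item")
print(f"success rate ~ {rate:.2f}, mean clicks ~ {clicks:.1f}")
```

A more faithful model would weight link choices by information scent rather than choosing uniformly, which is exactly the kind of modeling assumption whose validity against real users I question below.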

Although the author stated that this is still an ongoing project, I wondered about the use of Monte Carlo simulation to simulate users and how closely it matches actual users. I haven't seen any comments from the author about this issue, or an intentional attempt to validate their methodology by comparing it with real users through a controlled experiment.

Andrew Sears and Julie A. Jacko. 2000. Understanding the relation between network quality of service and the usability of distributed multimedia documents. Hum.-Comput. Interact. 15, 1 (March 2000), 43-68.

This paper is a study of the usability of distributed multimedia over the internet. The authors focus on the problem of network delays when retrieving multimedia in a distributed environment.
In relation to the (ref) work, they used its taxonomy to capture user activities in their experiment. Additionally, they added two activities: providing information and seeking information. Note that a user can sometimes have both activities; for example, when using online payment, providing payment info and seeking confirmation.

Michael J. Albers and Loel Kim. 2000. User web browsing characteristics using palm handhelds for information retrieval. In Proceedings of IEEE professional communication society international professional communication conference and Proceedings of the 18th annual ACM international conference on Computer documentation: technology & teamwork (IPCC/SIGDOC '00).

This paper discusses web page interfaces for personal digital assistants (PDAs) like the Palm device. The authors discuss the limitations of web presentation and retrieval on these devices, then present the existing guidelines for web page design, and finally discuss the differences between web interface design for desktops and handheld devices. This paper only cited our (ref) to emphasize the existence of task-driven approaches! So, it did not build on it, but rather indicated the existence of such techniques.

Timelines for WWW paper

I reviewed earlier a web task analysis paper called "The tangled Web we wove: a taskonomy of WWW use", which was published in 1999. My plan was to check the later work that referred to this paper and build on it. I found 51 papers which cited this paper. Due to the large number of citations, I selected only some papers (mostly from different years) and reviewed only the ones I believe actually built on this work.

Throughout the next posts, I will state the paper name(s) and year, and then discuss how they relate to our source paper. For some of the related papers, I will provide a full review. Please note that I will use [ref] or (ref) as a reference to the WWW paper.


Update:
Here is a list of the related 'timeline trace' for this paper:



Tuesday, September 11, 2012

Chapter 5

Chapter 5: Making the business case for site visits


The main reason behind visiting the user's site is to verify that our assumptions about the users are correct. Through this chapter, the authors explain the resistance that you (as a designer) should expect from your company when you propose visiting a customer site to learn more about the users.

Common objections to site visits and task analysis:

We’re changing the process; why bother checking the current process?
Even though you’re changing the process, you still want to learn about the environment and the current workflow and tasks. One option is for the new process to adopt the current workflow and tasks that users are familiar with. This provides a good transition for users to the new process, rather than introducing a new process (or system) which is incompatible with the environment.
This is totally new; nothing is out there to go and see!
Even if you’re proposing a new design, you can still learn from its predecessors. For example, if you’re designing a fax machine, you still want to learn about how people send messages.
Users all do it differently; how would you know who to watch?
In all cases, variations between users are expected: cultural variations, shortcuts, workarounds, etc.
We’re just changing one part; you don’t need to go beyond that!
Even if the change is only one small part, you still want to study the other parts, as they affect the whole user experience. Also, you can learn from other tasks that interact with the tasks affected by your new change (or part).
What can we learn from few users?
The recommendation is to study a small number of users, so a small-scale user set is totally acceptable and beneficial.
Why not to use the information we already have?
We usually hear such questions from market researchers and business employees. Of course, we still need their data and studies; however, their study questions were focused to serve their own purposes and do not answer what we really need.