Tuesday, October 2, 2012

Robot tracing (2003)

Endsley, M. R., Bolstad, C. A., Jones, D. G., & Riley, J. M. (2003). Situation awareness oriented design: From user's cognitive requirements to creating effective supporting technologies. In Proceedings of the 47th Annual Meeting of the Human Factors & Ergonomics Society (pp. 268-272). Santa Monica, CA: Human Factors & Ergonomics Society.

This paper briefly explains an SA-oriented design framework: how to go from system requirements to an SA-oriented design. The authors had previously written a book on SA design from a user-centered design perspective. In this paper, they focus on the steps needed to conduct an SA-oriented design, and then provide a military example of such a design. The novelty of their work is an SA-oriented design process that integrates the SA tasks with the user goals.

So, first let's discuss Situation Awareness (SA). SA is a human state in which a person knows which factors in the surrounding environment would affect his/her decisions when performing a task. For example, a user is driving a car. When the user wants to stop, he/she builds SA by looking at the street and other cars, at the dashboard, the fuel level, etc., which helps him/her stop properly.

When do we need situation awareness when designing a system? When a user task involves decision making that relies on many variables or factors, user errors can be heavily driven by bad interpretation of, or missing data about, those variables. Thus, SA is important for decision-making user tasks. Another use of SA is to help manage multiple systems. For example, in the previous paper, a single user wants to control multiple robots; with all the given variables, this can be difficult. That's why a good SA design needs to come along with such a system.

One major problem with SA is that it resides inside the human operator's mind, unlike user goals, which can be written down well. So the key idea here is to link user goals (or requirements) with SA to form a better decision-support system interface. The SA-oriented design process includes three main steps:

  1. SA requirements analysis
  2. SA-oriented design principles
  3. SA measurements
Steps (2) and (3) can be repeated to refine the design.

Step1: SA Requirements Analysis
The goal of this step is to derive SA requirements by identifying user cognitive tasks. User cognitive tasks can be identified using Goal-Directed Task Analysis (GDTA), covered earlier in this blog. Using GDTA, we can derive the requirements by identifying the following for every user task:

  • Goal
  • Information needed to achieve that goal
  • How that information should be integrated and presented to the operator (user) so it is meaningful and therefore supports decisions
So, SA requirements will be associated with every user goal or subgoal to spell out the above. One thing to keep in mind when deriving SA requirements: the focus here is on user goals, not user tasks!
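As a side note, the per-goal output of this step can be jotted down in a small structure. The sketch below is my own toy illustration (the goal and information names are made up from the driving example above, not taken from the paper):

```python
# Hypothetical SA requirements record for one subgoal: each goal lists
# the raw information the operator needs and how it should be
# integrated to support the decision.

sa_requirements = {
    "goal": "Stop the car safely",
    "information_needed": [
        "distance to the car ahead",
        "current speed",
        "road surface condition",
    ],
    "integration": "combine speed and distance into a single "
                   "'time to stop' indicator shown on the dashboard",
}

# One such record would exist for every goal/subgoal in the GDTA hierarchy.
for item in sa_requirements["information_needed"]:
    print(f"{sa_requirements['goal']} needs: {item}")
```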

Step2: SA-Oriented Design
In this step, they cite a book that three of the authors wrote earlier (in 2003). They indicate that the book provides 50 design principles for SA design. These principles focus on the dynamic switching between goal-driven and data-driven processing. So, to produce an SA-oriented design, you should follow the SA-oriented design principles. The paper doesn't dive into much detail about how to conduct an SA-oriented design; in fact, the authors selected a few design principles and discussed them.


Step3: SA Design Measurements
After revising the design to enhance the operator's (user's) SA, we need to verify whether the new design really does help increase the user's situation awareness. To verify the design, the authors' framework suggests using the Situation Awareness Global Assessment Technique (SAGAT) (1995). Again, this approach is only mentioned, never discussed, in the paper.


The rest of the paper presents an example of an SA-oriented implementation in a military field application. In their example, they had a battlefield where decisions have to be made according to data coming from the field. They developed different interfaces for different levels of command. Higher commanders would need an abstract SA-oriented design, yet one detailed enough to support their decisions.

HTA exercise

This is a small exercise where I'm performing Hierarchical Task Analysis (HTA) on a Pizza Delivery System (PDS). Briefly, PDS is a system that runs a pizza shop, from order taking, to queuing items in the kitchen, preparing items, stacking order items, and delivering them. In the following exercise, I have conducted an HTA of five major user tasks. I will give an HTA diagram of each, with a title and a brief description, in this entry. Click on the pictures to enlarge.

Task 1: Place Order
The order taker takes an order from a customer, places it in the system, then calculates the total and the estimated delivery time.
0. Using "Place Order"
1. Select Item
2. Enter item quantity 
3. Enter additional ingredients
  3.1 Select ingredients
  3.2 Click add ingredients
4. Submit order
5. Verify order

Plan 0: do 1-2 in that order, then 3 if necessary, repeating 1-3 as necessary, then 4-5 in order
Plan 3: do 3.1-3.2 in that order and repeat as necessary
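As a side note, an HTA like the one above can also be captured in a small data structure. Below is my own Python sketch of Task 1 (not part of the exercise deliverable; nested dicts are just one possible representation):

```python
# Task 1 "Place Order" as a flat dict keyed by task number: each task
# has a name, and composite tasks carry a plan string plus subtask ids.

hta = {
    "0": {"name": 'Using "Place Order"',
          "plan": "do 1-2 in order, then 3 if necessary, repeat 1-3 "
                  "as necessary, then 4-5 in order",
          "subtasks": ["1", "2", "3", "4", "5"]},
    "1": {"name": "Select item"},
    "2": {"name": "Enter item quantity"},
    "3": {"name": "Enter additional ingredients",
          "plan": "do 3.1-3.2 in order and repeat as necessary",
          "subtasks": ["3.1", "3.2"]},
    "3.1": {"name": "Select ingredients"},
    "3.2": {"name": "Click add ingredients"},
    "4": {"name": "Submit order"},
    "5": {"name": "Verify order"},
}

def print_hta(task_id: str, indent: int = 0) -> None:
    """Print the hierarchy with indentation, like the HTA diagram."""
    task = hta[task_id]
    print("  " * indent + f"{task_id}. {task['name']}")
    for sub in task.get("subtasks", []):
        print_hta(sub, indent + 1)

print_hta("0")
```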


Task 2: Change Order
Customer calls the store, and asks to change his/her order.



0. Using "Change order"
1. Query order name
   1.1 Enter order name
   1.2 Enter phone number
2. Change item
   2.1 Change quantity (0 to delete)
   2.2 Add ingredient
   2.3 Delete ingredient

Plan 0: do 1-2 in that order.
Plan 1: do 1.1-1.2 if necessary in any order.
Plan 2: do 2.1-2.2-2.3 if necessary and repeat in any order.



Task 3: Cancel Order
Customer calls the store, and asks to cancel order.



0. Using "Cancel order"
1. Query order
   1.1 Enter order number
   1.2 Enter phone number
2. Cancel specific order
   2.1 Click cancel button
   2.2 Confirm cancellation

Plan 0: do 1-2 in that order.
Plan 1: if necessary do in any order 1.1-1.2.
Plan 2: do 2.1-2.2 in that order.


Task 4: Cook Order
Order has been placed, it will be forwarded to the kitchen and placed in a queue to be prepared. Cook will take the next item in the queue and start preparing it.



0. Using "Cook order"
1. Prepare order
   1.1 Read item details off the screen
   1.2 Prepare item to be placed in the oven
2. Place item in the oven
3. Take item out of the oven
4. Finish order
   4.1 Read order items off the screen
   4.2 Stack order items together to be delivered

Plan 0: do 1-2-3-4 in that order.
Plan 1: do 1.1-1.2 in that order.
Plan 4: do 4.1-4.2 in that order.


Task 5: Change Store Functions
The Store Manager, at the beginning of every work day, can manage: store menu items, cooks, ovens, and deliverers.



0. Using "Change store functions"
1. Manage menu items
   1.1 Add new menu item
   1.2 Change item
   1.3 Delete item
2. Manage cooks
   2.1 Add cook
   2.2 Change cook
   2.3 Delete Cook
3. Manage ovens
   3.1 Add oven
   3.2 Delete oven
4. Manage deliverers
   4.1 Add deliverer
   4.2 Delete deliverer
5. Manage delivery locations
   5.1 Add location
   5.2 Change location
   5.3 Delete location

Plan 0: if necessary do any of the following in any order or combination 1-2-3-4-5
Plan 1: if necessary do any of the following in any order or combination 1.1-1.2-1.3
Plan 2: if necessary do any of the following in any order or combination 2.1-2.2-2.3
Plan 3: if necessary do any of the following in any order or combination 3.1-3.2
Plan 4: if necessary do any of the following in any order or combination 4.1-4.2
Plan 5: if necessary do any of the following in any order or combination 5.1-5.2-5.3

Friday, September 28, 2012

From HCI to Interaction Design


I just attended a talk by Dr. Oli Mival, a professor at Edinburgh Napier University, which discussed in a general sense the current HCI issues and where research is heading in the future. He also discussed some of his projects, especially the ICE project.

Dr. Mival went over his big research project, the "Interactive Collaboration Environment (ICE)", an interactive meeting room designed to help conduct more engaging and effective meetings. He discussed design issues and obstacles they faced when they started the project. The idea is a room fitted with different equipment to support participants in their meetings in a better way. The following is a list of what Mival indicated as the components of ICE:

  • Interactive multi touch large screen embedded in the meeting table, which can be controlled by different participants
  • A control panel that can be connected to most commonly used devices
  • Dropbox can be used to transfer files from and to the system
  • Analog switch can turn on/off the whole system
  • Three different cameras distributed around the meeting room to help when video conferencing with others
  • Interactive touch screens placed on the walls for discussions that require participants to stand and explain
One thing about this project that I liked: the use of an analog switch to turn the whole system on/off was a clever idea. I've been in the user's shoes, and I know how frustrated users get around this type of system.

Dr. Mival also indicated that spaces and places should be considered in any user interface design. He closed his talk by indicating that, as HCI research and industry advance, any further work in interface design should consider all of the following factors: people, places, activities, and context.

Interesting facts and issues raised by Dr. Mival were:
  • HCI and interface design should adapt to people's tasks, not the other way around. He gave an example from the ICE design: the multi-touch screen in the table was designed to tolerate mugs, laptops, etc. on top of it. He mentioned that you can't ask people not to bring coffee or tea to their meetings!
  • Hand-wave gesture interfaces are not usable. Simply put, the user gets tired after a couple of minutes.
Here is a video about the ICE system. I thought it would be cool to share it with you:

Tuesday, September 25, 2012

Joint Application Development (JAD)/ Discovery Sessions

A JAD session is a meeting where analysts meet with all stakeholders in one room to identify the system requirements. It is similar to a focus group, except that here all stakeholders are brought together in one meeting to agree on the system requirements. A JAD session carries a potential risk of undiscovered clashes: typically, stakeholders with more power can resolve any disagreement in their favor (their point of view), so it is essential for the analyst to understand that and be aware of any such risks.
A JAD session is intended to capture requirements from multiple stakeholders. Additional tools may be used within the session to help model the requirements, and therefore help stakeholders understand the requirements being captured. These can be any modeling tools, such as use-case diagrams, activity diagrams, etc. Brainstorming can be used on ambiguous points, and prototyping can be used as well.

According to our reference book [3], a good JAD session should contain between 5 and 10 participants, but can go up to 15-20 if needed. Participants fall into the following types:

  • The Facilitator: the person who guides and controls the session. According to [3], this is the most important role for the success of a JAD session. A good facilitator stops any unneeded discussion and makes sure the session runs as planned. Some of the facilitator's roles are: timekeeper, mediator of conflicts, coordinator of the different groups within the session (if any), and summarizer of discussions.
  • Business Analyst: the one who manages the session goals, and should explain those goals to the facilitator up front.
  • Scribe: the person who takes notes during the meeting.
  • User (customer).
  • Domain expert (subject matter expert): sometimes it is important to bring in expertise in a certain technology or subject to enrich the meeting.
  • Developer: bridges the gap between the developers and the customer's needs.
  • Sponsor: OF COURSE.
  • Observer(s): someone being trained on JAD sessions, or someone who has an interest in attending the JAD session but will be minimally engaged.

A JAD session can be conducted in four simple steps, as shown in the figure below [3]:

  • Establish Goals and Objectives: explain the meeting's goals and objectives to the participants.
  • Prepare for the Session: make sure the participants are available, everything is in place, and all the needed tools are there.
  • Conduct the session.
  • Follow up: complete the documentation, present the results to the sponsor, and follow up on any outstanding issues.



For any points not agreed upon, a smaller follow-up JAD session might be assigned to a subset of the members. JAD sessions provide good buy-in from users, since they were involved in the sessions; however, as with any other meeting type, a JAD session can end up generating conflicts between participants, which makes them harder to resolve.

Observation (Job Shadowing)

Observation (or job shadowing) is an effective way to elicit requirements from an end-user point of view [3]. Through this technique, the requirements engineer can define the steps needed to perform every user task, in detail or at an abstract level (as needed). Our reference [3] indicates three methods of conducting observation (job shadowing):
  • Job shadowing: the analyst is at the user's site and interacts with the user to understand how he/she performs tasks. The analyst notes the actions needed for every task.
  • Videos: if video recordings of the user's activities are available that show how the user performs his/her tasks, the analyst can perform the observation offsite. It is also worth mentioning that this method saves time.
  • One-way-mirror-type setup: in between the two above; the analyst observes the real-time activities of a user through a medium such as a live video camera (with the user's permission). The advantage here is being able to observe more than one user at a time.

Observation is a great technique for capturing the current requirements or workflow; however, it has the following drawbacks:
  • Sometimes, users tend to overact when they're observed.
  • Time consuming: the analyst may spend a lot of time observing users.
  • The analyst needs to be discerning enough to ignore any user behavior outside the job being observed.

Interviews

Interviews [3]


Interviews are part of the Ethnographic Techniques that I blogged about before. Here I will provide a more detailed article about this technique.

Despite the fact that interviews play a part in nearly every elicitation technique, what we're discussing here is the stand-alone interview technique.
The basic form of interviewing aims to meet every stakeholder in a one-to-one meeting to note his/her requirements, opinions, and assumptions about the system. This basic approach works well when:

  • Less disagreement between different stakeholders
  • Stakeholders are geographically distributed and can't meet together
  • Some stakeholders can’t leave their workplace and engage in workshops and meetings

With the above in mind, it is important to share the captured requirements among all stakeholders to eliminate conflicts and disagreements. If any stakeholder disagrees with a requirement, a formal walkthrough is needed with that stakeholder to verify the requirement and resolve the disagreement.


Interviews can be categorized into two types:
  1. Open-ended interviews: the requirements engineer (interviewer) starts with open-ended questions, records the responses, and elaborates with the stakeholder as needed. This type is useful when we have few stakeholders and the interviewer is experienced.
  2. Closed-ended interviews (surveys): the interviewer designs a survey (similar to the survey technique) and goes through it question by question with the stakeholder, recording the responses. This type is beneficial when the interviewer is less experienced and when we have many stakeholders (it takes less time).

The interview process can be repeated until the stakeholder understands the system and provides his/her best feedback. The following steps describe a good way to prepare an interview:
  1. Keep in mind that you can't interview all stakeholders, so use stakeholder analysis to select the most important ones (key users) to interview. Then, for each, identify high-level goals for the interview (what you want to get out of it), on which you will focus in order to capture good requirements.
  2. Schedule an appointment with each stakeholder, and let them know how important their feedback is to the system.
  3. Before meeting each stakeholder, learn something about their knowledge of the current system (or business workflow), and try to work out what area of the system they can provide good feedback on.
  4. Write questions tailored to every stakeholder according to their role and knowledge, and then refine the questions.
  5. Send the questions to the customer before the meeting, so they can see how important the meeting is and how committed you are to the interview.

In addition to the preparation above, a requirements engineer may use the following types of questions when designing an interview:
  • Open-ended questions. When you want the customer (stakeholder) to express his/her opinion about some system process. This is also a useful way to get the customer to open up to you in an interview.
  • Closed-ended questions. When you need a precise, short answer from the customer.
  • Probing questions. Used when you want the customer to elaborate on a requirement they mentioned. For example, if a customer says "Our system should function 100% during our working hours", you may ask "Can you specify what the working hours are?".
  • Validating questions. Go over the requirements and models (e.g., charts) with the customer and check their agreement or disagreement.

The main point of interviews is to discover the unknowns. Reference [3] provides a step-by-step process for discovering the unknowns through an interview, starting with the knowns. The steps are as follows:

  1. The interviewer should understand the problem boundaries and describe them using any method; [3] suggests a Functional Decomposition Diagram (FDD), shown in the figure from [3] below. The interviewer should then explain it to the customer and make sure the customer understands it. This helps get the customer on the same page.
  2. The interviewer should define the problem proposed to be solved, either a system problem or a business problem.
  3. Identify the root causes of the problem being discussed. The interviewer needs to focus on causes other than the main root cause, which was already identified.
  4. "Envision the future." Discuss with the customer what both of you think about the current situation (AS-IS) and what it will look like when the system is done (TO-BE).


In the end, interviewing is a good elicitation technique that provides insight into the details needed for good requirements. However, some customers may not be committed to the interviews, and sometimes it's hard to control interruptions during them.

Monday, September 24, 2012

Focus groups


A focus group is a structured, guided meeting with sampled potential customers to determine their feedback about an upcoming product (or system) [2]. A typical session takes between one and two hours. Like a brainstorming session, a focus group needs a moderator to lead the discussion about the product. To get good results, the sampled group should represent the prospective users.
A business analysis book [2] suggests that the ideal focus group contains between 6 and 12 people, due to the infeasibility of large groups.
One major drawback of focus groups is that they depend heavily on how the moderator handles the group discussion. Also, participants might not share their true opinions when discussing issues in the focus group.

Tabular Elicitation Techniques


In this category, we use tables to capture and record stakeholder requests to be met by the system. The nature of these techniques makes them more precise and unambiguous. Two techniques in this category are widely used: decision tables and state tables.

A. Decision tables
According to [1], this is the most common tabular technique. Rows represent conditions, and columns represent rules. The technique is useful for capturing business workflows or rules. One famous example is the tax tables [irs.gov], where rows are cases and columns are conditions such as filing status (single, married, etc.).

Example: tax tables
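To illustrate, a decision table maps almost directly into code. The sketch below is my own toy example, loosely shaped like a tax table; the conditions, statuses, and rates are all made up, not taken from [1] or irs.gov:

```python
# A decision table as a list of rules: each rule holds condition values
# plus the resulting outcome. Lookup scans for the first rule whose
# conditions all match (a full table covers every combination).

rules = [
    # (filing_status, income_over_50k) -> tax_rate (hypothetical numbers)
    {"status": "single",  "over_50k": False, "rate": 0.15},
    {"status": "single",  "over_50k": True,  "rate": 0.25},
    {"status": "married", "over_50k": False, "rate": 0.12},
    {"status": "married", "over_50k": True,  "rate": 0.22},
]

def lookup(status: str, over_50k: bool) -> float:
    for rule in rules:
        if rule["status"] == status and rule["over_50k"] == over_50k:
            return rule["rate"]
    raise ValueError("no matching rule")

print(lookup("married", True))   # -> 0.22
```

Writing business rules this way makes gaps obvious: a missing combination raises an error instead of silently doing nothing.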


B. State tables
State tables use the idea of a state machine (from computer science) to capture a process (or workflow). As in state machines, only one start state is allowed, along with one or more final states. Although the book [1] classifies this as an elicitation technique, I can only see it as a modeling technique.
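To show the idea, here is a minimal sketch of my own (the order states and events are hypothetical, borrowing the pizza-shop example from the HTA exercise earlier in this blog):

```python
# A state table as a dict: (state, event) -> next_state. One start
# state ("placed") and final states ("delivered", "cancelled").

transitions = {
    ("placed",     "cook"):    "in_kitchen",
    ("in_kitchen", "bake"):    "ready",
    ("ready",      "deliver"): "delivered",
    ("placed",     "cancel"):  "cancelled",
}

def run(events, start="placed"):
    """Replay a sequence of events from the start state."""
    state = start
    for event in events:
        state = transitions[(state, event)]  # KeyError = invalid move
    return state

print(run(["cook", "bake", "deliver"]))  # -> delivered
```

The table itself is the elicitation artifact; stakeholders can check each row ("can a placed order be cancelled?") without reading any code.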

Quality Function Deployment (QFD) Method

QFD was developed by Mizuno and Akao in 1990. QFD's goal is to identify customer needs and translate them into the product design. According to the QFD Institute [http://www.qfdi.org/], QFD can do the following:
  • Seeks out customer needs, spoken and unspoken.
  • Discovers qualities that impress the customer.
  • Translates the above into design characteristics.
  • Delivers product quality to achieve goals and customer satisfaction.
Currently, QFD is often part of a Six Sigma program [1].

Ethnographic Techniques (Surveys and Interviews)

Ethnographic techniques originated from ethnographic research, which focuses on studying a specific community or culture. The most common ways to conduct this type of elicitation are interviews and surveys. The method is very useful when studying a large population, where statistical analysis of a sample is used to represent the entire population.

Kano Modeling
Kano modeling is a survey model. According to [1], it is the most common survey method for analyzing customer preferences for system features. Kano modeling has three quality categories: one-dimensional, expected, and attractive.
  • One-dimensional (or linear) quality: a system feature whose presence increases the customer's valuation of the product linearly. Example [1]: for refrigerators, the more energy efficient the model, the more likely customers are to buy it.
  • Expected quality: a feature that is essential to the success of the system.
  • Attractive quality: a feature that is not required, but which, if added, gives users a good reason to use your system. An attractive quality can turn into an expected one over time; for example, a cell phone camera used to be attractive but is now an expected feature.
One good thing about Kano modeling is that it is sensitive to cultural differences: the values can change between cultures, countries, and even states.
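To get a feel for how the three categories differ, here is a toy sketch of my own of the satisfaction curves (this is just my illustration of the idea, not a formula from [1]):

```python
# Toy Kano satisfaction curves. x is how well the feature is implemented
# (0 = absent, 1 = fully done); the return value is customer satisfaction
# on a -1 (dissatisfied) .. +1 (delighted) scale.

def one_dimensional(x: float) -> float:
    # Linear quality: satisfaction tracks implementation
    # (e.g. a refrigerator's energy efficiency).
    return 2 * x - 1

def expected(x: float) -> float:
    # Expected quality: absence hurts badly, full presence is merely
    # neutral (nobody is delighted that the fridge keeps food cold).
    return min(0.0, 2 * x - 1)

def attractive(x: float) -> float:
    # Attractive quality: absence costs nothing, presence delights
    # (as a phone camera once did).
    return max(0.0, 2 * x - 1)

for f in (one_dimensional, expected, attractive):
    print(f.__name__, f(0.0), f(1.0))
```

The asymmetry is the whole point: surveying only "do you want X?" misses whether its absence would actively dissatisfy.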

Surveys and interviews in general can capture the emotional and cultural responses and preferences of end-users.


Note: Interviews technique is explained in this blog entry.

Brainstorming

Brainstorming sessions are a common technique for eliciting stakeholders' initial requirements for a system. A typical session includes many stakeholders thinking about and discussing the needed system together. The technique needs an experienced facilitator to manage discussions and conflicts between stakeholders. Typically, such sessions take between one and two days [1,2].
The session time and duration should be agreed on before the session starts. During the session, participants throw out ideas they think are important to the product, and the facilitator can use sticky notes to record every idea on the board. The facilitator can use the following flow:

  1. Allow everyone to bring up their ideas for the upcoming system (even if they're redundant).
  2. Group the ideas and remove duplicates.
  3. Assign ideas to categories with the participants' agreement (or a majority of them).
  4. Break the brainstorming group into multiple groups to study each category in detail.
  5. In each group, ideas are expanded and prioritized.
  6. Conclude the meeting by presenting the results to the whole group and agreeing on them.
  7. If the customer was not involved, a further session might be needed.
The figure below gives an abstract view of the process:


Goal models

One important step in requirements elicitation is the identification of business goals [1]. This step can be skipped if not needed; however, in most cases, since business people are the driving force behind software projects, their goals need to be accommodated and kept in mind while requirements are elicited. Goal modeling is a common technique for eliciting business goals. Goals can be broken down into different levels: typically, higher-level goals are more abstract and general, while lower-level goals are more specific. In other words, the more levels you break the goals into, the more detail you get. Goals can conflict with each other; thus, once goals are collected, a refinement step should follow, where those conflicts (often involving nonfunctional requirements) can surface important issues regarding the system requirements. A goal model can be as simple or as complex as needed; there is no standard for the level of detail, and the current need drives the modeling.
Another way to use goal models is alongside quality assessment methods (QAMs), which are used to identify the quality goals that meet the business goals. The point of combining QAMs with goal models is to make sure that important nonfunctional requirements are not missed, and that the nonfunctional requirements can be tested.
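Since goals break down level by level, a goal model is naturally a tree. Below is a minimal sketch of my own (all goal names are hypothetical) showing how abstract goals refine into specific subgoals, whose leaves feed the requirements:

```python
# A goal model as a tree: each goal maps to a list of subgoal dicts;
# deeper levels are more specific, as described above.

goal_model = {
    "Increase online sales": [                    # abstract business goal
        {"Improve checkout experience": [
            {"Reduce checkout to 3 steps or fewer": []},
            {"Support guest checkout": []},
        ]},
        {"Improve site performance": [            # quality (nonfunctional) goal
            {"Page load under 2 seconds": []},
        ]},
    ],
}

def leaves(node: dict) -> list:
    """Collect the most specific (leaf) goals, which become requirements."""
    result = []
    for goal, subgoals in node.items():
        if not subgoals:
            result.append(goal)
        for sub in subgoals:
            result.extend(leaves(sub))
    return result

print(leaves(goal_model))
```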


The following figure contains a simple example of a goal model:


Requirements elicitation

Requirements elicitation is the process of identifying and understanding the needs of, and the constraints on, a system so that they can be transformed into manageable requirements that meet stakeholders' expectations [1].

Elicitation is sometimes confused with analysis. Elicitation means interacting with stakeholders to write down their needs, while in analysis those requirements are refined into a more formal specification that meets the stakeholders' expected needs [1].


I will write separate blog posts about the different elicitation techniques. I've used the following source books:
[1] Berenbach, Brian, Daniel J. Paulish, Juergen Kazmeier, and Arnold Rudorfer. "Chapter 3 - Eliciting Requirements". Software & Systems Requirements Engineering: In Practice. McGraw-Hill/Osborne. © 2009.
[2] Weese, Susan, and Terri Wagner. CBAP/CCBA: Certified Business Analysis Study Guide. Sybex. © 2011.
[3] Jonasson, Hans. "Chapter 7 - Ways to Gather Requirements". Determining Project Requirements. Auerbach Publications. © 2008.


I will refer to each source with its number between brackets.

Tuesday, September 18, 2012

Robot tracing (2005)

Adams, J. A. (2005). Human-Robot Interaction Design: Understanding User Needs and Requirements. In Proceedings of the 49th Annual Meeting of the Human Factors and Ergonomics Society, September 2005.


In contrast to the previous paper (Robot), this paper uses robots to help in safety and rescue missions. The author worked in cooperation with the Nashville Metro Police Department's Bomb Squad and the Nashville Metro Fire Department's HAZMAT team. The aim is to let robots do the job instead of humans. Another goal is to advance Human-Robot Interaction (HRI) research by allowing a single human to supervise a large number of robots (up to 100). With current robot designs, a human can supervise only up to 5 robots at a time! The author argues that current robot design needs to change to accommodate that initiative. Later, the paper discusses the initial move toward changing robot design to a Human-Centred Design (HCD).

For their multi-robot design controlled by a single human, they needed to employ Situation Awareness (SA) in the robots' design. SA is a representation of a human's understanding of the current situation around him/her and of its upcoming consequences.

They used GDTA to identify the robot operator's basic goals, major goals, and SA requirements. In their methodology, they started by obtaining the HRI SA requirements, since that is the core part of this work. To do so, they used GDTA and SA principles to help identify and model the UCD and SA requirements. Since they're working in the CBRNE search-and-rescue domain, they started in two directions: (a) interviewing domain experts to gain a generic understanding of the rescue field, and (b) exploring the supporting documents provided to them by the Nashville Metro Bomb Squad. These two directions helped them identify the basic high-level goals and action steps in this field. Their next step is to use the resulting high-level requirements analysis to conduct more focused interviews and observations, in order to strengthen and detail their requirements.

This work is still in its early stages; thus, only preliminary results are discussed in this paper. They provide a goal hierarchy for communication, in addition to a breakdown of the special task force's tasks. The author discusses communication as an essential part of the analysis, indicating that communication can be very different for a controlled robot than for a human. For example, a human does not need to communicate while navigating a building, while a robot needs to do so.

Tracing back Robot paper

Last week, I reviewed a paper named "User, robot and automation evaluations in high-throughput biological screening processes". Now, my mission is to trace this paper back and see the work it was built on. The tracing focuses on the core part of the work and the papers from which it was cited and extended. The core components of this work are Human-Robot Interaction (HRI) and Cognitive Task Analysis (CTA). I started by tracing back the first paper cited and extended in this work. For the second paper, I was able to trace back its name; however, I wasn't able to access it. I requested access from the library and I'm still waiting! In my next post, I will review the first traced paper.

Updated papers list:

  • 2005: Adams, J. A. (2005). Human-Robot Interaction Design: Understanding User Needs and Requirements. In Proceedings of the 49th Annual Meeting of the Human Factors and Ergonomics Society, September 2005.
  • 2003: Endsley, M. R., Bolstad, C. A., Jones, D. G., & Riley, J. M. (2003). Situation awareness oriented design: From user's cognitive requirements to creating effective supporting technologies. In Proceedings of the 47th Annual Meeting of the Human Factors & Ergonomics Society (pp. 268-272).
  • 2003 (book not reviewed but mentioned for my reference): Endsley, M. R., Bolté, B., & Jones, D. G. (2003). Designing for Situation Awareness: An Approach to User-Centered Design. London: Taylor & Francis.

WWW Citation Timeline (2010)

Citation Timeline (2010):

Mohan Raj Rajamanickam, Russell MacKenzie, Billy Lam, and Tao Su. 2010. A task-focused approach to support sharing and interruption recovery in web browsers. In Proceedings of the 28th international conference extended abstracts on Human factors in computing systems (CHI EA '10). ACM, New York, NY, USA, 4345-4350.

In this paper, the authors developed a task-oriented mobile web browser on an iPhone. They rely on a previous study stating that most web users leave the browser without finishing their task. Thus, they argue that when a user returns to the browser, the current techniques, such as history and bookmarks, do not provide good information about what the task was and what was done.
First, they conducted semi-structured interviews with 9 students (aged between 18-29), with different browsing skill sets. They discussed that most of the participants do not use browser's history and bookmarks, but they havely use the autocomplete features, and multiple tabs. Participants often use some sort of annotation or text files to save their task state before leaving computer. They use these techniques to help them when they return back to their tasks.

As a result of their semi-structured interviews, the authors concluded that their prospective browser design should contain the following features: (a) group multiple webpages under one task, (b) enable an easy way to stop any task and to come back and continue it, (c) include some artifacts about each task, such as task history and task bookmarks, (d) let users annotate webpages while performing their tasks, and (e) allow tasks to be shared with other users. With that in mind, they developed a browser prototype called TabFour. Due to their short implementation time, they did not include all of their design requirements in this prototype. To validate their requirements and design, they conducted a study with 8 subjects to check whether the designed prototype is suitable. In their short conclusion, they indicated that the experiment users reported that this browser was useful to them.

In relation to [ref], this paper cited [ref] as a previous study that comes in handy when studying web users. However, for this study they elicited design requirements for an existing problem from browser users, then used these requirements to build a prototype, which led to an experiment similar to the one in [ref] in order to validate their work.

WWW Citation Timeline (2004)


Citation Timeline (2004):

Marco A. Winckler, Philippe Palanque, and Carla M. D. S. Freitas. 2004. Tasks and scenario-based evaluation of information visualization techniques. In Proceedings of the 3rd annual conference on Task models and diagrams (TAMODIA '04). ACM, New York, NY, USA, 165-172.

This paper proposes a new task-based model to evaluate information visualization techniques. It can also be used to compare two techniques and determine which one better supports user tasks. They start by building multiple user scenarios for each task using the ConcurTaskTree (CTT) notation, then apply all scenarios to each of the selected techniques to evaluate whether it provides the needed usability. For example, in their case study, they compared accessing and searching files using two techniques: Treemaps and the Hyperbolic browser. One of the techniques could not support all the scenarios. The following paragraphs explain CTT and how it was used in this work.

ConcurTaskTree (CTT) notation [ref 12] is a task modeling technique which models four types of tasks, as follows:
- Abstract: describes any complex form of actions either by user, system, or both.
- User: describes tasks that are performed only by the user (no system involvement).
- Interactive: describes tasks that include operations by both user and system.
- Application: describes tasks that are entirely performed by the system without user involvement.

CTT is a hierarchical structure where tasks are linked together if there is a relationship between them. The relationships are represented using Language Of Temporal Ordering Specification (LOTOS) operators; examples of such operators are: choice ([]), enabling (>>), and task interruption ([>). [ref 10] provides a tool to edit and write tasks (called CTTE).
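To make the notation concrete, here is a minimal sketch of a CTT-style task tree in Python. The classes, task names, and the `leaves` helper are my own illustration, not part of CTT or the CTTE tool; only the four task types and the LOTOS operator symbols come from the paper.

```python
# A toy CTT-style task tree: each task has one of the four CTT types, and a
# LOTOS operator linking it to its right sibling (">>" enabling, "[]" choice).
from dataclasses import dataclass, field
from typing import List, Optional

TASK_TYPES = {"abstract", "user", "interactive", "application"}

@dataclass
class Task:
    name: str
    kind: str                        # one of TASK_TYPES
    operator: Optional[str] = None   # LOTOS operator toward the next sibling
    children: List["Task"] = field(default_factory=list)

# "Search a file" as an abstract task: the user types a query, which enables
# (">>") the system showing results, with a choice ("[]") to cancel instead.
search = Task("SearchFile", "abstract", children=[
    Task("TypeQuery", "interactive", operator=">>"),
    Task("ShowResults", "application", operator="[]"),
    Task("Cancel", "user"),
])

def leaves(task):
    """Flatten the tree into (name, kind, operator) triples, depth-first."""
    if not task.children:
        return [(task.name, task.kind, task.operator)]
    return [t for c in task.children for t in leaves(c)]

print(leaves(search))
```

A scenario in the paper's sense is then one valid path through such a tree, which can be replayed against each visualization technique.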




WWW Citation Timeline (2002)

Citation Timeline (2002):

Andrei Broder. 2002. A taxonomy of web search. SIGIR Forum 36, 2 (September 2002), 3-10.

The author of this paper discusses information retrieval (IR) in the sense of how users search for information over the web. From an IR-HCI point of view, the author argues that since user tasks play a major role in retrieving information on the web, HCI should be involved in the classic IR model (figure 2). He classified web search task needs into the following three-way taxonomy:

(a) navigational, 
(b) informational, 
and (c) transactional. 

Then he conducted a survey of users of the AltaVista search engine, which was famous and popular when the study was conducted.
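To illustrate the three classes, here is a toy keyword heuristic of my own; it is not Broder's method (he used a survey and log analysis, not an automatic classifier), and the trigger words are arbitrary assumptions.

```python
# A toy illustration of Broder's three-way web search taxonomy.
def classify_query(query: str) -> str:
    q = query.lower()
    # Transactional: the user wants to perform some web-mediated activity.
    if any(w in q for w in ("buy", "download", "order")):
        return "transactional"
    # Navigational: the user wants to reach one particular site.
    if q.startswith("www.") or q.endswith((".com", ".org")):
        return "navigational"
    # Informational: the user wants information assumed to be on some pages.
    return "informational"

print(classify_query("www.altavista.com"))   # navigational
print(classify_query("buy used car"))        # transactional
print(classify_query("history of the web"))  # informational
```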

Although his work was similar to [ref], he didn't actually extend it or use it in his work. Another fact about this paper is that it is very statistical in nature, and surprisingly it has been cited more than 300 times! I think most citations were in the IR field, given that it applied task taxonomies to IR.

WWW Citation Timeline (2001)


Citation Timeline (2001):

Stuart K. Card, Peter Pirolli, Mija Van Der Wege, Julie B. Morrison, Robert W. Reeder, Pamela K. Schraedley, and Jenea Boshart. 2001. Information scent as a driver of Web behavior graphs: results of a protocol analysis method for Web usability. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI '01). ACM, New York, NY, USA, 498-505.

This paper provides a new protocol analysis methodology for studying users on the web and other information-intensive tasks. Using their protocol, they conducted an experiment on WWW users while they were surfing the internet. They used a statistical analysis similar to [ref]. They emphasize, from [ref], that the majority of users' time on the WWW is spent finding information while reading and surfing. Thus, they used previous work on information foraging theory [refs 11 and 16 in the paper] to analyze user tasks for finding information. They used information patches, a representation of the information needed for user tasks, through which the user has to navigate; and information scent, which provides information about the navigation cost and value.
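A simplified reading of the scent idea is that, among available links, the user follows the one whose expected information value most exceeds its navigation cost. The sketch below is my own illustration of that intuition, not the paper's actual model (which is grounded in information foraging theory).

```python
# Toy "information scent" link choice: pick the link maximizing value - cost.
def best_link(links):
    """links: list of (name, expected_value, navigation_cost) tuples."""
    return max(links, key=lambda link: link[1] - link[2])[0]

links = [
    ("Sitemap", 2.0, 1.5),  # modest value, noticeable cost -> weak scent
    ("Search",  5.0, 1.0),  # high value, low cost -> strongest scent
    ("About",   1.0, 0.5),
]
print(best_link(links))  # Search
```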
   
Before conducting their experiment, they expanded the WWW experiment [ref] by generating a WWW task bank containing most of the tasks WWW users would perform while browsing the WWW. Additionally, they used a previous task classification from Georgia Tech [ref 8] to classify their task bank under the following taxonomies: (a) Purpose, (b) Method, and (c) Content.

The experiment consisted of 14 students from Stanford University, with a mean age of 23. Participants were asked to browse the internet as they do in their daily life.


Their protocol consisted of the following: the current URL, a screenshot of the current website, the event time (from the log file), and a transcript of the user's verbal words (they were asked to explain what they were doing while surfing the WWW). Every event has its own code name. Additionally, they combined recorded eye movements and mouse movements for each task with their recorded data.
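One coded event in such a protocol might be represented as a record like the one below. This is my sketch of the fields listed above; the class and field names are assumptions, not the paper's actual schema.

```python
# A possible record shape for one coded protocol event (illustrative only).
from dataclasses import dataclass

@dataclass
class ProtocolEvent:
    code_name: str      # every event has its own code name
    url: str            # current URL
    screenshot: str     # path to the screenshot of the current page
    event_time: float   # timestamp taken from the log file
    transcript: str     # user's verbal report of what they are doing
    eye_pos: tuple      # recorded eye-movement sample (x, y)
    mouse_pos: tuple    # recorded mouse position (x, y)

e = ProtocolEvent("FollowLink", "http://example.com", "shot_042.png",
                  12.7, "clicking the news link", (310, 220), (305, 230))
print(e.code_name)  # FollowLink
```

A session is then just a time-ordered list of such events, which makes per-task statistical analysis straightforward.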


(Discussion item)

WWW Citation Timeline (2000)

Citation Timeline (2000):

Melody Y. Ivory. 2000. Web TANGO: towards automated comparison of information-centric web site designs. In CHI '00 extended abstracts on Human factors in computing systems (CHI EA '00). ACM, New York, NY, USA, 329-330.

This paper discusses the usability of websites. The author argues that design recommendations and guidelines for websites are not enough; hence the need for usability studies. They focus their usability study on enhancing the information architecture. They propose a new automated methodology and tool called TANGO (Tool for Assessing Navigation and Organization) to help organize information flow in websites by supporting information-centric web site designs.

Web TANGO employs typical information retrieval techniques along with Monte Carlo simulation to simulate user behavior on a website. Web TANGO uses the taxonomy model (from the WWW paper) as its underlying model. The web page designer should provide the following information to the tool: page metadata, page complexity, links within the site, and links out of the website. Additionally, the designer should provide information about user information tasks, along with some other data. After the designer fills out this information, the tool simulates the user's navigational behavior.
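The simulation idea can be sketched as a weighted random walk over the site's link graph: repeat many trials and count how often (and in how many clicks) a simulated user reaches the task's target page. This is my own minimal illustration of Monte Carlo navigation simulation, not Web TANGO's actual model; the site graph and weights are made up.

```python
# Toy Monte Carlo simulation of user navigation over a site's link graph.
import random

# Designer-provided structure: page -> outgoing links with follow probabilities.
site = {
    "home":     [("products", 0.6), ("about", 0.4)],
    "products": [("checkout", 0.5), ("home", 0.5)],
    "about":    [("home", 1.0)],
    "checkout": [],  # task target: the simulated user stops here
}

def simulate_visit(start="home", target="checkout", max_clicks=20, rng=random):
    """Return the click count for one simulated visit, or None if it fails."""
    page, clicks = start, 0
    while page != target and clicks < max_clicks and site[page]:
        pages, weights = zip(*site[page])
        page = rng.choices(pages, weights=weights)[0]
        clicks += 1
    return clicks if page == target else None

rng = random.Random(0)                       # fixed seed for repeatability
runs = [simulate_visit(rng=rng) for _ in range(1000)]
reached = [c for c in runs if c is not None]
print(f"reached target in {len(reached)}/1000 simulated visits")
```

Aggregating such runs yields the kind of navigability estimate the tool reports; my question above is whether those estimates match real users.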

Although the author stated that this is still an ongoing project, I was wondering about the use of Monte Carlo simulation to simulate users and how accurately it reflects actual users. I haven't seen any comments from the author about this issue, or an intentional attempt to validate the methodology by comparing it with real users through a controlled experiment.

Andrew Sears and Julie A. Jacko. 2000. Understanding the relation between network quality of service and the usability of distributed multimedia documents. Hum.-Comput. Interact. 15, 1 (March 2000), 43-68.

This paper is a study of the usability of distributed multimedia over the internet. The authors focus on the problem of network delays when retrieving multimedia in a distributed environment.
In relation to the (ref) work, they used its taxonomy to capture user activities in their experiment. Additionally, they added two activities: providing information and seeking information. Note that sometimes a user can have both activities; for example, in an online payment, providing payment info and seeking confirmation.

Michael J. Albers and Loel Kim. 2000. User web browsing characteristics using palm handhelds for information retrieval. In Proceedings of IEEE professional communication society international professional communication conference and Proceedings of the 18th annual ACM international conference on Computer documentation: technology \& teamwork (IPCC/SIGDOC '00).

This paper discusses web page interfaces for personal digital assistants (PDAs) like the Palm device. The authors discuss the limitations of web presentation and retrieval on these devices, then present the existing guidelines for web page design, and finally discuss differences between web interface design for desktops and handheld devices. This paper only cited our (ref) to emphasize the existence of task-driven approaches! So, it did not build on it, but rather indicated the existence of such techniques.

Timelines for WWW paper

I reviewed earlier a web task analysis paper called "The tangled Web we wove: a taskonomy of WWW use", which was published in 1999. My plan was to check the later work which referred to this paper and built on it. I found 51 papers which cited this paper. Due to the large number of citations, I selected only some papers (mostly from different years) and reviewed only the ones I believe actually built on this work.

Throughout the next posts, I will state the paper name(s) and year, and then discuss how they relate to our source paper. For some of the related papers, I will provide a full review. Please note that I will use [ref] or (ref) as a reference to the WWW paper.


Update:
Here is a list of the related 'timeline trace' posts for this paper:



Tuesday, September 11, 2012

Chapter 5

Chapter 5: Making the business case for site visits


The main reason behind visiting the user site is to verify that our assumptions about the users are correct. Throughout this chapter, the authors explain the resistance that you (as a designer) should expect to face within your company when you propose visiting the customer site to learn more about them.

Common objections to task and task analysis:

We’re changing the process; why bother checking the current process?
Even though you’re changing the process, you still want to learn about the environment and the current workflow and tasks. One option for the new process is to adopt the current workflow and tasks that users are already familiar with. This provides a good transition for users to the new process, rather than introducing a new process (or system) which is incompatible with the environment.
This is totally new; nothing is out there to go and see!
Even if you’re proposing a new design, you can still learn from the older design it replaces. For example, if you’re designing a fax machine, you still want to learn about how people currently send messages.
Users all do it differently; how would you know who to watch?
In all cases, variations between users are expected: cultural variations, shortcuts, workarounds, etc.
We’re just changing one part; you don’t need to go beyond that!
Even if the change is to one small part, you still want to study the other parts, as they affect the whole user experience. Also, you can learn from other tasks that interact with the tasks affected by your new change (or part).
What can we learn from few users?
The recommendation is to study a small number of users, so a small-scale user set is totally acceptable and beneficial.
Why not use the information we already have?
Usually we hear such a question from market researchers and business employees. Of course, we still need their data and studies; however, their study questions were focused on serving their own purpose and do not answer what we really need.