# Introduction

Usability testing is a necessary process in human-computer interface design. It systematically collects usability data about an interface and uses those data for assessment and improvement. Designers can enhance usability by testing and improving an existing interface; they can also evaluate the usability of an existing interface, borrow its strengths, remedy its shortcomings, and apply the lessons in a new design. In this way the interface design can achieve its usability goals more effectively, reduce users' learning time, and improve efficiency and satisfaction in use. Usability testing also helps designers highlight the interface characteristics of the product, reduce development and support costs, and boost market competitiveness [1].

One of the factors that affect the acceptability of software is its usability. Smith & Mayes [2] state that "usability is now recognized as a vital determining factor in the success of any new computer system or computer-based service." The human-computer interface, also called the user interface, is the medium of communication between human and computer, the platform through which information and feedback flow, and the means by which the two interact. A well-designed user interface makes communication more effective and easier and gives users less misleading guidance. Because a user interface must meet the varied needs of different kinds of users, usability research in interface design has become particularly important.

As a basic and important term in interaction design, usability is an overall rating of how usable a system is in human-computer interaction, and it is what guarantees that the interaction can actually take place. It is also a quality attribute, judged from the users' point of view, of whether a product is effective, easy to learn, safe, efficient, easy to remember, and forgiving of mistakes. Beyond this, usability must also take users' expectations and experience into account, ideally offering users something distinctive and unexpected [4]. The primary goal of usability is to develop products that maximize users' ease of use. The International Organization for Standardization, in ISO 9241-11 (Guidance on Usability), defines usability as "[t]he extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use." Jakob Nielsen, in his online column of August 2003, further characterized usability by five quality components, listed in Section II.

The first problem to be solved in the usability design of a human-computer interface is user cognition. Users must first know and understand the interface before they can use it, and how they come to know it depends on how the interface expresses its functions to them. Designers should express those functions with as few formats and actions as possible and design the interface deliberately around the user's goals. Users must clearly understand what the input language requires, which calls for concise ways of invoking functions, and what the output language expresses, which calls for understandable and appropriate feedback channels [5].

This paper first gives an introduction to usability and then discusses usability testing in detail. Various usability testing methods are examined in order to investigate the usability of human-computer interaction interfaces.
Evaluating these methods, and identifying their strengths and weaknesses, are the objectives of this research.

# II. Usability

Human-Computer Interaction (HCI) is the area in which usability emerged, and several books and papers about HCI present a definition or characterization of it. For instance, Hix & Hartson [6] consider usability to be related to the efficacy and efficiency of the interface and to the user's reaction to it. Nielsen [7] [8] treats usability as one of the parameters associated with system acceptability and attaches five attributes to it: easy to learn, efficient to use, easy to remember, few errors (the prevention of catastrophic errors being especially relevant for applications such as process control or medical systems), and pleasant to use. Shackel [9] refers to four aspects of interest in usability testing: learnability (ease of learning), throughput, flexibility, and attitude. Rubin [10] accepts that usability includes one or more of the four factors outlined by Booth [11]: usefulness, effectiveness (ease of use), learnability, and attitude (likeability). For Smith and Mayes [2], usability focuses on three aspects: ease of learning, ease of use, and user satisfaction in using the system.

In international standards, usability refers to the effectiveness and efficiency with which specified goals are achieved, together with user satisfaction: "Usability: the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (ISO/DIS 9241-11; European Usability Support Centres). Based on these views of usability, we may conclude that there are two broad areas in which to collect relevant data: system and user performance (efficacy, efficiency, ease of learning and ease of use) and user satisfaction in using the system.

Jakob Nielsen, in his online column of August 2003, further defined usability by five quality components:

1. Learnability: How easy is it for users to complete basic tasks the first time they use the system?
2. Efficiency: How quickly can a user who is familiar with the system perform tasks?
3. Memorability: How easily can a returning user re-establish proficiency with the system?
4. Errors: How many errors does a user make while using the system? How severe are those errors, and how easy or difficult is it to recover from them?
5. Satisfaction: How pleasant is the product to use?

# III. Usability Test

Usability testing, the process by which products are tested by those who will use them, is intended to help product developers, including information product developers, create, modify, or improve products so that they better meet the needs of actual or intended users and become user-friendly [12]. According to Dumas & Redish [13], authors of A Practical Guide to Usability Testing, usability testing helps product developers determine whether "the people who use the product can do so quickly and easily to accomplish their own tasks". Usability tests identify areas where people struggle with a product and help you make recommendations for improvement.
The goal is to better understand how real users interact with the product and to improve the product based on the results; the primary purpose of a usability test is to improve a design. In a typical usability test, real users try to accomplish typical goals, or tasks, with a product under controlled conditions while researchers, stakeholders, and development team members watch, listen, collect data, and take notes. Because usability testing employs real customers accomplishing real tasks, it can provide objective performance data such as time on task, error rate, and task success. There is also no substitute for watching users struggle with, or succeed at, a task when using a product: this observation helps designers and developers gain empathy with users and think of alternative designs that better support users' tasks and workflow [14].

Usability evaluations (UE) consist of methodologies for measuring the usability aspects of a system's user interface (UI) and identifying specific problems. They are an important part of the overall user interface design process, which consists of iterative cycles of designing, prototyping, and evaluating. According to Preece [15], evaluation is concerned with gathering data about the usability of a design or product by a specified group of users for a particular activity within a specified environment or work context. Ivory and Hearst [17] suggested that the main activities involved in an evaluation are:

* Capture: collecting usability data, such as task completion time, errors, guideline violations and subjective ratings;
* Analysis: interpreting the usability data to identify usability problems in the interface;
* Critique: suggesting solutions or improvements to mitigate the problems.
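To make the capture step concrete, the following is a minimal sketch, in Python, of how a session logger might record the kind of data Ivory and Hearst list: task completion time, errors, and a subjective rating per task. The names (`UsabilityLogger`, `TaskRecord`) and the example task are hypothetical illustrations, not part of any tool discussed in this paper.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    """One participant's attempt at one task."""
    task: str
    start: float = 0.0
    seconds: float = 0.0        # task completion time
    errors: list = field(default_factory=list)  # error descriptions
    rating: int = 0             # subjective rating, e.g. 1 (poor) to 5 (good)
    completed: bool = False

class UsabilityLogger:
    """Captures raw usability data during a moderated session."""
    def __init__(self):
        self.records = []

    def start_task(self, task):
        rec = TaskRecord(task=task, start=time.monotonic())
        self.records.append(rec)
        return rec

    def log_error(self, rec, description):
        # e.g. a guideline violation or a wrong navigation choice
        rec.errors.append(description)

    def end_task(self, rec, completed, rating):
        rec.seconds = time.monotonic() - rec.start
        rec.completed = completed
        rec.rating = rating

# Example session with a single hypothetical task
log = UsabilityLogger()
rec = log.start_task("Find the refund policy page")
log.log_error(rec, "opened 'Contact us' instead of 'Help'")
log.end_task(rec, completed=True, rating=4)
print(f"{rec.task}: {rec.seconds:.1f}s, {len(rec.errors)} error(s), rating {rec.rating}")
```

Records captured this way feed directly into the analysis step, where they are aggregated across participants to locate usability problems.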
# IV. Usability Test Methods

In this section we present the methods that have been proposed for usability testing. A comparison and critical discussion of the proposed methods is given in Section VI.

# a) Heuristic evaluation Method

Heuristic evaluation is an informal system inspection method in which a small group of evaluators is presented with an interface design and asked to judge whether each of its elements follows a set of established usability principles [18]. The method is intended to be a "discount usability engineering" method [18] that provides a way to do a usability evaluation more quickly and at lower cost. Because of its "discount" nature, heuristic evaluation was found to be the most commonly used UEM in a survey of practitioners [19]. Heuristic evaluation can be performed by experts and non-experts alike, but it is difficult to do with a single evaluator, since it is nearly impossible for one person to find all usability problems. It has been shown that multiple evaluators each find different usability problems, so the effectiveness of the method can be improved by using a group of evaluators. Usually 4 or 5 evaluators are able to report around 70% of the usability problems, and additional evaluators beyond that find few further problems [20] [18].
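This diminishing return is often illustrated with the problem-discovery model proposed by Nielsen and Landauer, in which the share of problems found by n independent evaluators is estimated as 1 - (1 - λ)^n, where λ is the probability that any single evaluator finds a given problem. A small sketch, assuming Nielsen's frequently cited average of λ ≈ 0.31 (real projects vary considerably):

```python
def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Nielsen-Landauer estimate of the share of usability problems
    found by n independent evaluators, each of whom finds a given
    problem with probability lam."""
    return 1.0 - (1.0 - lam) ** n_evaluators

for n in range(1, 11):
    print(f"{n:2d} evaluators -> {proportion_found(n):5.1%} of problems")
# With lam = 0.31, 4-5 evaluators land in roughly the 70-85% range,
# and each additional evaluator contributes progressively less.
```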
The main advantage of heuristic evaluation is that it can be done in a short period of time with limited resources. The method is also very flexible and does not require advance planning; it can be carried out as soon as a group of evaluators is assembled and there is a product or a prototype to evaluate. Heuristic evaluation has also proved highly effective at finding usability problems [21] [22]. However, there are also several drawbacks. Its effectiveness depends largely on the evaluators' skill and experience: although non-experts are able to perform the evaluation, they are unlikely to find as many usability problems as experts, and a weak evaluator is more likely to miss problems that a better evaluator would catch, lowering the aggregated count of problems found [18]. The flexibility given to the evaluators, who may inspect the system any way they want, also means a lack of support and structure in the inspection process [23], and when the evaluators are not well informed about the product domain the inspection may be less effective.

# b) Cognitive walkthrough Method

Cognitive walkthrough [24] [25] [26] is a theoretically structured usability evaluation process that focuses on a user's cognitive activities, especially while performing a task. It can be carried out by individuals or groups, by software developers or usability specialists, and on finished products or paper prototypes. Based on a theory of exploratory learning and corresponding interface design guidelines, cognitive walkthrough is a task-based methodology that centers an evaluator's attention on the user's goals and actions during a task, and on whether the system design supports or hinders the effective accomplishment of those goals. It is also a form-based methodology that relies on a set of forms to guide the evaluation process. The theory behind the method describes human-computer interaction in four steps: the user sets a goal to be accomplished with the system, the user searches the interface for action options, the user selects the action that seems to make progress towards the goal, and finally the user performs the action and evaluates the system's feedback [27].

Cognitive walkthrough has been shown to be an effective UEM [24]. It also offers a way to evaluate a system early in development at relatively low cost. But the details of the procedure create difficulties in its execution. The walkthrough methodology presupposes knowledge of cognitive science terms, concepts, and skills on the part of the evaluators [25]; a lack of familiarity with the terminology on the forms, such as the definitions of goal and action, can lead to misunderstandings and affect the outcome. At least one evaluator needs to be familiar with the concepts of the cognitive walkthrough theory, and with the cognitive science terminology used during the process, for the walkthrough to be effective. Lewis et al. [24] conducted a cognitive walkthrough with four evaluators, three of whom had a deep understanding of the core principles of the theory. Throughout the walkthrough there was a high level of agreement among these three evaluators, but less with the fourth, and the fourth evaluator also found fewer errors than the other evaluators [27].

# c) Scenario-based Method

A scenario-based method works from descriptions of people using technology, which are essential for discussing and analyzing how the technology is, or could be, used to reshape their activities. A scenario describes a sequence of events in interacting with a system from the users' perspective, and scenario descriptions can be created before a system is built and its impacts felt. Scenarios are similar to use cases, which describe interactions at a technical level, but scenarios can be understood by anyone regardless of their technical knowledge. They are especially useful when the focus needs to be taken off the technology in order to consider other design possibilities, because scenarios are phrased in terms of tasks rather than the technology used to support them. For example, "User enters his PIN" is a poor scenario step because it mentions the technology used, whereas "User identifies himself" is acceptable because it keeps other alternatives open [28].

# d) Remote Testing Usability Method

Most of the time, usability evaluations are conducted in a usability laboratory. Recruited participants are invited to come to the test facilities, which consist of a test room, where the participants accomplish specific tasks, an observation room, and a recording room. A usability laboratory may contain complex and sophisticated audio/visual recording and analysis facilities, and test sessions in this setting are conducted individually. Although this arrangement has advantages, it also has drawbacks, as we will see.

Remote usability evaluation refers to a situation in which the evaluators and the test participants are not in the same room or location. Two approaches to remote usability evaluation have been developed, synchronous and asynchronous, each using specific tools. In the synchronous approach, a facilitator and the evaluators collect the data and manage the evaluation session in real time with a participant who is remote (the participant may be at home, at work or in another room). The evaluation may require video conferencing applications or remote application-sharing tools that share computer screens, allowing the evaluator to see what is happening on the user's screen. In contrast, with asynchronous methods, observers do not have access to the data in real time, and no facilitator interacts with the user during data collection. Asynchronous methods also include automated approaches whereby users' click streams are collected automatically (e.g., WebQuilt). The key advantage this technique offers is that many more test users can participate, in parallel, with little or no incremental cost per participant.

Different strategies have been proposed for conducting these asynchronous tests. One strategy is to ask test participants to download and use an instrumented browser that captures the users' click streams as well as screen shots and transmits those data to the evaluator's host site for analysis (an example of this kind of browser is ErgoBrowser, http://www.ergolabs.com/resources.htm). Another approach consists in using a proxy: the test participants are invited to go to a specific website and to follow instructions, and they are then brought to the website under evaluation. The users' behaviors are captured, aggregated and visualized to show the web pages people explored. The visualization also shows the most common paths taken through the website for a given task, as well as the optimal path for that task as implemented by the designer [29]. Examples of this kind of approach are WebQuilt [30] and the work by Atterer, Wnuk and Schmidt [31].
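To illustrate the aggregation step that tools of this kind perform, the sketch below counts the most frequent click paths across participants for one task and compares them with the designer's intended path. The log format, the page names and the `designer_path` variable are invented for the example; real tools such as WebQuilt define their own capture formats.

```python
from collections import Counter

# Each click stream is the ordered list of pages one remote
# participant visited while attempting a given task.
click_streams = [
    ["home", "products", "cart", "checkout"],
    ["home", "search", "products", "cart", "checkout"],
    ["home", "products", "cart", "checkout"],
    ["home", "help"],  # abandoned attempt
]

designer_path = ["home", "products", "cart", "checkout"]  # optimal path

# Aggregate identical paths and rank them by frequency.
paths = Counter(tuple(s) for s in click_streams)
print("Most common paths for this task:")
for path, count in paths.most_common():
    marker = "  <- optimal" if list(path) == designer_path else ""
    print(f"  {count}x  {' -> '.join(path)}{marker}")

# Per-page visit counts hint at where users detour from the optimal path.
page_visits = Counter(page for s in click_streams for page in s)
print("Page visits:", dict(page_visits))
```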
The asynchronous approach does not allow for observational data or for recordings of spontaneous verbalizations during the remote test sessions; qualitative data can only be gathered through post-test questionnaires or self-report forms. However, as noted above, the asynchronous approach allows large groups of users to be recorded. The synchronous approach is favored by some authors [32] because it is analogous to laboratory testing and because it allows the capture of qualitative data. In comparison with the laboratory user test, synchronous remote testing is cost effective, especially with respect to travel expenses when participants are recruited in different regions of a country, although the costs associated with this approach may in some cases be quite similar to those of laboratory testing (for recruitment, for instance). Two other reasons for preferring the remote synchronous approach over traditional user testing are freedom from facilities (especially when the product or software can be distributed electronically, or when testing a website) and time savings. However, synchronous remote testing can be perceived as more intrusive than traditional laboratory user testing [29].

# e) User-based Testing Method

User-based evaluations are usability evaluation methods in which users participate directly. Users are invited to do typical tasks with a product, or simply asked to explore it freely, while their behaviors are observed and recorded in order to identify design flaws that cause user errors or difficulties. During these observations, the time required to complete a task, task completion rates, and the number and types of errors are recorded. Once design flaws have been identified, design recommendations are proposed to improve the ergonomic quality of the product [29]. User testing is centered on the feedback of users interacting with a particular interface and is "usually conducted in a scenario-based environment" [33]. It is good at "assessing the system in action, at identifying problems users experience while performing real tasks" [34]; internal issues can be detected quickly, and potential problems can be fixed before the product ever reaches the market. On the other hand, user testing is not fully representative of the target population: the method is qualitative and therefore does not provide large samples of feedback. User testing nevertheless reveals more detailed problems of the interface, because it requires users to exercise the system at the task level, and although it identifies fewer problems overall, most of them relate directly to the true performance and/or user acceptance of the interface. In addition, user testing is generally assumed to be time consuming [35].
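The performance measures just mentioned reduce to simple aggregation once sessions are recorded. A minimal sketch, using invented session data (completion flag, time on task in seconds, and error count per participant):

```python
from statistics import mean

# One record per participant for the same task:
# (completed?, seconds on task, number of errors)
sessions = [
    (True, 74.2, 1),
    (True, 51.0, 0),
    (False, 180.0, 4),  # gave up after repeated errors
    (True, 66.5, 2),
    (True, 90.3, 1),
]

completion_rate = sum(done for done, _, _ in sessions) / len(sessions)
# Time on task is conventionally reported for successful attempts only.
times_success = [t for done, t, _ in sessions if done]
mean_errors = mean(errs for _, _, errs in sessions)

print(f"Task success: {completion_rate:.0%}")
print(f"Mean time on task (successful attempts): {mean(times_success):.1f}s")
print(f"Mean errors per participant: {mean_errors:.1f}")
```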
# f) Focus group method

A focus group is a meeting of about six to nine users in which they discuss issues relating to the system. The evaluator plays the role of the moderator, asking about pre-determined issues and gathering the needed information from the discussion. This is valuable for improving the usability of future releases, and the technique is widely used to study human-computer interaction and human factors [36]. A traditional focus group is run by inviting a small group of end users to talk about a product. The discussion is presided over by an experienced moderator and held in a room with a one-way observation mirror. The moderator takes notes on what happens, leads the conversation into interesting tangents, encourages comments, prevents the discussion from being dominated by a few of the participants, and all the while avoids influencing the session's outcome. Some practitioners believe that with good planning, proper guidelines and a good moderator, focus groups can gather valuable usability data: though not suited to comparative, competitive, or benchmarking studies, focus groups can be used to generate ideas, capture and validate user roles as well as tasks and workflows, and validate high-level strategy. However, there are also some major drawbacks that have led many practitioners to question the method's validity in gathering useful user data [27]. Rauch [37] stated that "… the quality of the data obtained from usability focus groups is only as good as the quality of the participant selection and the questions asked."

# g) Contextual inquiry method

Raven and Flanders [38] define contextual inquiry as "a qualitative data-gathering and data-analysis methodology adapted from the fields of psychology, anthropology, and sociology." It is a field research method in which usability evaluators go to the users' workplaces, observe them at work, and ask questions about the work content, process, or product usage. Several evaluators may observe different users at the same time; the data are gathered, compared and shared among product development team members after the observation [27]. Contextual inquiry gives product designers an understanding of user work and usability, and it further suggests generic principles of usability and work concepts that might become the initial framework of new products [39]. It is a structured field interviewing method based on three core principles: 1) understanding the context in which a product is used (the work being performed) is essential for elegant design; 2) the user is a partner in the design process; and 3) the usability design processes, including assessment methods like contextual inquiry and usability testing, must have a focus. Contextual inquiry may take hours, months or even years to complete; it is a significant time investment to ask for, and it is best used in the early stages of development to help develop product design guidelines [40].

# h) Model-based evaluation method

Model-based evaluation methods can predict measures such as the time to complete a task or the difficulty of learning to use an interface. Some models have the potential advantage that they can be used without any prototype having been developed. Models and simulations are used for evaluation when models can be constructed economically and user testing is not practical. However, setting up a model currently requires considerable effort, so model-based methods are cost effective in situations where other methods are impracticable, or where the information provided by the model is a cost-effective means of managing particular risks [41].
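One well-known member of this family, used here purely to illustrate how a model-based prediction works, is the Keystroke-Level Model (KLM) of Card, Moran and Newell, which predicts expert task time by summing standard operator times. The operator values and the example operator sequence below are commonly cited approximations, not figures taken from the methods surveyed in this paper.

```python
# Commonly cited KLM operator times in seconds (values vary by source
# and by user skill; these are illustrative averages).
OPERATORS = {
    "K": 0.28,  # press a key (average skilled user)
    "B": 0.10,  # press or release a mouse button
    "P": 1.10,  # point with the mouse to a target on screen
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_time(sequence: str) -> float:
    """Predicted expert execution time for a sequence of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: click a text field, type a 6-character ID, click "Search".
# H: hand to mouse; M P B B: decide, point, click the field;
# H: hand back to keyboard; 6 x K: type; H: to mouse; M P B B: click button.
sequence = "H" + "MPBB" + "H" + "K" * 6 + "H" + "MPBB"
print(f"Predicted time: {klm_time(sequence):.2f} s")
```

Because such a model needs no running prototype, an estimate like this can be produced from a paper design alone, which is precisely the situation in which model-based evaluation is most attractive.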
# V. Evaluation Criteria for Usability Testing Methods

The evaluation criteria for usability testing methods are described in this section. The criteria listed below are the most common criteria discussed in articles and research, considering all aspects of a usability test.

High Velocity: the time it takes to complete a test task.

Low Cost: the costs required for testing (building and maintaining a laboratory, equipment, the cost of users, and the costs of the location and of the time that employees spend in meetings).

Flexibility: the ability of the method to handle, and adapt to changes in, restrictions on the use of a particular tool or framework.

Resource Requirements: in usability test terminology, resources are whatever is required to carry out the test tasks. They can be people, equipment, facilities, funding, or anything else capable of definition that is required to complete the test activities.

How Many to Test: the number of participants who work with the product. Each test method requires different numbers of users, managers, observers, evaluators or scenarios, and the exact number of people required to perform each test is still not completely understood.

Test Type: the two main approaches to assessing the usability of a system are experimental and analytical. The experimental procedure consists of testing the system with users, while the analytical approach evaluates the system using established theories and methods.

Impact of evaluators' experience on test results: in some usability testing methods, groupthink, the experience and expertise of the evaluators, and the views of observers and other people involved in the testing process affect the test results.

Level of found problems: a usability problem is an aspect of the system, and/or a demand on the user, that makes it unpleasant, inefficient, onerous or impossible for the user to achieve their goals in typical usage situations. In this paper usability problems are categorized into two levels: major and minor.

Method purpose: this parameter specifies the basic building blocks of the discussed methods and identifies the evaluation requirements of the discussed usability test methods.

# VI. Evaluation and Discussion

All the methods discussed under the category of usability testing methods are presented chronologically in Table 1. Each method has been evaluated against the evaluation criteria discussed in Section V.

# VII. Conclusion

The usability design of the human-computer interface determines the market prospects of a product. Designers should be guided by natural, human-centered ideas and should optimize the use and operation of the interface by drawing on many different fields, such as design, ergonomics, cognitive psychology, linguistics and semiotics, to ultimately achieve the goal of improving the usability of products. Usability evaluation occupies a central place in software development, based on the results extracted from quantitative and qualitative evaluations. This paper introduced and compared some of the methods for conducting usability testing that are most widely used on human-computer interaction user interfaces. Standard evaluation criteria related to usability were addressed, based on previous research. From the data collected, it was found that each method has unique advantages and limitations, and according to the research investigated in this paper, none of these methods is superior to the others.
In fact, the degree to which each of the usability testing methods identifies problems in a system depends on a number of factors and on the level of complexity involved.

Table 1: Comparison of usability testing methods against the evaluation criteria

| Method | High Velocity | Low Cost | Flexibility | Resource Requirements | How Many to Test | Test Type | Impact of evaluators' experience on test results | Level of found problems | Method purpose |
|---|---|---|---|---|---|---|---|---|---|
| Heuristic evaluation method | Yes | Yes | Yes | Low | 3-5 evaluators | Experimental | The more experienced the evaluators, the more and better problems they find | Major | Provide expert feedback on user interfaces |
| Cognitive walkthrough method | No | Yes | No | Medium | 4 evaluators | Analytical | If the evaluators are not familiar with the specific concepts and principles of the method, the test is not conducted well | Minor | Check structure and current flow of user goals |
| Scenario-based method | Yes | Yes | No | Medium | 3-4 scenario descriptions | Analytical | - | Minor | Requirements and conceptual design support |
| Remote testing usability method | Yes | Yes | Yes | Medium | - | Experimental | - | - | - |
| User-based testing method | No | No | Yes | High | 8 users | Experimental | - | Minor | Measuring usability and interaction problems |
| Focus group method | Yes | Yes | Yes | Low | 1 manager, 4-6 users | Experimental | Sometimes groupthink prevents proper testing | Minor | Extraction of requirements / user views through discussion |
| Contextual inquiry method | No | No | Yes | Medium | - | Experimental | High | Minor | Provide information about the user's field |
| Model-based evaluation method | Yes | No | No | - | - | Analytical | - | - | Find learning problems in using the interface |

# References

* Gong Chao, "Human-computer interaction: The usability test methods and design principles in the human-computer interface design," 2nd IEEE International Conference, Aug. 2009, p. 285.
* C. Smith and T. Mayes, Telematics Applications for Education and Training: Usability Guide, Commission of the European Communities, DGXIII Project, 1996.
* Gong Chao, "Human-computer interaction: The usability test methods and design principles in the human-computer interface design," 2nd IEEE International Conference, 8-11 Aug. 2009, pp. 283-285.
* Nicu Sebe, Computer Vision in Human-Computer Interaction, Springer, Germany, 2005.
* D. Hix and H. R. Hartson, Developing User Interfaces: Ensuring Usability through Product and Process, John Wiley & Sons, New York, 1993.
* J. Nielsen, Usability Engineering, Academic Press, 1993.
* J. Nielsen, Multimedia and Hypertext: The Internet and Beyond, AP Professional, Boston, 1995.
* B. Shackel, "Human factors and usability," in J. Preece and L. Keller (eds.), Human-Computer Interaction: Selected Readings, Prentice Hall, London, 1990.
* J. Rubin, Handbook of Usability Testing, John Wiley and Sons, New York, 1994.
* P. Booth, An Introduction to Human-Computer Interaction, Lawrence Erlbaum Associates, London, 1989.
* Usability Testing: Developing Useful and Usable Products, Master of Technical and Scientific Communication Program, Miami University of Ohio, October 2002.
* J. S. Dumas and J. C. Redish, A Practical Guide to Usability Testing, Ablex Publishing Company, Norwood, NJ, 1993.
* TechSmith, Usability Testing Basics: An Overview, 2009.
* J. Preece, A Guide to Usability: Human Factors in Computing, Addison-Wesley, 1993.
* M. Y. Ivory and M. A. Hearst, "The State of the Art in Automating Usability Evaluation of User Interfaces," December 2001.
* J. Nielsen, "Finding usability problems through heuristic evaluation," in P. Bauersfeld, J. Bennett and G. Lynch (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), Monterey, California, May 3-7, 1992, ACM Press, New York, NY. Retrieved on 9/30/05 from ACM Portal.
* S. Rosenbaum, J. A. Rohn and J. Humburg, "A toolkit for strategic usability: results from workshops, panels, and surveys," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '00), The Hague, The Netherlands, April 1-6, 2000, ACM Press, New York, NY. Retrieved on 9/30/05 from ACM Portal.
* J. Nielsen and R. Molich, "Heuristic evaluation of user interfaces," in J. C. Chew and J. Whiteside (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People (CHI '90), Seattle, Washington, April 1-5, 1990, ACM Press, New York, NY. Retrieved on 10/10/05 from ACM Portal.
* R. Jeffries, J. R. Miller, C. Wharton and K. Uyeda, "User interface evaluation in the real world: a comparison of four techniques," in S. P. Robertson, G. M. Olson and J. S. Olson (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Reaching Through Technology (CHI '91), New Orleans, Louisiana, April 27 - May 2, 1991, ACM Press, New York, NY. Retrieved on 9/30/05 from ACM Portal.
* L. Kantner and S. Rosenbaum, "Usability studies of WWW sites: heuristic evaluation vs. laboratory testing," Proceedings of the 15th Annual International Conference on Computer Documentation (SIGDOC '97), Salt Lake City, Utah, October 19-22, 1997, ACM Press, New York, NY. Retrieved on 9/30/05 from ACM Portal.
* E. L. Law and E. T. Hvannberg, "Analysis of strategies for improving and estimating the effectiveness of heuristic evaluation," Proceedings of the Third Nordic Conference on Human-Computer Interaction (NordiCHI '04), Tampere, Finland, October 23-27, 2004, ACM Press, New York, NY. Retrieved on 10/15/05 from ACM Portal.
* C. Lewis, P. Polson, C. Wharton and J. Rieman, "Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces," in J. C. Chew and J. Whiteside (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People (CHI '90), Seattle, Washington, April 1-5, 1990, ACM Press, New York, NY. Retrieved from ACM Portal.
* C. Wharton, J. Bradford, R. Jeffries and M. Franzke, "Applying cognitive walkthroughs to more complex user interfaces: experiences, issues, and recommendations," in P. Bauersfeld, J. Bennett and G. Lynch (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '92), Monterey, California, May 3-7, 1992, ACM Press, New York, NY. Retrieved on 9/30/05 from ACM Portal.
* J. Rieman, M. Franzke and D. Redmiles, "Usability evaluation with the cognitive walkthrough," in I. Katz, R. Mack and L. Marks (eds.), Conference Companion on Human Factors in Computing Systems (CHI '95), ACM Press, New York, NY, 1995. Retrieved on 9/30/05 from ACM Portal.
* Peishan Tsai, "A survey of empirical usability evaluation methods," GSLIS Independent Study, 2006.
* M. B. Rosson and J. M. Carroll, "Scenario-based design," in J. A. Jacko and A. Sears (eds.), The Human-Computer Interaction Handbook, Hillsdale, NJ, USA, 2002.
* J. M. Christian Bastien, "Usability testing: some current practices and research questions," 2010.
* J. I. Hong, J. Heer, S. Waterson and J. A. Landay, "WebQuilt: a proxy-based approach to remote Web usability testing," ACM Transactions on Information Systems, 19(3), 2001.
* R. Atterer, M. Wnuk and A. Schmidt, "Knowing the user's every move: user activity tracking for Website usability evaluation and implicit interaction," Proceedings of the 15th International Conference on World Wide Web, Edinburgh, Scotland, May 23-26, 2006, ACM, New York, NY.
* S. Dray and D. Siegel, "Remote possibilities? International usability testing at a distance," Interactions, March-April 2004.
* W. Tan, D. Liu and R. Bishu, "Web evaluation: heuristic evaluation vs. user testing," International Journal of Industrial Ergonomics, 39, 2009.
* Doubleday, "A comparison of usability techniques for evaluating design," School of Informatics, City University, Northampton Square, 2009.
* John Smith, "Web Page Design: Heuristic Evaluation vs. User Testing," December 7, 2010.
* M. Y. Ivory, An Empirical Foundation for Automated Web Interface Evaluation, PhD Dissertation, UC Berkeley Computer Science Division, 2001.
* S. Rosenbaum, G. Cockton, K. Coyne, M. Muller and T. Rauch, "Focus groups in HCI: wealth of information or waste of resources?," CHI '02 Extended Abstracts on Human Factors in Computing Systems, Minneapolis, Minnesota, April 20-25, 2002, ACM Press, New York, NY. Retrieved on 11/11/05 from ACM Portal.
* M. E. Raven and A. Flanders, "Using contextual inquiry to learn about your audiences," SIGDOC Asterisk Journal of Computer Documentation, 20(1), 1996. Retrieved on 10/1/05 from ACM Portal.
* D. Wixon, K. Holtzblatt and S. Knox, "Contextual design: an emergent view of system design," in J. C. Chew and J. Whiteside (eds.), Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People (CHI '90), Seattle, Washington, April 1-5, 1990, ACM Press, New York, NY. Retrieved from ACM Portal.
* K. Holtzblatt and H. R. Beyer, "Contextual Design," in M. Soegaard and R. F. Dam (eds.), The Encyclopedia of Human-Computer Interaction, 2nd ed., The Interaction Design Foundation, Aarhus, Denmark, 2014.
* H. Petrie and N. Bevan, "The evaluation of accessibility, usability and user experience," in The Universal Access Handbook, CRC Press, 2009.