- The instructions to users could double as a form of automated UI testing, potentially giving you two benefits for the cost of one.
- You could guarantee that the instructions you provide to users are correct: incorrect instructions would fail to execute during automated testing (which could be run after every build). This would catch cases where developers change the UI without informing the documentation, testing, and/or specification teams.
- The instructions could be analyzed using the GOMS methodology to give you free usability metrics (time to complete, learnability, some types of error prediction); a rough sketch of this kind of estimate appears after this list.
- Other types of analysis related to the structure of the user interface become possible (e.g., which screens and/or controls are most heavily documented, how similar the procedures used on different screens are, etc.).
- Hudson, S. E., John, B. E., Knudsen, K., & Byrne, M. D. (1999). A Tool for Creating Predictive Performance Models from User Interface Demonstrations. In Proceedings of the ACM Symposium on User Interface Software and Technology, pp. 99-103. New York: ACM Press.
- Kieras, D., & Knudsen, K. (2006). Comprehensive Computational GOMS Modeling with GLEAN. In Proceedings of BRIMS 2006, Baltimore, MD, May 16-18.
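To make the GOMS point concrete, here is a minimal sketch of how documented steps could be mapped onto Keystroke-Level Model operators to produce a rough expert-completion-time estimate. This is not a real analysis tool: the step wording, the mapping rules, and all object names are invented for illustration, and only the operator times are the commonly cited KLM averages.

```scala
// A minimal, hypothetical sketch of Keystroke-Level Model (KLM) estimation.
// Operator times are the commonly cited Card, Moran & Newell averages;
// the step representation and mapping rules are invented for illustration.
object KlmEstimate {

  sealed trait Op { def seconds: Double }
  case object Keystroke extends Op { val seconds = 0.2 }   // K: press one key
  case object Point     extends Op { val seconds = 1.1 }   // P: point with the mouse
  case object Home      extends Op { val seconds = 0.4 }   // H: move hand keyboard <-> mouse
  case object Mental    extends Op { val seconds = 1.35 }  // M: mental preparation

  // Translate a documented instruction step into KLM operators.
  // Real placement of M operators follows heuristic rules; this is simplified.
  def operators(step: String): Seq[Op] = step match {
    case s if s.startsWith("Click") => Seq(Mental, Home, Point, Keystroke)
    case s if s.startsWith("Type ") => Seq(Mental, Home) ++ Seq.fill(s.drop(5).length)(Keystroke)
    case _                          => Seq(Mental)
  }

  def estimateSeconds(procedure: Seq[String]): Double =
    procedure.flatMap(operators).map(_.seconds).sum

  def main(args: Array[String]): Unit = {
    val procedure = Seq(
      "Click the File menu",
      "Click Save As",
      "Type report.docx",
      "Click the Save button"
    )
    println(f"Estimated expert completion time: ${estimateSeconds(procedure)}%.2f s")
  }
}
```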
For various reasons, I decided to take a fresh approach.
- I designed my own Domain Specific Language (DSL) based on the guidelines found in the Microsoft Manual of Style for Technical Publications.
- I tried various methods to encode my DSL. I've been most happy with Scala so far and will have some posts about my experiments with both its internal and external DSL capabilities (a rough sketch of the internal-DSL direction follows this list).
- I've also incorporated Sikuli (a computer vision-based testing library) to provide a mechanism for automatically executing my procedures; a minimal execution sketch also appears below.
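The actual DSL isn't shown in this post, but as a flavor of the internal-DSL approach, here is a minimal hypothetical sketch in Scala: a procedure is built from a few step types and rendered back as numbered, Manual-of-Style-flavored instructions. All names here (Procedure, Click, TypeText, OnMenu) are invented for illustration and are not the real DSL.

```scala
// A minimal, hypothetical sketch of an internal Scala DSL for procedures
// written in the imperative "verb the object" style recommended by the
// Microsoft Manual of Style. All names are invented for illustration.
object ProcedureDsl {

  sealed trait Step
  case class Click(control: String)               extends Step
  case class TypeText(text: String, into: String) extends Step
  case class OnMenu(menu: String, item: String)   extends Step

  case class Procedure(title: String, steps: Vector[Step] = Vector.empty) {
    def click(control: String): Procedure = copy(steps = steps :+ Click(control))
    def typeText(text: String, into: String): Procedure = copy(steps = steps :+ TypeText(text, into))
    def onMenu(menu: String, item: String): Procedure = copy(steps = steps :+ OnMenu(menu, item))

    // Render the procedure back as user-facing documentation.
    def render: String =
      title + "\n" + steps.zipWithIndex.map {
        case (Click(c), i)        => s"${i + 1}. Click $c."
        case (TypeText(t, f), i)  => s"${i + 1}. In the $f box, type $t."
        case (OnMenu(m, item), i) => s"${i + 1}. On the $m menu, click $item."
      }.mkString("\n")
  }

  def main(args: Array[String]): Unit = {
    val saveAs = Procedure("To save a document under a new name")
      .onMenu("File", "Save As")
      .typeText("report.docx", "File name")
      .click("the Save button")

    println(saveAs.render)
  }
}
```

The appeal of a representation like this is that the same step values could, in principle, feed both the rendered documentation and an automated execution backend.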
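For the execution side, here is a minimal sketch of driving the same kind of procedure with Sikuli from Scala, assuming Sikuli's Java API (org.sikuli.script.Screen) is on the classpath. The screenshot file names are hypothetical; the point is that a step whose target image can no longer be found throws FindFailed, which is exactly the failure that flags out-of-date instructions.

```scala
// A minimal, hypothetical sketch of executing documented steps with Sikuli,
// assuming Sikuli's Java API (org.sikuli.script.Screen) is on the classpath.
// The screenshot file names are invented; a failed image match throws
// FindFailed, which is what turns a stale instruction into a failing test.
import org.sikuli.script.{Screen, FindFailed}

object ExecuteProcedure {
  def main(args: Array[String]): Unit = {
    val screen = new Screen()
    try {
      screen.click("file-menu.png")          // "On the File menu, ..."
      screen.click("save-as-item.png")       // "... click Save As."
      screen.wait("save-as-dialog.png", 5.0) // wait up to 5 s for the dialog
      screen.`type`("report.docx")           // "In the File name box, type report.docx."
      screen.click("save-button.png")        // "Click the Save button."
      println("Procedure executed successfully.")
    } catch {
      case e: FindFailed =>
        // The UI no longer matches the documentation: fail the build.
        System.err.println(s"Documented step could not be executed: ${e.getMessage}")
        sys.exit(1)
    }
  }
}
```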