This paper contains an introductory examination of human cognition, interface design and cognetics. For reasons of space, it is not feasible to cover every aspect of these subjects, so only the most salient aspects are discussed. The discussion presupposes a target-orientated approach to translation whereby the needs of the target audience take priority and the usability of translations is the ultimate goal.
HCI – An Introduction
HCI stands for human-computer interaction or, less commonly, human-computer interface. It is the study of what happens when humans and computers interact, and it is concerned with understanding the nature of these interactions and the principles underlying them. By understanding the factors which affect human-computer interactions, it should be possible to make these interactions easier.
In order to be effective, HCI needs to understand the different elements involved: humans, computers and the way they interact. HCI aims to explain how humans function and, in the case of their interactions with computers, the tasks they need to perform. After all, humans use computers to do something. With this knowledge, it is then possible to engineer computers in such a way as to make these tasks easier.
The Model Human Processor
When humans interact with computers, what they are really doing is exchanging information or communicating. Thus, the interaction can be regarded as a communicative act between a human and a non-human communicative partner (Bødker 1991:19). The object of the communicative act or interaction is to exchange, access and process information in order to perform a task.
Broadly speaking, we can say that computers are information processing systems. Information, or data, is manipulated, created, modified, accessed and stored. In this context we can speak of memory, processors, parameters, rules and interconnections. Similarly, the human mind can also be regarded as an information processing system. As such, the broad model of a computer can be used as an analogy for describing the human information processor (Card et al. 1983:24; Downton 1991:20). We can draw several comparisons between the two models in that they can both be said to consist of memory, processors, interconnections, rules etc. However, such an approach can only be used for illustrative purposes as the structure of the model does not necessarily reflect the physical structure of the brain. Indeed, there is still some debate about whether certain components of the mind are distinct physical locations or merely different functions of the same physical location (Card et al. 1983:23, 36; Dix 1998:27; Faulkner 1998:33-34). If we return to the idea of a computer we can see on a very basic level that:
- information is input into the computer
- the information is processed, and
- an appropriate response or output is prepared
Applying this schema to humans we can divide the human mind into the following subsystems (cf. Card et al. 1983:24; Downton 1991:20):
- the perceptual/sensory system
- the cognitive system
- the motor system
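To make the analogy between the mind and an information processing system more concrete, the following minimal Python sketch models the three subsystems as stages in a pipeline. The function names, the knowledge dictionary and the example stimulus are all hypothetical and are not drawn from Card et al.; the sketch simply illustrates information flowing from perception through cognition to motor output.

```python
# Purely illustrative sketch of the Model Human Processor analogy:
# information flows from the perceptual system, through the cognitive
# system, to the motor system. All names and data are hypothetical.

def perceptual_system(stimulus: str) -> str:
    """Receive raw sensory input and encode it for further processing."""
    return f"percept({stimulus})"

def cognitive_system(percept: str, long_term_memory: dict) -> str:
    """Interpret the percept using stored knowledge and decide on an action."""
    return long_term_memory.get(percept, "explore")

def motor_system(action: str) -> None:
    """Carry out the chosen action, e.g. a keystroke or mouse movement."""
    print(f"performing: {action}")

# Example interaction: a dialog box appears and the user dismisses it.
knowledge = {"percept(dialog box with OK button)": "click OK"}
motor_system(cognitive_system(perceptual_system("dialog box with OK button"), knowledge))
```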
The Human Input System
As already stated, humans interact with the outside world and with computers through the exchange of information. This exchange relies on the input and output of information. Information input for humans takes place through the five senses: sight, hearing, touch, taste and smell. For most people, the first three are the most important, especially in terms of HCI (Faulkner 1998:13; Dix 1998:13). Even though taste and smell are valuable to humans, it is not clear how they could be utilised in HCI (Dix ibid.). Indeed, as they are not among our primary senses, they are not as developed as the other senses and tend to be quite subjective (Downton 1991:20).
Perception
The basic human input system involves receiving and preparing information for processing by the cognitive system. We have already examined sensation, which provides the fundamental raw material we process. Now we will look at what we do with the information we gather from our surroundings. Perception is more than just seeing or hearing; it is a complex and active process which allows us to interpret information. By interpreting the raw information provided by our sensory organs we, in a sense, prepare it for further processing in the cognitive system. Without perception, we would simply be receivers of sensory information but we would not be able to use it for anything.
Memory
Memory is fundamental to virtually every one of our actions from reading, eating and walking to writing and learning. Without it we would not know what to do with the information we receive through our senses. At its most basic physiological level, memory is “a physical change in the neuronal structure of the brain” (Coe 1996:69). When information is added to our memory it creates new neuronal pathways and connections. There are three types of memory:
- Sensory Memory
- Short-term Memory (STM)
- Long-term Memory (LTM)
These three types of memory work together, passing information between them to allow us to carry out cognitive processing.
Sensory Memory
Sensory memory, also known as sensory registers (Coe 1996:72) or sensory buffers (Dix 1998:27), is an area of memory which acts as a buffer, temporarily storing information received through the senses before it is passed on for processing. Each of our senses has its own sensory memory (Coe 1996:71), e.g. iconic memory for visual stimuli, echoic memory for aural stimuli and haptic memory for touch (Dix ibid.). Information stored here is unprocessed, i.e. it remains in its physical form and is not decoded (Downton 1991:22). In effect, this means that the information stored here is extremely detailed and accurate. However, because of the limited capacity of sensory memory, information stored here is the most short-lived and is constantly being overwritten. In general, information is stored in sensory memory for anything between 0.2 seconds (Downton ibid.) and 0.5 seconds (Dix ibid.). Echoic memory is more durable and lasts for approximately 2 seconds (Downton ibid.). Due to the brief duration of this type of memory, not all perceptions become proper memories (Raskin 2000:18).
Short-Term Memory (STM)
A popular way of explaining the concept of STM is to describe it as a “scratchpad” or as RAM in a computer (Dix 1998:28; Hill 1995:19). STM is responsible for storing information that we are currently using. It is where we carry out all of our memory processing, encoding and data retrieval. STM allows us to “do” things with information. We can also filter information here and discard information which is no longer needed.
Card et al. (1983:38) argue that STM (or working memory, as they call it) is really only an activated subset of the information stored in long-term memory (LTM). While it is true that STM obtains some of its input from LTM, e.g. stored knowledge, procedures etc., information passed on from sensory memory also provides STM with input.
In contrast to information stored in sensory memory, information in STM is stored in the form of symbolic representations or schema (Coe 1996:72). However, like sensory memory, information is only held in STM temporarily. The information is lost, overwritten or replaced after 20-30 seconds (Downton 1991:24), although with practice information can be retained for several hours (Coe ibid.). That information is only stored temporarily is due to the limited capacity of STM. In 1956, Miller posited that the capacity of STM is seven plus or minus two chunks. This “7±2” rule is widely accepted (Faulkner 1998:34; Coe 1996:72; Downton 1991:23; Dix 1998:28) and holds for most people. This can be illustrated using the following sequence of numbers:
- 0352765994
The average person may find it difficult to remember each digit in this sequence. However, if we group the digits into smaller sequences as we would with a telephone number, each sequence can be treated as a single chunk:
- 035-276-5994
So, instead of remembering ten separate pieces of information, by chunking the information we reduce the number of items we need to remember to three. An interesting property of chunks is that what actually constitutes a chunk depends on the individual and the content of their LTM (Card et al. 1983:36). According to Downton (1991:24), the number of chunks which can be stored is independent of the amount of information each chunk contains. We can, therefore, combine small chunks to form larger chunks and so on. For example, letters (small chunks) form words (larger chunks) which can be combined to form sentences (even larger chunks) and so on (Faulkner 1998:73). With sufficient practice and rehearsal (i.e. repetition) in STM, several sentences can be treated as a single chunk.
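The effect of chunking can be shown with a short Python sketch. The grouping sizes (3, 3, 4) simply mirror the telephone-number convention used in the example above; the function itself is hypothetical and only serves to show how ten individual digits collapse into three chunks.

```python
# Illustrative sketch: grouping a digit sequence into chunks reduces the
# number of items short-term memory has to hold (7±2 chunks, Miller 1956).

def chunk(sequence: str, sizes: tuple) -> list:
    """Split `sequence` into consecutive groups of the given sizes."""
    groups, start = [], 0
    for size in sizes:
        groups.append(sequence[start:start + size])
        start += size
    return groups

digits = "0352765994"
print(list(digits))               # ten separate items to hold in STM
print(chunk(digits, (3, 3, 4)))   # ['035', '276', '5994'] – only three chunks
```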
Long-Term Memory (LTM)
Long-term memory is the final part of our memory system and it is here that information is stored once we receive it from STM. Whereas capacity and retention are key factors when discussing sensory memory and STM, they do not apply to LTM as this type of memory is essentially unlimited in its capacity and information is stored there forever (Faulkner 1998:35; Coe 1996:74; Downton 1991:25; Dix 1998:30).
It is widely held that there is really no such thing as “forgetting” information; rather, the information is still stored in LTM but, as the memory grows older, the traces which lead to the information and help us locate it become increasingly faint and less defined. The result is that we simply cannot find the information we want (Faulkner 1998:35). Over time, it becomes more difficult to locate information and this can lead to the information being permanently “forgotten”. This is compounded by the fact that the information which is most recently and frequently used is the easiest to retrieve from memory (Downton 1991:25).
Types of Long-Term Memory
Coe (1996:74) divides LTM into two primary categories: declarative memory and procedural memory. Declarative memory is described as “memory what” (ibid.), i.e. memory of events, facts and images. “Memory how”, or procedural memory, is memory for motor skills, cognitive skills, reflexes, how to do things etc.
Learning
Learning is a relatively permanent change in behaviour as a result of experience. In general, there are two main approaches to learning theory: behaviourist and cognitive. Behaviourist learning theories focus on objective, quantifiable behaviour rather than on mental acts which we cannot observe; they are concerned with the connections between actions, the role of reward in behaviour etc. Cognitive learning theories focus on mental acts such as conceiving, believing, expecting etc.
According to Coe (1996:34) we actually learn using a combination of theories from both schools of thought. The components of learning include:
1. experience
2. schemata
3. habits
4. reinforcement
5. interference
Experience & Schemata
We learn from experience. When we encounter an experience, we either create a new schema or use/modify an existing one. Any information we provide must either take advantage of users’ existing schemata or help them create new schemata quickly and easily. The easiest way to leverage existing schemata is to give examples based on schemata users already have.
Habits
Habits are learned connections between a stimulus and a response. The strength of the connection is called the habit strength. Related habits are grouped into habit families, each of which has a hierarchical pecking order: the most effective habits, which we tend to use first or most frequently, are located higher up in the hierarchy.
Reinforcement
Reinforcement is the process of using events or behaviours to produce learning. These are known as reinforcers and they can be either positive or negative. There are three types of reinforcement:
Continuous
This type of reinforcement takes place each time an event occurs. This is the quickest way of promoting learning and it establishes very strong expectations that reinforcement will always take place. The result is a dependence on the part of the user; consequently, if the reinforcement stops, the learning stops too.
Intermittent
Intermittent reinforcement, as its name suggests, is the occasional reinforcement of learning. While initial learning is slower, learning will be more autonomous and will continue even if the reinforcement stops.
Vicarious
This type of reinforcement involves learning from the experiences of others. In other words, we learn to perform those tasks we see others rewarded for and not to perform those tasks we see others punished for. A classic example is that of a vending machine. If we see someone insert money and obtain two cans instead of one, we will be inclined to do the same. Conversely, if we see a person insert their money but not receive anything, we will not risk the same fate ourselves.
However, with vicarious reinforcement, the learning only holds until a new observation is made. Returning to the vending machine, if we subsequently see another person successfully use the machine, we will change our knowledge and actions to incorporate this new learning. With reinforcement in general, we need to adapt the type and amount of reinforcement to the audience and the medium being used. For example, vicarious reinforcement is not particularly useful in hardcopy manuals, and advanced users will take exception to frequent and unnecessary reinforcement.
Interference
Frequently, existing habit families will interfere with new learning. Of course, the opposite is also true. An example of this is a user who is proficient in Trados learning to use Transit. The commands and procedures used to open a translation segment in Trados may interfere with those of Transit because the user has developed habits from using Trados. On the other hand, interference between existing habits and new learning can sometimes be positive. Returning to the idea of a Trados user learning to use Transit, some of the habits learned from Trados will actually aid learning the new CAT tool.
Interfaces
When we speak about user interfaces, many people assume we are referring specifically to the graphical user interfaces (GUIs) of modern computers. While GUIs are perhaps one of the most prolific and recognisable types of interface, they are precisely that – types of interface. The reality is that not all interfaces have windows, icons and menus: interfaces can be found on VCRs, mobile phones, digital watches, ATMs and even microwave ovens. It is very easy to give examples of interfaces but actually defining them is another matter. Card et al. (1983:4) state that it is easy to locate the interface between computer and human simply by starting at the CPU and tracing “a data path outward… until you stumble across a human being”. This, however, by the authors’ own admission, is less than clear and we are left with no real clue as to the boundaries of the interface. Faulkner (1998:54) maintains that the human-computer interface mediates between the user and the computer system. Again, this is somewhat vague. Perhaps we should look to the function of the interface in order to understand what an interface is. Bødker (1991:77) proposes that “the basic role of the user interface is to support the user in acting on an object or with a subject”. She goes on to say that a good user interface allows users to focus on the task at hand rather than on other objects or subjects. So, like software, the purpose of interfaces is to allow us to do something – in this case, to use the system. In other words, an interface is a tool or a means to an end. Such a view is echoed by Raskin (2000:2), who defines interfaces as “the way that you accomplish tasks with a product […] that’s the interface”.
The user guide as an interface
Admittedly, the definition of interfaces given above is rather vague in terms of concrete physical details, but it is sufficiently detailed in terms of function to allow for variations in the physical characteristics of interfaces and their areas of use (as mentioned above). This flexibility is essential when we consider the case of software user guides. Ostensibly, the purpose of such guides is to teach users how to use a software product. Without such training, many users would not be able to use the software; a small few might try to learn it by a process of trial and error. In other words, user guides facilitate the use of software products and in a sense become part of the human-computer interface. If we were to be very specific, the user guide would be an interface between the user and the software’s graphical user interface, but it is more convenient to simply regard it as an extension of the overall interface.
Cognetics
Ergonomics is the design of machines to take into account the physical variability of humans. For example, we know that a human cannot possibly be expected to press two buttons positioned three metres apart (Raskin 2000:10). With our knowledge of the human body and the standard level of variation among different humans, we engineer our physical world to suit our needs and capabilities. Similarly, we need to engineer our world to conform to our mental capabilities and limitations. Essentially, what we are talking about is an ergonomics of the mind. This is known as cognitive engineering or cognetics. In reality, cognetics is a branch of ergonomics but the term ergonomics is used primarily in relation to the physical aspects of human-orientated design.
A key factor which is frequently overlooked by software designers, engineers and even users is that computers are supposed to be tools which assist humans in doing something else. Computers should, therefore, reflect the needs, capabilities and limitations of the humans who use them. As Faulkner (1998:2) says, a computer “has to play up to [users’] strengths and compensate for their weaknesses”. Raskin (2000:10) maintains that “you wouldn’t design a system which requires users to multiply two 30-digit numbers in five seconds”. But this is an obvious example. Other factors are more subtle and relate to the way we perceive and process information, solve problems, learn and access knowledge – even how we read.
The main challenge facing software manufacturers (and technical communicators) is to produce systems which people really want, need and can use, despite the complexity of the task being performed (Faulkner 1998:129). While decisions regarding what people want and need from a product are usually based on economic factors and made by people other than the actual system designers, ensuring the usability of systems remains the primary focus of cognetics.
Interface Design Goals
“A computer shall not waste your time or require you to do more work than is strictly necessary” (Raskin 2000:6)
“Making sure that the interface design accords with universal psychological facts is customarily omitted in the design process” (Raskin 2000:4). Faulkner (1998:7) argues that “the very best systems and the very best interfaces will be overlooked entirely by the user” and that ideally “all the user sees is the task and not the system”.
Usability Objectives
To ensure that a system is as usable as possible, there are three fundamental goals which need to be achieved: learnability, throughput and user satisfaction (Faulkner 1998:130-131). Learnability refers to the time required to learn the system or to reach a specific skill or performance level. This objective can be quantified by examining the frequency of errors, the type of errors made etc. Dix (1998:162) expands this category by including the sub-headings of predictability, familiarity, generalisability and consistency. Familiarity refers to the way information presented relates to users’ existing knowledge which they bring with them to the learning process. Generalisability relates to the ability of users to use the information learned in other situations.
Throughput refers to the ease and efficiency of use after an initial learning period. In the case of user guides, this refers to the rate at which users can work their way through the guide accurately. This is quantified by examining the time needed by users to perform tasks, their success rates when performing tasks, the time spent looking for help etc. User satisfaction is a subjective goal but it can give an overall picture of how well the system performs in the eyes of users. This can be quantified by asking users to fill out a questionnaire rating aspects of the system’s performance etc. on a scale from (1) very bad to (5) very good.
Shneiderman (1998:15) adds an additional goal which he terms retention over time. This is particularly relevant to user guides in that their purpose is to teach users and facilitate their use of the system. Retention relates to how well users maintain their knowledge over time, as well as to the time needed for learning and the frequency of use.
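The kind of quantification described above can be sketched in a few lines of Python. The metrics, data and variable names are hypothetical and stand in for measurements that would be gathered during an actual usability test; the sketch only shows how learnability, throughput and user satisfaction might be recorded and summarised.

```python
# Hypothetical sketch of how the usability objectives discussed above
# might be quantified from usability-test data. All figures are invented.
from statistics import mean

# Learnability: how quickly errors drop off while learning the system.
errors_per_session = [9, 6, 4, 2]          # errors recorded in successive practice sessions

# Throughput: ease and efficiency of use after the initial learning period.
task_times = [42.0, 38.5, 40.2]            # seconds needed to complete each test task
task_successes = [True, True, False]       # whether each task was completed correctly

# User satisfaction: questionnaire ratings on a 1 (very bad) to 5 (very good) scale.
ratings = [4, 5, 3, 4]

print("error reduction:", errors_per_session[0] - errors_per_session[-1])
print("mean task time:", mean(task_times), "s")
print("success rate:", sum(task_successes) / len(task_successes))
print("mean satisfaction:", mean(ratings), "/ 5")
```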
Design Strategies
“An interface is humane if it is responsive to human needs and considerate of human frailties” (Raskin 2000:6)
In order to ensure that an interface is both “humane” and usable, we need to take the various characteristics of the model human processor described in the preceding paragraphs into account during the design process. In what he terms the “Eight Golden Rules of Interface Design”, Shneiderman (1998:74-75) sets out a series of strategies which play an important role in designing usable, user-friendly and effective interfaces. While there are numerous aspects of HCI that can be drawn upon in interface design, these rules serve as a concise and general overview of the areas that need attention:
- Strive for consistency: use similar prompts, commands, fonts, layouts, situations, instructions etc.
- Enable frequent users to use shortcuts
- Offer informative feedback
- Organise sequences of actions so that they have a start, a middle and an end
- Offer error prevention and simple error handling
- Permit easy reversal of actions
- Support an internal locus of control: this allows users to feel in charge of the computer and not vice versa
- Reduce short-term memory load
Admittedly, only rules one and eight are directly applicable to translators: the remainder are usually the responsibility of the technical writer, and changes of this magnitude are generally beyond the authority of the translator. However, with the convergence of the roles and responsibilities of technical translators and technical writers (see Wright 1987:119 and Göpferich 1993), it may be possible to make improvements in these areas. It is encouraging that some translation clients effectively give freelance translators free rein to rewrite manuals to a large extent. So how can we incorporate a knowledge of human cognitive abilities and limitations, together with HCI design goals and strategies, to improve translation quality? One way is to introduce Iconic Linkage into translations.
Iconic Linkage
Iconic Linkage (IL) is the repetition or re-use of target language translations for source language sentences which have the same meaning but different surface properties. In other words, sentences which are semantically identical but which are non-isomorphic are translated using the same target language sentence or construction. The term Iconic Linkage was coined by House (1981) although the phenomenon is quite similar to what technical writers call “parallelism”. Parallelism is a phenomenon which is widely recognised as a fundamental part of sentence structure (D’Agenais & Carruthers 1985:104; Mancuso 1990:231; White 1996:182). Essentially, parallelism means that parts of a sentence which are similar, or parallel, in meaning should be parallel in structure. Parallel constructions can also be described as instances where two or more groups of words share the same pattern (White 1996:182). Thus we can see that parallelism can occur on both a sentence level and on a sub-sentence level. The following sentences illustrate parallelism.
- If you want to open a file, click Open.
- If you want to close a file, click Close.
When there is a lack of parallelism, some of the grammatical elements in a sentence do not balance with the other elements in that sentence or in a neighbouring sentence. Consequently, the clarity and readability of a section of text are adversely affected. What makes this undesirable, apart from potential grammatical errors, is that it distracts the reader and prevents the message from being read quickly and clearly (Mancuso 1990:232). The following example shows sentences which are not parallel:
- If you want to open a file, click Open.
- The Close button is the button to press when you want to close a file.
Parallelism is not just important in avoiding grammatical and comprehension problems; it is also very useful in reinforcing ideas and learning. The grammatical symmetry of parallelisms helps readers remember information more easily (White 1996:183). Where my definition of IL differs from both House’s definition and parallelism is that the latter two deal with localised instances of structural similarity, i.e. isolated pieces of text at particular locations such as a sentence or list. IL as used here, in contrast, is an active strategy applied throughout a text. Indeed, instances of IL can be separated by large stretches of text. In addition, rather than being restricted to individual phrases or sentences, IL can manifest itself in entire paragraphs or longer stretches of text.
In contrast to House’s definition, IL is actively introduced into a translation, rather than being a naturally occurring feature of a text or translation. Again, owing to space constraints it is not possible to discuss methods for introducing IL here.
Examples of Iconic Linkage
| Original Instance | Second Instance | Translation |
| --- | --- | --- |
| Wechsel in den Programmier-Modus (PROG-Modus). | Wechseln Sie in den PROG-Modus. | Switch the machine into program mode (PROG Mode). |
| Maschine ist im Arbeits-Modus. | Die Maschine befindet sich im Arbeits-Modus. | The machine is now in Work Mode. |
| Wollen Sie ohne Begasung arbeiten, stellen Sie den Parameter-Wert Gas/Restvakuum auf den selben Wert ein, wie den Parameter-Wert Vakuum. | Für das Arbeiten ohne Begasung müssen Sie den Parameter-Wert Gas/Restvakuum auf den gleichen Wert einstellen wie den Parameter-Wert Vakuum. | If you intend working with gassing deactivated, use the same value for Gas/Residual Vacuum and Vacuum. |

Table 1: Examples of Iconic Linkage
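One way of thinking about how IL might be enforced in practice is sketched below. The sketch assumes a hand-maintained mapping in which semantically identical source sentences (such as the variants in Table 1) are assigned a single canonical target sentence; the dictionary, function name and fallback value are all hypothetical and are not part of any existing CAT tool, and nothing in the sketch detects semantic identity automatically.

```python
# Hypothetical sketch: enforcing Iconic Linkage by mapping semantically
# identical (but non-isomorphic) source sentences to one reused translation.
# The mapping is maintained by the translator.

ICONIC_LINKS = {
    # Both German variants from Table 1 share one English rendering.
    "Wechsel in den Programmier-Modus (PROG-Modus).":
        "Switch the machine into program mode (PROG Mode).",
    "Wechseln Sie in den PROG-Modus.":
        "Switch the machine into program mode (PROG Mode).",
    "Maschine ist im Arbeits-Modus.":
        "The machine is now in Work Mode.",
    "Die Maschine befindet sich im Arbeits-Modus.":
        "The machine is now in Work Mode.",
}

def translate_segment(source: str) -> str:
    """Return the canonical target sentence if the segment is iconically
    linked; otherwise flag it for normal translation."""
    return ICONIC_LINKS.get(source.strip(), "<translate manually>")

for segment in ["Wechseln Sie in den PROG-Modus.",
                "Die Maschine befindet sich im Arbeits-Modus."]:
    print(translate_segment(segment))
```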
How Exactly Does Iconic Linkage Help?
Introducing Iconic Linkage into a translation not only improves consistency but also reduces the STM load for users by allowing optimum use of chunking. This chunking of information is facilitated by the repetition of structures which, over time, become automatic, allowing individual chunks to contain more information. Iconic Linkage also takes advantage of image memory in that the repeated structures have a visual property which makes retrieval of information easier. It likewise takes advantage of our ability to recognise information as opposed to processing new information or recalling existing information. The result is that the processing load is reduced and the retrieval of information is accelerated.
By virtue of the fact that text structures are repeated over and over again, Iconic Linkage increases the retention of information, making the information more predictable and more accessible. Borrowing from the theory behind parallelisms, the grammatical symmetry of parallelism and IL helps readers remember information more easily, reduces confusion, and improves clarity and readability.
Conclusions and Future Applications
The preceding paragraphs have provided a small insight into the various factors that affect how we read and learn from texts. Admittedly, many areas were either omitted or discussed only fleetingly. However, we have covered sufficient ground to see that there are several areas where an understanding of human cognition can be of assistance. Such areas might include:
- Technical Translation
- Copywriting
- Literary Translation
- Translator Training
By understanding human cognitive processes, we have another tool to allow us to convey information, recreate effects, create mental pictures, set scenes etc. First and foremost, in the case of educational texts, it allows us to streamline the learning process and make translations more effective.
Bibliography:
- Bødker, Susanne (1991) Through the Interface: A Human Activity Approach to User Interface Design, Hillsdale, NJ: Lawrence Erlbaum
- Card, Stuart K., Moran, Thomas P. & Newell, Allen (1983) The Psychology of Human-Computer Interaction, NJ, USA: Lawrence Erlbaum Associates
- Coe, Marlana (1996) Human Factors for Technical Communicators, New York: John Wiley & Sons, Inc.
- D’Agenais, Jean & Carruthers, John (1985) Creating Effective Manuals, Cincinnati, Ohio: South-Western Publishing Co.
- Dix, Alan (1998) Human-Computer Interaction, 2nd ed., NJ, USA: Prentice Hall
- Downton, Andy (1991) Engineering the Human-Computer Interface, London; New York: McGraw-Hill Book Co.
- Dumas, Joseph S. & Redish, Janice C. (1993) A Practical Guide to Usability Testing, Exeter, England: Intellect Books
- Faulkner, Christine (1998) The Essence of Human-Computer Interaction, London; New York: Prentice Hall
- Göpferich, Susanne (1993) Die translatorische Behandlung von Textsortenkonventionen in technischen Texten, in Lebende Sprachen, No. 2/93, pp. 49-52
- Hill, Stephen (1995) A Practical Introduction to the Human-Computer Interface, London: DP Publications
- Mancuso, Joseph C. (1990) Mastering Technical Writing, Menlo Park, California: Addison-Wesley Publishing Company
- Miller, George A. (1956) The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, Psychological Review, 63, 81-97
- Preece, Jenny (1993) A Guide to Usability: Human Factors in Computing, Wokingham, England; Reading, Mass.: Addison-Wesley
- Preece, Jenny (1994) Human-Computer Interaction, Wokingham, England; Reading, Mass.: Addison-Wesley
- Raskin, Jef (2000) The Humane Interface, New York: Addison-Wesley
- Shneiderman, Ben (1998) Designing the User Interface: Strategies for Effective Human-Computer Interaction, Reading, Mass.: Addison Wesley Longman, Inc.
- Sime, M.E. (1983) Designing for Human-Computer Communication, London; New York: Academic Press
- White, Fred D. (1996) Communicating Technology: Dynamic Processes and Models for Writers, New York: HarperCollins College Publishers
- Wright, Sue Ellen (1987) Translation Excellence in the Private Sector. In Marilyn Gaddis Rose (ed.), Translation Excellence: Assessment, Achievement, Maintenance. American Translators Association Scholarly Monograph Series, Volume 1,