Linköping Studies in Science and Technology
Thesis No. 1018
Feedforward Control in Dynamic Situations
by
Björn Johansson
Submitted to the School of Engineering at Linköping University in partial
fulfilment of the requirements for the degree of Licentiate of Philosophy
Department of Computer and Information Science
Linköpings universitet
SE-581 83 Linköping, Sweden
Linköping 2003
Feedforward Control in Dynamic Situations
by
Björn Johansson
May 2003
ISBN 91-7373-664-3
Linköping Studies in Science and Technology
Thesis No. 1018
ISSN 0280-7971
LiU-Tek-Lic-2003:17
ABSTRACT
This thesis proposal discusses control of dynamic systems and its relation to time. Although much
research has been done concerning control of dynamic systems and decision making, little research exists
about the relationship between time and control. Control is defined as the ability to keep a target system/
process in a desired state. In this study, properties of time such as fast, slow, overlapping etc., should be viewed as a relation between the variety of a controlling system and a target system. It is further concluded that humans have great difficulties controlling target systems that have slow responding processes or "dead" time between action and response. This thesis proposal suggests two different studies to address the problem of human control over slow responding systems and dead time in organizational control.
This work has been supported by the National Defence College
Department of Computer and Information Science
Linköpings universitet
SE-581 83 Linköping, Sweden
Feedforward Control in Dynamic Situations
Björn Johansson
ISBN 91-7373-664-3
ISSN 0280-7971
COPYRIGHT © 2003 BJÖRN JOHANSSON
PRINTED IN LINKÖPING, SWEDEN BY LINKÖPING UNIVERSITY
To
Marcelle
Abstract
This thesis proposal discusses control of dynamic systems and its relation
to time. Although much research has been done concerning control of
dynamic systems and decision making, little research exists about the relationship between time and control. Control is defined as the ability to keep
a target system/process in a desired state. In this study, properties of time
such as fast, slow, overlapping etc., should be viewed as a relation
between the variety of a controlling system and a target system. It is further concluded that humans have great difficulties controlling target systems that have slow responding processes or "dead" time between action
and response. This thesis proposal suggests two different studies to address
the problem of human control over slow responding systems and dead
time in organizational control.
Acknowledgements
This research has been financed by the National Defence College in
Stockholm, Sweden. It is a part of the research conducted in the ROLFi effort. The work has been performed in cooperation between the National
Defence College in Stockholm and the Department for Computer and
Information Science in Linköping. This means that I have been working in
close cooperation with a number of persons in two cities and institutions.
These persons have all been a great support, inspiration and company during the last two and a half years. There are of course some that must be mentioned.
First of all, Prof. Yvonne Wærn from the Department of Communication Studies in Linköping, who got it all started. Without her I would not be
doing this. Prof. Berndt Brehmer for supporting the studies and supervising me. Prof. Erik Hollnagel who dares to be my primary supervisor, a
very patient and wise man. Hopefully the reader of this thesis proposal can
catch a glimpse of his wisdom between the lines.
I then would like to move on to my co-authors. Our cooperation has
been very fruitful, at least if we look at all the publications we have managed to produce. I hope there will be at least as many in the next two years.
Thanks to Dr Rego Granlund, Prof. Yvonne Wærn, Mats Persson, Dr Henrik Artman, Dr Per-Arne Persson, Prof. Erik Hollnagel, Åsa Granlund and Dr Peter Mattson.

i. ROLF is an acronym for Joint Mobile Command and Control Concept (see Sundin & Friman, 2000).
Special thanks to Rego and Helena for all help with C3fire and for being
great friends. Another special thanks to Mats and Agneta for all the times
I stayed over at your place, and not least for the great company, food,
drinks and everything else. A special thanks also to Georgios Rigas for
advice and help with Moro. I also want to thank Eva Jensen for valuable
comments on this text.
Of course I have not forgotten all the nice people at the defence college.
Many boring evenings that I could have spent alone in my hotel room turned into interesting discussions over a pint at St. Andrews Inn. See you
there Mats, Ulrik, Georgios, Johan, Gunnar, Lasse and all the others. Special thanks for all interesting discussions to Prof. Gunnar Arteus, a true
academic.
My fellow doctoral students in Linköping who also supported me,
bugged me, drank coffee with me, cheered me up and basically shared all
the pros and cons of being a PhD student: Jonas Lundberg, Mattias
Arvola, Åsa Granlund, Anna Andersson, Håkan Sundblad, and the rest of
you. Special thanks to the CSE-Ptech project. I also want to thank Birgitta
Franzen and Helené Wigert who have had to handle all my travelling. You have
been doing a great job. Concerning travelling, I would like to thank SJ,
who more than any paper or teacher has taught me that time is a relative
thing.
Last of all, but not least, I would like to thank my family who always
supported me in my, sometimes, odd interests.
Contents
Abstract
Acknowledgements
Motivation and background
Outline of this thesis proposal
Contribution
Theoretical background
Control
What is a “construct”?
Goals and norms
Control requires a target system to be controlled
Context and complexity
The COCOM and ECOM models of Control
What is a Joint Cognitive System?
Control and Time
Controllers and time
Time and the ECOM
Human limitations in control
Synthesis
Method
Experimental research
Micro-worlds as a tool for experimentation
Characteristics of micro-worlds
Research approaches using micro-worlds
Possible methodological problems with micro-worlds
The choice of micro-worlds
Moro
C3fire
Suggested studies
Study 1
Number of subjects
Selection of subjects
Procedure
Study 2
Selection of subjects
Procedure
Possible threats to internal validity
Threats to External Validity
Conclusion
Further research
References
Chapter 1
Motivation and background
After the coalition success in the 1991 Gulf War, the military community has shown an increased interest in information technology for command
situations (Alberts, Gartska & Stein, 2000)ii. The fast progress in the first
Gulf conflict was largely ascribed to technical superiority and, most
importantly, to information superiority. The ability to know exactly where
the enemy was, combined with precision weapons, has in retrospect been seen as the major contributor to the successful outcome. It is not difficult to understand why this has been so appealing to politicians and military organizations in the western world, since one of the major problems in war situations has always been to understand what happens on the battlefield. Already 2500 years ago, the Chinese war philosopher Sun Tzu was
aware of this when he wrote “know thy enemy and know thyself, and in a
hundred battles, you will always win”. In the light of this, we see why
visionaries in the field of command and control have been given so much
attention during the last years (Chebrowski & Gartska, 1998). These ideas are a vision of “dominant battlespace awareness” that is to be achieved through advanced sensor aggregation, communication networks and precision weapons (Alberts et al., 2000). The general idea is to
increase the speed of one's own forces by providing the commanders with fast and accurate information about a situation, giving them the possibility to make fast and well-informed decisions. The military organization is also
supposed to be able to take action faster than before by organizing in a networked fashion, both in terms of communication technology and command structure, allowing the participants to exchange and use
information, making it possible to delegate to a larger extent than today.
This is known as the “Network centric approach”. The time between data
retrieval and decision should simply be shorter since information can be
gathered directly from the source rather than propagated through an
organization.

ii. This optimism is not without criticism; see for example Rochlin (1991a, 1991b). It is also possible that the second Gulf conflict may lead to a re-evaluation of the significance of information technology.
Philosophically, this originates from the “rational economic man”, the
idea that a decision-maker with all available information always makes
optimal decisions, and that there is such a thing as an optimal decision.
There is another aspect of this that is implicit in the reasoning. Not only
shall the commanders make optimal decisions, they are also supposed to
make them faster than the opponent. This calls for not only accurate information, but also for fast information retrieval and the ability to use this
information in an efficient way very fast. Although it seems fair to assume
that a well-informed commander has better chances of making good decisions than a less well-informed one, it is not certain that he/she will be able to make them faster. There are some characteristics of dynamic control that are necessary to present to make this problem clearer. Dynamic control has been described by Brehmer & Allard (1991) as having the following characteristics:
1. It requires a series of decisions.
2. These decisions are not independent.
3. The environment changes both spontaneously and as a consequence of the decision-maker’siii actions.
4. The time element is critical; it is not enough to make the correct decisions and to make them in the correct order, they also have to be made at the correct moment in time.

iii. Brehmer & Allard use the term “decision-maker”. In this thesis, I mostly use “controller” or “control system”.
I would also like to point out that the kind of control that is of interest in this thesis proposal is characterized by uncertainty in the form of incomplete information and vague or lacking understanding of the system that is to be controlled. Although many systems can be considered dynamic (for example the process industry), it is possible that the controllers managing them have at least a basic understanding of them, and also have the possibility to gather fast and precise information about them. The systems we are discussing in this thesis are systems that are less well defined, like forest fires, ecological systems or war.iv

iv. See Johansson, Hollnagel & Granlund (2002) for a more elaborated discussion about the differences between “natural” and “constructed” dynamic systems.
There is, however, a well-known difficulty that has been given little attention in the discussions about fast information retrieval in control situations. The difficulty is that human controllers are very bad at handling slow-response systems, at least as long as they do not have an adequate model of the system, which is the very definition of dynamic control. Crossman & Cookev (1974) showed how delays in a system make it very difficult to learn how to master even very simple control tasks. The task presented in the Crossman & Cooke study was to set the temperature of a bowl of water by regulating the voltage input to an immersion heater in the water. The subjects could read the temperature of the water from a thermometer. In one condition, the temperature was measured directly, with the thermometer lowered in the water. In the other, a delay was produced by putting the thermometer in a test tube lowered in the water, giving a delay of two minutes in the readings of the temperature. The study showed that when the system responded with a delay to the actions taken, the subjects tended to create oscillations in the target system state (see figure 1.1).

v. Actually, as we will see from the reasoning that follows, the title of the Crossman & Cooke article “Manual Control of Slow Response Systems” is somewhat misleading. The system is not “slow responding”; it is only the feedback that is delayed. This is however not important when discussing the findings from the paper, but it is worth mentioning.

Figure 1.1: Figure 2b from the Crossman & Cooke (1974) study, p. 54.
However, Crossman & Cooke also found that, although many subjects in the non-delayed condition were able to reach a stable state already in the first trial, most subjects in the delayed condition eventually learned how to create stability in the delayed system as well, but only after five or six trials. They also noted that those subjects made very few adjustments to reach the desired state, implying that the subjects had a good understanding of the system dynamics.
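The delayed-feedback effect is easy to reproduce in a toy simulation. The sketch below is not Crossman & Cooke's apparatus, only a minimal first-order "water bowl" under proportional feedback control; the time constant, gain and temperatures are invented for illustration, with the two-minute reading delay borrowed from the study.

    # Minimal sketch of the Crossman & Cooke setting: a first-order
    # thermal process regulated by proportional feedback. All parameter
    # values are illustrative assumptions, not taken from the study.

    def simulate(reading_delay_min, minutes=60):
        target = 60.0        # desired water temperature (deg C)
        temp = 20.0          # initial water temperature
        tau = 10.0           # process time constant (minutes)
        gain = 0.6           # proportional controller gain
        history = [temp]     # true temperatures, kept for delayed readings
        for _ in range(minutes):
            # The controller sees a reading that is reading_delay_min old.
            lag = reading_delay_min
            reading = history[-1 - lag] if lag < len(history) else history[0]
            power = gain * (target - reading)      # feedback action
            temp += power - (temp - 20.0) / tau    # first-order dynamics
            history.append(temp)
        return history

    for delay in (0, 2):   # direct thermometer vs. test-tube thermometer
        trace = simulate(delay)
        swing = max(trace[30:]) - min(trace[30:])  # late-phase oscillation
        print(f"reading delay {delay} min: late swing {swing:.1f} deg C")

With a direct reading the trace settles smoothly; with the two-minute reading delay the same controller keeps chasing old information and the temperature oscillates, which is the qualitative pattern in figure 1.1.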
Brehmer & Allard (1991) have also done a study of feedback delays in
a more complex control task and reached similar conclusions. In the Brehmer & Allard task, the subject was to act as commander over a number of
simulated fire fighting units, with the task of extinguishing a forest fire.
Even without delays, this task requires that the subjects anticipate the
development of the forest fire since the fire develops during the time the
fire fighting units move from point A to B. Brehmer & Allard found that
even small delays concerning the state of the fire fighting units had devastating effects on the subjects’ ability to master the problem.
An interesting aspect of this type of control task is “dead time”. Dead time is the time between when an action is executed and the effect of the action. In order to control such a situation, the subject has to have a model of the system that allows him/her to anticipate changes that will occur as a result of his/her actions. From this it is also evident that the control of slow-response systems must be achieved by anticipatory control. This is not the same thing as having to cope with delayed feedback in a system that responds fast to actions taken, although it is not evident that the controller will ever notice the difference. In such a system, the actions have an immediate effect, but the effect does not become visible until later. It is possible that the controller never realises this, or even understands that there are delays at all. Studies have shown that subjects treat systems with feedback delays as if there were no delays at all (Brehmer & Allard, 1991; Brehmer & Svenmarck, 1994).
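The two cases can be told apart in a model even though they look alike from the controller's seat. The sketch below is purely schematic (the response function and delay length are arbitrary choices of mine): the delay sits on the input side in a dead-time system and on the measurement side in a delayed-feedback system, yet with equal delays the observations the controller receives are identical.

    # Schematic contrast between dead time (delay on the input side) and
    # delayed feedback (delay on the measurement side). The response
    # function and the delay length are arbitrary illustrative choices.

    DELAY = 3  # steps

    def respond(u):
        return 2.0 * u  # stand-in for arbitrary fast process dynamics

    def states_dead_time(actions):
        # The state itself changes only DELAY steps after each action;
        # the controller observes the current state without any lag.
        state = [respond(actions[t - DELAY]) if t >= DELAY else 0.0
                 for t in range(len(actions))]
        observed = state
        return state, observed

    def states_delayed_feedback(actions):
        # The state changes immediately, but every observation lags
        # DELAY steps behind the state.
        state = [respond(actions[t]) for t in range(len(actions))]
        observed = [state[t - DELAY] if t >= DELAY else 0.0
                    for t in range(len(actions))]
        return state, observed

    actions = [0.0, 1.0, 0.0, 2.0, 0.0, 0.0, 0.0]
    s1, o1 = states_dead_time(actions)
    s2, o2 = states_delayed_feedback(actions)
    assert s1 != s2   # the systems are genuinely different inside...
    assert o1 == o2   # ...but the feedback the controller gets is identical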
Although systems that provide delayed information are common, the opposite is also well known: the feedback is immediate, but the effects of the actions taken do not become clear until after some time. This is often the case in the process industry or in ecological systems.
Many real-world situations are also confusing in the sense described by Brehmer & Allard (1991), namely that it is difficult to determine whether changes in the target system are an effect of one's own actions or normal changes in the target system. Such effects are of course especially difficult to identify when the system responds slowly. Dörner & Schaub (1994) have
observed this when they conclude that we humans live in the present. We
have a tendency to forget very quickly what we did a few minutes ago,
especially if we are under stress as in a dynamic control task. We can
therefore be “surprised” by changes in a target system, when the changes
actually occur as a consequence of our own actions, both because we do
not understand the complex relationships in the target system, and because we simply forget what we did earlier. Human controllers also often overreact when small changes in a system occur (Dörner & Schaub, 1994; Langley, Paich & Sterman, 1998).
Further, when we face an uncertain situation with time-pressure, we have a tendency to take action rather than to wait. This can be an explanation of why we have such difficulties handling systems with delays. Many small actions in a system may accumulate to large responses. If we look at figure 1.1 again, we see that the subject made almost one regulatory action every minute during the half-hour trial. In the sixth trial, when the subject had learned how to control the system, he/she made only six regulatory actions, most of them much smaller than the ones in the first trial.
A very interesting question arises from this: we know that humans facing uncertainty in a control task resort to “trial and error”. We also know that much input into a slow-responding dynamic system mostly creates confusing feedback. What will happen if we do not allow a controller to take action as often as he/she likes? Suppose, for example, that we have a system with a response time of, say, five minutes, and that we tell a subject who is not familiar with that system, but who is allowed to interact with it at any time, to control it. What will happen? It is likely that we will find a behaviour similar to that in the Crossman & Cooke experiment. The interesting point is to see what happens if the subject is only allowed to interact with the system every fifth minute. The subject may very well be given immediate feedback, but he/she will have more time to observe the development of the system in relation to the actions taken. If the subject observes and understands the development of the system, he/she can probably build up a strong enough understanding, or model, of the system to gain control over it, at least faster than if he/she is allowed to interact with it more frequently. If this hypothesis proves to be true, it could have implications for the design of control systems.
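One way the proposed manipulation could be operationalized is sketched below: the same proportional rule controls a process whose responses arrive only after a fixed dead time, and the only experimental variable is how often an action may be issued. All numbers are invented; the sketch illustrates the contrast to be tested, it does not prove the hypothesis.

    # Sketch of the proposed manipulation: identical controller and
    # dead-time process; only the permitted action rate varies.
    # All parameter values are invented for illustration.

    DEAD_TIME = 5    # steps between an action and its effect
    TARGET = 60.0

    def run(hold, steps=100):
        temp = 20.0
        pending = []                     # actions still "in flight"
        overshoot = 0.0
        for t in range(steps):
            if t >= DEAD_TIME:           # action from DEAD_TIME ago lands now
                temp += pending[t - DEAD_TIME]
            if t % hold == 0:            # acting is only permitted every 'hold' steps
                pending.append(0.5 * (TARGET - temp))
            else:
                pending.append(0.0)
            overshoot = max(overshoot, temp - TARGET)
        return overshoot

    print("free interaction:         overshoot", round(run(hold=1), 1))
    print("one action per dead time: overshoot", round(run(hold=DEAD_TIME), 1))

In this toy version, acting every step piles up corrections that have not yet taken effect and produces a large, growing overshoot, while acting once per dead-time interval converges without overshooting, mirroring the sparse, well-judged adjustments of Crossman & Cooke's successful subjects.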
Many real-world control systems have several built-in regulations of the feedback/action cycle. What is even more
interesting is that these cycles originate from demands in the control organization rather than in the target system. For example, Brehmer (1989) has
observed how the personnel at a hospital work on at least three different time-scales. The doctors perform their work from the perspective of a 24-hour cycle because that is the time between their meetings with their patients on a ward. The nurses often base their actions on a 6-hour cycle,
since that is the time between taking a test and getting an answer from the lab. Finally, the secondary nurses work on a very short cycle, since they often meet with the patients. In order to successfully control a system, the controller needs to work at least at the same pace as the process it is trying to control, or preferably faster (Ashby, 1956).
Although it is logical that the controlling system has to be able to take
action faster than the target system changes, little has been written about
the relation between feedback cycles/control loops and human controllers.
For example, in the case of the hospital, it is not certain that a 24-hour cycle is the optimal “control loop” for the doctors. The 24-hour cycle is based on clock time rather than on the actual change of state in the patients’ health. Further, the six-hour cycle of the nurses is probably an effect of the limitations of the laboratory at the hospital. It takes six hours to get an answer, and meanwhile the nurses will have to wait before they get any response to base their reasoning on. This cycle, too, is based not on the changes in the patient’s health, but is rather a consequence of work and organizational
aspects. If we think of the cycle by which the medical personnel work as a
pendulum, the “pendulum” of this activity swings with a speed that is
decided by the controller (the hospital) rather than the target system (the
patient).
This is just one example of how factors in the design of a control system create temporal regularities in a control task that have little or nothing to do with the actual temporal characteristics of the target system. Spencer (1974) investigated individual differences in how operators regulate processes at an oil refinery. The operators worked in eight-hour shifts. An interesting observation is that the process they were to control responded so slowly that many of the changes made during one shift had to be handled in the next, something that naturally made it difficult for the operators to learn what the actual effect of their actions was. Although the results were not significant, Spencer found cases where operators differed greatly in the “size” of the actions they took during their shifts.
The aim of my research is to examine the actual consequences of different temporal relations between the action cycle of a controlling system and the rate of change in a target system, rather than accepting the prevailing “as fast as possible is best” paradigm. I will discuss time in relation to control and suggest two studies that will increase our understanding of
the complex interaction between a (human) controller and a dynamic target system.
1.1 Outline of this thesis proposal
The aim of this thesis proposal is to suggest studies that can increase the knowledge about the relation between the rate of change in a controlling system and the rate of change in a system that is to be controlled by the former. The first chapter briefly describes the research problem.
The next chapter describes relevant theories that have studied control
and time, namely cognitive systems engineering and dynamic decision
making. Although there are many other theories concerning human control of complex systems like distributed cognition (Hutchins, 1995) or
activity theory (Vygotsky, 1978), they are not concerned with time from a
control perspective, and have therefore been left aside. The purpose of the chapter is thus not to provide a complete overview of research on control over dynamic/complex systems, but rather to discuss some of the theories that investigate time in relation to control of such systems. The chapter ends with a synthesis of the theory that highlights and elaborates the research questions.
The third chapter concerns methodological issues. An experimental
approach using micro-worlds is suggested as a way to seek knowledge
about the research questions. Different methodological problems with
experiments and micro-worlds are discussed. The two suggested studies
are described in detail, and a way to conduct them is proposed and discussed.
The last chapter is a summary of the previous chapters, where some thoughts about the theories and hypotheses are presented.
1.2 Contribution
The temporal dimensions of dynamic systems are a crucial part of the control task and have to be taken into account in actual control situations. Still, time is mostly a neglected issue in theories and models of control and human decision-making (Decortis & Cacciabue, 1988; Decortis et al., 1989; DeKeyser, 1995; DeKeyser, d'Ydewalle & Vandierendonck, 1998; Brehmer & Allard, 1991; Brehmer, 1992; Hollnagel, 2002a).
Taking as its starting point a model of control that describes control as parallel ongoing activitiesvi striving towards goals on different time-scales, the
thesis proposes two studies that will increase knowledge about delays in
systems, both in terms of response and feedback, when performing a
dynamic control task.
Knowledge gained from such research has implications for the design
of systems and work procedures in organizations with the purpose of controlling dynamic systems that are difficult to understand/predict.
vi. The Extended Control Model, see below.
Chapter 2
Theoretical background
In this thesis, I present a theoretical ground based on Dynamic Decision
making and Cognitive Systems Engineering. An important similarity
between these fields is that they have a functional approach rather than a
structural approach. This may not be completely true for all directions in
dynamic decision making, but for example Brehmer (1992) promotes a
research approach in dynamic decision making that is based on performance in relation to changes in the environment rather than trying to connect individual (cognitive) capabilities to performance. I also agree that it
is more fruitful to apply a functional approach, since, as Hollnagel states:
“Functional approaches avoid the problems associated with the
notion of pure mental processes, and in particular do not explain
cognition as an epiphenomenon of information processing.”
(Hollnagel, 1998, p. 11)
I will try to describe the connections between these two fields, since
they both, in some sense, depend on each other. According to Cognitive Systems Engineering (CSE) it is possible to view a number of persons and the equipment they use as a Joint Cognitive System, meaning that
the system as a whole strives toward a goal and that the system can modify
its behavioural pattern on the basis of past experience to achieve anti-entropic ends. Dynamic decision-making is relevant since it concerns the
characteristics of human decision-making in uncertain environments,
which is the primary interest of this thesis.
Below I will elaborate on the theoretical foundation of this thesis. The following sections highlight different aspects of the same topic, namely control of unpredictable systems, and especially human control of such systems.
2.1 Control
The term “control” is widely used in a range of disciplines. According to cybernetics as described by Ashby (1956), control is when a controller keeps the variety of a target system within a desired performance envelope. A control situation consists of two components, a controlling system and a target system, where the controlling system is trying to control the state of the target system.
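Ashby's law of requisite variety gives this a compact form. In the logarithmic measure of variety (the notation here is my shorthand, not used elsewhere in this thesis): if V(D) is the variety of the disturbances acting on the target system, V(R) the variety of responses available to the controlling system, and V(O) the variety of outcomes at the variables to be kept in the desired state, then

    V(O) >= V(D) - V(R),

so outcomes can only be confined to the desired performance envelope if the controller commands at least as much variety as the disturbances, less whatever variety the goal itself tolerates.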
A simple example is a thermostat that is designed to keep the temperature in a room at twenty degrees Celsius. It is normally attached to a radiator, or some other device that can change the temperature of the room. The thermostat needs information about the current temperature in the room so that it can turn the radiator on or off in accordance with the desired temperature. If the temperature in the room is above twenty, the thermostat turns the radiator off. If the temperature decreases, the thermostat triggers the radiator in order to increase it. This is a simple example of feedback-driven regulation.
A completely feedforward-driven construction could instead provide the radiator with output signals in accordance with a model of the typical temperatures of the room during a typical year, and hopefully produce some kind of temperature close to twenty degrees. Feedforward can thus exist without feedback and vice versa. However, most systems, just like we humans, work with both feedforward- and feedback-driven control. The reason for this is obvious. A system based only on feedback (like the thermostat above) will only take action if a deviation from the steady state occurs. A completely feedforward-driven system, on the other hand, would be able to take action in advance, but would not be able to adjust its performance in relation to the system it acts upon. Feedback control examines the difference between a state and a desired state and adjusts the output accordingly. Feedforward-driven controllers use knowledge of the system they are supposed to control to act directly on it, anticipating changes.
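The contrast between the two regulation strategies in the thermostat example can be put in a few lines of code. This is a deliberately naive sketch: the on/off rule and the seasonal model are invented for illustration.

    # Toy thermostat contrasting feedback and feedforward regulation.
    # The rules and the seasonal model are invented for illustration.

    SETPOINT = 20.0  # desired room temperature (deg C)

    def feedback_control(measured_room_temp):
        """React to a measured deviation: heat only when too cold."""
        return measured_room_temp < SETPOINT      # True = radiator on

    # The feedforward controller never measures the room; it acts from a
    # model of typical conditions over the year.
    TYPICAL_OUTDOOR = {"winter": -5.0, "spring": 8.0,
                       "summer": 18.0, "autumn": 7.0}

    def feedforward_control(season):
        """Act in advance from the model, anticipating the need to heat."""
        return TYPICAL_OUTDOOR[season] < 12.0     # model-based rule

    # Feedback corrects deviations only after they occur; feedforward
    # acts in advance but cannot notice when its model is wrong
    # (for example an unusually cold summer day).
    print(feedback_control(18.5))         # True: deviation already present
    print(feedforward_control("summer"))  # False: the model expects warmth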
Hollnagel (1998) has proposed a simple model of human control based on
Neisser’s (1976) perceptual cycle. Similar models exist in different
forms, like Brehmer’s Dynamic Decision Loop (DDL) (Brehmer, in press)
or Boyd’s OODA-loop (1987). There are also some similarities with
Miller, Galanter & Pribam’s TOTE-unit (1960).
Figure 2.1: The basic cyclic model of control (Hollnagel, 1998).
The controller, who is assumed to have a goal, a desired state that is to be achieved, takes action based on an understanding, a construct, in his/her effort to achieve or maintain control over a target system. This action produces some kind of response from the target system. These responses are
the feedback to the controller. It is however not self-evident that the
observable reactions are purely a consequence of the controller’s action;
they may also be influenced by external events. The controller will then
maintain or change his/her construct depending on the feedback, and take
further action. The model above (figure 2.1) will be used as a reference
throughout the rest of this thesis, referred to as the “basic cyclical model”.
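The basic cyclical model can be paraphrased as an event loop. The sketch below is my paraphrase, not Hollnagel's formulation: the classes are placeholders, and the random term stands for the external events that make feedback ambiguous.

    # Paraphrase of the basic cyclical model as an event loop. The
    # classes are placeholders invented for illustration.
    import random

    class TargetSystem:
        def __init__(self):
            self.state = 0.0
        def respond(self, action):
            # Responses mix the controller's action with external
            # events, so feedback never reflects the action alone.
            self.state += action + random.uniform(-0.5, 0.5)
            return self.state

    class Construct:
        """The controller's current understanding of the situation."""
        def __init__(self, goal):
            self.goal = goal
            self.believed_state = 0.0
        def update(self, feedback):
            self.believed_state = feedback        # revise the construct
        def select_action(self):
            return 0.5 * (self.goal - self.believed_state)

    target, construct = TargetSystem(), Construct(goal=10.0)
    for _ in range(20):            # the cycle: act, observe, revise, act...
        action = construct.select_action()
        feedback = target.respond(action)
        construct.update(feedback)
    print(round(construct.believed_state, 1))   # near the goal, noise aside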
Above, I have given a brief description of control. According to this description, control is successful if the controller manages to perform a task in accordance with a goal. When this fails, we refer to it as a deviation. But what is a deviation? According to Kjellén (1987), a deviation is the classification of a system variable when the variable takes a value that falls outside a norm.
“All different classes of deviations are defined in relation to norms at the systems level, i.e., with respect to the planned, expected or intended production process.”
(Kjellén, 1987, p. 170)
Two basic elements in the definition of deviations are identified by Kjellén: system variable and norm. A norm and a system variable can be described in different ways depending on the kind of system that is under focus. The norm is always some kind of desired state,
although the definition of these states can be of many different kinds, like
a discrete state or a performance envelope. The system variable/variables are what we gather information about in order to judge whether or not the system performance is within the desired state (see figure 2.2).
Figure 2.2: Illustration of deviation. A process runs over time and is ideally kept within a desired performance envelope. The possible performance envelope is, however, almost always larger than the desired one, otherwise the norm would be unnecessary. To leave the desired state at any time is considered a deviation.
2.1.1 WHAT IS A “CONSTRUCT”?
Construct is the term used by Hollnagel to describe the current understanding of the situation in which control is exercised, and the understanding of how the controller is to reach its goal. The notion has clear connections to terms like “mental model” and “situation awareness” (Endsley, 1997), but it does not make any claims of explaining the inner workings of the human mind, like theories based on the information processing paradigm do. In fact, the controller does not even have to be human. What is important to recognize is that the construct is based on
competence (see the Contextual Control Model below) and that it is hard for the controller to distinguish whether the feedback given is a product of his/her own actions or of the environment. It is also easy to understand why the construct is the basis for control. Brehmer (1992) states a
similar set of requirements for control:
1. There must be a goal (the goal condition).
2. It must be possible to ascertain the state of the system (the observability condition).
3. It must be possible to affect the state of the system (the action condition).
4. There must be a model of the system (the model condition).
Brehmer refers the last condition to Conant & Ashby’s classic paper “Every good regulator of a system must be a model of that system” (1970). If we do not have a good model, the only solution is to use feedback regulation, meaning that we respond to changes in the target system after they have actually occurred. Feedback regulation is therefore of great importance in many systems, since perfect models of real-world systems rarely, if ever, exist.
2.1.2 GOALS AND NORMS
Goals and norms are central concepts in control. A goal is something that is needed to take meaningful action. Norms are the way we normally do something, or the value that a system variable normally has or should have. There are some interesting distinctions that can be made between different kinds of norms and goals. A goal can for example be that a variable should be kept within a certain performance envelope. A power plant should produce a certain number of megawatts, not too many since it may harm the equipment, and not too few since it will not be able
to supply the buyers of the electricity. The other kind is the goal referring
to a limit, which declares that a system variable may not pass a given
value. For example, I may not use a certain parking space longer than I
have paid for. Another important distinction is what norms and goals refer
to. If they refer to a discrete state, it is easy to determine deviations from it.
They may also refer to something less well defined where the boundary is
stretching over a continuum; a value may for example be “acceptable”
although it is not perfect. In these cases it is much more difficult to determine exactly when a deviation occurs.
There can thus be a wide span of vagueness in these different definitions. In a technical regulation task, like a thermostat, the desired state can
be very precise and can also be measured. The “norm”vii for the thermostat
is the given desired temperature, and a deviation is any other temperature.
This norm is very clearly defined and so is the system variable it relates to,
the measured temperature. In other, more complex, technical systems, the norm, or steady state, may be a composition of several different variables that together define the state of the system.

vii. Of course thermostats do not have norms in the sense humans have. But we can still use it as a valuable example, since the purpose of the thermostat, the goal, is to keep the temperature at a desired level, and the “norm” for the thermostat is the reference given by its user.
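Kjellén's two elements translate directly into a small check, sketched below with invented readings and norms: an envelope norm for a band, a limit norm for a value that may not be passed, and a composite state that is in order only if every variable satisfies its norm.

    # Deviation in Kjellén's sense: a system variable is classified as a
    # deviation when its value falls outside its norm. Values invented.

    def within_envelope(value, low, high):
        """Envelope norm: the variable must stay inside a band."""
        return low <= value <= high

    def within_limit(value, limit):
        """Limit norm: the variable may not pass a given value."""
        return value <= limit

    readings = {"output_mw": 480.0, "turbine_rpm": 3010.0}
    norms = {"output_mw": lambda v: within_envelope(v, 450.0, 520.0),
             "turbine_rpm": lambda v: within_limit(v, 3000.0)}

    # A composite steady state: the system is in its desired state only
    # if every variable satisfies its norm; anything else is a deviation.
    deviations = [name for name, value in readings.items()
                  if not norms[name](value)]
    print(deviations)   # ['turbine_rpm']: the limit norm is exceeded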
2.1.3 CONTROL REQUIRES A TARGET SYSTEM TO BE CONTROLLED
Control, as described above, is an action where a controller tries to change the state of a target system into another state, or conversely tries to prevent the target system from changing state. The term “dynamic systems” is used to describe systems that develop over time, both independently of the controller’s actions and as a consequence of them. These are the target systems that are of interest to this thesis. They may also be dynamic in the sense that the development of the system is subject to change in a complex way compared to the input given to it, largely depending on the preconditions in the system. Such systems thus disobey proportionality or additivity, even if they can seem to have these characteristics under some circumstances (Beyerschen, 1993). Brehmer has described three characteristics that are important for describing the problems a controller faces when trying to control a dynamic system (Brehmer & Allard, 1985; Brehmer, 1987; Brehmer & Allard, 1991):
1. It requires a series of decisionsviii. These decisions are not independent.
2. The environment changes both spontaneously and as a consequence of the decision-maker’s actions.
3. The time element is critical; it is not enough to make the correct decisions and to make them in the correct order, they also have to be made at the correct moment in time.

viii. When Brehmer writes “decisions”, I assume that he also means that these decisions are actually transformed into actions.
The example Brehmer uses is a forest fire. Forest fires are conceptually fairly easy to understand, but very hard to control, mainly because of the difficulties in predicting their behaviour. Will the wind, for example, change during the process of fighting the fire? If it does, the fire fighters have to move to a different side of the fire, a large undertaking if the fire is widespread. How fast will the wind blow? The speed of the fire can cause dangerous situations for the personnel fighting the fire and will also have great implications for the logistics of the fire-fighting organization. We must not forget that the dynamics largely emerge from the understanding of the controlling system. Even simple systems may appear dynamic to the controller if the controller lacks understanding of the system dynamics or has a faulty understanding of the system.
2.2 Context and complexity
Context, or the reality in which control is executed, can be a source of friction which reveals the difference between the construct or model that the
controller has and the actual development of the control process (Clausewitz, 1997, orig. 1832-1837; Neisser, 1976). Human performance is, as
pointed out above, largely determined by the situation. The environment,
our cognitive limitations and the temporal aspects of our activities constrain the possible alternatives we can choose from when faced with a
decision.
If we consider a common task like driving to work, we quickly realize that even though it mostly works out in the desired way, there is a large number of things that can possibly go wrong, and we always make several adaptations to the surroundings while driving. Other drivers, construction sites and animals are just a few of the things that have influence on the way we drive our vehicles. On the other hand, context is very necessary for driving since the limitations it provides at the same time structure the task.
Imagine driving to work without any roads, traffic rules or signs. The road has the contextual feature of limiting the area we drive on. The rules of traffic help us manoeuvre in traffic. By constantly reducing the number of possible alternatives of choice within the system of “traffic”, it becomes possible to move large and heavy vehicles at considerable speeds close to each other, with a surprisingly low accident rate. Context thus provides both structure and uncertainty at the same time. Clausewitz (1997) emphasizes the difference between “war on paper”ix and real war, and stresses that it is the small things that we cannot foresee that really make the difference. Bad weather, a missing bolt, a misunderstood message or a miscalculation are all things that, in isolation, do not seem that serious. But a missing bolt in a vehicle can block an entire road, bad weather can delay a
crucial assault on enemy lines, a misunderstood message can make the decision-maker misjudge a situation. Context is thus the current needs and constraints, the demand characteristics of the situation.

ix. Clausewitz’ famous work “On War” naturally discusses warfare, but it is possible to apply his arguments to most activities that can be described abstractly/theoretically and then performed in practice.
2.2.1 THE COCOM AND ECOM MODELS OF CONTROL
The Contextual Control Model (COCOM) (Hollnagel, 1993) provides a
framework for examining control in different contexts. Being a part of
CSE, the COCOM is based on a functional approach. A functional
approach “is driven by the requisite variety of human performance rather
than by hypothetical conceptual constructs” (Hollnagel, 1998). COCOM
thus concerns the requisite variety of human performance. Ashby (1956)
described the concept of requisite variety, meaning that a system trying to control another system must, at least, match the variety of the target system. Control can, as discussed above, be both compensatory or feedback-driven as well as anticipatory or feedforward-driven. There are three basic
concepts described in the COCOM: competence, control and constructs.
Competence regards the possible actions or responses that a system can
apply to a situation, in accordance with the recognized needs and demands
(recognized in relation to the desired state and the understanding of the
target system state). It also excludes all actions that are not available or
cannot be constructed from the available actions.
Control characterizes “the orderliness of performance and the way
competence is applied” (Hollnagel, 1993). This is described in a set of
control modes, scrambled, opportunistic, tactical and strategic (see
below). According to COCOM, control can move from one mode to
another on a continuum.
Constructs refer to the current understanding of the system state in the
current situation. The term “construct” also reveals that we are talking
about a constructed, or artificial/subjective, understanding that does not necessarily have to be objectively true. They are, however, the basis for decision
making in the situation.
The contextual control model is based on the three basic concepts, but
they do not, as is obvious, solely decide the control mode of a system,
since it also depends on contextual factors. The main argument in the
COCOM is that a cognitive system regulates (takes action) in relation to
its context rather than “by a pre-defined order relation between constituent
functions”. Regularities in behaviour are from this point of view more an effect of regularities in the environment than of properties of human cognition. The four characteristic modes of control suggested in the model
describe the level of actual performance at a given time.
Scrambled mode is when the next action of the controlling system is
apparently irrational or random. In this mode the controller is subject to
trial and error, and little reflection is involved.
Opportunistic mode describes the kind of behaviour when action is a
result of salient features in the environment, and limited planning or anticipation is involved. The results of such actions may not be very efficient,
and may give rise to many useless attempts.
Tactical mode is characteristic of situations where performance more or
less follows a known procedure or rule. The controller’s time horizon goes
beyond the dominant needs of the present, but planning is of limited range
and the needs taken into account may sometimes be ad hoc. If a plan is frequently used, performance may seem as if it were based on a procedural prototype – corresponding to, e.g., rule-based behaviour – but the underlying basis is completely different.
Strategic control represents the mode where the controller uses a wider
time horizon and looks ahead at higher level goals. The choice of action is
therefore less influenced by the dominant features of the situation. Strategic control provides a more efficient and robust performance than the
other modes.
In everyday life most humans act on a continuum stretching from opportunistic control to tactical control (Hollnagel, 1998). This comes from the fact that we mostly act regularly, meaning that most of our actions are habitual, well known and thus re-occurring at almost the same time every weekday. If something unusual happens, we may need to plan for it in advance; otherwise we run the risk of being out of control. Just imagine your mother-in-law suddenly appearing on the porch.x

x. I am in this case referring to the mythological/archetypical image of a mother-in-law, seen in movies and cartoons, rather than actual mothers-in-law.
Hollnagel has also extended the control model, calling it ECOM (Extended Control Model) (Hollnagel, 2002b). In this version, control is described as four different, parallel ongoing activities that interact with each other. These activities can be described as both open-loop and closed-loop activities, and on some levels a mixture. The main reason for the development of the ECOM is to acknowledge that action takes place on several levels at the same time, and that this action corresponds to goals at different levels. This clearly has similarities with Rasmussen’s SRK modelxi (1986), although the ECOM is extended to relate to concepts like goals and time. For example, while driving, the main goal is to get to a specific destination, but there are also other goals, like keeping track of the position of the car relative to other vehicles, assuring that there is enough fuel for the trip etc. The ECOM describes control on the following activity levels: Tracking, Regulating, Monitoring and Targeting (see fig 2.3).

xi. Rasmussen’s model describes human actions as Skill-based, Rule-based and Knowledge-based. It should also be noted that the activities are not described as parallel in Rasmussen’s model, as they are in the ECOM.
Figure 2.3: The Extended Control Model (Hollnagel, 2003).
In order to be in “effective”, or strategic (according to the COCOM), control, the JCS, or controller, has to maintain control on all levels. Loss of control on any of the levels will create difficulties, and possibly risk, for the controller. Figure 2.3 is also an effort to describe the dependencies between the different levels in a top-down fashion, in a way corresponding
to the control modes of the COCOM. If targeting fails, the mode of control
obviously cannot be strategic, and so on. This can also be a conscious
strategy from the controller. If the controller experiences a critical situation on the level of tracking and regulating, he/she may temporarily give
up targeting and monitoring. It is sometimes possible to do the reverse, to
give up tracking and regulating in favour of the higher levels of control.
For example, if someone gets lost when driving, it is possible to stop the
car at the side of the road in order to try to figure out where to go. In that
case, the driver is no longer tracking and regulating since the vehicle is
standing still, but he/she is still trying to create a goal on the level of targeting and monitoring.
If we, like Hollnagel (2002b), use driving as an example, we can present some of the characteristics of the four different levels. Tracking is in that case a closed-loop, feedback-driven activity, although there is a strong dependency between the tracking and regulating levels. Regulating is a mixture of both open-loop and closed-loop control, although mostly the former. For a driver to avoid collisions, he/she must be able to predict the position of his/her car relative to other objects, and such an activity cannot be completely closed-loop. Monitoring is mainly open-loop since it is mostly about making predictions in a longer perspective. Likewise, Targeting is open-loop since it mostly concerns planning in a long perspective. If we drive and get traffic information concerning the situation near our present position, we monitor this and try to find alternative roads or slow down. Targeting is the more overall planning concerning the fact that we want to go from A to B.
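One way to render the ECOM's four parallel activities in code is as loops with different update rates and time horizons, as in the sketch below. The layer names and their open/closed-loop character follow the description above; every number is an invented illustration, since the model itself prescribes no values.

    # The ECOM's four concurrent activity levels rendered as loops with
    # different rates and horizons (driving example). All numbers are
    # invented; the model itself prescribes none.
    from dataclasses import dataclass

    @dataclass
    class Activity:
        name: str
        loop_type: str     # feedback-driven vs anticipatory
        period_s: float    # how often the activity is re-evaluated
        horizon_s: float   # how far ahead the activity looks

    ECOM_LEVELS = [
        Activity("tracking",   "closed-loop",          0.1,     1.0),  # lane keeping
        Activity("regulating", "mixed, mostly open",   1.0,    10.0),  # car spacing
        Activity("monitoring", "open-loop",           60.0,  1800.0),  # traffic news
        Activity("targeting",  "open-loop",         1800.0, 36000.0),  # getting to B
    ]

    def maintainable_levels(time_available_s):
        """Under acute time pressure the fast inner loops are kept up,
        while the slower anticipatory levels may be given up for a while."""
        return [a.name for a in ECOM_LEVELS if a.period_s <= time_available_s]

    print(maintainable_levels(0.5))     # ['tracking']: only the innermost loop
    print(maintainable_levels(3600.0))  # all four levels can be maintained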
The control modes and levels help us to describe control. The ECOM describes control on different levels in relation to different goals, and this fits very well with Kjellén’s (1987) ideas about loss of control in situations lacking a norm or a goal. However, we should note that what Kjellén discussed was loss of control locally, meaning that what is an accident if we analyse from one perspective can still be an incident or just a disturbance from another perspective. For example, if a worker in a factory gets hurt while using a machine, it is an accident on the unit he/she is working on, but from the perspective of the total production it may only be considered an incident. It is therefore important to decide on which level control is studied, i.e. identifying the borders of the studied system (see below), in order to understand what targeting and monitoring are in relation to the ongoing activity, the purpose of the controlling system.
2.3 What is a Joint Cognitive System?
Above, we have concluded that we can describe a cognitive system
functionally. We have also mentioned that a system composed of one or
more individuals working with some kind of technical artifacts can be
described as a Joint Cognitive System. In this case, we do not differentiate
man from machine in terms other than functions, and if man and machine perform a function together, they can be viewed as one. We are thus less interested in the internal functions of either man or machine, but rather in the external
functions of the system (Hollnagel, 2002b). A clear problem with the “systems” perspective is to define the borders of the system. Clearly, parts of a
larger system can be studied as a joint cognitive system. There is thus a
pragmatic dimension when defining the boundary of a system.
Translated into a theory of control, we could say that systems involving
several persons exist since we need more personnel to match the requisite
variety of the target system. This may also lead to systems growing more and more, since controlling the control system in itself becomes a task. In
some well-defined situations, this might not be necessary, since it is possible to predict the variety in the target system so well that responses are
more or less “automated”, although they are executed by humans. In other,
less well-defined systems, coordination and planning are severe problems,
and the organization has to spend many resources on these aspects. Military systems, and organizations structured in hierarchies in general, are
examples of this. The executives (soldiers and their weapons) become so
many that they need to be managed to coordinate the effect of their work.
How do we then define the borders of a JCS? Hollnagel suggests that a
pragmatic approach should be used, based on the functionality. For example, a pilot and his plane is a JCS. But a plane, pilot and crew (in an airline carrier) is also a JCS, and several planes within an air traffic management system are also a JCS. In order to define if a constituent should be a part of
the JCS, we can study if the function of it is important to the system, i.e. if
the constituent represents a significant source of variety for the JCS –
either the variety to be controlled or the variety of the controller (Hollnagel, 2002b). The variety of the controller refers to constituents that
allow the controller to exercise his variety, thus different kinds of mediators. Secondly, we need to know if the system can manipulate the constituent, or its input, so that a specific outcome results. If not, the constituent
should be seen as a part of the environment, the context. In the case of aviation, Hollnagel states that weather clearly is a part of the environment
rather than the JCS, since it is beyond control. If we look at the case of a
plane and its crew, the air traffic management can be seen as a part of the
environment, since the plane and its crew rarely control the ATM. The
border of a JCS is thus defined more in terms of its function than its structure or physical composition, although these sometimes are clearly
related.
A JCS is thus a system capable of modifying its behavioural pattern on
the basis of past experience to achieve anti-entropic ends. Its boundary is
analytically defined from its function rather than its structure. The boundary is defined with an analytical purpose, meaning that a JCS can be a constituent of a larger JCS.
2.4 Control and time
We usually say that the rate at which things happen has increased. By that we mean both physical speed in cars, planes, trains and boats, and transaction speed, as in economics, communication and processes. This goes hand in hand with the technological development, which in itself becomes faster and faster, but also affects everything else that is done with the help of technical artifacts, thus almost everything. For this we try to compensate with even more technology, like the safety systems in cars, mail filtering tools and digital personal organizers. But these tools do not change the fact that when things happen fast, it is easy to lose control.
If I drive my car at 80 km/h instead of 50 km/h, I will have less time to
respond if something gets in the way of my intended path, and thus less
chance of choosing an appropriate action. Time for a controller is thus relative to the complexity of the task and the time to select action, see figure
2.4. If there are only a few obvious choices of action given an interpretation of a situation, there is a higher chance of choosing an alternative that
will retain control.
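Figure 2.4 relates the control modes to this time relation. A toy rendering might look like the sketch below; the COCOM itself gives no numeric cut-offs, so the thresholds are entirely invented to make the monotone relation concrete.

    # Toy mapping from available time to COCOM control mode. The COCOM
    # gives no numeric cut-offs; these thresholds are invented.

    def control_mode(time_available, time_to_select_action):
        """More slack between available time and the time needed to
        select an action permits a more orderly mode of control."""
        slack = time_available / time_to_select_action
        if slack < 1.0:
            return "scrambled"      # not even time to pick an action
        if slack < 2.0:
            return "opportunistic"  # driven by salient features
        if slack < 5.0:
            return "tactical"       # known procedures, limited planning
        return "strategic"          # wide horizon, higher-level goals

    print(control_mode(1.0, 2.0))    # scrambled: 80 km/h, obstacle ahead
    print(control_mode(10.0, 2.0))   # strategic: ample time to look ahead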
Figure 2.4: Control Modes and time (Hollnagel, 2002a).
We must however not only consider the time needed to evaluate feedback and choose action, we must also consider the time needed to actually
perform the action. It is of course possible to gain total time by improving
the speed of the action chosen. By inventing more powerful brakes, a car
may gain the critical parts of a second that can make the difference
between an accident and an incident. However, humans have a tendency to
learn this, and thus go even faster than before, so normally the effect of this is only temporary. This is often referred to as “risk homeostasis” (Wilde, 1994). It is also possible to gain time by helping the controller to make the right decision in a critical situation, through the design of interfaces or through training for anticipated events where control might be lost. The last and
most common tactic is however to increase the speed of the feedback so
the controller gets information about the process he/she is to control as fast
as possible.
Brehmer & Svenmarck (1994) use the term “time-scale” to refer to different time horizons in the activity of a system, very similar to the control modes described by Hollnagel (see above). They illustrate the concept by taking a fire-fighting organization as an example. The leader of the organization works on one time-scale, where his time horizon depends on the perceived development of the fire and the speed of the fire-brigades he/she commands. The fire-brigades work on a shorter time-scale, directly coupled to the local development of the fire in their vicinity. The fire-brigades thus have to take action more often than the leader of the fire-fighting organization has to, although they all work towards the same goal.
One problem is naturally that the concept of time is very hard to grasp,
since it in some sense is the “fourth dimension” of our descriptive world.
To describe time without relating it to something else is almost impossible. There are however some basic ideas that are worth mentioning. First
of all we have “objective” time, or clock time, in terms of seconds, minutes, hours, years etc. This notion of time is related to speed since a year is
the time the earth needs to circle around the sun. Recently, we have built
atomic clocks that provide very accurate measurements of time, but time
is still an entity related to physical movement.
We then have the problem of how time is experienced and judged by
humans and animals. After all, it would be almost impossible to function
without the ability to judge the duration of events. Followers of the information processing paradigm have suggested that humans and animals have
an “inner clock” that provides this functionality (De Keyser et al, 1998).
Another more pragmatic view is to think of time as relative to the environment in which the human/animal live and function, so called contextual
time. In that view, events are ordered along a temporal reference system
inherent to the processes facing the controller. That view on time can help
us to explain why a controller can achieve control or not, and therefore it is
adopted in this thesis.
2.4.1 CONTROLLERS AND TIME
Unlike games that are played in turns, where the player has unlimited time
to think and plan before he/she acts, most control situations force the controller to take action in a timely manner since it is impossible to stop the
development of the situation. When facing a forest fire or a LOCAxii in a nuclear power plant, the controller has to take action before it is too late, and he/she also needs to understand the time dynamics of the target system and the controlling system to do this. Time thus shapes human action, meaning that the possible mode of control often is a consequence of the time available and the controller’s understanding of the situation. As shown above in the ECOM model, control is achieved on various levels that are clearly related to time.

xii. Loss of Coolant Accident.
Figure 2.5: Time and Control in the cyclical model (Hollnagel,
2002a).
Regulating and tracking are characterized by a short time-span where the controller responds to changes in the environment. Targeting and monitoring, on the other hand, are conducted with a longer perspectivexiii, but still depend on the other control levels. Hollnagel (2002a) has developed the basic cyclical model, now including time (see fig. 2.5).

xiii. Observe that the use of “short” and “long” time perspective must be considered in relation to the rate of change in the target system and the pace with which the controlling system produces changes in the target system’s state.
According to the model, the controller gets feedback, which has to be evaluated, from the process he/she is to control. After this, the controller has to choose an action, or choose to do nothing, in order to maintain control of the process. Both these parts take time. Then the action has to be performed, something that also takes time. All three parts are weighted against the actually available time to take action in order to change the state of the target system. For the controller, this is an estimation, a part of his/her construct. At the same time, a "real" available time exists, a time window, and if the controller fails to estimate it, due to inexperience or unforeseen events, he/she might lag behind the process and eventually lose control. A common way to handle this problem is the "speed-accuracy trade-off": the controller either reduces speed to gain accuracy, for example when driving, or the opposite, reduces accuracy in order to gain speed.
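The timing relation in the model can be made concrete with a minimal sketch. This is my own illustration of the constraint, not part of Hollnagel's model; the variable names and numbers are assumptions chosen for the example.

```python
# Sketch: the controller stays in control only as long as the time needed to
# evaluate feedback, select an action and carry it out fits inside the time
# window offered by the target process. All numbers are illustrative.

def in_control(t_evaluate: float, t_select: float, t_execute: float,
               time_window: float) -> bool:
    """True if one pass through the control loop fits the available time."""
    return t_evaluate + t_select + t_execute <= time_window

# Speed-accuracy trade-off: shaving time off evaluation or selection
# (i.e., accepting less accuracy) may be the only way to fit a shrinking window.
print(in_control(t_evaluate=2.0, t_select=3.0, t_execute=1.0, time_window=10.0))  # True
print(in_control(t_evaluate=2.0, t_select=3.0, t_execute=1.0, time_window=4.0))   # False
```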
The model clearly illustrates the effects of time in a control situation, although it only relates to one control goal. In reality, many control situations are far more complex, since they include more than one control goal/target system at the same time, meaning that the controller has to estimate the time available to achieve not just one goal, but many. In those cases the controller can be compared to a juggler, who uses the time during which some of the juggled objects are in the air to maintain control over the others. Successful control is thus a matter of coordinating actions both in space and in time.
2.4.2 TIME AND THE ECOM
Although the relation between time and the ECOM has never been explicitly described in the form of a model, there are several obvious relationships between the different activity loops and time. It is, as suggested above, possible to maintain control on certain levels, depending on the time available, even if it is not possible on others. Establishing goals
time available even if it is not possible on other. Establishing goals
demands time, and the time needed to elaborate a goal depends on the
competence of the controller in relation to the current situation. To make
incorrect assessments concerning time on one control level can thus lead
to disasters on other. This is why very sudden changes in the control situation cause dangerous situations. When I go out in the morning and find
that it has snowed during the night, I will drive slower than in dry weather,
but if I am surprised by a slippery spot on the road on a sunny day, I may
loose control of my vehicle since I never had a chance to make a correct
assessment of the situation, and hence reduce my speed. This means that
the rate of change in the process to be controlled, the requisite variety, can
be complex in the sense that the changes occur very suddenly, making it
difficult for the controlling system to match it.
We can thus conclude by stating that the different activities in the ECOM operate on different time-scales, in the same manner as they work towards different goals. The control levels also interact, and if control fails on one level, this is likely to have an effect on the others as well.
2.5 Human limitations in control
Human decision-making in complex/dynamic situations is the core component of control in complex systems, since it is always humans who have to take over the control task in a system if something unexpected (not included in the normal/expected functionality of the regulating system) happens. Hollnagel (1998) describes a circulus vitiosus in which a decision maker gets caught in a false understanding of a control process because something unexpected happens. The basic idea is that unexpected feedback (false, incomplete, too much, too little etc.) may challenge the construct of the controller (see figure 2.1) and thus end with an incorrect understanding/construct of the situation. This in turn leads to inadequate compensatory actions or feedforward, depending on the control level, which introduce even more undesired variation in the system, thus giving new, confusing feedback to the controller.
From the discussion above about dynamic systems, we have concluded that decision making in this context is characterized by time pressure, inadequate or lacking information, and external influence on the actual execution of control and the feedback given. Further, Orasanu & Connolly (1993) point out that decision-making in complex systems often puts even more pressure on the decision maker, since a decision may, if wrong, be dangerous (for example in nuclear power plants) to a large number of persons (including the decision maker) and/or have great economic consequences. All these different factors create stress that has to be taken into account when reasoning about control in real-world systems rather than hypothetical regulation tasks.
According to Conant & Ashby (1970) and Brehmer (1987), it is necessary that the controlling system is/has a model of the system that it is supposed to control, a model that minimally matches the requisite variety of the target system. Functionally, this is true. There are, however, some additional difficulties that we need to consider when we discuss human decision-making.
The human psyche does not work in the rational way a machine does, even if we claim to study "cognitive systems". The cogs in the cognitive machinery do not always turn in the right direction, something that was recognized already by Lindblom (1959) when he concluded that most human decision-makers facing complex situations rarely base their decisions on analytic reasoning, but rather seem to use the tactic of "muddling through". By "muddling through", Lindblom meant that the decision-maker seems to find a few obvious alternatives and try them. This simple heuristic does not aim for the perfect solution, but rather for one that works at the moment. Thirty years later, the fields of dynamic decision-making and naturalistic decision-making are devoted to examining the psychology of decision-making under similar conditions. One of the major results from the studies in naturalistic decision-making is the theory of "recognition primed decision making" (Klein et al., 1993). The basic idea behind the theory is that a decision maker facing a problem tries to identify aspects of the new problem that have similarities with previous experiences, and tries to find a solution to the new problem from the solutions used previously in similar situations.
Another important finding comes from the Bamberg group, which has made substantial contributions to the field of dynamic decision making, or "komplexes Problemlösen" (complex problem solving) (Dörner, Kreuzig, Reither & Stäudel, 1983; Dörner, 1989). Using microworlds xiv for experimentation, Dörner & Schaub (1994) have identified some "typical" errors xv made by decision makers when facing complex problems. The errors correspond to a sequence of phases in what Dörner calls "action regulation", which is similar to the basic cyclical model of Hollnagel (1998) described above, but without the circular arrangement. The sequence reflects a "decision event" rather than a process, but it is nevertheless interesting, since the errors identified can certainly be applied to a circular model as well. Brehmer (1992) has summarized the findings of the Bamberg group, calling them "the pathologies of decision making".
According to Dörner, the pathologies should not be seen as causes of failure in themselves, but rather as behaviours that occur when people try to cope with their failures. However, Jansson (1994) promotes the idea that the pathologies actually are precursors to failure rather than ad hoc explanations. Either way, it is to some extent possible to identify the pathologies in the actual behaviour of a person trying to control a dynamic system.
The first pathology is called thematic vagabonding and refers to a tendency to shift goals. The decision maker jumps between different goal states, rather than trying different solutions to reach the same goal state, which probably is more important. The second pathology is encystment. The consequence of this behaviour is that the controller sticks to goals he/she believes he/she is able to achieve, rather than trying to formulate a more relevant goal state. The third pathology is avoiding making decisions. It is claimed that ostriches use this tactic when they put their heads in the sand rather than run if frightened. A fourth pathology is blaming others for one's own failures. A fifth pathology is delegating responsibility that cannot, or
xiv. A simulation developed for research purposes; see below for an elaborated discussion/description of microworlds.
xv. "Error" is in this case a heavily debated term. Assume that I refer to an action taken that will increase the variation of the system in an undesired way.
should not, be delegated. The other way around, not delegating, can also be dangerous, especially in hierarchic organizations where feedback reaches lower levels first, implying that delegation could increase the response time of the controlling system.
Brehmer observes that the pathologies fit into two categories, the first one comprising the first two pathologies, the other one the last three. The first category concerns goal formulation, the second one a refusal to learn from experience, which naturally is important considering the basic cyclical model. However, Brehmer also notes that we know little about the regularity of these pathologies, i.e., whether they are common, and we also do not know much about individual differences related to the pathologies.
Using the term "decision" can thus be seen as somewhat misleading, since it is fair to ask whether some actions taken in dynamic situations really had any alternatives. Of course we can use the term in retrospect and ask someone why he or she did something in a particular situation, but we have to remember that the answer is a reconstruction of a series of events. When we explain why we did something, we want to give a rational explanation, but it is not always the truth.
We can conclude from this that humans are the essential creative part of a cognitive system that can handle unanticipated events, but the human part of the system is also sensitive to a number of possible increases in undesired performance variation, both due to external influences that the controller is unable to understand correctly, and because of erroneous behaviours that may occur as a consequence of this.
2.6 Synthesis
From the basic cyclical model, presented above, we have concluded that control is founded on the ability to establish a construct, take action, monitor and adjust accordingly. The ECOM further divided the control loop into several levels, working simultaneously towards different goals on different time-scales: Targeting, Monitoring, Regulating and Tracking.
An interesting problem arises from the field of new information technology. Such technology is seen by many as the solution that will make it possible to manage even unforeseen situations, or processes whose development is hard to predict. Earlier, messages from "the field" to a commander had to be relayed, both through organizational levels and different communication media, before they reached their destination. Today it is common (or at least envisioned) that the data is available to the commander almost immediately via communication networks and databases, known as the network centric approach. A networked communication structure also means that anyone attached to the network, given the right permissions, could access any information in the network. This means that the time to retrieve information (feedback) is, or is going to be, much shorter than it used to be.
Table 2.1: Characteristics of traditional and envisioned command and control systems (Persson & Johansson, 2001).

"Traditional" C2-systems:
• Organised in hierarchies.
• Information distributed over a variety of systems, analogue and digital. The most common medium is text or verbal communication.
• Data is seldom retrieved directly from the sensor by the decision-maker. It is rather filtered through the chain of command by humans who interpret it and aggregate it in a fashion that they assume will fit the recipient.
• Presentation of data is handled "on the spot", meaning that the user of the data organises it him/herself, normally on flip-boards or paper maps. The delay between sensor registration and presentation depends greatly on the organisational "distance" between the sensor and the receiver.

Envisioned C2-systems:
• Organised in networks.
• All information is distributed to all nodes in the system. Anyone can access data in the system.
• Powerful sensors support the system and feed the organisation with detailed information.
• Data is mostly retrieved directly from the sensors. Filtering or aggregation is done by automation.
• Presentation is done via computer systems. Most data is presented in dynamic digital maps. The time between data retrieval and presentation is near real-time.
• It is possible to communicate with anyone in the organisation, meaning that messages do not have to be mediated via different levels in the organisation.
The idea behind this is that the control organization will be able to react to changes more rapidly, and thus have better possibilities to control the target system. The most central aspects of the new command and control visions are described in table 2.1.
As concluded above, the basic idea behind this concept is simple: in a conflict, the commander with the more accurate and faster information will gain the upper hand (Alberts, Garstka & Stein, 2000).
The idea of faster information retrieval is supported by the study by Brehmer & Allard (1991), which showed that even small delays in feedback seemed to have a great impact on the ability to control a dynamic situation. The target system in that case was simulated forest fires.
There are, however, other investigations that show different results. For example, Omodei et al. (in press) have performed a study very similar to that of Brehmer & Allard, and found the opposite: fast and accurate feedback actually decreased performance significantly in comparison with a more traditional information system in forest fire fighting. Omodei et al. provide some possible explanations for the somewhat puzzling findings:
“It appears that if a resource is made available, commanders feel
compelled to use it. That is, in a resource-rich environment, commanders will utilize resources even when their cognitive system is
so overloaded as to result in a degradation of performance. Other
deleterious effects of such cognitive overload might include (a)
decrement on maintenance of adequate global situation awareness,
(b) impairment of high-level strategic thinking, and (c) diminished
appreciation of the time-scales involved in setting action in train.”
(Omodei et al., in press)
The results from the Omodei et al. study could also be explained by the Misperception Of Feedback (MOF) hypothesis (Langley, Paich & Sterman, 1998). The MOF hypothesis is based on the idea that decision-makers/controllers have such large problems interpreting feedback in systems with interacting feedback loops and time delays that they systematically misperceive it. Performance in such situations is often better when it is based on simple, naïve decision rules than when decisions are based on feedback.
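The MOF idea can be sketched in a few lines of simulation. This is my own minimal illustration under assumed parameters, not the task used by Langley, Paich & Sterman: with delayed feedback, an aggressive feedback-based rule makes the system oscillate, while a simple fixed rule settles near the target.

```python
# Sketch of the misperception-of-feedback problem: a controller reacting to
# delayed feedback over-corrects, while a naive fixed rule stays stable.
# The process and all parameters are illustrative assumptions.

DELAY = 3            # feedback reaches the controller three steps late
TARGET = 100.0       # desired state of the target system

def run(feedback_based: bool, steps: int = 40) -> list[float]:
    state, history = 80.0, []
    for _ in range(steps):
        # the controller sees a state that is DELAY steps old
        observed = history[-DELAY] if len(history) >= DELAY else state
        if feedback_based:
            action = 1.5 * (TARGET - observed)    # aggressive correction
        else:
            action = 5.0                          # simple, naive decision rule
        state += action - 0.05 * state            # the process drains slowly
        history.append(state)
    return history

for label, fb in (("feedback-based", True), ("naive rule", False)):
    h = run(fb)
    print(f"{label}: range of states = {max(h) - min(h):.1f}")
```

Because each correction is based on a state that is three steps old, the feedback-based run swings ever wider around the target, while the naive rule converges smoothly towards it.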
The point is that while "fast" (in the case of Brehmer & Allard, immediate) feedback can improve performance on the level of regulating and tracking, it is not self-evident that fast feedback improves performance in relation to the higher control levels, like monitoring and targeting, that demand anticipatory actions. It could be that systems providing a controller with very fast and accurate feedback make the controller shorten his/her time horizon, since he/she will be able to evaluate the actions taken sooner than before.
If we look into the world of stock trading, an area where the network centric approach is applied in its full sense, this is very obvious. It could be one of the explanations for the sudden fluctuations on the stock market. Since information about business is available to all actors on the market at the same time, the reactions from the traders have to be very quick. This causes unwanted chain reactions, and also makes stock trading very sensitive to rumours and false information. The activity of stock trading thus largely corresponds to opportunistic control in terms of the COCOM, and sometimes even to scrambled control. Another interesting aspect of artificial systems like stock trading is that the regulation process is the result of very complex interactions between the traders, combined with the possibility to regulate almost immediately. Although the system has some built-in defences, the traders can respond several orders of magnitude xvi faster than the actual development of the firms whose value they trade. This means that corporations that have an actual value in terms of factories and products may become worthless, and the opposite, that corporations without actual physical value can increase in value.
In a sense, this is a problem that concerns most control systems that provide the opportunity to observe and react very fast, independent of whether the target system responds rapidly or with delay.
xvi. Roughly, selling or buying can be done within a matter of seconds. Actual trading with physical goods or expansion of factories/businesses takes weeks, months or years.
What would happen if we increased the time between actions of the controlling system,
but not the feedback from the target system? Earlier studies have focused on the problem of controlling systems with delayed feedback (Brehmer & Allard, 1991), and some on systems that respond slowly although the feedback is not delayed (Crossman & Cooke, 1974). One hypothesis is that slowing down the controlling system could make it easier to see the effects of one's own actions, since many real-world systems respond slowly to the interventions made by the controller. The reason why this should be positive is simple. Given that a controlling system under uncertainty tries to learn how it should achieve its goals, it is likely that it produces a lot of input to the target system. However, when it comes to human control, we know from the Bamberg studies (and others) that it is difficult to tell the difference between the effects of one's own actions and natural changes in the target system. This is especially true in cases where there are long delays between the action taken and the actual manifestation of changes in the target system, independent of whether this is an effect of slow response from the target system or of delayed feedback. It is in these cases easy to get caught in the circulus vitiosus where the next action taken will be based on a false understanding of the effects of the action taken before it. We can therefore assume that a "fast" control loop can be beneficial if the control process is based on correct decisions/actions, but if not, which is often the case in uncertain situations, it will probably lead to large fluctuations in the target system. The reasoning is naturally based on the assumption that the original pace of the control loop is faster than the rate of change in the target process. Below, I suggest a study that will examine the impact of action regulation on human control of slow-response systems.
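The fast-controller/slow-target problem can itself be sketched concretely. The following is my own illustration under assumed parameters, not a model of any of the cited studies: the target is a first-order lag, so only a small fraction of each intervention shows up per step, and a controller that re-intervenes every step, because it sees little immediate effect, piles up input that later manifests as a large overshoot.

```python
# Sketch: a slow-responding target system (first-order lag). A controller
# that "tops up" every step ignores the effect still in the pipeline and
# overshoots badly; one that waits for the response does much better.
# All parameters are illustrative assumptions.

LAG = 0.1            # fraction of the pending effect realised per step
GOAL = 50.0

def control(wait: int, steps: int = 60) -> float:
    state, pending = 0.0, 0.0
    for t in range(steps):
        if t % wait == 0 and state < GOAL:
            pending += GOAL - state        # top up towards the goal
        realised = LAG * pending           # the system responds slowly
        state, pending = state + realised, pending - realised
    return state + pending                 # eventual state once all effects land

print("acting every step:", round(control(wait=1), 1))    # ends far above 50
print("waiting 10 steps :", round(control(wait=10), 1))   # much closer to 50
```

The "top up" rule is exactly the misperception described above: the controller reacts to the visible state while ignoring the actions already in the pipeline, so the faster it is allowed to act, the larger the fluctuation it produces.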
This phenomenon becomes even more intricate if we consider organizations like the military. Even though the business of trading is complex, the military business, or any other business involving actual physical movement of large amounts of people and equipment, is even more complex, since all actions of any significance require planning to a much larger extent. This means that the controller has to work, as pointed out by Brehmer & Svenmarck (1994), at different time scales. Brehmer & Svenmarck made a study of organisational structure and its effect on control. Subjects were organised in mini-organisations facing the task of extinguishing a simulated forest fire. They found in their initial study that a centralised organisation, where one subject had to coordinate his/her colleagues, was superior to a more open organisation where any participant could communicate with anyone. Further studies have shown that the findings also relate to time, in the sense that the more open organisation becomes superior when time pressure increases. This fits very well with the ECOM, in the sense that if the rate of change of a target system increases, and thus cannot be foreseen, we have to adapt to that by lowering our level of control.
I would like to suggest a similar study, although the focus should not be on the organizational structure, but rather on the intricate problem of direct control versus indirect control (delegation). Direct control without slow response/delays gives the controller the possibility to act upon the system directly and could therefore improve performance in tasks where it is essential to respond fast rather than plan ahead. On the other hand, a single controller can only handle a limited number of tasks at the same time. If time pressure, or complexity, increases even more, it is likely that the controller will lose control. Delegating tasks is in such situations the most reasonable solution, but then the controller with the main responsibility for the task has to take the "dead" time between order and execution into account when planning, demanding more anticipatory planning. An interesting study would thus be to examine the balance between complexity, rate of change in the target system, and direct/indirect control.
Below follows a presentation of two studies intended to gather knowledge about this, and a discussion about methods for gathering such knowledge.
Chapter 3
Method
The theoretical foundations of this thesis proposal discuss control from the view of Joint Cognitive Systems. These systems tend to be composed of several individuals who are more or less well organized and use different kinds of artifacts to do their work. One way to gain knowledge about such systems is to use a qualitative approach and study such a system in the "field". This provides context-specific data that gives insight into the cognitive aspects of a workplace. As an example, Hutchins (1995) has very convincingly described how the crew of a navy ship uses various tools and work practices to calculate the headings for the ship. Hutchins argues that cognition cannot (or should not) be studied in laboratory settings alone, since "cognition" is divided between humans, artifacts and practices, as discussed above. One major problem with Hutchins' research is, however, that it is very hard to draw conclusions about causal relationships from it. The phenomenon per se, that cognition can be viewed as distributed between artifacts and humans, is interesting, but could have been proven equally well by studying one person using a calculator. The method he uses (ethnography) generates very exact knowledge about one single context (high ecological validity). His main argument is that we cannot understand a situation without living in/with it for a long time. In this thesis, however, I have outlined two research
questions that I would like to investigate. To do this by going into the field could be problematic. This is mostly because the questions I will examine contain some assumptions in the form of causal relationships. If I were to go out with my hypotheses and examine them in the field, it would be very difficult to determine these causal relationships. There is a risk that my observations might lead me to draw conclusions that are not internally valid.
A sounder approach would in that case be to have a less well-defined theory, more like an area of interest, go out into the field, collect data and then try to build some kind of model of how control works. Followers of, for example, grounded theory (Strauss & Corbin, 1990) use this method frequently. The basic idea is to let the data "speak" rather than trying to enforce a theory on the data. This can generate theory that has very good validity, if the researcher manages to live up to the very hard conditions of the grounded theory approach, namely to consciously try to ignore the theoretical assumptions and knowledge that he/she carries. In my case this would be inappropriate, since I take my starting point in well-known theories and try to base my hypotheses on them.
If the models described above are correct (and I assume they are), it should be possible to test different aspects of control and study it in terms of more or less successful control. The causal chain from monitoring to actual performance is never stronger than our knowledge about the system. In accordance with cybernetics (Ashby, 1956), we must view systems that we cannot describe in detail as "black box" systems. A human or an animal is thus in part a black box system, especially where the cognitive abilities are concerned. There are many different theories about human cognition xvii, but we must not forget that these are mostly hypothetical constructs rather than measurable entities. The chain of reasoning that has led to the conclusions about the inner mechanisms of our cognition could in many cases be replaced with any similar theory that matches the collected data that is to serve as "evidence" for the theory. By this, I am not suggesting that it is impossible to examine the inner workings of the human mind, but I am saying that I find it sounder to ground my research on observations of human behaviour in relation to known variables than on theoretical constructs about human cognition. If we instead step back and look at functional aspects of systems, of any kind, and try to establish these functional relationships under controlled conditions, we can possibly say something that at least applies to that particular function/ability.
xvii. See for example Gardner (1987) for an overview of theories of human cognition.
3.1 Experimental research
What, then, is experimental research, or laboratory research? What are the problems associated with it? There are two main arguments against this kind of research. First of all, it is possible to claim that laboratory research has problems with, as mentioned above, ecological xviii validity, meaning that the findings from the lab, no matter how internally valid, do not apply to any other settings, and especially not to settings outside a lab. The other argument, which concerns research on the human psyche, on cognition, is the one pointed out by several researchers: that cognition is situated, and we can therefore not study it in isolation. Hollnagel dissects that debate by introducing the concept of "cognition in captivity" (Hollnagel, 2002b). By this, he claims that there is no such thing as isolated cognition or laboratory studies; it is merely a question of different contexts.
”The important point is, however, to realise that all human performance is constrained by the conditions under which it takes place,
and that this principle holds for ”natural” performance as well as
controlled experiments.”
(Hollnagel, 2002b, p. 8)
xviii. Ecological validity is a term used to describe the problem of transferring findings from the lab to real settings, see the discussion about micro-worlds below.
It is thus not more or less appropriate to study cognition in the lab than in the field; it is a question of whether or not the two situations are comparable. "Cognition in captivity" refers to the fact that cognition remains cognition in the laboratory; the difference lies in the fact that the degrees of freedom are decreased.
Another point, made by Brehmer (in press) in his discussion about microworld research, is that generalisation cannot be done at the empirical level at all.
"Indeed, generalisation cannot be done at the empirical level at all.
It requires theory. This theory should inform the researcher which
variables to look for and how they should be operationalised. A
generalisation involves testing a hypothesis, and this hypothesis
must be derived from a theory. Generalisation just means a further
test of the theory in question. Generalisation simply means testing
hypotheses first tested in laboratory experiments again in circumstances outside the laboratory. If the hypothesis is neither rejected
in the microworld study, nor in the study outside the laboratory, we
might say that we have a generalisable result, or we may say that
we have a valid theory. In the end, it comes down to the same
thing."
(Brehmer, in press)
It is obviously so that what we study in the laboratory is not the same thing as the "real", non-captive world. It is also obvious that if we try to draw causal conclusions from an observation, we need to, if not control, at least reliably measure all variables in the environment that could possibly have an effect on what is studied. In this case, this is especially problematic, since the phenomenon of interest concerns several humans trying to control a dynamic, complex target system. There are thus several variables that are hard to control, and if I were to study this in reality, it would be extremely difficult to draw any conclusions, since it is very difficult to estimate the actual control over a forest fire. Neither is it ethical or in practice possible
to test the effect of feedback delays or slow response systems in real situations.
3.2 Micro-worlds as a tool for experimentation
I still face the problem of designing adequate experiments where I can operationalise my research questions into measurable tasks that can be tested. I also face the problem of creating situations that are, at least to the subjects, dynamic in the sense described above. One way to do this is to use computer-based simulations, so-called microworlds. Several researchers have suggested this as a possibility to present a dynamic problem to a research subject while at the same time keeping the variables under control, or at least in a traceable format. Brehmer & Dörner (1993) suggest that micro-worlds bridge the gap between the traditional (psychological) laboratory study and the "deep blue sea" of field research. We should, however, be cautious and define what a micro-world is. It is easy to say that a micro-world is a simulation. This is of course partly true, if we by simulation mean any computer program that has some similarity with a real-world task. It is on the other hand a grave misuse of the term, since a simulation often claims to be a more or less exact representation of a real-world task. For example, a flight simulator for professional training may be very advanced, providing an almost entirely realistic interaction. This is not the purpose of a micro-world.
“In experiments with microworlds, subjects are required to interact
with and control computer simulations of systems such as forest
fires, companies, or developing countries for some period of time.
Microworlds are not designed to be high fidelity simulations.
Instead, they are related to the systems that they represent in the
same manner as wood cuts are related to what they represent. That
is, it is possible to recognise what is being represented, but there is
little detail.
However, microworlds always have the fundamental characteristics of decision problems of interest, here, viz., complexity and intransparency.”
(Brehmer, 2000, pp. 7-8)
The purpose of a micro-world is to present a recognizable problem to the subjects using it. This is necessary in order to be able to analyse the material. In qualitative research, the normal procedure is to start with the data, try to extract some main categories, or variables, and try to establish some kind of relation between them. In the case of micro-worlds, the variables belonging to the environment, the micro-world, are known and can be controlled. But the micro-world must still be complex enough that the subjects experience a dynamic situation with uncertainty.
3.2.1 CHARACTERISTICS OF MICRO-WORLDS
Micro-worlds exist in many different versions, but the ones interesting to this thesis (and the most commonly used) share some fundamental characteristics: they are complex, dynamic and opaque. They are complex because the subjects have to consider a number of aspects, like different courses of action or conflicting goals. Secondly, they are dynamic in the sense that subjects have to consider different time-scales and unforeseen effects, since the relationships between different variables are uncertain. The opaqueness comes from the fact that some parts of the simulation are invisible to the subject, who has to view the target system as a black box. The subjects thus have to make hypotheses and test them in order to handle the situation (Brehmer & Dörner, 1993). These three characteristics are representative of many real-world situations. The inner workings of micro-worlds like the fire-fighting example provide complex relationships in the form of exponential growth combined with linear control measures, something that is difficult for the research subjects to comprehend. The number of variables and their relations determine the complexity. Another important issue is to what extent the micro-world matches the system it is supposed to represent. The subjects will of course base their reasoning about the micro-world on their knowledge about the system that the micro-world is to represent. This creates some interesting problems, since it is both possible that the micro-world lacks some parts that the subject assumes exist, and the opposite, that the micro-world has some properties that the subject does not expect to be in it.
The last point, which does not relate directly to the concept of micro-worlds but rather to the design of the experiments, is discussed by Brehmer & Dörner (1993). It relates to the goals that the subject is to reach when handling the micro-world. The easiest distinction comes from whether the subject has to handle one or more goals. To achieve one single goal, the subject does not have to consider side effects. For example, if we use a micro-world representing a production system and the only goal is to maximize profit, the subject does not have to consider whether this affects the workers' situation, for example their salaries xix. It is thus possible to introduce goals that are conflicting, forcing the subject to try to balance the effects of his/her actions. Another problem is the description of the goal, something that clearly relates to this thesis. Does the goal come in the form of a desired end-state, or does it prescribe that the system reaches a certain level of functionality? In the fire-extinguishing task, the goal is often defined as a state (no fires left, rather than at most a certain area on fire), but in, for example, an industrial production task, the goal is to keep the system within a certain performance envelope during the entire test.
xix. The example is humbly borrowed from the often-cited Brehmer & Dörner article "Experiments With Computer-Simulated Microworlds: Escaping Both the Narrow Straits of the Laboratory and the Deep Blue Sea of the Field Study" (1993).
3.2.2 RESEARCH APPROACHES USING MICRO-WORLDS
There are three main research strategies when using micro-worlds: the individual differences approach, the case study and the experimental approach (Brehmer & Dörner, 1993). The first examines the subjects with different kinds of tests, for example of intelligence, and then tries to correlate these tests with "performance" in the micro-world (see for example Rigas, 2000). The second approach is basically qualitative research in a controlled setting, where the researcher examines the behaviour of the subject
and tries to identify patterns in order to generate hypotheses. The last approach does not aim at examining differences between subjects; instead, some variable in the micro-world is manipulated, and the interest lies in that variable. For example, the Brehmer & Allard (1991, see above) study had one condition with direct feedback and one with delayed feedback.
The latter strategy is less problematic in the sense that it avoids the problem of trying to measure abstract terms like intelligence or personality, and instead measures interaction with the micro-world. It does not rule out the problem of individual differences; these are still a problem, but in contrast to the individual differences approach, it is actually desirable that the subjects are as similar as possible, so that the effect of the manipulated variable becomes as clear as possible. This is the approach suggested for use in this thesis.
3.2.3 POSSIBLE METHODOLOGICAL PROBLEMS WITH MICRO-WORLDS
One obvious problem, which relates mostly to the first research approach (individual differences), is that the demands put on the subject by the micro-world only tell us what the micro-world demands, rather than something about the real world. The third approach (experimental) tells us what someone can and cannot do under certain conditions. But the latter approach also suffers from the same basic problem: the results gained from the experiment only apply to the experimental conditions.
There is thus always a large threat to the ecological validity of any study using simulations. We can state that micro-worlds provide some context-specific characteristics like dynamic development, uncertainty and opaqueness, but the fact remains that real situations provide a different kind of stress and other contextual factors that can never be simulated. After all, the subjects know that they are not fighting a real forest fire, and they also know that they are not gambling with real money and real lives when trying to help a developing country. This can increase the level of risk that the subjects are willing to take in order to reach their goals, especially in individual micro-worlds like Moro (see below) where the subjects have "dictatorial" powers (Brehmer & Dörner, 1993).
Time is in a sense also problematic. Time is almost always compressed in micro-worlds. In Moro, for example, thirty years pass in less than three hours (typically, there is no real time limit, since it is played in turns). In the C3fire micro-world, which simulates a forest fire, a trial normally lasts for about 30 minutes, during which large areas of land may be consumed by fire. On the other hand, this can be used for experimentation if the micro-world allows the researcher to manipulate, for example, feedback delays, as in the Brehmer & Allard (1991) study. Moro, like many other micro-worlds, is also controlled in "turns", like a board game, rather than in clock time.
Another problem comes from the fact that the typical subject does not have the professional background to solve the task they are given. Students (who are the typical research subjects) are rarely fire-fighters or experts on ecological systems and the development of third-world countries. This is also the answer to why micro-worlds are low-fidelity simulations: the purpose is to examine how a subject handles a dynamic situation rather than an actual system.
It is, however, possible to argue against this by pointing to the fact that a high-fidelity simulation using professionals will still have some of the problems of micro-worlds (for example, the fact that the subjects know that the situation is not reality), and the findings will be less general, since the group tested will be very specific. Another, and perhaps more important,
point is that if we use a very complex, but realistic, simulation, it will not
tell us anything more or less than the real system would since it is equally
difficult to understand. Brehmer often uses the example of a cat. A cat may
be seen as a complex system, and the best possible simulation of a cat is
another cat. But another cat is just as hard to understand as the original cat,
and it is not more informative to study the simulated cat than the real cat.
The same goes for a person’s behaviour in relation to the simulated cat: It
is just as complex and intransparent as the behaviour in relation to a real
cat. The argument should hold for microworld studies as well.
The last point is that micro-worlds reflecting a dynamic system suffer from the same problem as a real dynamic system. This is not really a threat to validity, but it is still worth pointing out, since even small mistakes may escalate into disasters (Maruyama, 1963), meaning that there is a clear risk of large differences between supposedly similar subjects. Each trial will actually be unique, in the sense that once the simulation has started, it is impossible to tell the end state. The interactions between even a very small number of variables create great uncertainty. The best way to deal with this problem is to run many trials with many subjects, but even this does not exclude the possibility that the results are affected by chance.
3.3 The choice of micro-worlds
The choice of the two micro-worlds must naturally be motivated. Although the research questions could be answered with several other micro-worlds or simulations, or even some other kind of experiment, I have (preliminarily) chosen Moro and C3fire (see below). The answer to the question is pragmatic and has been discussed before. The main argument is that, considering the short time micro-worlds have been used in research, there is a large body of data gathered in earlier studies using these two kinds of micro-worlds. As pointed out by Brehmer (in press), it is advantageous to keep conducting experiments with micro-worlds that have been used previously, so that the research community gathers comparable data. There also exists well-documented experience about the use of these two micro-worlds, something that helps other researchers do methodologically sound research.
More specifically, both micro-worlds are well suited for examining the research questions. Moro has multiple goals relating to different time-scales. Moreover, the Moro task corresponds to anticipatory control to a larger extent than the C3fire task does. Therefore Moro serves very well as an initial platform for examining what happens when we change the pace of the controlling system in relation to the target system. C3fire has previously been used to study organisations in dynamic control situations, and should therefore be appropriate for studies concerning the problem of handling dead time in organisations.
3.3.1 MORO
The Moro micro-world has been used to a great extent since the eighties (Dörner, Stäudel & Strohschneider, 1988; Brehmer & Dörner, 1993). Moro is actually a simulation of Burkina Faso, or at least of some of its ecological systems. The micro-world provides a complex dynamic task with several processes, working on different time scales, that have to be managed. The simulation does not run in clock time, like C3fire (see below), but rather in "turns" of one year.
Figure 3.1: The relationship between variables in the MORO Microworld.
The task presented in Moro is to act as advisor to a fictional African tribe, the Moros, during a period stretching over several (typically 30) years. The subject is given a "loan" of one million Rikas, the Moro currency, and is instructed to increase the "well-being" of the Moros. The purpose of the game is to maximize this "well-being" while the ecological system is kept in balance, and to be able to repay the loan of one million Rikas when the game ends.
The Moros mainly eat meat from cattle herds and a little millet ("Hirs") that they grow themselves. It is fairly easy to get more than enough food for the Moros early in the game, simply by fighting the tsetse flies that plague the Moro cattle and by increasing the watering of the fields the herds inhabit. If there is an over-production of cattle, it is possible to sell cattle and thereby earn money.
A problem with this approach may arise, since too much cattle will put the ecological system out of balance: the cattle will eat so much grass that they cause erosion and eventually have too little to eat. Another danger is that the ground water level decreases too much, since it is possible to build too many wells to water the pasture land and the millet fields. The main variables and their influence on each other can be seen in figure 3.1.
Measurable variables in Moro
Almost everything in Moro is measurable (see figure 3.1). Which variables we care to examine naturally follows from the formulation of the norm/goal in the experiment. If we assume that the overall goal of all tests is to make things better for the Moros, the number of living Moros and the amount of food they have available are two crucial aspects. The health of the ecological system is also important (the cattle, the grass they eat and the millet fields), and after this come other aspects like teaching and health care. The financial situation is of course also relevant. Schaub & Strohschneider (1989) have provided a tool to calculate performance in the Moro micro-world. The tool helps the researcher categorize subjects according to different performance criteria.
The Schaub & Strohschneider tool is probably less useful in studies like the ones proposed here, since it is based on six different values (ground water level, capital, deaths from starvation, pasture land, harvest and cattle) that develop on different time-scales. However, it still gives some guidance as to which variables are important in Moro. These variables mostly concern the basic needs for survival in the Moro micro-world, and the economy. It is naturally possible to measure many other variables in Moro, like population size, development of health care, number of teachers employed, etc.
Moro thus offers a complex and opaque task involving goals on different time-scales and the possibility to manipulate the interaction rate
between the subject and the development in the simulation. This makes
the micro-world a relevant choice for studying the relationship between
the speed of the controlling system and the target system.
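As a rough illustration, examining the six central variables individually could look like the following sketch. The desired ranges are invented placeholders, and the actual Schaub & Strohschneider (1989) tool combines the variables differently; only the list of six variables comes from the text above.

```python
# Sketch: check each of the six central Moro variables against a desired
# range, giving one verdict per variable instead of a single combined score.
# The bounds below are illustrative assumptions, not values from the tool.

DESIRED = {                               # (lower bound, upper bound)
    "ground_water": (800, None), "capital": (0, None),
    "starvation_deaths": (0, 0), "pasture_land": (500, None),
    "harvest": (100, None), "cattle": (300, 900),
}

def evaluate(end_state: dict[str, float]) -> dict[str, bool]:
    """Return a per-variable verdict: True if within its desired range."""
    ok = {}
    for name, (lo, hi) in DESIRED.items():
        value = end_state[name]
        ok[name] = (lo is None or value >= lo) and (hi is None or value <= hi)
    return ok

print(evaluate({"ground_water": 950, "capital": -20, "starvation_deaths": 0,
                "pasture_land": 620, "harvest": 140, "cattle": 1200}))
# here "capital" (a debt) and "cattle" (overgrazing risk) would fail
```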
3.3.2 C3FIRE
C3fire is a micro-world based on the fire-extinguishing task, originating from the DESSY (Dynamic Environment Simulation System) (Brehmer, 1987) and D3fire (Distributed Dynamic Decision Making) (Svenmarck & Brehmer, 1994; Brehmer & Svenmarck, 1994) micro-worlds (Granlund, 2002).
Figure 3.2: The C3fire micro-world. The subjects have to cooperate
to extinguish a simulated forest-fire. The development of the simulation is set by a manager in a number of script-files (Granlund, Johansson & Persson, 2001).
The problem presented in the C3fire micro-world is that a number of fire brigades have to be coordinated in order to extinguish one or more forest fires. The forest fire develops in an area that is limited, at least in the latest version of the simulation, to 40x40 squares, corresponding to an area of perhaps 20x20 km, although this can be manipulated. This area may contain objects other than forest, like houses. The simulation is normally configured so that the fire brigades have a limited view of the area in the game and therefore have to cooperate in order to be successful.
C3fire can be configured for many different purposes, but typically it is arranged in such a way that a staff is responsible for coordinating two or more ground chiefs (humans), who in turn control at least two (simulated) fire brigades each; see figure 3.2.
Figure 3.3: The C3fire client interface. Each fire brigade is represented by a number. In this configuration, the fire brigades can only
see nine squares; the one they are standing on and the adjacent eight
squares.
The C3fire micro-world is distributed in a client-server configuration, meaning that each subject working in the simulation runs his/her own client. The experiment administrator has the power to decide which participants can communicate with each other, and also how much information the participants are to get from their fire brigades concerning positions on the map, and how much the brigades actually see of the map (see figure 3.3).
It is also possible to share databases, for example textual and graphical ones, between subjects in the micro-world (Artman & Granlund, 1999; Granlund, 2002). Each trial is based on one scenario file and one configuration file. The scenario file contains information about where and when a fire is to start, how fast the wind blows and in which direction, how long a simulation should last (typically 30 minutes), and messages that are to be sent from the simulation to the participants. For example, it is possible to send a message with an alarm about a new fire to one or more of the participants at a given time. The configuration file contains information about the objects that exist in the simulation (trees, houses, fire brigades), which roles are available and which information they are to receive from the simulation.
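The actual C3fire script-file syntax is not reproduced here, but the kind of information the two files carry, as described above, could be sketched as follows. All keys and values are hypothetical illustrations, not C3fire's real format.

```python
# Hypothetical sketch of the contents of a scenario and a configuration file;
# every key and value below is an invented illustration.

scenario = {
    "duration_minutes": 30,                      # typical trial length
    "wind": {"speed": 2, "direction": "NE"},     # drives fire spread
    "fires": [{"start_at": "00:05", "cell": (12, 7)}],
    "messages": [                                # simulation-to-subject events
        {"at": "00:06", "to": ["ground_chief_1"], "text": "Alarm: new fire"},
    ],
}

configuration = {
    "objects": {"trees": 900, "houses": 12, "fire_brigades": 4},
    "roles": ["staff", "ground_chief_1", "ground_chief_2"],
    "visibility": {"fire_brigade": 9},           # squares visible per brigade
}
```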
Measurable variables in C3fire
All events in C3fire are saved into a log-file. All interaction in the system can be measured, like positions of fire-fighting units, fires, messages sent, burned-down area, etc. What is typically measured as a performance criterion is the area that has burnt down or been saved. On this area there are objects that can be assigned value: houses and trees. A house can for example be worth ten times the value of a square containing neither trees nor houses, and a square with a tree on it can be worth twice as much as an empty square. With such a weighting system, it is possible to compare two trials of C3fire and calculate which trial is the more successful.
Chapter 4
Suggested studies
Below follows a description of two different studies, one experimental and one explorative, to examine the questions raised in the theoretical synthesis.
4.1 Study 1
Hypothesis: Variables that develop on a longer time-scale in Moro, like the amount of cattle in relation to the size of the pasture land and the number of wells, should benefit from the 2- and 4-year conditions. Subjects performing in the delayed conditions should also show fewer cases of catastrophic developments in the system.
One factor, three levels. Between-subjects design.
Subjects in all conditions will participate in a Moro simulation using the same scenario (using WinMoro). In Moro, the subject is normally allowed to monitor the progress and take action once every (simulated) year. However, this interval has little connection to the actual developmental cycle of the parameters in Moro. Actually, the full effect of an action can in some cases not be seen until several years later, especially where side effects are concerned. As a simple example, if a subject decides to fight the tse-tse flies that plague the cattle in the first year, the complete effect of this will not be visible until four Moro-years later (see figure 4.1).
Figure 4.1: A comparison between a "0"-simulation, in which no variables have been changed, and a simulation where the tse-tse flies are fought with maximum force from year one. The actual effect on the cattle population peaks after four years.
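The delayed-effect structure shown in figure 4.1 can be sketched as a simple lag kernel. The kernel values below are illustrative assumptions, not WinMoro's actual dynamics; only the shape (no effect in the year of action, a peak four years later) follows the figure.

```python
# Sketch: an intervention whose effect on the cattle population builds up
# over several years and peaks four years after the action was taken.
# Kernel values are illustrative, not taken from WinMoro.

EFFECT_KERNEL = [0.0, 0.1, 0.3, 0.6, 1.0, 0.7, 0.4]   # peak at a lag of 4 years

def cattle_effect(actions: dict[int, float], years: int = 10) -> list[float]:
    """Extra cattle per year resulting from fly-fighting in the given years."""
    effect = [0.0] * years
    for year, strength in actions.items():
        for lag, k in enumerate(EFFECT_KERNEL):
            if year + lag < years:
                effect[year + lag] += strength * k
    return effect

# Fighting the flies in year 1: nothing visible that year, peak in year 5.
print([round(e, 1) for e in cattle_effect({1: 100.0})])
```

A subject who may act every year sees no effect after one turn and is tempted to intervene again; a subject restricted to acting every fourth year is forced to observe the effect curve before the next decision.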
The idea behind the experiment is to test different action intervals, one, two and four years, and see what effect this has on the subjects' ability to control the Moro micro-world. Subjects are normally allowed to interact with the simulation once every simulated year. During this time, they can monitor what has happened since last year and, based on this, decide what should or should not be done until next year. Since the subjects are unfamiliar with the task, they are initially forced to rely on trial and error. They have to make hypotheses about the relationships between different variables in Moro and what effect their actions will have on the simulation. However, since feedback from different actions in most cases is delayed, they will have problems monitoring the effects of their actions. A subject may for example decide to fight the tse-tse flies in year one. When he/she plays the next turn there is probably no observable change in terms of the amount of cattle. Unless the subject keeps on fighting the flies for a few years more, he/she may hypothesize that there is no relation between fighting the flies and the amount of cattle, or that the effort invested in fighting the tse-tse flies was too weak. The risk of over-manipulation is evident. The Moro micro-world also involves a very large number of variables that the subjects can manipulate, something that makes it even more likely that the subjects will fail, at least according to the MOF hypothesis of Langley, Paich & Sterman (1998, see above). It is also in line with the basic cyclical model, since one of the obvious problems a controller faces is that the construct (understanding) of the situation is wrong because of misinterpretations of the development of the target system state. There are also findings from Jensen & Brehmer (submitted) that suggest that performance in dynamic control is increased when the decision-maker/controller is forced to observe the development of the process over more than one point in time, and thus also to wait before taking action.
The independent variable is the number of years that pass between each time the subject is allowed to take action. The subjects will still be allowed to monitor changes in the system each year. What we do when we introduce this is change the rate at which the controlling system can take action; relative to the target system, we are slowing it down. Since Moro requires the controller to actively search for information, it is easy to see what information the controller looks at, and more importantly, how often.
The dependent variables are the six variables described in the Schaub & Strohschneider tool, although not with the same rating; rather, the variables will have to be examined individually.
4.1.1 NUMBER OF SUBJECTS
The number of subjects has to be calculated based on a pilot study. The
pilot study will start with ten trials in each condition.
4.1.2 SELECTION OF SUBJECTS
Subjects will be volunteers recruited among the students at Linköping University. These volunteers will be offered a movie ticket (worth about 90 SEK) for their participation, and will also be instructed that the subject who performs best in the study (within their condition, of course) will gain two additional tickets as a motivational factor. A criterion for participation is that the subject has not previously participated in any Moro studies, since this could introduce problems with learning effects.
4.1.3 PROCEDURE
Hired experiment leaders will conduct all experiments. Each subject will have to fill in a simple form where they confirm that they understand that their participation will be treated anonymously, that they have volunteered to participate, and that they agree to allow the results from the test to be published. They will also answer basic questions about age, gender, education and experience of similar computer simulations (games like Civilization, SimCity etc.).
After this, they will be randomly assigned to one of the conditions. They will then conduct a short training session with Moro together with the experiment leader, to ensure that they understand how interaction with the micro-world works. Then the experiment starts.
The subjects are instructed that they cannot ask the experiment leader about the simulation once it has started. After this, they get a paper with some basic instructions about the Moro task on it. The instructions will be similar to instructions used in previous research (Elgh, 2002; Rigas, 2000), although slightly more specific concerning the desired state of the six central variables. They are allowed to study this for as long as they want before they start, and they are allowed to keep it during the trial. After they have started, they are to run the Moro simulation for 25 Moro-years. The Moro simulation will be in balance when it starts, and the subject will get 1,000,000 Rikas as a starting budget, which is a large enough sum to make considerable investments. The money is to be considered a loan that should be repaid at the end of the game.
4.2 Study 2
The purpose of this study is to examine anticipatory control, much like the
Crossman & Cooke (1974, see above) study, but extended to an organizational perspective, using the C3fire microworld. It is not a true experiment
in the same way as the study described above since it lacks a null hypothesis. It also differs from the experiment described in hypothesis one in the
sense that it is conducted with a system that runs in clock time rather than
turns. While the first experiment concerns control of slow response systems, this experiment concerns direct versus indirect control of complex
situations with high time pressure. In a sense there is a delay in the controlling system since (in one condition) the controller's actions are carried
out by other parts of the organization (see below).
The purpose of the experiment is to examine the effects of direct and
indirect control in a dynamic control situation. While direct control of a
system that responds "quickly" can be managed mostly by compensatory
control, a system with slow response or "dead time" has to be managed
with anticipatory control.
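
The difference can be illustrated with a toy simulation (invented for this illustration, not taken from any of the cited studies). The controlled process below is a simple accumulator whose input only takes effect after a number of "dead" steps. A purely compensatory controller keeps answering the currently visible error before its earlier actions have taken effect and therefore over-corrects, while an anticipatory controller that keeps a model of its own in-flight actions settles smoothly.

from collections import deque

def simulate(anticipatory, dead_time=3, steps=30, gain=0.6, setpoint=10.0):
    y = 0.0
    pending = deque([0.0] * dead_time)   # actions taken but not yet effective
    history = []
    for _ in range(steps):
        if anticipatory:
            # Feedforward: act on where y will end up once the
            # in-flight actions have taken effect.
            u = gain * (setpoint - (y + sum(pending)))
        else:
            # Compensatory: react only to the currently visible error.
            u = gain * (setpoint - y)
        pending.append(u)
        y += pending.popleft()           # the oldest action takes effect now
        history.append(round(y, 1))
    return history

print("compensatory:", simulate(False))   # over-corrects and oscillates
print("anticipatory:", simulate(True))    # approaches the setpoint smoothly

The anticipatory variant is the "model" demand in miniature: remove the sum over pending actions and it degenerates into the oscillating compensatory controller.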
Two different conditions will be used. Both of them concern the same
task, namely controlling the simulated forest fires of the C3fire microworld. In the first condition, a single subject is to act as commander over a
number of simulated fire fighting units, which he/she controls directly via
the C3fire interface. In the other condition, one subject is to act as commander over two hired “ground chiefs”, who in turn control the fire
fighting units. The ground chiefs cannot communicate with each other,
and they cannot see each other’s actions unless they move their fire brigades within visual range of each other. It is thus in practice impossible for
them to coordinate their actions without support from the commander,
very much like in a real rescue operation. The ground chiefs are, however,
trained participants who take part in all trials. The reason for this is
that if all positions in the organization were to be taken by new subjects in
each trial, a very long training period would be needed for each experiment (Johansson, Persson, Granlund, Artman & Mattson, 2001).
In many real-world tasks, the controller who makes a decision is rarely
the same person who actually executes the decisions. He/she normally delegates the task to someone else. In this way, the controller can handle a
number of different tasks at the same time, serving as a coordinator of
action. In the Brehmer & Allard (1991) study, the subjects were also, in a
sense, responsible for controlling an organization. An important difference, though, is that the subordinates were simulated in all conditions. It
was possible to delegate action to the simulated subordinates, but the subjects rarely used this opportunity. In the suggested study, the subjects (in
condition 2) will issue commands to actual persons, something that may
increase delegation. The fact that the commanders will have to issue
orders to persons will probably also make them more aware of the need to
plan, that is, to conduct feedforward control. This problem was already noted by
Brehmer & Allard in the 1991 study.
“This suggests the alternative possibility that subjects simply did
not understand the task well enough to realize that they could
increase their own efficiency by letting the FFUs* make the decisions. These problems clearly merit further study.”
(Brehmer & Allard, 1991, p. 333)
In this study, there will not be any feedback delays, but instead "dead
time" in the form of the subordinates (the ground chiefs) whom the subject has to
manage (in one of the conditions). In the Brehmer & Allard study, the subjects failed to take the delays into account. This study will examine whether subjects
are able to take "dead" time into account when working under time pressure. An important difference between this "dead" time and the slow response of
the system in the Crossman & Cooke (1974) study is that in
this case the source and nature of the dead time should be obvious to the
participants.
* FFU, Fire Fighting Unit, my comment.
Research aim: The study is explorative in the sense that I will not
present a concrete hypothesis, at least not a null hypothesis. Rather, the
study should be seen as exploring the effects of indirect and direct control
of a dynamic target system. Some measures will, however, be given extra attention, namely those used in the Brehmer & Allard (1991) study: the area saved in each condition, the types of orders, the use of delegation, and how well the commanders manage to use the resources, measured as the time the fire fighting units are inactive.
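
To illustrate how some of these measures could be operationalized, the sketch below computes the area saved and the inactivity time from a trial log (the log format is hypothetical; C3fire's actual logging may differ). The types of orders and the use of delegation would instead have to be coded from the recorded e-mail traffic.

def area_saved(total_cells, burned_cells):
    # The proportion of the simulated map that did not burn.
    return (total_cells - burned_cells) / total_cells

def inactivity(activity_intervals, trial_length):
    # Time each fire fighting unit was inactive, given a list of
    # (unit, start, stop) intervals during which the unit was active.
    active = {}
    for unit, start, stop in activity_intervals:
        active[unit] = active.get(unit, 0) + (stop - start)
    return {unit: trial_length - t for unit, t in active.items()}

print(area_saved(1600, 240))                                   # 0.85
print(inactivity([("FFU1", 0, 600), ("FFU2", 120, 480)], 900))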
4.2.1 SELECTION OF SUBJECTS
Subjects will be volunteers recruited among the students at Linköping
University. These volunteers will be offered a movie ticket (worth
about 90 SEK) for their participation and will also be instructed that the
team that performs best in the study will gain two additional tickets per
subject as a motivational factor (in their condition of course). A criterion
for participation is that the subjects have not previously participated in any
studies using C3fire or similar fire microworlds such as DESSY, NEWFIRE or D3fire, in order
to avoid learning effects.
4.2.2 PROCEDURE
Each subject will have to fill in a simple form where they assure that
they understand that their participation will be treated anonymously, and
that they have volunteered to participate and agree to allow the results
from the test to be published. They will also answer basic questions about
age, gender, education and experience of similar computer simulations.
The subjects will take the role of commander in a small rescue organization, simulated in the C3fire microworld. The roles of ground chiefs will
be taken by hired participants (students). Naturally, they will receive
appropriate training before the experiments start.
The simulation is configured so that the ground chiefs can only see the
view provided by their own fire brigades, but the commanders will have
the view of all the fire brigades, thus possessing feedback on “all” available information in the system. “All” in this case represents the information
given by the fire brigades, which in previous studies have been able to see
a total of about 16% of the total area in the C3fire simulation if deployed
optimally. It is, however, possible to manipulate the area that can be seen, and
it will probably be increased significantly in this study. This means that the fire
brigades will have to be used not only as fire fighters, but also to gather
information about the fires they are fighting. Every time a fire starts, the
commanders will be informed of this in an e-mail, although they may not
be given the exact position of the fire in all cases. The ground chiefs cannot send mail to each other; all information exchange will have to be done
via the commanders.
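
Schematically, the two conditions can be summarized as configuration data along the following lines (the field names are illustrative only and do not correspond to actual C3fire configuration keys):

from dataclasses import dataclass

@dataclass
class Condition:
    name: str
    ground_chiefs: int              # 0 = commander controls units directly
    commander_sees_all_units: bool  # feedback from every fire brigade
    chiefs_can_mail_each_other: bool
    fire_alerts_by_mail: bool       # e-mail to the commander per new fire

direct = Condition("direct control", 0, True, False, True)
indirect = Condition("indirect control", 2, True, False, True)

The two conditions thus differ only in whether unit commands pass through the two ground chiefs; the commander's feedback and the fire alerts are the same in both.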
The ground chiefs will be placed in separate rooms to ensure that
they cannot speak to each other.
The first part of the experiment is a training trial. During this trial, the
subject is supposed to become accustomed to the technical aspects of the microworld. The experiment leader answers questions of a technical nature (not
questions concerning the dynamics of the simulation) in this phase. After
this, the commander will be shown a replay of the trial and is allowed to
ask more questions, although he/she is still not allowed to ask questions
concerning the dynamics of the game.
The commander is given a paper map of the area in the simulation and is
given ten minutes to plan for the next trial. After ten minutes, the next trial
begins. After that, the procedure is repeated one more time. The length of
each trial will depend on the configuration of the micro-world. Earlier
experiments have typically lasted between fifteen and thirty minutes.
4.3 Possible threats to internal validity
There are some obvious threats to the internal validity in the studies. The
main problem is of course that humans may perform differently on different occasions for many reasons. There is thus a clear risk that
effects of the independent variables can be obscured by other factors. Subjects can become tired, sad or distracted by personal problems, or conversely be
fitter than the average subject, and therefore perform worse or better
than they otherwise would. The largest risk at hand in this case is to make a type II error, suggesting that there is no difference between the conditions when there really
exists one. If I follow the suggested designs above, I will hopefully have
reduced this risk to some extent, since I aim to use student volunteers. The volunteers should be motivated, both because they volunteer
and because they can gain an additional reward (extra movie ticket) if they
perform well. It is also likely that they show some kind of homogeneity in
their background concerning education since they are students. This
should be positive in this kind of study when the aim is to have as equal
performance as possible within the groups. The between-subjects/groups
design also reduces the problem because the subjects only participate in
one of the conditions, hopefully making the effects clearer.
The threat that someone has previously familiarized themselves with the tasks or the microworlds is hopefully handled, since I
will reject all subjects that have participated in studies using any of the
micro-worlds, or similar ones. The worst threat would be that some of the subjects who have experience of the micro-worlds simply lie to me in order to
increase their probability of getting the reward, and thereby maliciously
disturb the result. Another history-related problem is the experience of
computer games that have similar characteristics to C3fire and Moro, for
example war games or games like SimCity. This is more or less unavoidable considering the basis for my recruiting. The only thing I can do here
is to include a question concerning this in the form that all participants are
to fill in prior to the experiment. In this way, I do not eliminate the threat,
but at least I can trace it.
The connected threat, maturation, should not be a problem in this case
since neither experiment includes pre-tests. Instrumentation should not be
a problem either since the same instrument is used all the time.
Attrition, the threat that some subjects drop out of the study for various
reasons, of course exists, but it is not really a big problem since such a subject can easily be replaced with a new trial with a new subject.
The risk that some subjects get tired does exist. Especially the first
experiment is time demanding. A typical Moro experiment may last as
long as three hours, and it is difficult to remain concentrated for such a
long time. But the alternative, to take a break during the trial, is not very
attractive either, because such a break would have to be given at the
same time for all subjects. In such a case, some subjects might lose concentration because of the break rather than because they did not have a
break.
The threats against construct validity are hard to analyse in advance.
There is, however, one clear threat, namely the connection between the
dependent variable and the theory used to link it
to the independent variable. The first risk is that no such connection
exists, and therefore it is meaningless to perform the test in the first place.
If that were the case, I would have based my entire reasoning on a foundation
that cannot carry it. The other threat is the risk that the dependent and/or
independent variables are too vaguely defined and thus unable to produce
any significant results. This is of course very hard to know in advance. The
dependent variable should be less problematic in these cases since it is
clearly defined and possible to measure with high reliability. An advantage of the independent variable is that it is a part of the simulation, and
thus will be administered in the same way to all subjects.
4.4 Threats to external validity
External validity refers to the degree to which research findings generalize
beyond the specific context of the experiment being conducted. This has
to some extent already been addressed in the discussion about the use of
microworlds and experimental research above. However, there are of
course some problems, especially with the population from which the subjects are taken. In order to make externally valid findings, it should be possible to generalize results to other populations, environments and times.
This cannot be assured from the experiments suggested above. Rather, we
must see the suggested experiments as a point of departure. If
we find interesting results from the suggested research, it could be
worth extending the studies to other populations or contexts.
The experiments described above can help us to understand some of the
problems related to control, time and delays. The first experiment will
hopefully provide some insight into whether short action intervals really are
good for a task characterized by a large element of planning. The second
experiment will give us some insight into how well people handle systems
that have "dead" time.
Chapter 5
Conclusion
To many practitioners of control over slow responding systems, the ideas
presented in this thesis proposal may not seem very surprising. For example, in Swedish nuclear power plants, the operators always wait 30 minutes before taking action if something unforeseen occurs, and in health
care it is a well-known practice not to change the dosage of medication
during the first 24 hours after it was initially administered. These simple
heuristics have evolved from practice. However, although much research
has been done on control of slow responding systems or systems with
delayed feedback, or even both, there are few findings other than the fact
that it is extremely difficult to control these systems. The reason for this is
simple. Slow responding systems and systems with delayed feedback
demand feedforward control, and feedforward control is always based on
a more or less strong hypothesis about the process. Practice is often suggested as the only solution to the problem, since, in accordance with
the “model” demand (see Brehmer, 1992, above), it seems that people have
the ability to learn to control systems that respond slowly by anticipatory
action. Feedback relates to feedforward in the sense that it is necessary in
order to learn how a system or a process works. This is when time
becomes a problem, since learning/adaptation takes time. As discussed
above, there is also a risk that the controller gets caught in a vicious circle
of false interpretations of the situation it is to control, because of difficulties
in understanding which changes in the state of the target system are
caused by the controller's actions rather than by other factors. This becomes
especially difficult when we consider the controller's understanding of
how different processes in the target system develop over time.
The law of requisite variety states that if a controller is successfully to
control a target system, it has to have (at least) the same variety as the target system (Ashby, 1956). What is NOT stated in the law of requisite variety, but implied, is the temporal aspect of variety. Time has been
described as a relation between two or more activities. Temporal notions
like fast, slow, before, after, overlapping etc., are thus based on these relationships. This means that five seconds of clock time can be a long time in
one situation and a short time in another situation. It all depends on the
relation between the controller and the target process. If a controller has
the ability to take appropriate action once per minute, he/she will not have
any problem handling a system that changes less often than this. If the
controller is furthermore able to see a pattern in the rate of change in the target
system, he/she will be able to anticipate when to take action and can thus
free resources between the times he/she has to act. This is the most important characteristic of a cognitive system: the ability to adapt. As pointed
out by Hollnagel in the discussion of the ECOM (see above), humans have
the ability to make trade-offs between different levels of control, and dynamic
control often forces us to do this. Since the environment often forces the
controller to shift goals or even change them completely, the temporal
dimension of the activity also changes. Based on this, we could say that if
we take two systems that are equal in all aspects except this, the system
that adapts to a situation faster is the better. The question is how to support
adaptation.
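
The temporal reading of requisite variety can be stated as a simple rate condition and made concrete with a toy computation (invented for this illustration): a controller that acts every k ticks keeps the deviation bounded by the number of disturbances that can arrive within those k ticks, so control degrades as soon as disturbances arrive faster than actions.

def max_deviation(action_period, disturbance_period, ticks=100):
    # A disturbance pushes the target system one unit away from the
    # desired state every disturbance_period ticks; the controller
    # restores the desired state every action_period ticks.
    state, worst = 0, 0
    for t in range(1, ticks + 1):
        if t % disturbance_period == 0:
            state += 1                  # the target system drifts
        if t % action_period == 0:
            state = 0                   # the controller corrects
        worst = max(worst, abs(state))
    return worst

print(max_deviation(action_period=2, disturbance_period=5))    # 1: in control
print(max_deviation(action_period=10, disturbance_period=3))   # 3: falling behind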
If a controller is to use the information in such a way that quality
increases, or at least does not decrease, the controller has to study and evaluate
that information when planning his/her next action. However, this is not
always a feasible option when time is not unlimited for each decision, as in
dynamic situations. Goal formulation is naturally one of the tasks that puts
the highest demands on a controller in a real-time dynamic task, since the
controller has to judge for how long he/she can keep planning
before he/she has to do something. A common (and dangerous) response to
this, at least in the Bamberg-studies, is to ignore the planning phase and
turn to opportunistic control, trial and error. A recent study by Jensen &
Brehmer (submitted) shows that subjects/controllers have difficulties in
taking advantage of additional information that in reality could support
them in their task. The previously mentioned study by Omodei, Wearing,
McLennan, Elliot & Clancy (in press) also supports this. In a fire fighting
task that was to be managed by a small hierarchical organization, commanders given fast and accurate feedback actually performed worse than
other subjects. The problem with providing much information is that it is
only useful if the controller has a model of the system. Otherwise it may
be mostly confusing, something that is especially devastating initially in a
dynamic control task. The most important thing for a controller is
to create a hypothesis about how a system works and test it against the
system, and to achieve this understanding as early in the development as
possible. Forcing the controller to wait before taking action could help the
controller in the sense that he/she gets a chance to observe the development of the target process, rather than jumping straight to trial and error. It
is this question that will be tested in the first experiment suggested in this
proposal.
From an organizational view, time and feedforward control become
even more intricate. All organizations have built-in delays to some extent.
If a commander issues an order to his soldiers, there will be time between
the order and the actual execution of the order. This time must be a part of
the planning. Parts of the controlling system can thus provide delays that
do not exist in the target system. In such cases, the task of the commander is to
use parts of his “own” system to achieve control over another system, a
form of indirect control. Brehmer & Allard's (1991) study showed the devastating effects of delayed feedback in a dynamic control task. However,
the findings applied to individual control, although a simulated organization was involved. The second experiment suggested in this thesis will
look into the same problem, but will study the effect of delays originating
from within the control system itself.
This thesis proposal has given an overview of control and time, based
on cognitive systems engineering and dynamic decision-making. Control
has been described as an activity conducted by a controller in order to
either keep or change the state of a target system so it corresponds to a
desired state. Further, control can be described as both anticipatory and
compensatory; in the case of humans as controllers it is mostly a mix of both.
The studies suggested will hopefully extend our understanding of human
control over dynamic systems and the timeliness of action.
Chapter 6
Further research
Apart from the suggested studies, there are of course other questions that
have arisen during this work. First of all, the hypotheses presented should
only be seen as overall suggestions. If I am fortunate enough to obtain
interesting findings from these studies, other questions will need
to be answered. For example, the formulation of goals is, as concluded
above, central to successful control, since goals are the very things that the
entire development of the control process is compared against.
Further, the change between different levels of control described in
the ECOM and COCOM should be examined. There are findings from the
field indicating that information systems that are designed for a certain
task, corresponding to a specific time-scale or goal level, are abandoned or
changed, “tailored”, when the operators experience that they do not have
enough time available to use the equipment to reach their goals. Rather,
they stop using it, causing confusion in the rest of the organization that
uses the information system (Johansson & Persson, 2002). Such reactions
can be described in terms of the changes between control levels/modes,
and it would therefore be interesting to test the assumption that a certain
type of interface/system improves performance in relation to one type of
goal/time-scale, but not others.
References
Alberts, D. S., Garstka, J. J. & Stein, F. P. (2000) Network Centric Warfare: Developing and Leveraging Information Superiority. National
Defence University Press, Washington DC.
Artman, H. & Granlund, R. (1999) Team Situation Awareness Using
Graphical or Textual Databases in Dynamic Decision Making. In Proceedings of the Ninth European Conference on Cognitive Ergonomics, ECCE-9, University of Limerick, Ireland.
Ashby, W. R. (1956) An introduction to cybernetics. Chapman and Hall,
London.
Brehmer, A. (1989) Brännskadeintensivvård som hierarkiskt organiserat
system. Center for Human Computer Studies, Uppsala University CMD
Report No. 4., Uppsala.
Brehmer, B. (1987) System Design and the Psychology of Complex Systems. In (Eds.) J. Rasmussen & P. Zunde, Empirical Foundations of
Information and Software Science, Plenum Publishing Corporation.
Brehmer, B. (1992) Dynamic decision making: Human control of complex systems. Acta Psychologica, 81, pp 211-241, North Holland.
Brehmer, B. (2000) Dynamic Decision Making in Command and Control.
In (Eds.) C. McCann & R. Pigeau, The Human in Command: Exploring
the Modern Military Experience. Kluwer Academic Press/Plenum Publishers, New York.
Brehmer, B. (In press) Some Reflections on Microworld Research. In
(Eds.) S. G. Schifflett, L. R. Elliot, E. Salas & M. D. Coovert. Scaled
Worlds: Development, Validation and Applications. Ashgate Publishing
Ltd, Aldershot.
Brehmer, B. & Allard, R. (1985) Dynamic Decision Making: A General
Paradigm and Some Experimental Results. Manuscript, Uppsala University, Department of Psychology, Uppsala.
Brehmer, B. & Allard, R. (1991) Real-time dynamic decision making.
Effects of task complexity and feedback delays. In (Eds.) J. Rasmussen, B.
Brehmer & J. Leplat. Distributed decision making: Cognitive models for
cooperative work. Chichester: Wiley.
Brehmer, B. & Dörner, D. (1993) Experiments with Computer Simulated
Microworlds: Escaping Both the Narrow Straits of the Laboratory and the
Deep Blue Sea of the Field Study. Computers in Human Behavior, Vol.
9, pp 171-184.
Brehmer, B. & Svenmarck, P. (1994) Distributed decision making in
dynamic environments: Time scales and architectures of decision making.
In Contributions to decision making, edited by J.-P. Caverni, M. Bar-Hillel, F. H. Barron and H. Jungermann. Elsevier Science, Amsterdam.
Beyerchen, A. D. (1993) Clausewitz, Nonlinearity and the Unpredictability of War. International Security, 17:3, pp 59-90.
Boyd, J. R. (1987) A discourse on winning and losing, unpublished briefings and essays, Air University Library, document no MU43947.
Cebrowski, A. K. & Garstka, J. J. (1998) Network-Centric Warfare: Its
Origin and Future. Proceedings of the U.S. Naval Institute, January, pp 28-35.
Clausewitz, C. (1997) On war. (Ed.) T. Griffith. Wordsworth Editions
Limited, Hertfordshire.
Conant, R. C. & Ashby, W. R. (1970) Every good regulator of a system
must be a model of that system. International Journal of Systems Science,
Vol 1, No. 2, pp 89-97.
Crossman, E. R. F. W. & Cooke, J. E. (1974) Manual Control of Slow
Response Systems. In (Eds.) E. Edwards & F. P. Lee, The Human Operator in Process Control. Taylor & Francis, London.
Decortis, F. & Cacciabue, P. C. (1988) Temporal Dimensions in Cognitive
Models. 4th IEEE Conference on Human Factors and Power Plants, June 5-9, Monterey, California.
Decortis, F., De Keyser, V., Cacciabue, P.C., & Volta, G. (1991). The
Temporal Dimension of Man-Machine Interaction. In G.R.S. Weir and
J.L. Alty (Eds.), Human-computer interaction and complex systems, pp
51-72, London, UK: Academic Press
De Keyser, V. (1995) Time in Ergonomics Research. Ergonomics, Vol. 38,
No. 8, pp 1639-1660.
De Keyser, V., d’Ydewalle, G. & Vandierendonck, A. (1998) Time and the
Dynamic Control of Behavior, Hogrefe & Huber, Göttingen.
Dörner, D. (1989) Die Logik des Misslingens. Rowohlt, Reinbek bei
Hamburg.
Dörner, D., Kreuzig, H. W., Reiter, F. & Stäudel, T. (1983) Lohausen.
Vom Umgang mit Unbestimmtheit und Komplexität. Huber, Bern.
Dörner, D. & Schaub, H. (1994) Errors in Planning and Decision Making
and the Nature of Human Information Processing. In: Applied Psychology: An International Review Special Issue on Human Error, pp 433-453.
Dörner, D., Stäudel, T. & Strohschneider, S. (1988) Moro. Programmdokumentation. Universität Bamberg, Lehrstuhl Psychologie II, Bamberg.
Elg, F. (2002) Ett dynamiskt perspektiv på individuella skillnader av heuristisk kompetens, intelligens, mentala modeller, mål och konfidens i kontroll av mikrovärlden Moro. Linköping Studies in Science and
Technology, Thesis No. 931, Unitryck, Linköping.
Endsley, M. (1997) The Role of Situation Awareness in Naturalistic Decision Making. In (Eds.) C. E. Zsambok & G. Klein, Naturalistic Decision
Making, Lawrence Erlbaum Associates, Mahwah, New Jersey.
Gardner, H. (1987) The Mind's New Science - A history of the cognitive
revolution, Basic Books, USA.
Granlund, R. (2002) Monitoring Distributed Teamwork Training. Dissertation No. 746. Linköping Studies in Science and Technology, Linköping
University, Linköping.
Granlund, R., Johansson, B. & Persson, M. (2001) C3Fire: a Microworld
for Collaboration Training in the ROLF environment. SIMS 2001 42nd
Conference on Simulation and Modeling, October 8-9, 2001, Porsgrunn,
Norway.
Granlund, R., Johansson, B., Persson, M., Artman, H. & Mattson, P.
(2001) Exploration of methodological issues in Micro-world research - Experiences from research in team decision making. In Proceedings of the
International Workshop on Cognitive Research with Microworlds - Methodological and theoretical issues for industrial applications, November
12-14, Granada, Spain.
Hollnagel, E. (1993). Human reliability analysis: Context and control.
Academic Press, London.
Hollnagel, E. (1998) Context, cognition, and control. In Y. Waern (Ed.).
Co-operation in process management - Cognition and information technology. Taylor & Francis, London.
Hollnagel, E. (2000) Performance variability management. Proceedings
of the People in Digitized Command and Control Symposium. Royal Military
College of Science at Shrivenham, 12-14 December 2000, UK.
Hollnagel, E. (2002a) Time and time again. Theoretical Issues in Ergonomics Science, 3(2), pp 143-158.
Hollnagel, E. (2002b) Cognition as control: A pragmatic approach to the
modelling of Joint Cognitive Systems. IEEE Transactions on Systems,
Man, and Cybernetics, Part A: Systems and Humans, special issue on “Model-based Engineering in Complex Systems”. Accepted for publication.
Hutchins, E. (1995) Cognition in the Wild. MIT Press, Cambridge Mass.
Jansson, A. (1994) Pathologies of decision making: Consequences or precursors of failure? Sprache & Kognition, 13, pp 160-173.
Jensen, E. & Brehmer, B. (submitted) Pictorial Aids to the Understanding
of Feedback, When Useful Aids are not Used.
Johansson, B., Hollnagel, E. & Granlund, Å. (2002) The Control of
Unpredictable Systems. In (Ed.) C. Johnsson, Proceedings of the 21st
European Annual Conference on Human Decision Making and Control,
GIST Technical Report G2002-1, Department of Computing Science,
University of Glasgow, Scotland.
Johansson, B. & Persson, P-A. (2002) Tailoring in CSCW systems –
Rational Response or Political Resistance? In (supplement to) Proceedings of the COOP2002 Conference, 4-7 July, 2002, St Raphael, France.
Kjellén, U. (1987) Deviation and the Feedback Control of Accidents. In
(Eds.) J. Rasmussen, K. Duncan & J. Leplat, New Technology and Human
Error. John Wiley & Sons Ltd. Chichester.
Klein, G., Orasanu, J., Calderwood, R. & Zsambok, C. E. (1993) Decision
Making in Action: Models and Methods. Ablex Publishing Corporation,
Norwood, New Jersey.
Langley, P. A., Paich, M. & Sterman, J. D. (1998) Explaining Capacity
Overshoot and Price War: Misperceptions of Feedback in Competitive
Growth Markets. International System Dynamics Conference, 20-24
July, 1998, Quebec.
Lindblom, C. E. (1959) The Science of “muddling through”. Public
Administration Review, 19, pp 79-88.
Maruyama, M. (1963) The Second Cybernetics: Deviation-Amplifying
Mutual Causal Processes. American Scientist, 51, pp 164-79.
Miller, G. A., Galanter, E. & Pribram, K. H. (1960) Plans and the Structure
of Behavior. Holt, Rinehart & Winston, New York.
Neisser, U. (1976) Cognition and reality: Principles and implications of
cognitive psychology. W. H. Freeman, San Francisco.
Omodei, M. M., Wearing, A. J., McLennan, J., Elliot, G. C. & Clancy, J.
M. (In Press) “More is better?”: Problems of Self-Regulation in Naturalistic Decision Making Settings. In (Eds.) B. Brehmer, H. Montgomery, & R.
Lipshitz. How Professionals make decisions, Lawrence Erlbaum Associates Inc., Mahwah, New Jersey.
Orasanu, J. & Connolly, T. (1993) The Reinvention of Decision Making.
In (Eds.) G. Klein, J. Orasanu, R. Calderwood & C. E. Zsambok, Decision
Making in Action, Ablex Publishing Corporation, Norwood, New Jersey.
Persson, M. & Johansson, B. (2001). Creativity or Diversity in Command
and Control. In Smith, M.J., Salvendy, G., Harris, D., & Koubek, R.J. Proceedings of The Ninth International Conference on Human-Computer
Interaction, HCI International 2001, pp 1508-1512, Lawrence Erlbaum
and Associates, New Orleans.
Rasmussen, J. (1986) Information processing and human-machine interaction: An approach to cognitive engineering. North Holland, New York.
Rigas, G. (2000) On the relationship between psychometric intelligence
and decision making in dynamic systems. 98, Uppsala University, Uppsala.
Rochlin, G. (1991a) The Gulf War: Technological and Organizational
Implications. Survival, 33, No 3, pp 260-73.
Rochlin, G. (1991b) Lessons of the Gulf War: Tough New Technology
May Have Fragile Underpinnings. ISG Public Affairs Report, 32, No 6, pp
10-12.
Schaub, H. & Strohschneider, S. (1989) Memorandum 71. Universität
Bamberg, Lehrstuhl Psychologie II, Bamberg.
Spencer, J. (1974) An investigation of Process Control Skill. In (Eds.) E.
Edwards & F. P. Lees, The Human Operator in Process Control. Taylor &
Francis, London.
Strauss, A. & Corbin, J. (1990) Basics of Qualitative Research –
Grounded Theory Procedures and Techniques. SAGE Publications, London.
Sundin, C. & Friman, H. (Eds.) (2000) ROLF 2010 - The Way Ahead and
The First Step: A Collection of Research Papers. Elanders Gotab,
Stockholm.
Svenmarck, P. & Brehmer, B. (1994) D3fire - An Experimental Paradigm
for the Study of Distributed Decision Making. Nutek-report, Sweden,
Uppsala University, Uppsala.
Vygotsky, L. S. (1978) Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, Cambridge, Mass.
Wilde, G. J. S. (1994) Target Risk. PDE Publications, Toronto.