
Affordances and Constraints of Intelligent Decision
Linköping Studies in Science and Technology
Dissertation No. 1381
Affordances and Constraints of Intelligent Decision
Support for Military Command and Control—
Three Case Studies of Support Systems
Ola Leifler
Linköping University
Department of Computer and Information Science
Human-Centered Systems
SE-581 85 Linköping, Sweden
Linköping 2011
© Ola Leifler, 2011
ISBN 978-91-7393-133-5
ISSN 0345-7524
URL: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-67630/
Published articles have been reprinted with permission from the respective
copyright holder.
Typeset using XeTeX
Printed by LiU-Tryck, Linköping 2011
To my beloved Tilda.
So big your smile,
so warm your heart,
so happy and so full of life.
Just as empty the harness now feels,
just as great is our loss.
Abstract
Researchers in military command and control (C2) have for several
decades sought to help commanders by introducing automated, intelligent decision support systems. These systems are still not widely used, however, and
some researchers argue that this may be due to problems inherent in the relationship between the affordances of technology and the requirements imposed by the specific contexts of work in military C2. In this thesis, we
study some specific properties of three support techniques for analyzing and
automating aspects of C2 scenarios that are relevant for the contexts of work
in which they can be used.
The research questions we address concern (1) which affordances and
constraints of these technologies are of most relevance to C2 , and (2) how
these affordances and limitations can be managed to improve the utility of
intelligent decision support systems in C2 . The thesis comprises three case
studies of C2 scenarios where intelligent support systems have been devised
for each scenario.
The first study considered two military planning scenarios: planning for
medical evacuations and similar tactical operations. In the study, we argue
that the plan production capabilities of automated planners may be of less
use than their constraint management facilities. ComPlan, which was the
main technical system studied in the first case study, consisted of a highly
configurable, collaborative, constraint-management framework for planning
in which constraints could be used either to enforce relationships or notify
users of their validity during planning. As a partial result of the first study,
we proposed three tentative design criteria for intelligent decision support:
transparency, graceful regulation and event-based feedback.
The second study was of information management during planning at the
operational level, where we used a C2 training scenario from the Swedish
Armed Forces and the documents produced during the scenario as a basis
for studying properties of Semantic Desktops as intelligent decision support.
In the study, we argue that (1) due to the simultaneous use of both documents
and specialized systems, it is imperative that commanders can manage information from heterogeneous sources consistently, and (2) in the context of a
structurally rich domain such as C2 , documents can contain enough information about domain-specific concepts that occur in several applications to allow
them to be automatically extracted from documents and managed in a unified
manner. As a result of our second study, we present a model for extending a
general semantic desktop ontology with domain-specific concepts and mechanisms for extracting and managing semantic objects from plan documents.
Our model adheres to the design criteria from the first case study.
The third study investigated machine learning techniques in general and
text clustering in particular, to support researchers who study team behavior
and performance in C2 . In this study, we used material from several C2 scenarios which had been studied previously. We interviewed the participating
researchers about their work profiles, evaluated machine learning approaches
for the purpose of supporting their work and devised a support system based
on the results of our evaluations. In the study, we report empirical results on the precision that can be achieved when automatically classifying messages in C2 workflows, and present some ramifications of these results
for the design of support tools for communication analysis. Finally, we report
how the prototype support system for clustering messages in C2 communications was perceived by the users, the utility of the design criteria from case
study 1 when applied to communication analysis, and the possibilities for using text clustering as a concrete support tool in communication analysis.
In conclusion, we discuss how the affordances and constraints of intelligent decision support systems for C2 relate to our design criteria, and how the
characteristics of each work situation demand new adaptations of the way in
which intelligent support systems are used.
Acknowledgments
First and foremost, I would like to thank my supervisor Henrik Eriksson, without
whom this thesis would never have been written. Your invaluable experience, tireless encouragement and patience have meant a lot to me. I believe I have grown
during the production of this thesis and that this is in no small part due to your
work. The journey of writing a thesis was certainly not straight, and though I
may have been disillusioned at times, you have kept your faith in me and for that
I am most grateful. Also, there are two other people that I have worked with most
closely and that deserve a special mention. Johan Jenvald acted as my secondary
supervisor and helped me a great deal during my first case study on the ComPlan planning support tool, and for a long time provided me with valuable insights
and comments on my work. Sture Hägglund asked me to participate in a preliminary study for FHS, which later grew into a PhD project. Sture’s comments on my
early work and his highly stimulating course on the philosophy of science helped
me develop a critical eye.
There are many people at IDA and FHS that I have worked with who have
supported the work I present in this thesis. It has been a pleasure to work with you
all and I am grateful for the chance to work with competent and inspiring people.
This list[1] is not intended to be exhaustive, but should hopefully serve to remind the
reader that, even as all the errors and omissions in the thesis are certainly my own,
I would be most happy to share any positive credit with all those who have given
me the opportunity to pursue these studies, sharpened my wit, brought me joy, and
made this journey a most rewarding experience.
Lars Ahrenberg gave me directions during my second study and helped me discover the natural language processing framework GATE.
[1] Sorted alphabetically, not in any order of importance.
Mattias Arvola and Jonas Lundberg for all the methodological discussions regarding study 3, and for reviewing that which later became the final papers
of the study.
Erik Berglund who has engaged in animated discussions in a relaxed atmosphere.
I hope that your passion for undergraduate education earns you the credit
you deserve.
Berndt Brehmer at the Swedish National Defense College made this work possible and offered an open and stimulating environment for trying new thoughts
on critiquing and decision support in command and control.
Magnus Bång with a passion for music and for philosophical as well as practical matters:
we have had engaging discussions and it’s been a pleasure to help you with
your project. I hope your music studio business takes off, and I truly admire
your perseverance, commitment and knowledge in this field. I’m also grateful for all your comments on my work and our discussions on philosophy
and hermeneutics, even if they have taken me astray.
Peter Dalenius for stimulating discussions and rewarding collaboration in
courses over the years.
Inger Emanuelsson, Heléne Eriksson, Anne Moe, and Inger Norén: Thank
you for asking, thank you for caring, and thank you for everything else.
Anders Fröberg as a very good friend, a fellow strategy gamer and Ph.D. student:
whenever you’re ready, I will make sure to be there for your defense and
hopefully for less serious occasions as well.
Magnus Ingmarsson as a close friend and allied Ph.D. student, we have had much
to talk about of both a personal and professional nature, and have shared
with one another both hardships and joy. That I will sorely miss, but I look
forward to attending your thesis defense.
Björn Johansson for providing me with contacts and a lot of help in study 3 of this
thesis as well as co-authoring one of my earlier papers. I hope you enjoyed
your trip to France back then.
Arne Jönsson introduced me to the wonders of latent semantic analysis and provided me with constructive ideas when I began to look at the possibilities for using
text clustering.
Ben Kersley gave me the opportunity to become a much better writer by scrutinizing my papers rigorously, cutting through jargon and helping me develop
a better ear for English.
Patrick Lambrix for acting as my secondary supervisor during my third case
study, during which you gave valuable comments and suggestions for publication outlets.
Bastien LeComte-Fousset for providing a good starting point for the system developed and evaluated in case study 3. I hope others will appreciate your
talent and dedication as much as I.
Jalal Maleki for being such a warm, generous, and supportive person in so many
ways that I am truly glad for the opportunities you have given me to help
you with something in return.
Dennis Netzell for your warmth, care, and your great work in cultivating my
ideas of cover design into a finished product that looks better than I ever
had hoped it would.
Mats Olofsson introduced me to FHS, opened doors, believed in my work, and
made sure that I submitted those quarterly reports even in times of labor. ;)
Mats Persson was a co-author of one of my early papers and has also been a very
good and supporting friend. I hope to see you more now that we’re moving
to Stockholm.
Magnus Rosell for helping me out with understanding the Infomat tool during a
spontaneous meeting. It helped me a lot to understand the inner workings
of the text clusterer used in study 3.
Mirko Thorstensson for opening doors to participating in exercises and collecting
the material used in study 2. Also, I thoroughly enjoyed our discussions on
topics related to both your and my thesis.
Jiri Trnka for being a generous travel companion to Brussels during the ISCRAM conference there and contributing material and other support to case
study 3.
Pamela Vang for proofreading this thesis very thoroughly and steering me from
the deep waters of vagueness to the narrow straits of the unambiguous.
All reviewers who gave me valuable and constructive comments through which I
became a much better writer and researcher than I would have been without
you.
My fellow PhD companions at IDA, past and present: Sarah, Maria, Johan,
Fabian, Per-Ola and Per-Magnus. Thank you all for a joyful and stimulating
environment for serious as well as not-so-serious talks.
Other IDA employees who have provided much of the positive atmosphere that
makes IDA a great place to work: Rolf Nilsson, Peter J Nilsson, Arne
Fäldt, Nils Nilsson, Mariam Kamkar and all administrative staff that have
helped me with matters related to research, teaching, and other kinds
of support.
Past and present members of the MVI group at FHS, Georgios Rigas, Jan
Kuylenstierna, Christofer Waldenström, Joacim Rydmark, Anders Frank,
Anders S Christensson, Eva Jensen and all the others: thank you for a welcoming
environment at FHS, for inspiring talks, and for good company every
time I've visited.
Grandfather Olle for your interest in my work and your openness to discuss all possible topics with equal passion: I have always admired you, and I wish to grow
old like you.
My father Björn for your interest in my work, your support, your love, and for
being there.
My mother Kicki for all your love, and for sharing both despair and elation these
years.
Finally, my family Karin, Emma, Tilda and Teo: you are the center of my life.
In times of deepest despair, you have been my lifeline and best support. In times
of joy, you are those I want to share it with first. I love you so very much.
Contents
Abstract
v
Acknowledgments
vii
Contents
xi
List of Figures
xvii
List of Tables
xx
1 Introduction
1.1 Research question . . . . . . . . .
1.2 Contributions . . . . . . . . . . . .
1.3 Papers . . . . . . . . . . . . . . . .
1.3.1 Other contributed papers
1.4 Outline . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
1
1
4
6
7
8
2 Background
2.1 Command and Control . . . . . . . . . . . . . . . . . . . . .
2.1.1 Command and control as Decision-making . . . . .
2.1.2 Decision-making as selecting optimal solutions . . .
2.1.3 Functions of command and control . . . . . . . . . .
2.1.4 Command and Control as communication of intent
2.1.5 Command and Control Performance . . . . . . . . .
2.2 Planning and Cognition . . . . . . . . . . . . . . . . . . . . .
2.2.1 Wicked problems . . . . . . . . . . . . . . . . . . . .
2.2.2 Functional modeling . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
9
9
12
14
15
17
18
19
20
21
xi
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
2.3
General decision support systems . . . . . . . . . .
2.3.1 Guidelines for Decision Support Systems .
2.4 Automated planning . . . . . . . . . . . . . . . . . .
2.4.1 JADE . . . . . . . . . . . . . . . . . . . . .
2.4.2 CTAPS . . . . . . . . . . . . . . . . . . . . .
2.4.3 ARPA-Rome Planning Initiative . . . . . .
2.4.4 ACPT and O-P3 . . . . . . . . . . . . . . .
2.4.5 HICAP . . . . . . . . . . . . . . . . . . . . .
2.4.6 CADET . . . . . . . . . . . . . . . . . . . .
2.5 Mixed-initiative planning systems . . . . . . . . . .
2.5.1 TRAINS . . . . . . . . . . . . . . . . . . . .
2.5.2 Case-based plan re-use . . . . . . . . . . . .
2.5.3 Plan sketches, templates and meta-theories
2.5.4 Other mixed-initiative systems . . . . . . .
2.5.5 Evaluations of Mixed-initiative systems . .
2.6 Research issues in Planning . . . . . . . . . . . . .
2.7 Critiquing . . . . . . . . . . . . . . . . . . . . . . . .
2.7.1 Foundations of critiquing . . . . . . . . . .
2.7.2 Expert critiquing systems . . . . . . . . . .
2.7.3 INSPECT/EXPECT . . . . . . . . . . . . .
2.8 Knowledge Acquisition for Critiquing . . . . . . .
2.8.1 The HPKB and RKF Programmes . . . . .
2.8.2 The roles of ontologies . . . . . . . . . . . .
2.9 Semantic Desktops . . . . . . . . . . . . . . . . . .
2.10 Machine Learning . . . . . . . . . . . . . . . . . . .
2.10.1 Information extraction . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
22
23
26
26
27
28
28
29
29
30
32
32
33
33
35
35
35
36
37
39
40
41
42
43
44
46
3 Method
3.1 Scientific methods . . . . . . . . . . . . . . . . . .
3.2 Epistemology . . . . . . . . . . . . . . . . . . . . .
3.2.1 Hermeneutics . . . . . . . . . . . . . . . .
3.3 Theory in information systems research . . . . .
3.4 Case Study research . . . . . . . . . . . . . . . . .
3.5 Information Systems research . . . . . . . . . . .
3.5.1 Prototyping and iterative development .
3.5.2 Evaluations . . . . . . . . . . . . . . . . .
3.6 Our research method . . . . . . . . . . . . . . . .
3.6.1 Case Study 1: Planning . . . . . . . . . .
3.6.2 Case Study 2: Information Management
3.6.3 Case Study 3: Understanding C2 . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
49
49
50
51
52
53
53
54
55
56
56
57
58
4 ComPlan
4.1 Material . . . . . . . . . . . . .
4.2 Study . . . . . . . . . . . . . .
4.3 First Stage: Critiquing system
4.4 Second stage . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
63
63
64
64
67
.
.
.
.
xii
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
67
70
72
76
76
76
5 Information Management
5.1 Material . . . . . . . . . . . . . . . . .
5.1.1 DRESDEN . . . . . . . . . .
5.1.2 RAVEN . . . . . . . . . . . .
5.2 Study . . . . . . . . . . . . . . . . . .
5.2.1 Design criteria . . . . . . . . .
5.2.2 Extraction and Visualization
5.3 Results . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
79
79
80
81
84
86
87
89
6 Communication Analysis
6.1 Material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2 Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.1 Technical evaluation . . . . . . . . . . . . . . . . . . . . . . .
6.2.2 Interview study . . . . . . . . . . . . . . . . . . . . . . . . . .
6.2.3 Workflow Visualizer . . . . . . . . . . . . . . . . . . . . . . .
6.2.4 Workshop evaluation . . . . . . . . . . . . . . . . . . . . . . .
6.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
6.3.1 Results from classifier evaluations . . . . . . . . . . . . . . .
6.3.2 Results from the Workflow Visualizer design and evaluation
91
92
93
93
93
94
97
97
97
98
4.5
4.4.1 Knowledge representation
4.4.2 Views . . . . . . . . . . . .
4.4.3 Constraints . . . . . . . . .
4.4.4 Interactive simulation . .
4.4.5 Collaboration support . .
Results . . . . . . . . . . . . . . . .
.
.
.
.
.
.
7 Summary and discussion
7.1 Requirements for planning support systems . . . . . .
7.1.1 Assumptions in planning support systems . . .
7.1.2 Human functions to support . . . . . . . . . .
7.2 Requirements for semantic desktop systems . . . . . .
7.2.1 Assumptions in semantic desktop systems . . .
7.2.2 Human functions to support . . . . . . . . . .
7.3 Requirements for machine learning systems . . . . . .
7.3.1 Assumptions in machine learning systems . . .
7.3.2 Human functions to support . . . . . . . . . .
7.4 General discussion of our results . . . . . . . . . . . . .
7.4.1 Critiquing and Naturalistic Decision Making .
7.4.2 Design Criteria . . . . . . . . . . . . . . . . . .
7.4.3 Methodological issues . . . . . . . . . . . . . .
7.4.4 Alternative approaches . . . . . . . . . . . . . .
7.5 Future work . . . . . . . . . . . . . . . . . . . . . . . .
7.5.1 Future research questions . . . . . . . . . . . .
8 Conclusions
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
99
99
99
100
101
101
102
102
103
103
104
104
104
105
106
107
108
111
xiii
Bibliography
I
115
Paper I
I.1 Introduction . . . . . . . . . . . . . . .
I.2 Background . . . . . . . . . . . . . . . .
I.2.1 Mixed-initiative planning . . .
I.2.2 Critiquing . . . . . . . . . . . .
I.2.3 Cognitive systems engineering
I.2.4 Military decision theory . . . .
I.3 Guiding principles . . . . . . . . . . . .
I.3.1 Transparency . . . . . . . . . .
I.3.2 Graceful regulation . . . . . . .
I.3.3 Event-based feedback . . . . .
I.4 The ComPlan model . . . . . . . . . . .
I.4.1 Plan Views . . . . . . . . . . . .
I.4.2 Active constraints . . . . . . . .
I.4.3 Passive constraints . . . . . . .
I.4.4 Simulation . . . . . . . . . . . .
I.4.5 Synchronization support . . . .
I.5 Discussion . . . . . . . . . . . . . . . .
I.5.1 Plan views . . . . . . . . . . . .
I.5.2 Simulation . . . . . . . . . . . .
I.5.3 Collaboration support . . . . .
I.6 Related Work . . . . . . . . . . . . . . .
I.7 Conclusions . . . . . . . . . . . . . . . .
I.8 Acknowledgments . . . . . . . . . . . .
References . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
137
137
138
138
139
139
139
140
140
141
142
142
142
143
144
144
146
147
147
148
148
149
149
149
150
II Paper II
II.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . .
II.2 Background: Semantic desktop systems . . . . . . . . . . . .
II.3 Document workflow modeling . . . . . . . . . . . . . . . . . .
II.4 Domain-specific semantic desktop information management
II.5 Implementation for command and control . . . . . . . . . . .
II.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
II.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
155
155
156
158
159
159
160
161
161
III Paper III
III.1 Introduction . . . . . . . . . . . . . . . . . .
III.2 Command and control . . . . . . . . . . . . .
III.3 The IRIS semantic desktop . . . . . . . . . .
III.3.1 Domain-specific extensions to IRIS
III.4 Related Work . . . . . . . . . . . . . . . . . .
III.5 Discussion . . . . . . . . . . . . . . . . . . .
III.6 Conclusions . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
165
165
166
167
168
170
170
171
xiv
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
III.7 Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
IV Paper IV
IV.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.3 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.4 Material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.4.1 ALFA -05 . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.4.2 C3Fire -05 . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.4.3 LKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.5 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.5.1 Message lengths . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.5.2 Classifier selection . . . . . . . . . . . . . . . . . . . . . . . .
IV.5.3 Dataset features . . . . . . . . . . . . . . . . . . . . . . . . .
IV.5.4 Phrase structure classification . . . . . . . . . . . . . . . . . .
IV.6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.6.1 Non-text classifier comparison . . . . . . . . . . . . . . . . .
IV.6.2 Text-based classifier comparison . . . . . . . . . . . . . . . .
IV.6.3 Comparison of text-based and non-text-based classification
IV.6.4 LKS classification . . . . . . . . . . . . . . . . . . . . . . . . .
IV.6.5 Summary of results . . . . . . . . . . . . . . . . . . . . . . . .
IV.7 Discussion of results . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.7.1 Implications for support systems . . . . . . . . . . . . . . . .
IV.7.2 Support system requirements . . . . . . . . . . . . . . . . . .
IV.7.3 Workflow Visualizer . . . . . . . . . . . . . . . . . . . . . . .
IV.8 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
IV.9 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
175
176
176
178
178
180
180
180
181
182
183
185
186
187
187
187
188
189
191
192
193
194
194
195
195
196
V Paper V
V.1 Introduction . . . . . . . . . . . . . .
V.1.1 Outline . . . . . . . . . . . . .
V.2 Background . . . . . . . . . . . . . . .
V.2.1 C2 Research . . . . . . . . . .
V.2.2 Data analysis methods . . . .
V.2.3 Data analysis tools . . . . . .
V.2.4 Pattern extraction . . . . . . .
V.3 Research method . . . . . . . . . . . .
V.4 Interview study . . . . . . . . . . . . .
V.4.1 Interview analysis . . . . . . .
V.4.2 Focusing . . . . . . . . . . . .
V.4.3 Drawing conclusions . . . . .
V.4.4 Understanding tools and data
V.5 Discussion . . . . . . . . . . . . . . .
V.6 Conclusions . . . . . . . . . . . . . . .
201
201
202
202
203
204
206
208
208
209
210
211
212
214
216
217
xv
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
V.7 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
VI Paper VI
VI.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . .
VI.1.1 Research method . . . . . . . . . . . . . . . . . . . . .
VI.1.2 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . .
VI.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
VI.2.1 Data analysis methods . . . . . . . . . . . . . . . . . .
VI.2.2 Data analysis tools . . . . . . . . . . . . . . . . . . . .
VI.2.3 Pattern extraction . . . . . . . . . . . . . . . . . . . . .
VI.3 The Workflow Visualizer . . . . . . . . . . . . . . . . . . . . .
VI.3.1 System Description . . . . . . . . . . . . . . . . . . . .
VI.4 Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
VI.4.1 Performance analysis . . . . . . . . . . . . . . . . . . .
VI.4.2 Exploration . . . . . . . . . . . . . . . . . . . . . . . .
VI.5 Prototype Evaluation Workshop . . . . . . . . . . . . . . . . .
VI.5.1 Team performance scenario . . . . . . . . . . . . . . .
VI.5.2 Pattern exploration scenario . . . . . . . . . . . . . . .
VI.6 Summary and Discussion . . . . . . . . . . . . . . . . . . . . .
VI.6.1 Managing issues in automated clustering techniques
VI.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
VI.8 Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . .
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
xvi
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
225
225
226
226
227
227
229
230
231
232
234
234
238
239
240
242
242
244
245
246
246
List of Figures
1.1
1.2
Three cases of intelligent analysis systems in command and control. . .
The relationship between the papers included in this thesis. . . . . . . .
2.1
2.2
2.3
2.4
2.5
2.6
2.7
2.8
2.9
Boyd’s Observe-Orient-Decide-Act-loop . . . . . . . . . . . . . . . .
Brehmers Dynamic DOODA loop . . . . . . . . . . . . . . . . . . .
The FRAM Hexagon . . . . . . . . . . . . . . . . . . . . . . . . . . .
The Technology-Performance Chain (TPC) . . . . . . . . . . . . . .
A distributed, collaborative planning process as described by ARPI.
Silverman and Wenigs expert critiquing ontology . . . . . . . . . . .
A critic evaluation ontology . . . . . . . . . . . . . . . . . . . . . . . .
Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Categorization of Machine Learning systems . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
10
16
21
24
27
38
40
45
47
3.1
3.2
3.3
3.4
3.5
3.6
Structural components of information systems design theories
Research stages in Study 1 . . . . . . . . . . . . . . . . . . . . .
The relevance and rigor of the first case study . . . . . . . . . .
The relevance and rigor of the second case study . . . . . . . .
Research stages in Study 3 . . . . . . . . . . . . . . . . . . . . .
The relevance and rigor of the third case study. . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
52
56
57
57
59
60
4.1
4.2
4.3
4.4
4.5
4.6
MEDEVAC scenario . . . . . . . . . . . . . . . . .
main method in the MEDEVAC planning domain
Graphical representation of plan decomposition . .
operator in the MEDEVAC planning domain . . .
Dependencies between views in ComPlan . . . . .
The Organization View . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
64
65
66
66
70
71
xvii
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
4
5
4.7
4.8
4.9
4.10
The Task View . . . . . . . . . .
The Resource View . . . . . . . .
The Timeline View . . . . . . . .
Simulation in the Timeline View
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
72
73
74
75
5.1
5.2
5.3
5.4
5.5
5.6
5.7
The Operational Planning Process . . . . . . . . . . .
A Battlefield management system . . . . . . . . . . .
Document structure in operational planning . . . . .
The IRIS Semantic Desktop . . . . . . . . . . . . . .
Structure-based extraction of tasks for military units
Content-based extraction of named entities . . . . . .
The Map Panel domain-specific component in IRIS
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
81
82
84
85
88
89
89
6.1
6.2
6.3
Participants in one of the role-playing exercises analyzed. . . . . . . . . 92
A Workflow analysis tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
The Workflow Visualizer. . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
I.1 ComPlan in relation to human-centered and technical research . . 141
I.2 The Task View . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
I.3 Illustration of user interaction . . . . . . . . . . . . . . . . . . . 143
I.4 Illustration of workflow during simulation . . . . . . . . . . . . 145
I.5 Screenshot of a simulation . . . . . . . . . . . . . . . . . . . . . 145
I.6 Illustration of workflow for collaboration . . . . . . . . . . . . . 146
I.7 Conflict notification in joint planning . . . . . . . . . . . . . . . 147
II.1 The IRIS semantic desktop . . . . . . . . . . . . . . . . . . . . . . . . . . 157
II.2 Document flow as part of collaboration . . . . . . . . . . . . . . . . . . . 158
III.1 Information flow between staffs . . . . . . . . . . . . . . . . . . 166
III.2 IRIS with domain-specific additions . . . . . . . . . . . . . . . 168
III.3 Timeline of tasks in IRIS . . . . . . . . . . . . . . . . . . . . . 169
III.4 Map view in IRIS . . . . . . . . . . . . . . . . . . . . . . . . . 170
IV.1 The DOODA loop . . . . . . . . . . . . . . . . . . . . . . . . 177
IV.2 Algorithm for calculating the Message direction dataset attribute. . 179
IV.3 Part of the decision tree generated by the J48 classifier on the ALFA-05 dataset . . . . . . . . . . . . . . . . . . . . . . . . . . 184
IV.4 An example of substitution of phrase structure for message text. . 186
IV.5 Simple workflow with two stages . . . . . . . . . . . . . . . . . 190
IV.6 Conditions for detecting workflow stages . . . . . . . . . . . . . 190
IV.7 Relative precision when detecting workflow stages in the LKS dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
IV.8 Relative classification results when detecting workflow stages . . 192
IV.9 Possible clusters of messages in The Workflow Visualizer . . . . 193
V.1 Four stages in qualitative data analysis compared to the scatter/gather paradigm of data mining . . . . . . . . . . . . . . . . 204
V.2 Exploratory sequential data analysis . . . . . . . . . . . . . . . . 205
V.3 An ESDA tool used for analysis of C2 scenarios by the participants in the study. . . . . . . . . . . . . . . . . . . . . . . . . . 207
VI.1 Four stages of qualitative data analysis . . . . . . . . . . . . . . 227
VI.2 An overview of F-REX, an ESDA tool used for analysis of C2 scenarios by the participants in the study. . . . . . . . . . . . . . 229
VI.3 An overlay of the tool support provided by ESDA tools similar to F-REX on the four stages of qualitative data analysis by Miles. . 230
VI.4 Infomat, an Information Visualization GUI . . . . . . . . . . . . 230
VI.5 An architectural overview of the Workflow Visualizer, capable of managing a set of different text-based data sources and manipulating them in views related to the exploration of large data sets. . 232
VI.6 A comparison between the four stages of qualitative data analysis by Miles and the Workflow Visualizer support tool (WAT in the figure). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
VI.7 Selection of messages based on communication participants, the time frame for the communication and the keywords present in the messages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
VI.8 Hypothesized threads in the communication indicated by arrows of different colors. . . . . . . . . . . . . . . . . . . . . . . . . . 235
VI.9 When users select one of the observation categories in the tree (“communication interference/minor” above), they can choose a color when plotting observations with that category along the timeline. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
VI.10 A set of message clusters and observations concerning the staff’s beliefs about interferences. . . . . . . . . . . . . . . . . . . . . . 237
VI.11 Clusters of messages and a set of observations regarding interference as displayed in the timeline view of the scenario at the time when there is simulated radio jamming. . . . . . . . . . . . . . . 237
VI.12 Representation of a set of message clusters, as hypothesized by the Random Indexing approach to cluster message texts. . . . . . 239
VI.13 The two clusters of messages during scenario two of the Sandö material. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
VI.14 The same cluster of messages at scenario three in the Sandö material. There is a clear difference compared to scenario two according to the clustering. . . . . . . . . . . . . . . . . . . . . . . . 239
List of Tables
3.1 A framework for IT research . . . . . . . . . . . . . . . . . . . . 54
3.2 Structural components of study 1 . . . . . . . . . . . . . . . . . 58
3.3 Structural components of study 2 . . . . . . . . . . . . . . . . . 59
3.4 Structural components of study 3 . . . . . . . . . . . . . . . . . 61
5.1 Documents included in the RAVEN scenario . . . . . . . . . . . 82
8.1 Affordances and constraints . . . . . . . . . . . . . . . . . . . . 112
I.1 A Comparison between ComPlan and other Planning Systems . . 149
IV.1 Non-text attributes used for message classification. . . . . . . . . 179
IV.2 Message frequencies . . . . . . . . . . . . . . . . . . . . . . . . 182
IV.3 Message lengths in all categories . . . . . . . . . . . . . . . . . . 183
IV.4 Cross-validation results of non-text classification . . . . . . . . . 187
IV.5 Cross-validation results of text-based classification . . . . . . . . 187
IV.6 Comparison of text-based and non-text-based classification . . . . 188
IV.7 Combined classifier versus phrase structure classifier . . . . . . . 189
V.1 The roles of people interviewed regarding communication analysis in C2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
1 Introduction
In this chapter, we describe the research question that has framed this work, the
application domain in which our work has been conducted, and the papers that
have been contributed and form the basis of this dissertation.
1.1 Research question
This dissertation provides an analysis of three case studies of technical, artificially
intelligent, decision support systems to be used in military command and control
(C2 ) at higher levels of command.
Many similar systems have been devised, tested and deployed in a number of
countries over the course of the last two decades. However, for reasons related
both to building complex systems in general and military systems in particular,
and changing organizational practices as a result of the introduction of new systems and thinking, those systems have not been universally accepted. When commanders are allowed to choose for themselves, they often conduct their work through
whiteboards, maps and simple office software for producing order texts and report
sheets. Designers of intelligent decision support systems who attempt to address
issues of usability and acceptance have a bewildering array of possible perspectives with regard to users, tasks, performance and system purposes to adopt, and
may find it difficult to use them as concrete guidance. In this thesis, we adopt the
view that there is a dissonance between our current conceptualization of how military commanders at higher levels of command work and how intelligent support
systems actually support them.
The main tasks performed by military commanders, in peace-keeping operations as well as in war, are to command groups of subordinate commanders and
to monitor and control the actions performed by these. In this respect, the tasks
of command and control have remained the same for a long time. However, the
current context of military command and control has changed our appreciation
of exactly what it is that constitutes and should constitute commanders’ work. In
contemporary operations, goals and priorities may change quickly, collaboration
with other organizations is critical to success and much more focus is put on a
holistic perspective as well as on human factors in command [250, 249]. With the
advent of more complex theories of what decision-making is in general, and in the
context of command and control in particular, the conditions for building systems
to support decision-makers have changed. Decision-making is currently not seen
so much as a process of selecting optimal solutions to well-defined problems of resource allocation or planning, but as a fundamentally social activity of constructing
a shared meaning and understanding. This definition makes support-tool construction, and the evaluation of support tools, a tougher challenge than encoding a large
enough set of common-sense knowledge required for reasoning about and producing solutions to military problems. However, that is not to say that computer-based
support tools for analyzing and automating parts of the deliberation in command
and control are not desirable, but that we must understand whether the basis for
building decision-support applications is sound with respect to the environment
in which they will be deployed and the function we expect them to have. This thesis consists of three case studies of command and control scenarios where we have
studied both the tasks performed by decision-makers and specific intelligent decision support tools devised for each scenario with the purpose of eliciting concrete
affordances and limitations of the underlying techniques for each support tool.
The term affordance is used in this thesis in the original meaning coined by Mace
[174], according to whom an affordance is a quality of objects in the environment
that directs our senses to perceive them in a particular manner. Another explanation, specifically attuned to the subject of this thesis, is to say that affordances of
technologies are latent qualities of use that, in relation to a certain user1 , determine
their utility. These use qualities are latent in the sense that they are possible, but
not necessarily the most obvious, given the pedigree of the technologies and their
historical trajectories of development as technical systems.
During the course of the project, we have attempted to reconcile descriptions
of command and control and decision support based on artificial intelligence (AI) techniques
with one another in a manner that would help us characterize what it means to build
a successful intelligent decision support system for command and control. Chapter
2 provides us with a background to these concepts and how they are treated in this
dissertation.
Specifically, the questions that we address in this dissertation are:
1. What affordances and constraints do decision support systems based on AI techniques
have that are relevant for the support of commanders?
1 In our case higher-level military and civilian emergency commanders as well as command and
control researchers in case study 3.
2. Based on these affordances and constraints, how can AI techniques be incorporated as
support tools for commanders so as to maximize their utility?
These two questions have been explored in the context of three case studies
based on technical platforms.
The first case study was based on a support tool for military planning in a
specific planning scenario, ComPlan. The tool was intended to provide a framework for different planning scenarios and critiquing mechanisms. Insights from
the study, along with a better appreciation of current challenges in command and
control, directed us not towards implementing other scenarios or critiquing mechanisms but towards studying general-purpose information management tools and
task-specific analyses of data in them (see Figure 1.1). Military staff typically make
use of general-purpose office tools such as Microsoft Office for managing information and communicating with others, as task-specific information management systems may require more training and support than is available, and as staff members
may need to interoperate with others who lack specialized tools.
The second case study was therefore based on the analysis of standard desktop
documents through the use of a Semantic Desktop framework. In our work, we
extended an existing framework for Semantic Desktops with mechanisms for harvesting desktop events from users that related to their particular context of work,
and provided visual representations of concepts they reason about using a domain-specific ontology in what we called a Planning Desktop. The first and second case
studies mostly involved the construction of task-specific support in command and
control, with a focus on the visualization and manipulation of context-specific information. Task-specific support requires knowledge of the specific constraints
that should hold in planning an operation, or of the specific types of information
that it is important to extract from documents.
The third case study concerned the use of general techniques for information
extraction that can be used to improve the communication and performance of a
staff. In this study, we used general models for analyzing text-based data sources,
and employed them for the specific purpose of supporting communication within
a command staff. The tool used for text analysis was specifically designed for reasoning about communications among members of staff and was evaluated together
with researchers who study command and control.
All three case studies used technical platforms for studying methods of analysis
of critical information. Figure 1.1 describes the relationship between these three cases in terms of two parameters: the scope of the purpose of the technical system that was used and the scope of the purpose of the analysis techniques.
The purpose of the ComPlan system described in the first study was specific to
the task of planning for, and optimizing resource usage during, military or civilian
operations. ComPlan was primarily intended for scenarios in which the resource
constraints were known in advance and therefore possible to encode as part of
the scenario itself. One of the primary lessons from designing the system was that
specialized tools present difficulties for commanders precisely because they require
a lot of task-specific knowledge to be useful. In situations where constraints are
not known in advance, the effects of actions are ambiguous, or there is little general
structure available, staff members use standard office software tools for outlining their intentions and synchronizing their work.

Figure 1.1: Three cases of intelligent analysis systems in command and control.
The second study was therefore conducted in a general information management framework: a Semantic Desktop. The analysis conducted on the products
(documents) provided by staff members was specific to the domain at hand, but
did not require logistical or other constraints to hold; it merely assumed that terms
and concepts used in planning would be present in the documents managed by
staff. The Planning Desktop approach could be considered a general tool in the sense that the desktop system is general and that the semantic desktop manages all available information, while the support and analysis it provides are based on information specific to the task of planning.
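To make the idea of domain-specific analysis over general documents concrete, the following is a minimal, hypothetical sketch, not the actual Planning Desktop implementation: it matches a small hand-written vocabulary of planning concepts, standing in for a domain ontology, against plain order text. All terms and the example order below are invented for illustration.

```python
import re

# A tiny, invented vocabulary standing in for a domain ontology of
# planning concepts (the real system used a richer ontology in IRIS).
ONTOLOGY = {
    "unit": ["MEDEVAC platoon", "engineer company", "battalion"],
    "task": ["secure", "evacuate", "resupply"],
    "location": ["Bridge 14", "Hill 205"],
}

def extract_concepts(text):
    """Return (concept_class, term) pairs found in a document."""
    hits = []
    for concept_class, terms in ONTOLOGY.items():
        for term in terms:
            if re.search(re.escape(term), text, flags=re.IGNORECASE):
                hits.append((concept_class, term))
    return hits

order = ("The engineer company will secure Bridge 14, after which "
         "the MEDEVAC platoon will evacuate casualties to Hill 205.")
print(extract_concepts(order))
```

Dictionary matching is only the simplest instance of the idea; the approach described in the case study combined content-based extraction of this kind with structure-based cues from the documents themselves.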
The third study focused on understanding staff communications, and evaluated
options for analyzing patterns in written communication among members of staff.
The study used a task-specific technical platform for reasoning about patterns in
datasets from command and control exercises in which text-based data sources
were available for analysis. However, the analysis support did not merely provide
commanders with a system for detecting specific patterns or upholding specific
constraints, but was intended for general, explorative analyses of command and
control data.
1.2 Contributions
All three case studies address the two research questions stated in Section 1.1
but with different foci. They contribute to addressing the research questions by
demonstrating different models of intelligent decision support in military command
and control, and are based on analyses of the three contexts of work they have been
applied in. Specifically, our contributions can be summarized as follows:
• Three design criteria for intelligent planning support systems, based on domain descriptions of medical evacuation scenarios and the implementation and analysis of the ComPlan constraint management system for tactical planning (Paper I). The design criteria are justified by observations on the constraints and affordances of automated planning and critiquing systems and how planning is conducted by military commanders.

Figure 1.2: The relationship between the papers included in this thesis.
• A model for extending Semantic Desktop systems with domain-specific knowledge for
the purpose of supporting information management in military planning. The model
is grounded in a description of domain-specific document management at
the operational level of military command (Paper II). This contribution describes the affordances of a Semantic Desktop environment in relationship
to planning as document management. In Paper III, based on two of the design criteria elicited in Paper I, we provide an implementation of this model, and use it to demonstrate how extending Semantic Desktops in this way can increase their utility for command and control applications.
• An empirical evaluation of machine learning approaches for classifying communication in command and control in which text-based classification methods outperformed non-text classification (Paper IV). This contribution relates specifically to the affordances and limitations of text clustering as a method for
understanding C2 communications.
• A model for supporting the analysis of decision processes in command and control
through the use of text-based clustering (Paper VI). Two of the design criteria from Paper I were re-used in this setting and the design was also justified
by an interview study regarding how researchers conduct analyses of C2
decision processes (Paper V).
1.3 Papers
Figure 1.2 describes how the papers in this thesis are related to each other and to
the case studies conducted. Paper I represents the first case study, papers II and
III represent the second, and papers IV, V, and VI represent the third.
Paper I Ola Leifler. “Combining Technical and Human-Centered Strategies
for Decision Support in Command and Control — The ComPlan Approach”. In: Proceedings of the 5th International Conference on Information
Systems for Crisis Response and Management. Brussels, Belgium, May 2008
The first paper presents the first case study in this thesis: the ComPlan approach to critiquing as decision support for mission planning. Based on
observations of military planning and studies of previous support systems
for command and control, the study investigates how critiquing can be implemented as a support concept for military planning. Several techniques
for feedback and visualization are integrated to create a framework for the use of visual critiquing based on constraints in tactical, military planning.
Paper II Ola Leifler and Henrik Eriksson. “A Model for Document Processing in Semantic Desktop Systems”. In: Proceedings of I-KNOW ’08, The
International Conference on Knowledge Management. Graz, Austria, Sept.
2008
Paper II, which is the first paper in the second case study, describes the
general requirements for constructing support systems for domain-specific
information management in military planning. In military planning, we describe how plan documents are refined concurrently by several instances of
command in the process of planning and how this has effects on the design
of support tools based on document management.
Paper III Ola Leifler and Henrik Eriksson. “Domain-specific knowledge management in a Semantic Desktop”. In: Proceedings of I-KNOW ’09, The International Conference on Knowledge Management. Graz, Austria, Sept. 2009
The second paper in case study 2 describes an implementation of the model
presented in Paper II. In the implementation, semantic document processing
is introduced as a set of extensions to an existing semantic desktop through
content- and structure-based information extraction, domain-specific ontological extensions as well as through the visualization of semantic entities.
Paper IV Ola Leifler and Henrik Eriksson. “Message Classification as a basis
for studying command and control communications - An evaluation of
machine learning approaches”. In: Journal of Intelligent Information Systems (2011)
In the first paper of case study 3, we present a feasibility study of using
machine learning algorithms to classify messages in a command and control
scenario. The aim of the study was to determine the technical options for
using machine learning to help command and control researchers understand patterns in workflow data. We also describe how the requirements
for constructing support tools in command and control research relate to the
requirements for planning tools that we elicited in Paper I.
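The kind of comparison made in Paper IV can be illustrated with a small, hypothetical sketch. The snippet below is not the pipeline used in the study; it uses Python with scikit-learn, invented messages, and invented category labels to show how a text-based (bag-of-words) classifier is trained and cross-validated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Invented example messages and categories, standing in for C2 messages.
messages = [
    "request medevac at grid 4512", "enemy sighted north of bridge",
    "ammunition resupply needed", "unit alpha moving to checkpoint",
    "casualty report two wounded", "hostile vehicles approaching east",
    "fuel status low request resupply", "bravo in position awaiting orders",
]
labels = ["logistics", "intel", "logistics", "movement",
          "logistics", "intel", "logistics", "movement"]

# Text-based classification: bag-of-words (TF-IDF) features + naive Bayes.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())

# Cross-validation estimates how well categories can be predicted from text.
scores = cross_val_score(model, messages, labels, cv=2)
print("mean accuracy:", scores.mean())
```

A non-text baseline in this style would replace the TF-IDF features with metadata attributes such as sender, receiver and message length, keeping the same cross-validation procedure.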
Paper V Ola Leifler and Henrik Eriksson. “Analysis tools in the study of
distributed decision-making: a meta-study of command and control research”. In: Cognition, Technology & Work (2011)
The second paper of case study 3 reports the results from an interview study in which we interviewed command and control researchers about their work process and their tools, in order to better understand which parts of their work process are not well supported by current tools and how text clustering as a support technique could be useful for the analysis of distributed decision making in command and control.
Paper VI Ola Leifler and Henrik Eriksson. “Text-based Analysis for Command and Control Researchers — The Workflow Visualizer Approach”.
In: Cognition, Technology & Work (2011)
In the third paper of case study 3, we build on the results presented in Papers
IV and V and present the results from a study regarding a workflow analysis tool (Workflow Visualizer) aimed at assisting researchers to understand
command and control workflows. The study comprises two parts. In the
first, we constructed a support tool to facilitate the study of command and
control scenarios, and in the second, we evaluated the tool together with the
participating researchers to understand the affordances of machine learning
techniques for command and control research.
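As a rough illustration of the clustering idea behind the Workflow Visualizer, the sketch below groups invented messages using TF-IDF vectors and k-means in scikit-learn; the tool itself used a Random Indexing representation of message texts, so this is an assumption-laden stand-in rather than the implemented approach.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented messages standing in for time-stamped C2 workflow messages.
messages = [
    "radio check all stations", "radio check no contact on channel two",
    "request transport for supplies", "transport arriving at depot",
    "interference on channel two", "supplies loaded awaiting transport",
]

# Represent each message as a TF-IDF vector and group similar messages.
vectors = TfidfVectorizer().fit_transform(messages)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for message, cluster in zip(messages, clusters):
    print(cluster, message)
```

In an exploratory analysis tool, such cluster assignments would be plotted along the scenario timeline so that researchers can relate shifts in message content to events such as simulated radio jamming.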
1.3.1 Other contributed papers
Parts of the work presented here are found in papers that are not included as part
of the thesis. They are referred to in the thesis, but are not considered to be major
contributions. The first part of case study 1 was reported in two papers [169, 170]
and is described in Chapter 4, and a preparatory study on requirements for critiquing systems in networked organizations such as military command and control
was reported in one of the papers [171].
[169]
Ola Leifler and Johan Jenvald. “Critique and Visualization as decision
support for mass-casualty emergency management”. In: Proceedings of the
Second International ISCRAM Conference. Ed. by Bartel Carle and B. van de
Walle. Brussels, Belgium, Apr. 2005.
[170]
Ola Leifler and Johan Jenvald. “Simulation as a tool for problem detection in rescue operation planning”. In: Proceedings of the Conference on Modeling and Simulation for Public Safety: SimSafe 2005. Ed. by Peter Fritzon. The
Programming Environments Laboratory, Department of Computer and
Information Science, Linköping University. May 2005.
[171]
Ola Leifler et al. “Developing Critiquing Systems for Network Organizations”. In: Proceedings of IFIP 13.5 Working Conference on Human Error, Safety
and Systems Development. Ed. by Philippe Palanque. Toulouse, France, Aug.
2004.
1.4 Outline
The remaining chapters of this thesis describe the background of the technical systems and the tasks carried out with them in each case study we report. In chapter
2, we present a theoretical background to C2 research, including studies of intelligent decision support systems devised for the support of commanders. Chapter
3 describes the research method employed when studying these three cases, how
the cases were selected, and how each system was designed and evaluated. In
chapter 4, we describe the first case study, during which the ComPlan planning
system was devised as a means of providing analysis support in military, logistically oriented planning. The second case study, The Planning Desktop approach
to analyzing semantically rich text documents in planning, is described in chapter
5. Chapter 6 describes the third case study, which we conducted with command
and control communications data, from which we extracted information from several command and control scenarios and provided a design for an analysis tool
that could provide command and control researchers with new methods for understanding how commanders cooperate. In chapter 7, we relate the results from
each of the studies to each other and frame them in a general discussion. Finally,
in chapter 8, we provide conclusions about the conditions for successfully implementing intelligent support tools in dynamic command and control environments
that have been outlined in the thesis.
2 Background
This thesis addresses the issue of how intelligent decision support systems can be
used in military command and control and how the function performed by such
systems relates to the tasks performed in the domain of C2. To this end, this chapter provides an overview of what classes of support tasks are performed with each
technology and how these tasks are related to human tasks in command and control. To understand the relationship between the functions of command and control and the intelligent support systems that provide help to commanders, we will
need to define first what commanders actually do, in terms of the functions they
perform and how they perform them and then, how intelligent technical systems
interact with commanders and influence their behavior. Lastly, we will also elicit
how the functions in C2 are evaluated and the relationship between three classes
of support systems and the performance of staff members. These three classes of
support systems are automated planning systems, information management systems and C2 research analysis systems.
2.1 Command and Control
The traditional view of what a staff of commanders do and how they do it centered
around the notions of planning, preparations and deliberate actions [261, 161].
Military commanders, tasked with gathering information and organizing an assembly of military units into a force appropriate for accomplishing a given political goal, were long considered to be the central points of decision-making. The 19th-century military thinker Carl von Clausewitz described military affairs as finding and targeting forces against a single center of gravity of the enemy, the Schwerpunkt [64].

Figure 2.1: Boyd’s Observe-Orient-Decide-Act loop (OODA), adapted from [42].

Another military thinker, Jomini [143], analyzed the strategies employed by Napoleon and wrote
of how combinations of massing and maneuver led to success on the battlefield.
Like other military thinkers, Jomini used similar concepts to describe how and where to employ force for maximal effect. The center of gravity in modern warfare has been re-interpreted as an abstract concept, such as “the will of the people
of a country to continue fighting”, or something not pertaining to battlefields or
combat, such as “the emperor of Japan” (see, e.g., Tecuci [256]). This notion is
supposed to convey the most essential condition for an opponent’s continued war effort: the condition that, if affected significantly in your favor, would result in victory. However, Clausewitz did not specify the concept of Schwerpunkt in anything other than circular terms. That is, a Schwerpunkt is that which will bring victory
about, and that which will bring victory about is a Schwerpunkt. The difficulty
in revealing the nature of how strategic goals are selected and evaluated has persisted since Clausewitz. In general, Clausewitz stressed that despite the patterns
of warfare he tried to elicit, there are no universal laws of warfare and no universal
means by which to achieve victory [261]. Although the challenges in characterizing military affairs in general have been recognized for a long time, the concepts
of command and control were long considered to be less complicated.
The conceptualization of command, along with the conceptualization of warfare and the role of military forces, has changed since the end of the cold war. The
end of a single, well-defined, overarching threat, together with the simultaneous
arrival of new means of communicating, visualizing information and dividing labor
through technical systems [5], in conjunction with new tasks and threats for military commanders, has resulted in the task of command and control being studied
by researchers from many fields (see e.g., [6, 150, 207]).
One of the central assumptions held in traditional views of military command
was the concept that plan generation was a central act in command, one which preceded and determined the actions of units at the disposal of a commander. Although the circumstances of war always present commanders with uncertain, ambiguous intelligence and outcomes, the idea of producing plans, formulated in writing and presented to subordinate commanders for execution, was considered to be the most central aspect of exercising command. The uncertainties and dynamics
of war were considered “frictions” and suitable targets for technical development.
Command and control is often a function performed by large groups of commanders with different roles and responsibilities, who collaborate to provide a course
of action at the echelon they are assigned to. The process they follow has been
considered analogous to that of a single fighter pilot [42], to the extent that commanders react to external events that call for action, deliberate on options, execute an action and monitor the outcome. These four activities have been labeled
Observe-Orient-Decide-Act and constitute a loop in which commanders, alone or
in group, are supposed to operate.
The four principal activities of the OODA loop have long been identified as
the main tasks performed in C2 . In older guidelines for planning at the operational
level in NATO [121], after initiating directives have been established, a certain,
well-defined procedure is assumed to be followed by members of staff for gathering intelligence and, in particular, evaluating options for action. Intelligence information along with requests for assistance typically move up the chain of command one
step at a time, towards single nodes in the directed tree of the command structure
where commanders and analysts decide on how to process requests and interpret
information. Orders and directives, on the other hand, are submitted to lower
levels of command to be further specified and dissected, until tactical orders are
issued to single units on the field. Several changes to this classical model of command have been suggested or already implemented over the last 20 years, however.
Most significantly, a shift in the nature of the objectives military commanders are
given has opened opportunities for new interpretations of how commanders treat
the concepts of options and action. With the end of the cold war, the operations
conducted by military forces in many western countries have become much more
diversified, with peace-enforcing, anti-terrorist and humanitarian aid operations
becoming equally important as the ability to subdue a traditional military adversary. As the military forces have been given new military tasks to perform,
researchers and policymakers have begun to identify how new information technologies could be used to accommodate these new tasks. In particular, command and
control researchers have described how future commanders could use Network-Centric Warfare (NCW) as a means of leveraging information superiority against
military opponents [5, 55].
The NCW concept aimed to reduce the time it takes a large organization to process information, both intelligence information directed to higher command and
orders directed to subordinate units. As a means of reducing the time to make
decisions and put them into effect, Alberts, Gartska, and Stein [5] suggested that
commanders should be given access to information systems that would reduce or
completely eliminate the need for human intervention in the communication between units. By using electronic communication networks inspired by the World
Wide Web, units on the field would be able to contribute battlefield information
to central command directly and receive updates on the high commander’s intent
instantaneously. In addition, the NCW concept stated that units should be able to
synchronize their actions without any need for interventions by higher command
at all, further reducing the need for detailed instructions to subordinate units. With
a network of military units connected not as a tree but as a web, the authors of the
NCW concept saw a possibility of transforming military organizations much the
same way as the World Wide Web has transformed the ways people around the
world communicate. With technology as an enabling factor for new methods of
conducting warfare and the end of the cold war marking the end of entrenched
views about the purpose of military action and military tactics, Alberts, Gartska,
and Stein saw information technology as a means to break old bonds. Arquila
and Ronfeldt [18] also describe how new tactics will be enabled through the use
of communications technology, and how the concept of swarming an enemy with
all available resources at the same time will become possible and desirable once
the necessary coordination can be achieved through communication technology.
Together with the end of the cold war, however, military thinkers, and particularly those concerned with the essence of command and control, have also begun
to consider what it is that commanders actually do when they deliberate and communicate, how they do it, and how relevant it is, compared to all other factors, for the
outcome of conflicts.
Commanders are supposed to command others by rationally planning for a
course of action that takes into account all the known facts about a situation, a given
operational goal, and a set of military resources available to achieve the goal. This
formulation of the deliberation that precedes the production of a battle plan suited
the designers of computer-based support systems in the 1980s who interpreted
this description as amenable to at least partial automation by AI planners [162].
However, in several AI systems developed and tested since then [41], a number
of problems have emerged. Although AI systems have been demonstrated to be
capable of producing plans that conform to a well-defined set of criteria, using
a well-defined set of possible actions, such systems remain problematic to develop and deploy in
several command and control settings due to the discrepancy between the nature
of the cognitive tasks performed in command and control and the affordances of
technology [71].
One way to understand the issues involved in deploying decision support systems for command and control is to describe the joint cognitive and social functions
of a command and control staff in more detail and in doing so also describe the relationship between these functions and technical systems for aiding in plan development, execution monitoring and other aspects of command and control. In the
following sections, we discuss how command and control should be described—as
a decision-making process, as a set of cognitive functions, as a process of formulating and communicating intent or as something else—and how these descriptions
affect the evaluation of team performance.
2.1.1 Command and control as Decision-making
Decision-making is classically described by Game Theory as the process by which
an intelligent being selects optimal solutions among a set of alternatives to maximize some expected utility value [188]. According to this theory, a known goal
should be described as something that depends on performing certain actions,
and that performing an action deterministically produces an effect that brings the
decision-maker closer to his goal. Given several options for how to achieve his
goal, the decision-maker should select the option that has the highest expected
utility according to some metric of utility.
Game Theory, and related probability-based schemes of rational decision making, assume a decision maker who is able to perceive all that is relevant with respect
to making a decision, and perform all those calculations that are of significance to
decision making. The assumption of such an omnipotent, rational human being, capable of
shaping the world according to his will, was soon considered too far removed from the realities of decision-making processes. In 1955, Simon [235] therefore proposed a
behavioral model of making rational choices for rational individuals with bounds
on sensory as well as cognitive functions. In his model, he divided the constraints
inherent in making choices between the decision maker and the environment. Simon claims that all models of rationality require
• a well-defined set of alternatives to choose from,
• a well-defined set of alternatives that the organism can perceive,
• a well-defined, enumerable set of world states that can result from choosing
an alternative,
• a utility function that assigns values to all world states,
• a well-defined set of possible outcomes that result from choosing a certain
alternative in a state, and finally
• a well-defined probability function that assigns definite probability estimates
to all outcomes in the set of possible outcomes when choosing an alternative.
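Simon's requirements map directly onto the components of a classical expected-utility calculation. As a minimal sketch (all alternatives, states and numbers below are invented for illustration, not drawn from Simon's text), the model can be written out as:

```python
# A minimal sketch of classical rational choice, with each of Simon's
# required components made explicit. All concrete values are invented.

alternatives = ["advance", "hold", "withdraw"]      # set of alternatives
world_states = ["success", "stalemate", "failure"]  # enumerable world states

# Utility function: assigns a value to every world state.
utility = {"success": 10.0, "stalemate": 2.0, "failure": -8.0}

# Probability function: P(outcome | alternative), defined for every pair.
probability = {
    "advance":  {"success": 0.5, "stalemate": 0.2, "failure": 0.3},
    "hold":     {"success": 0.1, "stalemate": 0.8, "failure": 0.1},
    "withdraw": {"success": 0.0, "stalemate": 0.6, "failure": 0.4},
}

def expected_utility(alt):
    """Sum of utility over possible world states, weighted by probability."""
    return sum(probability[alt][s] * utility[s] for s in world_states)

# The universally rational decision-maker selects the alternative with
# the highest expected utility.
best = max(alternatives, key=expected_utility)
```

The point of the sketch is that every one of Simon's bullet points must be supplied before the final `max` can be computed; as the chapter argues, in C2 it is precisely the construction of these components, not the selection step, that constitutes most of the work.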
In later work, Simon describes human problem solving as the more general
cognitive activity within which rational choice takes place, as a process that can essentially be analyzed through the creation of computer programs that emulate human problem solving [237]. In his further elaborations on bounded rationality, Simon
[236] argues that models of rational behavior must take into account the boundaries of human cognition as well as the context of decision making. The model
of bounded rationality introduced by Simon was empirically tested by Gigerenzer
and Goldstein [107] through computer simulations that indicate how bounded rationality may not only be a relevant model to describe human behavior, but also
technically superior to universally rational inference procedures.
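A simple way to sketch the contrast between universal rationality and bounded rationality is a satisficing rule, in which alternatives are examined sequentially and the first one that meets an aspiration level is accepted. The sketch below illustrates this principle only; it is not the procedure used by Gigerenzer and Goldstein, and the alternatives and values are invented:

```python
def satisfice(alternatives, estimate, aspiration_level):
    """Return (choice, number_examined): the first alternative whose
    estimated value meets the aspiration level, falling back to the best
    alternative seen if none does. Only examined alternatives incur
    evaluation cost, unlike full expected-utility maximization."""
    best_seen, examined = None, 0
    for alt in alternatives:
        examined += 1
        value = estimate(alt)
        if value >= aspiration_level:
            return alt, examined
        if best_seen is None or value > estimate(best_seen):
            best_seen = alt
    return best_seen, examined

# Invented example: crude value estimates and an aspiration level of 5.
values = {"advance": 3.0, "negotiate": 6.5, "withdraw": -2.0}
choice, cost = satisfice(["advance", "negotiate", "withdraw"],
                         values.get, aspiration_level=5.0)
# The search stops at "negotiate" after examining only two alternatives.
```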
The concept of bounded rationality has the ambition of taking the characteristics of human cognition into account in decision making, but has also been interpreted as a basis for computational models of cognition and decision making.
However, the description is very difficult to reconcile with decision making and
other work in command and control.
Goals in command and control are very difficult to define in the manner required
for rational problem-solving. Simon does note that goal formulation, with the reconciliation of different perspectives on how to frame a situation, needs to precede
the activity of decision making [236, p. 500]. Although overarching, political goals
guide the operations military commanders are tasked to perform, such goals may
be defined as “providing security for people living in area X”—something that is
difficult to evaluate. Trends in military policy have also come to emphasize the
idea that goals should be less concerned with what can be achieved with military
power alone but rather look at what can be achieved by combining military and
other resources together [26]. In complex, open systems, such as those in which
military commanders find themselves, several goals may even have to be attained
at the same time [43].
Options available for action may be straight-forward to define if goals are defined in terms of the direct effects of military power, such as the destruction of
a strategic enemy center of gravity. In current operations, where commanders
are supposed to collaborate with the surrounding community, non-governmental
organizations and local authorities, the options for actions become much more difficult to enumerate, and even relate to the achievement of goals such as providing
security. Planning at higher levels of command in military operations may therefore require adopting a variety of styles for managing goals and options dependent
on the nature of the task at hand [147]. For example, peace-keeping operations
may require a planning process that is much more integrated and less linear than
planning traditional war-time operations.
Similarly, enumerating all possible world states and evaluating the effects that
could come as a result of performing an action is complicated, particularly at higher
levels of command. Much of the work defined to be part of military decision-making concerns defining that which, according to Simon, is required for making
rational decisions. However, making those decisions by selecting rationally among
options, given constraints on what is known, is rarely a central part of the decision-making process in C2.
2.1.2 Decision-making as selecting optimal solutions
In many of the professional situations studied by decision-making research, the
act of making decisions is not characterized by the selection of optimal solutions
by combinations of a limited set of possible actions [150]. Instead, it can best be
described as a process through which intentions are formed by the actions taken,
and actions are taken based on cultural assumptions about what it is possible and
appropriate to do [203]. In this respect, the entire command process can be regarded as an ongoing social activity of negotiating between the interpretations of the
goals, means and outcomes of actions [248]. Using this context of reasoning, there
is no meaningful definition of an optimal solution or of selecting an action. Optimality cannot
be evaluated because there is no single well-defined utility function to apply and
selection cannot be performed because it presupposes a pre-determined set of options, whereas options for how to use resources are usually constructed as part of
the problem-constructing process in military command and control [126]. Schön
[225] provides examples of what it means to construct a problem in this manner:
A nutritionist, for example, may convert a vague worry about malnourishment among children in developing countries to selecting an
optimal diet. But agronomists may frame the problem in terms of food
production; epidemiologists may frame it in terms of diseases that increase the need for nutrients or prevent their absorption; demographers tend to see it in terms of rate of population growth that has
outstripped agricultural activity; engineers, in terms of adequate food
storage and distribution; economists, in terms of insufficient purchasing power or the inequitable distribution of land or wealth. [225, pp.
4-5]
Although it would be highly unethical not to think of malnourishment as a
problem per se, all the above framings of the problem hinge on the notion that
no child should be malnourished. Had the problem concerned a broken car on
the other hand, one could imagine that some people would not even consider it a
problem to be solved but merely an occasion for the unreflected, situated action of taking
the bus instead [248]. In such situations, framing a situation as a problem is
an essential prerequisite to conceiving goals, options and utility functions. Thus, a
different framing of a situation could result not only in a different utility function
for evaluating options, but could even mean that the situation is not considered
a problem at all.
2.1.3 Functions of command and control
Decision-making in general may not be best characterized as a process of selecting optimal actions, which is why researchers have considered models for command work that focus not so much on how commanders produce plan documents
but instead on how they build a common understanding of a situation (situation
awareness) [220, 257]. Descriptions of command and control began to outgrow
the observe-orient-decide-act (OODA) [42] costume when researchers began to
add new possible activities and interplays between activities in the battle staff and
new ways to frame the problem of command and control.
In Brehmer’s description of the Dynamic OODA loop [44] (Figure 2.2), there
is an inner circle of activities, all related to making sense of a situation, that is common to all actors in a battle staff. Jointly, they improve their understanding of a
situation by gathering information and integrating new information in their existing frameworks of understanding. In the course of building an understanding, the
participants reach a state in which they feel confident to act and issue orders to
their subordinate units. Those orders can later be modified as new information becomes available, or old information is re-interpreted. In this description of military
command and control, there is no linear process similar to that of OODA, where
there is a clear end-point when issuing orders (the last step, act). The process of
command and control, as described in the DOODA model, emphasizes instead the
importance of sense-making [52].
Sense-making has been described as a goal-oriented, more specific concept
than situation awareness [88] but one that is used in a similar manner. The two
terms have been used to describe understanding, both individually and in teams,
with a vocabulary that is similar to that used when studying the process of learning
[Figure 2.2 diagram: Mission, Sensemaking, Planning, Orders, Data collection, Military activity, Effects, within Command and control]
Figure 2.2: Brehmer’s Dynamic OODA loop, where the central act is sense-making rather
than decision-making, adapted from Brehmer [44].
in general [89]. In his description of situation awareness, Endsley focuses on the
process of recognizing a situation, that is, recognizing what is important and how
a person can act. Situation awareness (SA) is supposed to be comprised of three
levels of understanding:
1. Perception of elements in current situation (SA level 1)
2. Comprehension of current situation (SA level 2)
3. Projection of future status (SA level 3)
Although Endsley exemplifies SA specifically in professional contexts, the hierarchical view of knowledge as internalized in larger structures and used for making
inferences and evaluations is similar to the learning taxonomy of Engelhart et al.
[89] that has been used in education for several decades:
1. Knowledge (SA level 1)
2. Comprehension (SA level 2)
3. Application (SA level 2)
4. Analysis (SA level 2)
5. Synthesis (SA level 3)
6. Evaluation (SA level 3)
The theory of SA was developed within the context of aviation safety [88] and
has since been adopted metaphorically for command and control [243, 15]. In command and control, the conditions for perceiving a situation and acting are radically
different though. The distributed nature of work, the complicated distribution of
tasks between humans and technical systems, longer time scales, and different means of affecting the environment, call for different interpretations of the
concept. Also, SA has been operationalized both as the end result of a process of comprehension and as the process itself [87]. Equating SA to a process makes the task
of evaluating the level of SA difficult. It also makes the application of SA as an indicator of team performance risky. Several technical researchers have attempted
various definitions of SA that can guide the construction of command and control
support systems (e.g., [15, 178]). In their attempts, they assume that patterns that
are of relevance in situation awareness can be encoded as elements of explicit computer representations and internalized by humans directly. Similar attempts have
been made by support systems developers who adhere to the concept of sense-making [255].
2.1.4 Command and Control as communication of intent
Pigeau and McCann [206] view most elements of a military organization, as well
as the culture, operating procedures and technical systems employed, as part of
the control element, and view command as the expression and communication of
human will. They argue that the communication of intent effectively defines the
command function of a military commander [207], and that control functions follow
from command. In their call for returning to the basics of command and control
research, they argue that command and control should be defined as follows [206,
p. 56]:
control: those structures and processes devised by command to enable it to manage risk.
command: the creative expression of human will necessary to accomplish the mission.
With this intentionally broad definition of control, any joint cognitive system
of humans and machines could be considered a part of the control function, as long
as it is being devised with the explicit purpose of managing risk, whether to detect
if orders to subordinate units are carried out, or to monitor how the enemy seems
to move and behave. According to Pigeau and McCann [206], the degree to which
humans are able to communicate intent effectively depends on three attributes of
commanders: Their competence, authority and responsibility, which Pigeau and
McCann abbreviate as the CAR model. With subordinate commanders of low
general competence, the most effective command style is one which does not grant
them much authority and thereby also removes responsibility. However, for subordinate officers of higher general competence, such a command style is directly
detrimental to overall performance. Having subordinates with higher competence
necessitates another command style that grants more authority to them but also
asks for more responsibility. In the CAR model, Pigeau and McCann view the competence of subordinate commanders as a relatively inherent, pre-defined property
and not as a function of expectations from and interactions with higher command.
Shattuck and Woods [229] similarly characterize Command and Control as a
system for communicating intent, but use the phrase distributed supervisory control
system to describe C2 :
A remote supervisor uses a communications process to provide local
actors with plans and procedures and to impart his/her presence. The
degree of control established by the remote supervisor influences the
ability of the local actors to adapt to unanticipated conditions based
on the actors’ assessments of their local environments. [229, original
emphasis]
Shattuck and Woods contribute not only with a conceptual apparatus for C2 ,
but also provide some clues as to how this can guide commanders in their work.
They note that subordinate commanders who related to their own situation, the
assumed intentions of the enemy, the intention of the plan and the needs for coordination were the most successful in following the general intent provided by higher
command [229]. Only following a plan, or only reasoning about assumed enemy
intentions but not the plan, causes lower performance with respect to following the
superior commander’s intent compared to considering all aspects of the situation.
2.1.5 Command and Control Performance
When characterizing command and control as a process through which commanders make sense of a situation and impose their will on others, the performance of
a command team can be understood as the level of understanding they have of a
situation and of how to act. Such understanding may be difficult to qualify and, in
particular, to quantify. Other approaches to characterizing command
and control are no less challenging to quantify. Command and control can be understood as
a continuous process for decision making [203, 215], a process for sensing the environment [7], as a joint cognitive system integrating people and machines [135],
a system for distributing functions among actors [134], for communicating intent
[229], as a structured workflow among a set of actors [1], or in terms of the specific psycho-social aspects of a command team [46, 14]. In recognizing that commanders operate within a larger organization and must abide by the norms and
rules of a larger system, one could also describe C2 as an abstract system of agents
and reason about the properties of the larger system, completely disregarding the
mental models pertaining to individual commanders, through the use of Systems
Theory for studying large-scale characteristics of systems that have similar large-scale behavior irrespective of the components they comprise [33]. Depending on
the perspective, different methods and tools are required to understand the work
of staff and evaluate tool support.
Although all of the perspectives offered by all these approaches to studying
and explaining command team behavior may have their distinct advantages for
understanding command and control, the abundance of theoretical concepts for
understanding C2 may in itself present a problem for the evaluation of the effectiveness of intelligent support systems, and the concepts may not always be helpful in the
design of qualitatively new tools and methods of work.
2.1.5.1 Evaluation of knowledge
When considering metrics for evaluating team performance, Cannon-Bowers,
Salas, and Converse [52] have suggested that team knowledge should be seen as the
foundation of team behavior and performance, which means that accurate measurements and crisp characterizations of team mental models are essential for establishing the effectiveness of teams. It is, however, a great challenge to characterize
shared mental models so that their characterization and relationship to team performance is clear enough for instrumentation and measurement [151]. Although
it may be fruitful to describe cognitive activities in a group as joint ventures, such
characterizations do not readily translate into prescribing training for group tasks
[67]. Indeed, each particular research context may require a unique approach to
measuring and interpreting team mental models. This makes the transferability of
results challenging; it is further complicated by the fact that the concept of a mental
model can be used metaphorically [151], and is hard to separate in practice from
other factors determining performance in teams [187]. Mohammed, Klimoski, and
Rentsch [187] describe the challenge of measuring team mental models by outlining no fewer than 17 dimensions for evaluating techniques that can be used for measuring team mental models. The appropriate selection of techniques, and thus the
types of mental models one can capture, can be considered a challenging methodological problem. It is further complicated by acknowledging the specializations
that can occur within teams of commanders, and the different needs they have in
their respective tasks [130].
2.2 Planning and Cognition
One of the central issues in understanding command and control, and which is of
particular importance to building support systems for decision-makers, is the role
of plans and planning. In studies of other professional settings, Schön [225] has
characterized the relationship between action and deliberation using three main
concepts: practice, patterns and theory. The practice of a professional, whether
military commander, physician or teacher, shapes the patterns they are likely to observe, and that in turn guide the theories and explanations they devise to organize
those patterns. Conversely, the theories constructed make it possible to observe
certain patterns and have a direct influence on how practitioners perform their
duties and their practice. Similar views on the tight interplay between situated actions and deliberation have been described by Suchman [248, 247] and Winograd
and Flores [266]. Suchman, Winograd, and Flores all describe cognition and action with the intent of supporting the design of support systems for professionals.
In their descriptions of what constitutes cognition, they use Heidegger’s theories
of thinking and being as a situated practice that, to be properly described, always
needs to relate to a context in which cognition is performed. Suchman, Winograd
and Flores all stress the importance of the situatedness of actions and the interplay
between deliberation and action. An important consequence of their view is that
deliberate plans do not direct human action; plans merely frame a problem and
(possibly) improve our understanding of goals, conditions and priorities. Such19
2. Background
man further describes how we use plans in civil engineering and other settings as
tools for thinking about problems, but that they are not treated as scripts for action
by the people who are supposed to execute them [247]. Following that view, planning systems must not merely, or even primarily, support the production of plans
but rather the dissemination, monitoring and alteration of them [24].
Hutchins [140] describes how the communicative aspects of problem-solving
are central to collaborative problem-solving. In particular, he provides an example
of how a team commanding a naval vessel maintains control over the ship even
when an important instrument has broken down: the crew manages to navigate the
ship by organizing themselves as individual problem-solvers, sharing constraints
and partial solutions to the problem with one another, thereby contributing both
to a solution of the problem at hand (navigating the ship) and an understanding of
the problem (how the instrument had failed).
The social nature of making joint decisions, of contributing both partial solutions and partial problems to the group, imposes certain requirements on support
system builders [3]. Decision support systems must acknowledge that people do
more during planning than merely construct an optimal sequence of actions given
constraints [130]. Usually, many decisions and discussions at higher levels of command concern what kinds of decisions
must be included in a plan in the first place, and how important different aspects
of the plan are [130]. However, Ackerman argues that the more advanced technical support systems for decision-making become, the more brittle they seem to
be. This brittleness, he argues, could be due to support systems' rigidity
in interpreting and presenting the world, how they define roles and authority and
how they allow people to communicate with others [3].
It is not the case that planning as a way of organizing actions is inappropriate
for all human endeavors. For some problems, the act of specifying a partially ordered sequence of well-defined actions is very productive. Managing the logistics
of air transports that go on regular schedules with tight regulations is a problem
that lends itself well to the type of ordering that Game Theory stipulates [154].
Problems that can be solved by more or less automated methods are called tame
problems, in contrast to wicked problems [217] (or fuzzy tasks [270]). Understanding when a problem is to be considered tame and when it can be considered wicked
is thus of great importance in support tool design.
2.2.1 Wicked problems
Wicked problems were first described in the social sciences [217] when it became
apparent that professionally crafted solutions to complex societal problems seemed
bound to fail. The concept of wickedness has to do with the issues inherent in
formulating and framing problems in society. A wicked problem is, by definition,
one that is resolved by being formulated in a certain way. There is no single solution
to a wicked problem, because people are bound to approach the problem with
different sets of values, and some may not even see the problem as a problem at
all (see Section 2.1.2). Thus, a resolution, or an acceptable compromise, seems to
be the best a decision-maker can offer. Framing and re-framing a problem using
[Figure 2.3 diagram: a FRAM function hexagon with aspect labels T, C, I, P, O, R]
Figure 2.3: The FRAM hexagon (adapted from [133]), represents six aspects of functions
that are central to the FRAM representation of cognitive functions in joint cognitive systems.
The hexagon represents a single cognitive function that can be coupled with others in a
larger, joint cognitive system.
different sets of concepts, and not formulating plans for optimal solutions, is central
to managing wicked problems.
Researchers into cognition have advocated communication tools based on
structured arguments as one approach to support the successful management
of wicked problems [180]. Facilitating team communication and understanding
through shared visual representations has also been proposed in general [61, 104],
and specifically in the context of command and control for space flight applications
[144] and in military settings [189]. An early attempt to facilitate the communication of team understanding was the concept map, which uses boxes and arrows between
boxes to denote concepts and relationships between concepts [202]. These shared
representations are believed to be useful in particular if the shared constraints that
the participants reason about regarding problems are made visible to all [245].
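Such a box-and-arrow concept map is essentially a labeled directed graph, from which a shared visual representation can be generated mechanically. A minimal sketch, with invented example concepts and relations, rendering the map to Graphviz DOT:

```python
# A concept map as labeled (concept, relation, concept) triples, rendered
# to Graphviz DOT text so it can be drawn and shared. The triples below
# are invented examples, not taken from any of the cited systems.
triples = [
    ("commander", "issues", "orders"),
    ("orders", "constrain", "subordinate units"),
    ("subordinate units", "report", "battlefield information"),
]

def to_dot(triples):
    """Emit a DOT digraph: one labeled edge per triple."""
    lines = ["digraph concept_map {"]
    for src, rel, dst in triples:
        lines.append(f'  "{src}" -> "{dst}" [label="{rel}"];')
    lines.append("}")
    return "\n".join(lines)

dot = to_dot(triples)
```

Feeding the resulting text to any DOT renderer produces the familiar boxes-and-arrows picture; the shared constraints the participants reason about then appear as explicit, visible edges.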
2.2.2 Functional modeling
The process of command and control can be described as a set of abstract functions performed by either humans, machines or combinations thereof. Command
and control can thus be seen as a socio-technical system that performs joint functions and is best described not in terms of what either machines or humans do as
part of that system, but in terms of what components based on combinations of
humans and machines do [135]. Modeling a battle command staff as a function
allocation [134] could yield models for how functions of man-machine configurations affect one another in a network of constrained actions [133]. Hollnagel introduced a method for visually representing such constraints in the FRAM model
[133] which has been applied to both aviation safety and command and control
[268].
Using the FRAM analysis method, researchers model six principal aspects of
each function as seen in Figure 2.3: input (I), output (O), preconditions (P), time
(T), resources (R), and control (C). Typically, resources are energy or materials
that are consumed or transformed when performing a function, and control represents that which controls the performance of the function, such as a plan or a
guideline. These can be linked to aspects of other functions, so that the output of
one function may be the input, resource, precondition or control of another. Once
modelled, this manner of representing functions can help in visualizing constraints,
and thus the conditions for system performance. Cognitive systems researchers
argue that through the identification of how aspects of functions affect other functions, commanders are better equipped to reason about the constraints they operate under. In command and control, such constraints can include the coverages
of radar sensors, movement radii of military units or communication bottlenecks.
Several projects have developed decision aids that aim to make those constraints directly visible to commanders [205, 102, 8].
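The six FRAM aspects, and the coupling of functions through them, can be illustrated with a small sketch. This is a toy model of our own: the function and aspect names are invented for illustration, and the code is not Hollnagel's FRAM tooling.

```python
from dataclasses import dataclass, field

# Toy model of a FRAM function with its six aspects: Input, Output,
# Precondition, Time, Resource, Control. (Illustrative only; not
# Hollnagel's FRAM software.)
@dataclass
class Function:
    name: str
    inputs: set = field(default_factory=set)
    outputs: set = field(default_factory=set)
    preconditions: set = field(default_factory=set)
    time: set = field(default_factory=set)
    resources: set = field(default_factory=set)
    controls: set = field(default_factory=set)

def couplings(functions):
    """List couplings: the output of one function appearing as an
    input, precondition, time, resource or control aspect of another."""
    links = []
    for src in functions:
        for dst in functions:
            for aspect in ("inputs", "preconditions", "time", "resources", "controls"):
                for item in src.outputs & getattr(dst, aspect):
                    links.append((src.name, item, aspect, dst.name))
    return links

# Hypothetical example: the plan produced by one function acts as the
# control aspect of another.
formulate = Function("Formulate plan", inputs={"intelligence report"},
                     outputs={"operations order"})
move = Function("Move unit", inputs={"movement order"},
                resources={"fuel"}, controls={"operations order"})

print(couplings([formulate, move]))
# prints [('Formulate plan', 'operations order', 'controls', 'Move unit')]
```

Walking the model in this way surfaces exactly the kind of dependency a FRAM diagram makes visible: the output of the planning function constrains the performance of the movement function.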
In general, however, the concept of decision aid, or decision support, is very
broadly defined as any system that helps users in making decisions about the planning and execution of actions. General decision support systems can be classified
according to the theories of analysis that they adhere to, and the issues that are
managed through the system.
2.3 General decision support systems
In an early overview of the state of the art and practice of decision support systems,
Andriole provided an ordering of decision support systems according to their purpose and the structure they add to the decision-making process [12]. According to
Andriole, the stated purpose of decision support systems could be to:
1. define and structure problems
2. collect, fuse and filter data sources
3. generate courses of action
4. select from a set of possible options
This list presents different possible approaches to decision support systems, depending on the structure inherent in the problem domain at hand. Another characterization of decision support systems comes from Management Science, a field
that is concerned with corporate strategies and theories of how large corporations
work. Clark [63] provides an overview of decision support systems for Management Science in which the systems are categorized according to the activities they
are intended to support. Incidentally, the four stages of activities that comprise
planning in the corporate domain (analyzing the environment, planning direction,
planning strategy and implementing strategy) fit well with the description of the
OODA model (Figure 2.1) developed for military command and control. Clark
describes how, until 1992, most decision support tools for corporate planning had
been concerned with forecasting and were used to support either the generation
of plans, their implementation (through the formulation of plan documents for instance, or as part of control systems) or the analysis of the environment. Only a
few were concerned with the evaluation of what missions, objectives, values and
expectations to focus on. In one paper on a system to support strategic planning in
corporate settings for high-level corporate managers, Pinson, Louçã, and Moraitis
[208] describe how corporate strategic planning is considered to be a linear process model consisting of four stages, as also proposed by Clark, though Pinson, Louçã, and Moraitis stress that decision-making processes in
general, and in corporate management in particular, are ill-structured. In their
paper, Pinson, Louçã, and Moraitis propose a decision support system to support
corporate strategic planning that is based on an automated planning system. Their
example problems concern Enterprise Expansion, to which their system proposes automatic solutions through a sequence of abstract actions beginning with Research
Market Segmentation and Target Segment Viability and Report Consumer Attitudes to the
New Product. Goals can be hierarchically ordered and dependent on one another,
but are based on a fixed repository of known actions. The ill-structured nature
of decision-making is stated in their problem description but is not discussed in
relation to their system.
In Andriole’s general description of decision support, systems for helping decision makers have generally performed best when targeted at well-structured problems:
Generally speaking, decision support systems are more successful when
targeted at [structured problems] and progressively less successful
when targeted at less structured, ill-defined problems such as strategic
planning and tactical operations [12, p. 8].
In a survey of DSS papers, Arnott and Pervan [16] confirm the prevalence of a
technology-centered perspective on decision support systems (DSSs), which has
emphasized the production of systems for automatically solving well-structured
problems in various domains, and not considering the relevance of such approaches
to human work:
Almost half of DSS papers did not use judgement and decision-making
reference research in the design and analysis of their projects and most
cited reference works are relatively old. A major omission in DSS
scholarship is the poor identification of the clients and users of the
various DSS applications that are the focus of investigation [16, p. 2].
However, there have also been some theoretical constructs devised for the purpose of guiding the construction of decision support systems, although these may
not have been used in technically oriented papers on decision support in command
and control.
2.3.1 Guidelines for Decision Support Systems
Decision support systems can be viewed as general software systems that have to
adhere to the requirements of any computer system to be considered practically
accepted [201, p. 25]: they have to be useful, provide some well-defined utility, be
reliable, compatible, cost-efficient and socially acceptable. The concept of utility
as a measure of how systems improve user performance has been considered an
[Figure 2.4 shows the TPC model as a diagram: task characteristics (non-routineness, interdependence, job title) and technology characteristics (particular systems used, department) feed into Task-Technology Fit, operationalized as user evaluations of 8 factors: data quality, locatability, authorization, compatibility, timeliness, reliability, ease of use/training, and relationship with users. Together with utilization (perceived dependence), Task-Technology Fit influences performance impacts (perceived impacts).]
Figure 2.4: Task/Technology Fit and related concepts in Goodhue's and Thompson's TPC model relating technology to performance [114].
specific aspect of system engineering that is separate from usability engineering and has been explored in some detail for systems in general and group decision
support systems in particular.
One of the concrete theoretical constructs that have been developed to reason about the relationship between users, technology and performance is the
Technology-to-Performance Chain (TPC) by Goodhue and Thompson [114] (Figure 2.4), which attempts to describe how the concept of Task-Technology Fit (TTF)
influences the utility of technology used as support. The TTF, as introduced by
Goodhue and Thompson, is composed of 8 user-determined factors, and is one of
four components that indirectly or directly influence the performance impacts of
a system. The other three described in Figure 2.4 are Task Characteristics, Technology Characteristics and Utilization. Goodhue and Thompson address how users report their individual performance as experienced by the users themselves in broad
terms. That is, they noted that users described information systems as important
and valuable to them in the performance of their jobs if and mostly if they also
described the information system favorably with respect to the 8 TTF factors.
In a study of decision support systems as well as of other support systems for
groups of decision makers, Zigurs and Buckland [270] also characterize the relationship between tasks and technology with the concept of fit, although they do not
relate this to the same factors and concepts as proposed by Goodhue and Thompson. Instead, Zigurs and Buckland operationalize fit as the degree to which a computer support system provides adequate support for the class of tasks it is designed
to support. Adequate support is defined as support matching the “Fit Profile”
with respect to the three technology dimensions: communication support, process
structuring support and information processing support pertaining to each class of
tasks. Their task classification introduces four task types, each defined according to its inherent degrees of freedom. Using
their classification scheme, military command and control tasks at the operational
and strategic level, as well as a lot of corporate strategic decision making, would be
considered the least structured task type (fuzzy tasks). For such tasks, Zigurs and
Buckland argue that technology that supports communication, process structuring and information processing would fit well. This, however, does not narrow
the design space for researchers working on decision support for higher levels of
command and control functions. Moreover, some researchers have claimed that
even for straightforward tasks such as scheduling, humans tend to re-define goals
and conditions as they discuss the problem at hand, which calls for decision support systems that go beyond the mere structuring of processes [34].
Zigurs and Buckland view tasks that are not well structured, in terms of the relationship between required inputs and purported outputs of the planning process,
as inherently fuzzy tasks that lack structure. In their treatment of how technologies fit tasks, the relationship between fuzzy tasks (using their terminology) and
technology is not as clear as the relationship between other classes of tasks and
technology: they advocate that tasks demonstrating less structure would benefit
from all three types of technological support systems, in contrast to the argument
made by Andriole [12], namely that decision support systems are generally less
useful in complex tasks, which was also the conclusion reached by Goodhue and
Thompson [114].
Although C2 researchers in general may not have characterized command and
control in general as consisting of well-structured problems, the characterization
of C2 as fuzzy may not be productive for guiding decision support system construction. If we describe cognitive tasks in C2 as simply fuzzy, then we may interpret
the results of the studies on technology fit as an indication that we should not even
consider building decision support systems for C2 . That, in turn, is a claim that is
both very general and not very helpful.
Therefore, to find guidance in the construction of decision support in command and control, some researchers have considered other ways to characterize
command and control and the use of technology in dynamic crisis situations. In
these characterizations, the adaptiveness and improvisation capabilities of staff are
vital components for efficient as well as effective responses and are key to describing the complex behavior displayed by command staffs. Mendonça and Wallace
[182] characterize improvisation in crisis management as understanding the inherent
variability of a plan and finding invariants that allow planners to substitute one resource for another if conditions call for alterations to the plan. The authors also
propose to use cognitive models of improvisation as the foundation of support tools
for commanders [181], in accordance with the cybernetic tradition of using computational models of human cognition as the basis for computer-based support systems.
The construction of support systems in military planning has been guided
mostly by generalizing and extending technical research on automated problem-solving systems and adapting them to solution domains that are believed to be
relevant to military C2 [13, 41]. However, despite a strong technical imperative,
some researchers have used cognitive models of decision-making as a basis for
validating the behavior of computer agents used in simulations of military scenarios (see, e.g., [75]). Due to a strong AI heritage, most military decision support
systems have been deployed in domains that have been considered similar to the
restricted domains of basic AI research, and little consideration has been given
to the nature of decision making processes. For example, most course of action
(CoA) generation systems have been modelled on automated planning systems.
In the following sections, we provide an overview of a sample of intelligent
decision support systems that have been devised for command and control. In
Section 2.4, we present a set of support systems for plan generation. Sections
2.7 and 2.8 present the critiquing approach to decision support, how it has been
deployed for military C2 scenarios, and the formal knowledge requirements when
devising such systems. In Section 2.9, we present the ontology-backed Semantic
Desktop concept, which we explore for the purpose of information management
support in case study 2. Finally, in Section 2.10 we present the Machine Learning
AI paradigm, which has only to a limited extent been applied in C2 settings but
which is studied more closely in case study 3 for the purpose of communication
analysis.
2.4 Automated planning
Planning systems are here defined as systems designed to produce a formally
sound, partially ordered sequence of actions that starts at an initial state and ends
when a final, desired goal state has been reached. During the course of planning, that is, producing the set of actions necessary to attain the goal, an approach
based on automatic planning may cede control to a user who fills in the information required to produce the final plan, in which case the system is called a mixedinitiative planning system. Other systems have capabilities that include support for
execution-monitoring or communication other than planning, which we shall call
combined systems. These systems represent both research prototypes and working
systems employed by armed forces for planning and monitoring aspects of military
operations.
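Under this definition, a minimal planning system can be sketched as a forward state-space search over STRIPS-style operators with preconditions and effects. This is an illustrative toy of our own, far simpler than the fielded systems discussed below; the logistics domain is invented, and the search produces a totally ordered plan for simplicity.

```python
from collections import deque

# Minimal STRIPS-style forward planner: states are frozensets of facts;
# each operator is (name, preconditions, add-effects, delete-effects).
def plan(initial, goal, operators):
    """Breadth-first search from the initial state to any state
    satisfying the goal; returns the action sequence, or None."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, actions = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return actions
        for name, pre, add, delete in operators:
            if pre <= state:                   # operator applicable
                nxt = (state - delete) | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [name]))
    return None

# Invented transport domain: load cargo at A, drive to B, unload.
ops = [
    ("load",   {"at-truck-A", "at-cargo-A"}, {"loaded"},     {"at-cargo-A"}),
    ("drive",  {"at-truck-A"},               {"at-truck-B"}, {"at-truck-A"}),
    ("unload", {"at-truck-B", "loaded"},     {"at-cargo-B"}, {"loaded"}),
]
print(plan({"at-truck-A", "at-cargo-A"}, {"at-cargo-B"}, ops))
# prints ['load', 'drive', 'unload']
```

A mixed-initiative planner, in these terms, would pause this search to let a user supply missing facts, choose among applicable operators, or relax the goal.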
2.4.1 JADE
JADE is an operational system for force deployment planning, and especially for
force composition [191]. Force composition can be defined as the problem of determining which military units should be assigned which tasks in a battle plan,
depending on the abilities and the constraints that can be assigned to each unit.
JADE integrates data on force capabilities from software called the Force Management and Analysis Tool (ForMAT), which can be used to create custom-tailored
groups of military units for solving individual assignments. These may consist of
command units, maintenance units, defensive units and units with different offensive characteristics. Together, these units may form groups suitable for particular
tasks. The problem that JADE solves involves assigning optimal groups for larger
operations where forces may support one another and have different assignments
over time. To accomplish this, JADE provides an interface to several legacy data
sources to provide information on airports, seaports, military locations and military units. The information that JADE receives from these data sources, along
with a user’s intentions regarding mission goals, are fed to an automated planner
that creates final mission plans.
[Figure 2.5 shows echelons of commands whose Plan, Think and Do activities are linked through common planning data: plans and feasibility, objectives and constraints, and resources and schedules.]
Figure 2.5: A distributed, collaborative planning process as described by ARPI.
2.4.2 CTAPS
The Contingency Theater Automated Planning System (CTAPS) is the collective
name of a group of components for providing an air tasking order, which coordinates air missions that are to be performed over a relatively short time period
[112]. The CTAPS system has been fielded within the US Air Force and, similarly
to JADE, has been used to integrate the various steps in a process involving manual work and rigorous procedures of calculating flight schedules for airborne missions. The CTAPS system has evolved from a tool designed to create a prioritized
target list into a comprehensive software suite for producing complete plans for air
missions, called the Theater Battle Management Core System (TBMCS) [146]. At
its heart, CTAPS provides software for deciding what targets to attack with air
units. The target lists that become the end result of the process of using CTAPS
are usually subject to change during execution, either because of
changes in evaluations of priorities, or because new information that makes some
targets less interesting than others becomes available, or because breakdowns or
losses force planners to re-evaluate their plans. The authors of CTAPS outline
some suggestions for how re-planning in such circumstances could be achieved.
This includes the option of letting units coordinate their own efforts to a larger degree, and to have more freedom and control over how to manage conflicts in case
of re-planning needs. Interestingly, the option of providing such freedom, which
would be in line with the vision of Network-Centric Warfare, was, at the time
when CTAPS was constructed, considered too expensive in terms of the required
bandwidth for communication between units on the same level in the chain of command. Instead, the authors describe how CTAPS could be made even faster as a
planning system so that incremental changes could be performed during missions,
or that such changes would have to be postponed until the next planning cycle due
to the costs of canceling operations that have already begun.
2.4.3 ARPA-Rome Planning Initiative
The ARPA-Rome Planning Initiative (ARPI) was a research effort intended to
radically enhance the capabilities of automatic planning and scheduling so that
these research areas would address real-world problems faced by military commanders [35]. The initiative was divided into three tracks: a research track, a technology transfer track and a demo track [142]. The research track was focused on
basic AI research efforts on large-scale, realistic problems faced by military commanders. The program also recognized that the model of crisis management as a
sequential process akin to the OODA loop, would not be appropriate for describing the complexity of a distributed, collaborative planning process (Figure 2.5).
Instead, due to the nature of planning as a collaborative venture in which shared
data sources are used, manipulated and populated during the planning process by
several command functions, the initiative stressed that AI systems deployed for
helping commanders in their work should support collaboration. The research
track aimed to bring metrics-based evaluation to applied AI research, with a focus
on large-scale, shared problems. The project also recognized that finalized plan
products were not the central aim of the command process, but that communication of intentions and rationale was more important. Specifically, the products
that emerged from the initiative were technological showcases of new planning
and scheduling systems that generated plans with new abilities to visualize the sequence of actions that form a plan and use previous cases as the basis for new plans.
Several specific applications were developed as Integrated Feasibility Demonstrators (IFDs) that provided automated planning capabilities for force deployment,
evacuation operations, and transportation planning.
2.4.4 ACPT and O-P3
As one part of ARPI, researchers created a planning tool for air campaigns by using an approach they called a Decision-Centered Design Approach, which is supposed to
incorporate human judgments in the planning process. The resulting tool, ACPT
[184], was developed as an attempt to make the assumptions and human heuristics used in planning explicitly known to an AI planner. One such assumption that
could be made of the planning process was that planning is a collaborative activity
with shared representations that are manipulated, created and dissected continuously by various parties. Most importantly, one conclusion from the ACPT project
was that assumptions made as part of the planning process must be kept visible at
all times to remind people of the conditions assumed to hold. By “eliciting information” from human planners, the ACPT researchers assumed that human evaluations of plans would become more transparent and possible to encode as part of
a plan critic.
In the Open Planning Process Panels system (O-P3 ) [172], Levine, Tate, and
Dalton developed a system for collaborating during the planning process and for
sharing different parts of the products with one another in a manner defined by
the Planning Process Panels. Using the O-P3 model, the ACPT planning system
was adapted for collaborative planning. The augmentation of ACPT had three
purposes:
1. To make explicit which part of the planning process every user was in,
2. allow users to compare and evaluate planning products, and
3. control the next step in the planning process, given the information from the
current step.
The ACPT planning system was used to automatically create a comparison matrix of different military Course of Action (CoA) plans, together with a flowchart
tracking the military planning process used to create plans. Following the test case
of air campaign planning, the O-P3 model has been extended and used for casebased, collaborative, mixed-initiative planning for emergency management [255,
210] under the name I-X Process Panels.
2.4.5 HICAP
HICAP [192] was an integrated environment, with several planning aids, which
let users create a hierarchical task network through a task network editor prior
to planning, so that an HTN planner could plan accordingly. The system also
included a conversational case-based planning tool called NaCoDAE [45]. Through
the task network editor, the user was guided through a set of questions about how
a task might be planned and what options there were.
The authors of HICAP were guided by a set of requirements that they had
found necessary for the success of a planning tool. They
claimed that a planning tool should:
• be doctrine-driven, so that doctrine task analysis would guide the plan formulation process,
• be interactive, letting the human planner edit a plan interactively,
• provide case access to previously created plan segments, and
• perform book-keeping of task responsibilities for the force elements available for planning.
These requirements were primarily justified by reflections on earlier system
designs and not based on literature on decision making or planning.
2.4.6 CADET
CADET [153, 118, 155] was a planning tool devised by Kott, Rasch, and Forbus
that was used experimentally in an integrated environment featuring several different views and input mechanisms, as well as a mixed-initiative planning system.
It was a planning application that could act on its own as well as be guided by
a user. At its core, it consisted of an automated AI planner that used task decomposition as its main strategy for planning, much like Hierarchical Task Networks
(HTN) planners do. CADET also offered the possibility of either creating a plan
in a mixed-initiative fashion or of generating a prototype plan directly from a highlevel CoA description. With the latter option, a human planner was expected to
verify the results afterwards and correct the results of wrong assumptions. The
CADET developers found that, in an evaluation with real commanders and real
exercises as a basis, their tool compared well when put against the performance
of human planners who received no help [214, 155]. The quality of the plans produced was determined by human judges to be on par with, or even slightly better
than those produced without the help of CADET. Also, in their testing scenarios,
Rasch, Kott, and Forbus found that it took significantly less time for those who
had used CADET compared to those who had no tool support at all.
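The task-decomposition strategy at CADET's core works in the manner of HTN planners: compound tasks are recursively expanded through methods until only primitive actions remain. A minimal sketch of the idea follows; the tasks, methods and function names are invented for illustration and are not taken from CADET's actual domain.

```python
# Toy HTN-style decomposition: a compound task maps to a list of
# candidate methods, each a sequence of subtasks; tasks without
# methods are primitive and executed as-is.
methods = {
    "attack-objective": [["suppress-enemy", "assault-position"]],
    "suppress-enemy":   [["position-artillery", "fire-mission"]],
}

def decompose(task):
    """Recursively expand a task into a sequence of primitive actions."""
    if task not in methods:          # primitive task
        return [task]
    subtasks = methods[task][0]      # take the first applicable method
    actions = []
    for sub in subtasks:
        actions.extend(decompose(sub))
    return actions

print(decompose("attack-objective"))
# prints ['position-artillery', 'fire-mission', 'assault-position']
```

In a mixed-initiative setting such as CADET's, a human planner would intervene in exactly this expansion, choosing among methods or editing the resulting action sequence.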
However, there were some problems with the CADET system. Most notably,
the developers had decided not to develop the interface for presenting the results of
CADET beyond that of a table with all plan information (a synchronization matrix).
This representation proved to be troublesome for users to interpret, despite the
fact that synchronization matrices were something they were accustomed to from
their work. Kott, Rasch, and Forbus speculated that this may be due to the fact
that another agent’s ideas (CADET’s in this case) might require different visual
representations to make sense than the representations one produces for one's own thoughts. Another finding of interest was a reflection made by users in an integrated experiment where the CADET tool received information from a sketching
own thoughts. Another finding of interest was a reflection made by users in an integrated experiment where the CADET tool received information from a sketching
application (NuSketch) [100] and a natural language interpreter (CoA Statement Creator). When the users had received a plan from CADET and modified the resulting
synchronization matrix, they wanted CADET to reflect these changes in the other
views of the plan. Phrased differently, the users wanted to have all the views (the
sketching application, the statement creator and the synchronization matrix) interconnected so that NuSketch and CoA Statement Creator would not only be used to
create planning information, but also to present the resulting plans.
2.5 Mixed-initiative planning systems
Apart from the systems above, several others have been developed as planning systems for military command and control and similar applications. One class of planning systems that has aimed at overcoming the limitations of AI planners that must
encode all domain knowledge completely and consistently, is mixed-initiative planners [48, 242]. Initiative in general, and in particular what mixed initiative means,
can be defined by using different reference points. Cohen et al. [66] present an
overview of four different definitions of initiative used by designers of mixedinitiative planning systems.
• The first definition considers the flow of conversation as the object to be controlled when taking the initiative: either the user controls the dialogue by
asking questions and making demands to the planning system, or the planning system may ask the user for clarifications and resolutions to conflicts.
• The second definition relates initiative to controlling the formulation of goals
and tasks during the problem-solving process, and where the interaction between user and system is not taken into account.
• The third definition combines the first two, by defining initiative as presenting a goal to the other party in a conversation, making initiative be situated
in a conversation but contextually limited to the formulation of goals.
• The fourth definition presents a slightly more advanced view of control in
which a user takes initiative if she takes the first turn in a goal-oriented process, in which a dialogue may consist of several such processes.
Cohen presents TRAINS as an example of a system that uses the first definition (see Section 2.5.1). Traum's work on agent systems [258] is an example of the second definition, and the work of Ferguson and Allen [96] on conversational agents that act independently of the problem solver is an example of the third. In Cohen's overview, the fourth definition is offered as a possible superset of the other three that none of the reviewed mixed-initiative planning systems had implemented until then (1998).
In another survey of issues important to consider when constructing mixed-initiative planning systems, Burstein and McDermott [48] present a set of aspects that are inherent in planning systems and especially important to develop or manage:
1. Plan-space search control management, which means to coordinate how different
agents in the planning process (humans or machines) search for solutions to
planning problems.
2. Representations and sharing of plans to communicate intents and ideas. Humans
often represent plan information using only texts and graphics. Thus, to
make it easy to process the information provided by an automatic planner,
system designers need to consider a representation that is most convenient
for humans.
3. Plan revision management. Several revisions of plans may have been developed in parallel, which presents the mixed-initiative planner with a problem
of how to manage these together, by merging them or otherwise arbitrating
between conflicting revisions.
4. Planning and reasoning under uncertainty. A principal reason for an AI planner
to include a human partner in the planning process is to manage uncertainty,
but exactly how can the AI planner make use of a human’s appreciation and
management of uncertainty?
5. Learning, to ensure that an AI planner learns what is expected as output.
6. Inter-agent communication and coordination, to make sure that plan revisions and
modifications can, in fact, be communicated in a unified manner between all
agents.
2.5.1 TRAINS
One of the earliest mixed-initiative planning systems developed was TRAINS by
Ferguson, Allen, and Miller [97]. In TRAINS, users interact through multiple
modes with a logistical route planner for trains, both through a map and a domainspecific natural language dialogue. When designing the system, Ferguson, Allen,
and Miller realized that command and control scenarios, even as well-defined as
those of train transportation, were difficult to model since there are huge sources of
potentially useful information that can enter a planning problem, and it is not until
the act of planning that human planners decide on what aspects are useful to consider [97]. Also, they noted that not many completely automated planning systems
were in use for logistics planning due to the fact that the problem which was to be
planned for was often defined as part of the problem-solving process, and that option exploration was more critical to success in the planning process than creating
complete and consistent plans. In later work, Allen and Ferguson [10] proposed
the use of dialogue systems for collaborative problem solving, where the dialogue
system produces plans as a response to interactions, or where the dialogue systems
provides an interface for human planners as well as other problem-solving agents
[96].
2.5.2 Case-based plan re-use
Another early mixed-initiative planning project was the OZONE project [242]. In
the OZONE project, the researchers strove to provide support for solving practical planning and scheduling problems, something that traditional AI planners had
ignored. In their critique of AI planning systems, Smith, Lassila, and Becker stated
that AI planners forced users to adhere to the conditions and the formalism of the
planning system rather than the system following the conditions of the users. In
particular, they claimed that the iterative nature of refining constraints and goals
during planning, the integration of planning and resource allocation, and the reuse
of earlier solutions to typical problems had not been supported by previous planning support systems. To address these issues with the OZONE system, the researchers developed an ontology of concepts to be used both by different parts of
the planning system and also to reuse components of earlier planning and scheduling sessions. In their ontology, they defined the concepts of demands, activities,
resources, products, and constraints. The concept of plan reuse by seeding earlier plans to a planning system has also been tested by Gervasio, Iba, and Langley
[106] for crisis response planning and by Breslow and Aha [45] for a general, conversational plan development system (NaCoDAE).
2.5.3 Plan sketches, templates and meta-theories
Myers [193] provided an early architecture for a mixed-initiative planner that
would flesh out a user-supplied plan sketch, that consisted of partial information
about goals and constraints, into a complete plan. For users to better understand
how to influence plans, Myers suggested the use of a domain meta-theory in planning, where general concepts of domain descriptions could be made more easily
available to human planners. In a 2000 paper on domain meta-theories [194], she
suggested that user involvement in the planning process was critical for the success
of using automated planning systems, and that an ontology based on the concepts
of roles, features and measures could be used to describe the planning from a human perspective. For example, with a planning operator move with two parameters (location1 and location2), the role of location1 could be Origin,
whereas the role of location2 might be Destination. A feature could be used to
differentiate operators from one another, so that a movement operator requiring
location1 and location2 to have ports would be described as having the feature Water transport. A measure would be a partial ordering that could be applied to
features, with affordability as a measure that could value Water transport higher than
Air transport. The measure time-efficiency would provide the reverse order. These
concepts, and their intended use in a planning scenario, convey a clear conceptualization of a user’s role in planning: to direct and influence an automatic process
of using a given set of planning operators to create an ordered sequence of actions
in the form of a plan. In later work by Myers on CODA, a system to be used in
the application domain of military Special Operations Forces mission planning, the
conversational and collaborative aspects of planning were emphasized more than
domain-modeling [195]. Myers also stated that the strategic nature of planning for
Special Operations Forces would effectively prevent a complete formalization of
the domain. However, plans were seen by CODA users as useful tools to communicate ideas and they also realized that collaboration in planning with shared,
limited resources benefitted from a unified system for creating and sharing plan
fragments. In later work on the PASSAT mixed-initiative system, Myers et al. put
the user in focus by using templates of old plans, formulated as hierarchical task
networks [93], and a tool for sketching plans through the combination of fragments of previous plans into new ones [196]. All constraints encoded as part of the
domain in PASSAT had to be resolved before invoking the planner, so no violations
of constraints or unknown information would be allowed.
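Myers' domain meta-theory of roles, features and measures, described above, can be sketched with a small example. The operator and feature names follow the movement example given earlier; the data structures and the prefer function are a hypothetical encoding for illustration, not Myers' formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """A planning operator annotated with meta-theory concepts."""
    name: str
    roles: dict          # parameter name -> role, e.g. "location1" -> "Origin"
    features: frozenset  # features that differentiate operators

# Two movement operators distinguished by their features.
move_by_ship = Operator(
    name="move-by-ship",
    roles={"location1": "Origin", "location2": "Destination"},
    features=frozenset({"Water transport"}),
)
move_by_plane = Operator(
    name="move-by-plane",
    roles={"location1": "Origin", "location2": "Destination"},
    features=frozenset({"Air transport"}),
)

# A measure is a partial ordering over features, expressed here as ranks.
affordability = {"Water transport": 2, "Air transport": 1}    # higher is preferred
time_efficiency = {"Water transport": 1, "Air transport": 2}  # the reverse order

def prefer(op_a, op_b, measure):
    """Return the operator whose features rank highest under the given measure."""
    score = lambda op: max(measure.get(f, 0) for f in op.features)
    return op_a if score(op_a) >= score(op_b) else op_b

print(prefer(move_by_ship, move_by_plane, affordability).name)    # move-by-ship
print(prefer(move_by_ship, move_by_plane, time_efficiency).name)  # move-by-plane
```

The sketch shows the intended division of labour: the human planner influences the automatic process only by choosing a measure, not by editing operators directly.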
2.5.4 Other mixed-initiative systems
The Jet Propulsion Laboratory of California Institute of Technology has designed
a planning system (ASPEN) [212] specifically for the purpose of iterative repair,
when a certain subset of tasks is common to several operations but where others
need to change with changing conditions and requirements. ASPEN has been used
for managing space missions where timely and accurate planning of similar tasks
is critical. One specific capability of the ASPEN system is to identify conflicts in
resource usage and how they affect a plan [240].
In contrast to the automation facilities provided by ASPEN, with respect to
repair and constraint violations, the MEXAR space planning system by Cortellessa
et al. [68] was developed specifically to foster trust in the system by adding a
new interaction module that was supposed to more closely model the workflow
of a human planner. The authors also argue that transparency and usability were
two important concepts for user acceptance of artificial support systems in their
domain. MEXAR utilizes a constraint solver that can use different strategies to
improve given solutions to planning problems. Cortellessa et al. also note that, in
contrast to other domains where automated planning and scheduling have been
deployed, planning decisions in space planning are always taken by humans and
only low-level activities are allowed to be automated.
The SIADEX and CHARADE systems are examples of emergency management systems developed for helping people contain and manage wildfires [95, 20].
CHARADE was the first of these, and featured a store of earlier plans for reuse and a constraint reasoning engine for outlining the possibilities for action and
the temporal constraints. When describing SIADEX, Fdez-Olivares et al. characterize the conditions for building a system for crisis management as fundamentally
different from those that govern the construction of space mission planners such
as ASPEN. One reason for this is that the users of SIADEX would be unlikely to understand the formal planning concepts required to use a system such as ASPEN, which made it necessary for the system to have extensive capabilities for translating planning domain concepts into those relevant to crisis management. One of the main contributions of SIADEX was that it allowed
users, who were not familiar with formal concepts in automated planning, to add
new knowledge to the domain used by the planning system.
One mixed-initiative planning system for command and control, ComiRem
[241], could be defined as a constraint manager as well as a planning system. The
system contained a set of known time and resource constraints for the conduct of
special forces missions and allowed a user to enter information about when certain
tasks should be completed. The system was used to maintain temporal restrictions
among the tasks given by the user, and performed continuous checks on the feasibility of the plan as the user conducted structured modifications. The range of
possible modifications was determined by the internal model that ComiRem maintained over the possible degrees of freedom in the scheduling of special operations.
In the Weasel Course-of-Action (CoA) planning system, user modifications to
the internal model and constraints were considered potentially debilitating [129]
to system performance, as the designers of the system believed that such modifications would result in making the user interface incomprehensible. Weasel was
designed to model an enemy’s possibilities for an action in response to a plan developed by a user. The authors of Weasel note in particular, that support systems must
require only minimal extra effort when used compared to when they are not used,
but assume that there will always be an extra cost incurred by a planning system.
They also caution against the use of planning systems that model only parts of the
domain of work, as this can result in brittle plans developed without taking all domain constraints into consideration. Furthermore, they
claim that the ability to create domain models of human problem solving depends
on whether users are accustomed to reasoning about their own mental processes
in a manner that would make the models suitable for representation in a planning
system.
2.5.5 Evaluations of Mixed-initiative systems
Most mixed-initiative planning systems have been evaluated in a manner unique
to each system. To unify the evaluation criteria used for mixed-initiative systems
and incorporate the effectiveness of the human-machine configuration as a whole,
Kirkpatrick, Dilkina, and Havens [149] proposed a framework of evaluation criteria that was centered on the context of use. In their framework, they claim that
domain properties such as “high dynamism” should imply the system requirement
“rapid solution revision”, meaning that the system should offer solutions, but that
in cases of rapidly changing domains, the system should be optimized to offer fast
revision capabilities.
2.6 Research issues in Planning
Several of the descriptions of automated or mixed-initiative systems provided in
Sections 2.4 and 2.5 include propositions for desirable system properties. These
properties have been derived from the process of constructing each system and
reflecting on their performance or effect. In some cases, researchers describe how
their systems may serve users to understand a situation and communicate ideas
(e.g., [195, 96, 255]). In the discussion of performance or effect, however, none of
the projects include decision-theoretic references to reason about the relationship
between the support tool and its interaction with human planners, or how they
have been able to isolate issues pertaining to the specifics of the AI components of
the system compared to all other factors that determine system
impact, such as fit [114, 270] or usability [201]. For instance, whether “rapid solution revision” would be appropriate to have as a development target or not depends
on whether a support system should be at all concerned with generating solutions
to complex problems [78]. Some researchers on human-machine interaction argue
that, rather than viewing the problem of how to design computer-based support
systems as a problem of allocating cognitive functions optimally between men and
machines, we should view it as a problem of how the support system can best inform the user of a situation, including all constraints and relationships encoded in a
computer model, so that the human decision maker is best equipped to make the
decision [79].
2.7 Critiquing
Critiquing systems are another class of support systems devised to complement human decision makers instead of replacing them, as earlier expert systems had attempted to do [200]. One of the principal arguments against expert systems in the
1980s was that these systems attempted to incorporate all the procedural and relevant domain knowledge required to automatically diagnose and solve problems in
expert domains, such as medical diagnoses [117].
2.7.1 Foundations of critiquing
The main underlying assumptions that guided the development of early
knowledge-based systems, so-called expert systems, came from the cybernetic
tradition of viewing knowledge as something that can be encoded as packages
and transmitted verbatim between sender and recipient [265]. With a formal
knowledge base, that is, a distillation of human knowledge in digital form, expert
systems builders conjectured that they could build artificial experts, capable of emulating human reasoning processes, by creating functionally equivalent symbolic
processing, as proposed by Simon and Newell [237]. In the cybernetic tradition,
knowledge is
• composed of universally true pieces of information that are interpreted by
agents to form larger structures, and
• separated from the subject knowing something.
By building reasoning agents that encode the models we use to describe our
own reasoning processes, and by making our everyday assumptions about our
world explicit in the form of knowledge bases, we can verify many of our earlier
claims about how human beings function in a manner not available previously.
Daniel Dennett, cognitive scientist and philosopher, has claimed that “computers
keep you honest in a way that philosophers have been hankering after for a long
time” [37], alluding to the potential of verifying models of human cognition that
simulated reasoning agents can provide us with.
Such a view of knowledge and reasoning contrasts with the view that knowledge is coupled to the subject knowing something and that it is intrinsically nontransferrable [84]. According to this latter view on the concept of knowledge,
human beings are intimately coupled to the world in which they live, and think
through the artifacts they use and the context they live in. Concepts in the mind
of one person, made concrete through the action of speaking and writing, set in
motion a process through which the concepts acquired by another individual are
changed, although not in a well-defined relation to the concepts in the original
speaker’s mind, as the cybernetic tradition would have it. Dreyfus [83] used this
line of reasoning to launch a general critique against AI approaches to knowledge representation, knowledge reasoning and problem solving, and argued that
the very attempt to divide the world into concepts relevant to reason about and those that are not is fundamentally flawed. Human reasoning, according to Dreyfus, cannot
be de-coupled from the reasoning apparatus that performs it, and artificial, true intelligence would therefore require complete, low-level emulations of the physical
functions performed by the human brain. Apart from such radical critiques against
the general aspirations of artificial intelligence as a discipline, even the more moderate critiques of expert systems have noted that human knowledge is difficult to
encode [117] for general application, but have maintained that systems for identifying clearly defined errors in reasoning about well-defined tasks could be helpful
in notifying people of what they might have overlooked [124].
2.7.2 Expert critiquing systems
A critiquing system makes the user aware of faults, possible improvements that
can be made, or of disregarded stimuli. As a concept for such systems, critiquing originates from research on Expert Critiquing Systems [232, 124], where researchers developed means for physicians to evaluate their courses of action before
proceeding with medical treatment [183]. These critiquing applications were developed as an improvement of expert systems (i.e. automated problem solvers that
suggest patient treatments given descriptions of symptoms, for instance) to overcome limitations with automated systems [231, 62, 159]. It was recognized that
establishing trust [190] and dialogue [28] with users was an even more important
issue than that of performing correct inferences from large sets of medical knowledge. Critiquing systems could also be seen as an application of Simon’s description
of bounded rationality when choosing between options [235], according to which
humans apply heuristics and simplification processes to cope with uncertainty and
information overload. These simplification processes have been described as responsible not only for human ingenuity in dealing with complex problems, but also
for human errors in reasoning [232]. Schwenk [226] describes the simplification
processes as consisting of four principal components:
1. Prior hypothesis bias (you do not trust data if it violates prior beliefs)
2. Adjustment and anchoring (you adjust your beliefs according to new data,
but not enough to account for all observations)
3. Escalating commitment (you do not re-evaluate courses of action once they
are under way but invest more heavily in them in the belief that you err on
the side of commitment, not of judgment)
4. Reasoning by analogy (you misinterpret a problem as being one you have
already seen and do not consider options that you cannot interpret as being
similar to others you have chosen previously)
These are similar to the categories of errors listed by Silverman [232] and have
been fundamental to the design of critiquing systems. Although the assumptions
made about cognitive biases, upon which critiquing systems are based, are
claimed to be universal to human cognition, only a few systems (e.g., [59]) explicitly refer to them in the justification of specific support mechanisms [16].
Expert Critiquing Systems are, in essence, relaxed versions of expert systems
that only analyze user solutions to problems and notify users if their solutions are
potentially dangerous or wrong. By allowing users more control over the problem-solving process, researchers noted that expert critiquing systems were much easier to use and trust than the expert systems they were based on, although few empirical investigations were conducted to establish the effectiveness of critique [233].
[Figure 2.6 shows a tree: a problem-solving level (“Write a decision paper”) decomposes into a task level (“Write Criteria”); a cue level lists normative cues (Integrated, Operational, High Level, Measurable, Other); a human bias level pairs these with likely errors (Stand-Alone, Technical, Too low level, Not measurable, Other); and a strategy level clusters influencer and debiaser modes such as Hint, Tutor, Check over-specificity, Attempt Generalization, Show Analogs, Show defaults, and Explain Effects (E) & Causes (C) Upon Document Validity, As a Repair Action (A).]
Figure 2.6: An example of an expert critic ontology according to Silverman and Wenig
[234].
Silverman and Wenig [234] outlined a method for constructing expert critics
that mostly involves the knowledge acquisition process. In their presentation of a method for acquiring knowledge about human problem solving, they outline five hierarchically ordered types of questions:
1. Domain Task-Structural Questions (e.g., what are the types of tasks that
occur?),
2. Normative Cue Utilization Questions (e.g., what is the relevant universe of
cues?),
3. Missing Concept or Misconception Questions (e.g., what cues are experts
focusing on, and which of those are relevant?),
4. Influencer, Debiaser Tutor and/or Director Questions (e.g., how should the
system better cue the user to dampen his bias?), and
5. Fine Tuning Questions (e.g., how should bias and strategy information be
modulated and presented to enhance the collaborative relationship?).
When used during the knowledge acquisition process, these questions would
provide the foundations for building sound critiquing systems according to Silverman and Wenig. As an example of how these questions could result in a specification for an expert critic, Figure 2.6 shows a tree where answers to each type of
questions are ordered according to the sequence outlined. The scenario described
relates to writing decision papers that explain the rationale for procuring military
equipment and also provide justifications of the evaluation made as part of the decision. The relevant cues for determining that a decision paper is correctly written, as
determined by an expert on writing procurement documents, are categorized with
the adjectives Integrated, Operational, High Level and so on. Paired with each
of these normative cues are possible human errors that are likely to be committed,
and these in turn are coupled to strategies for repairing each type of error.
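The coupling of cue, bias and repair strategy in Figure 2.6 can be sketched as a small lookup structure. The particular cue-to-bias-to-strategy pairings below are read from the figure's layout and are illustrative assumptions, as is the critique function; neither is Silverman and Wenig's own notation:

```python
# Each normative cue is paired with the human error likely to be committed and
# a repair strategy, mirroring the cue, bias and strategy levels of the ontology.
CRITIC_SPEC = {
    "Integrated":  {"bias": "Stand-Alone",    "strategy": "Show Analogs"},
    "Operational": {"bias": "Technical",      "strategy": "Hint"},
    "High Level":  {"bias": "Too low level",  "strategy": "Attempt Generalization"},
    "Measurable":  {"bias": "Not measurable", "strategy": "Show defaults"},
}

def critique(observed_cues):
    """Return a repair strategy for every normative cue missing from a draft."""
    return [
        (cue, spec["bias"], spec["strategy"])
        for cue, spec in CRITIC_SPEC.items()
        if cue not in observed_cues
    ]

# A decision-paper draft exhibiting only two of the four normative cues:
for cue, bias, strategy in critique({"Integrated", "Operational"}):
    print(f"Missing cue '{cue}' (likely error: {bias}) -> {strategy}")
```

The point of the sketch is that, once the knowledge acquisition questions have been answered, the resulting critic is largely a table lookup from missing cues to debiasing strategies.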
Given this outline of questions to guide the knowledge acquisition process in
constructing expert critiquing systems, the domains in which expert critiquing systems are relevant are the ones in which these questions make sense and to which
domain experts could provide clear answers. In the context of command and control systems, critiquing systems have been deployed mainly to support the production of plan documents. We provide examples of such systems in the following
sections.
2.7.3 INSPECT/EXPECT
INSPECT was an air campaign planning tool with critiquing support [260] and
was primarily used to test plans for structural correctness with respect to decomposition of tasks and resource assignments. As a basis for its critiquing, it used
knowledge entered through the EXPECT knowledge capture tool. It used a textual and form-based representation of plans and presented the user with a dialogue-like textual interface. The cues about air campaign plans, which INSPECT was
responsible for bringing to the user’s attention, concerned the structural correctness of the products involved. INSPECT was developed as an add-on to ACPT
(see Section 2.4.4) and provided an inspection of issues that related to the completeness of an air campaign plan. As examples of how an air campaign plan could
be inspected and criticized, the authors of INSPECT describe a set of issues that
INSPECT tries to resolve:
• Objective with no child, meaning that a goal (node) has not been decomposed
into tasks to perform to achieve the goal,
• Objective with no parent, meaning that a task (node) does not serve a higher
objective,
• Objective has no measure of merit, meaning that it has not been stated how to
evaluate when the objective is reached,
• Objective with no structured specification, meaning that the objective is not defined
using the language known to INSPECT, and it can therefore not be further
evaluated,
• No objective fulfilling the basic tenets of air power, meaning that there is no description of how to achieve one of the basic prerequisites for conducting air
operations,
• Objective with too many parents, meaning that a task relates to too many higher
objectives, possibly indicating a vaguely formulated task in need of specification
• Incompatible sequence restrictions, meaning that the sequencing of tasks may be incompatible, so that tasks A and B would each have to precede the other simultaneously
[Figure 2.7 shows a tree: the root Critique branches into Constraint check, Resource check (refined into Discrete, Capacitated, Reusable, and Consumable resource), Plan structure (refined into Complete plan, Complete statement, and Clear statement), Link check (refined into Causal link, with Correct causal link and Missing causal link), and Entity-level critique.]
Figure 2.7: A critic evaluation ontology as described by Blythe and Gil [38].
• No primary aircraft available for an objective, meaning that resources that should
primarily be used for the stated objective are not available,
• Incoherent decomposition, meaning that a task is described using more general terms than the parent objective it is related to.
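Several of these checks operate on nothing more than the parent/child structure of the objective tree. The following sketch illustrates that idea with invented objective names and an arbitrary parent threshold; it is not INSPECT's actual implementation:

```python
def critique_plan(objectives, roots):
    """Flag structural issues in an objective tree, in the spirit of INSPECT.

    `objectives` maps each objective name to {"parents": [...], "measure": str or None};
    `roots` names the top-level goals that legitimately have no parent.
    """
    children = {name: [] for name in objectives}
    for name, obj in objectives.items():
        for parent in obj["parents"]:
            children[parent].append(name)

    issues = []
    for name, obj in objectives.items():
        if name in roots and not children[name]:
            issues.append((name, "objective with no child"))   # goal never decomposed
        if name not in roots and not obj["parents"]:
            issues.append((name, "objective with no parent"))  # task serves no goal
        if obj["measure"] is None:
            issues.append((name, "objective has no measure of merit"))
        if len(obj["parents"]) > 3:                            # arbitrary threshold
            issues.append((name, "objective with too many parents"))
    return issues

# A toy air campaign plan with two structural defects:
plan = {
    "Gain air superiority":  {"parents": [], "measure": "enemy sorties per day"},
    "Suppress air defences": {"parents": ["Gain air superiority"], "measure": None},
    "Refuel tankers":        {"parents": [], "measure": "sorties supported"},
}
for name, issue in critique_plan(plan, roots={"Gain air superiority"}):
    print(f"{issue}: {name}")
```

Note that the critic only reports issues; deciding whether a flagged structure is actually an error remains with the human planner.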
2.8 Knowledge Acquisition for Critiquing
INSPECT provides a concrete example of a system that attempts to help users correct structural errors in air tasking orders. Other approaches to provide critiquing
as a means of decision support have studied what knowledge it is actually useful to
model as part of the critiquing knowledge base, and how to extract it. Specifically
related to Course of Action critiquing, Blythe and Gil [38] present the EXPECT
method of eliciting expert knowledge. They describe a method for adding new
critiquing knowledge through a generic set of evaluations which can be extended
with new evaluations for each domain. They describe how the structure of evaluation follows a generic pattern, much as Silverman and Wenig [234] state that the
structure of a constructed critic follows a particular, hierarchical pattern.
Figure 2.7 describes a critic evaluation ontology as proposed by the problem
solving method in EXPECT. A notable difference compared to Figure 2.6 is that
the relationship between correctness and incorrectness is less straightforward:
missing causal link and correct causal link are both related to the concept of a causal
link. In addition, the ontology for how to provide feedback and evaluate plans is
treated as a completely separate ontology and is not defined in relation to what to
evaluate. Although still using the concept of an ontology for evaluating plans and
products, the approach demonstrates an alternative strategy for knowledge acquisition.
In the Asgaard project [227, 228], ontology researchers built a support tool for
critiquing in medical settings, which was based on an ontology of medical guidelines and patient records. They also describe a set of questions that need to be
answered as part of the knowledge acquisition process, although these questions
are ordered according to a set of corrective tasks that the system should perform,
such as the verification of a guideline, the applicability of guidelines given a patient, the recognition of intentions and so on. They also define a strict ontology for
when certain parts of a medical treatment plan should be put into action and when
not.
2.8.1 The HPKB and RKF Programmes
In response to the difficulty early knowledge-based systems had in providing
enough common-sense guidance to commanders in military command and control, DARPA launched first the High Performance Knowledge Bases (HPKB)
and later the Rapid Knowledge Formation (RKF) research programmes, which
aimed to radically improve the knowledge bases available to both existing and future decision support systems. As a result of the programmes, ontologies [109],
ontology construction systems and rule construction systems tailored to command
and control were developed.
Both research programmes were highly technical in their focus, driving research on artificial intelligence systems through the construction and evaluation
of ontology- and rule-related systems. One of the first systems created as a result
of the HPKB initiative was Disciple, a knowledge-acquisition system developed
for military applications and especially for the analysis of enemy center of gravity
[256]. Disciple was essentially a system for instantiating a general ontology of Center of Gravity concepts, given a general description of the scenario in free text.
When evaluating the system, students in an army course were supposed to enter
scenario-specific information by themselves in a custom ontology editor. The Disciple system would use scripts for asking questions to the course participants and
eliciting the assumptions they made. The Disciple system mixed natural language
dialogue with pre-defined terms that were specific to the scenario to ensure consistency, as the human expert would add steps in his or her reasoning process about
what should or should not be considered a center of gravity. Disciple could guide
an expert to produce rules on how to determine what should be considered the
center of gravity in pre-defined cases, but the rules produced were not always consistent or optimally formulated. However, the formal approach to learning about
the center of gravity was appreciated by the participants in the studies undertaken
using Disciple.
In an RKF effort to enhance rule acquisition from human subject-matter experts, Pool,
Gil and others evaluated the SHAKEN and KRAKEN rule elicitation systems for
Course of Action analysis [209, 110]. Military experts on Course of Action analysis were asked to enter their knowledge regarding how they would conduct and
evaluate suggested courses of action in the SHAKEN/KRAKEN knowledge acquisition tools. The authors noted similar issues as had been discovered in the
Disciple project, namely that core ontology concepts, such as sufficient and necessary conditions when describing classes of objects as represented in description
logics-related languages [244], were difficult to grasp for military experts. When
provided with concrete examples of how their own rules for evaluating courses of
action could be represented as rules in propositional logic, however, they could
more easily relate to the formalism required by the knowledge acquisition tools.
The rules and ontologies developed with the tool sets from these two research programmes were, in one case, evaluated only in the same context in which they were developed, and in the other, not evaluated at all. The Disciple ontology regarding what constitutes a center of gravity represented one hard-coded version
of a human expert’s evaluation and was intended to be used when training others
to use the exact same evaluation criteria, regardless of whether this would make
the course participants more adept at making strategic decisions in the face of new
situations. The SHAKEN/KRAKEN systems were evaluated on the basis of the
ease with which a subject-matter expert could produce formally sound rules, and
not on the basis of whether these rules were useful in any larger context.
The PLANET ontology, which was developed as a plan ontology for the purpose of inspecting or more efficiently generating military and other plans, was
evaluated based on the percentage of concepts in the ontology that were considered similar to concepts in existing plans developed without PLANET [109]. In
their paper on the development and evaluation of PLANET, Gil and Blythe noted
that “[O]ntologies are generally accepted to be useful just on the basis of their
existence”, which they wished to contrast with their evaluation of PLANET. Their
remark on the evaluation and role on ontologies reveals how knowledge representation has been regarded an enterprise mostly conducted by, and indeed reserved
for, computer scientists.
2.8.2 The roles of ontologies
Contemporary computer-represented ontologies are generally used to model a restricted aspect of the world, using a formal language derived from Description
Logics [21]. Ontologies have been considered a necessary prerequisite for building
systems that are capable of providing support for humans doing tasks that require
a lot of background knowledge, such as planning military operations [108, 213].
Gil [108] describes three distinct taxonomies for planning scenarios: action taxonomies, plan taxonomies and goal taxonomies. Action taxonomies can be used to relate
individual plan actions to each other, so that, for instance, Speech Acts can be declared to be special forms of Communication Acts with the special condition that the
actors involved must be capable of speaking and hearing. Using such a declaration
of action properties, an automatic planner or plan authoring tool may deduce how
plans can be constructed. Plan taxonomies can be used to classify solutions to a
planning problem (plans) by relating them to each other with respect to their solution space. That way, a planner can search more efficiently for solutions and ensure
that any given solution space is not searched more than once. Goal taxonomies are
used to relate goal statements and method capabilities in the action taxonomy. For
example, there may be no individual action that achieves the goal as expressed by
the user, in which case the reasoner needs to establish if there is possibly a set of
actions that together may achieve the desired goal state, or if several objects need
to be involved when actions are performed to achieve the goal. Such goal reformulations are possible using the EXPECT planning system [251], which allow efficient
42
2.9. Semantic Desktops
translation mechanism between goal statements in human-readable form and the
domain representation accessible to the planner.
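The role of an action taxonomy can be sketched with the Speech Act example above. The class encoding and capability sets are a hypothetical illustration, not EXPECT's representation:

```python
class CommunicationAct:
    """A generic communicative plan action."""
    required_capabilities = {"communicating"}

class SpeechAct(CommunicationAct):
    """A special form of Communication Act whose actors must speak and hear."""
    required_capabilities = CommunicationAct.required_capabilities | {"speaking", "hearing"}

def applicable(action_cls, actor_capabilities):
    """A planner can deduce from the taxonomy whether an action type fits an actor."""
    return action_cls.required_capabilities <= set(actor_capabilities)

print(applicable(SpeechAct, {"communicating", "speaking", "hearing"}))  # True
print(applicable(SpeechAct, {"communicating"}))                         # False
print(applicable(CommunicationAct, {"communicating"}))                  # True
```

The inheritance relation carries the taxonomy: a plan authoring tool can substitute any subclass wherever the general action type is required, provided the extra conditions hold.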
However, the approach of creating formal ontologies so as to statically structure taxonomies of actions, plans and goals, and enumerate all possible types of
entities and their relationships in planning, has been challenged by others. Those
who criticize the very utility of constructing formal ontologies argue that the requirements for formal correctness make the systems created too narrow to be of
real use [25]. In this respect, formal ontology construction can be contrasted with
the hermeneutic approach to knowledge, where the act of constructing knowledge
structures defines knowledge [54] and where knowledge is not so much an object on its own but rather an attribute of people knowing something. Therefore,
knowledge cannot be considered without reference to explicit users and their context [176].
In an alternative approach to building system ontologies, Fonseca and Martin
[99] argue that the very foundations for building ontologies are flawed. In a formal
ontology, different perspectives on concepts are considered to be inconsistencies,
but in human reasoning, they can enhance the appreciation of a problem.1 Ambiguities in formal ontologies must be resolved, whereas ambiguities for humans
can signify that a concept is simply ambiguous. Fonseca and Martin’s primary criticism of formal ontologies is that the domains in which they can be formulated must
necessarily be very narrow for them to be as consistent and complete as required
by formal reasoning engines, and therefore, that they risk being brittle and of very
restricted use. In their outline of how to construct systems for helping people in
knowledge-intensive tasks, Fonseca and Martin argue that ontologies should primarily be considered tools for helping people communicate, in the same way as
Winograd and Flores propose that communication tools should be developed for
helping people manage problems instead of automated reasoning tools [266].
2.9 Semantic Desktops
For the purpose of personal information management, or information management
in larger groups, researchers have taken the formal models developed in the context of the Semantic Web and applied the same models and reasoning mechanisms
in desktop settings. The Semantic Web is the name of a research vision of the
World Wide Web as more than a collection of text documents but instead, as documents with formally specified semantics that can enable a computer to reason
with rich descriptions of whatever the documents contain [32]. For the purpose of
describing the contents on the web, researchers have developed formal languages
for ontologies on the web [137, 138] that are based on Description Logics [21]. Description Logics in turn is a logical formalism for knowledge representation, based
on the two notions of a T-Box and an A-Box which describe classes and objects
in a domain. Description Logic languages come in several flavors with different
trade-offs between expressivity and inference capabilities. The most common Semantic Web language, OWL [76], exists as a set of three increasingly expressive
¹ See Section 2.1.2 for an example of different perspectives on a problem.
2. Background
sublanguages OWL Lite, OWL DL, and OWL Full, of which OWL Lite and OWL DL
are both computationally complete and decidable [40]. By that we mean that all
statements entailed by an ontology in these two sublanguages are guaranteed to be computable, and that all such computations will finish in finite time. For OWL DL, reasoning engines capable of determining class memberships and ontology soundness have been developed [122, 239]. With such reasoning engines and ontological descriptions in OWL DL of the conditions for membership
in a class, instances can automatically be classified with respect to classes of the
ontology. Also, concepts (defined in T-Boxes) that are stated differently but are semantically identical can be detected, which keeps the formal description of the ontology clean.
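The T-Box/A-Box distinction can be illustrated with a deliberately simplified sketch, written in plain Python rather than an actual Description Logic reasoner; the class names, properties, and membership conditions are invented for illustration:

```python
# Minimal illustration of the T-Box/A-Box distinction: a T-Box holds
# class definitions (membership conditions), an A-Box holds asserted
# facts about individuals. A toy "reasoner" classifies individuals
# against the defined classes.

# T-Box: defined classes as membership conditions over properties.
TBOX = {
    "Vehicle":      lambda props: "wheels" in props,
    "ArmedVehicle": lambda props: "wheels" in props and props.get("armed", False),
}

# A-Box: individuals with asserted property values (invented data).
ABOX = {
    "jeep": {"wheels": 4, "armed": False},
    "tank": {"wheels": 8, "armed": True},
}

def classify(individual):
    """Return all T-Box classes whose conditions the individual satisfies."""
    props = ABOX[individual]
    return sorted(name for name, cond in TBOX.items() if cond(props))

print(classify("jeep"))  # ['Vehicle']
print(classify("tank"))  # ['ArmedVehicle', 'Vehicle']
```

A real reasoner such as those cited above derives memberships from logical axioms rather than executable conditions, and can additionally detect that two differently stated class definitions are semantically equivalent.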
Semantic Web technologies have generated concrete applications such as search engines capable of inferring concepts related to a term specified in a free-text query [179], tools for extracting semantic concepts from web pages [128], and tools for extracting relationships between documents [94]. Apart from such applications, Semantic Web
research has brought about many technologies for annotating documents [216, 92,
91, 119] and extracting ontological concepts and instances from more or less structured text [80]. Some researchers have even used ontologies to model how regular
HTML documents that contain objects of interest to harvest are represented in the
text and structure of the document [136].
Semantic desktops are inspired by the Semantic Web in the sense that they bring Semantic Web technologies to users’ desktops [60, 224, 81].
A principal idea behind the semantic desktop approach is to support working and
reasoning with semantic entities that are normally scattered across several different
resources. One of the advantages of semantic desktops, if they are implemented as
middleware platforms that are accessible for all applications [120], is that they can
manage all the interactions and documents of a user. By building an incremental
knowledge base of patterns in user interactions and in the documents managed,
and by using a common ontology storage shared by several users, the Semantic
Desktop is argued to be a promising avenue for realizing the personal memex envisioned by Bush [49].
Some semantic desktop projects have built ontological models that include
desktop-related features such as which files are attachments of which e-mail messages [103], and interpretations of structural information (so that both the Author field of a PDF and the FROM header of an e-mail create author objects in an
ontology) [50]. Although much of the research on semantic desktops has been
technology-driven, some researchers have asked what people actually recall from
the documents they use [36] and what mental models they actually use when reasoning about objects in their desktop environment [198].
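The idea of mapping structurally different metadata fields to common ontology objects can be sketched as follows. This is a hypothetical illustration: the field names, resource structure, and concept labels are invented and do not reflect the internals of any of the cited systems.

```python
# Sketch: different resource types expose "author" information in
# structurally different fields; a mapping layer creates uniform
# ontology objects from them. All names here are invented.

FIELD_MAP = {
    "pdf":   "Author",   # PDF metadata field
    "email": "From",     # e-mail header
}

def extract_author(resource):
    """Create a uniform 'author' ontology object from a desktop resource."""
    field = FIELD_MAP[resource["type"]]
    return {"concept": "author", "name": resource["metadata"][field]}

resources = [
    {"type": "pdf",   "metadata": {"Author": "O. Leifler", "Title": "Report"}},
    {"type": "email", "metadata": {"From": "O. Leifler", "Subject": "Plans"}},
]
authors = [extract_author(r) for r in resources]
# Both structurally different resources yield the same ontology object:
print(authors[0] == authors[1])  # True
```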
2.10 Machine Learning
Automated planning systems can automate parts of the plan construction process
involved in command and control, and critiquing systems may be used to correct
for known errors when plan documents are produced. A third class of systems,
[Figure layout: Universe → Sample data → Filter → Test data; Construct model → Classifier model → Classify.]

Figure 2.8: Supervised machine learning as a sequence of three steps: filtering, model construction and classification. This describes learning from examples in batch mode with a set of training examples.
which have only to a limited extent been deployed in command and control (see,
e.g., [47]), is machine learning [185].
Figure 2.8 describes the prototypical machine learning scenario, in which an
automatic classifier builds a model of the relationships between attribute values of
instances and the decision classes they belong to. Every instance may only belong
to one decision class. Alternatively, a clusterer may infer groupings of instances, provided only with metrics for comparing attribute values, without requiring any prior decomposition of instances into groups. An instance is a multi-valued observation that the classifier or clusterer uses to build an internal representation of
instance groups. Instance attributes may be nominal (discrete and finite), textual,
or numerical and, as assumed by almost all machine learning approaches, relate
only to single instances. That is, no single instance attribute value may relate to
more than one instance, or denote another instance. Instance attributes are used
by the classifier to make inferences about the class of new, unknown instances.
The degree to which a classifier can build a model that accurately classifies known
instances is denoted the precision of the classifier. Generally, clusterers and classifiers consider only single observations and not sets of observations as the basis for
patterns. That is, models built using traditional machine learning techniques will
not be able to find inter-relationships between individual measurements but only
be able to classify single measurements as belonging to one of a discrete number of
classes. Each individual technique may have further limitations as to what specific
patterns it is likely to find.
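The batch-mode scenario of Figure 2.8 can be sketched with a deliberately trivial model construction step: a majority-class rule over a single nominal attribute. The instances, attribute names, and decision classes are invented for illustration, and "precision" is computed here in the sense used above, as the fraction of known instances the model classifies correctly.

```python
from collections import Counter, defaultdict

# Sketch of the batch-mode scenario: sample data is used to construct
# a classifier model, which is then used to classify instances. The
# model is a simple "one rule": predict the majority class seen for
# the value of a single nominal attribute. Data are invented.

ATTR, CLASS = "terrain", "outcome"  # one nominal attribute, one decision class

sample = [  # training instances: an attribute value plus a known class
    {"terrain": "urban", "outcome": "slow"},
    {"terrain": "urban", "outcome": "slow"},
    {"terrain": "open",  "outcome": "fast"},
    {"terrain": "open",  "outcome": "slow"},
    {"terrain": "open",  "outcome": "fast"},
]

def construct_model(instances):
    """Map each attribute value to the majority class among its instances."""
    by_value = defaultdict(Counter)
    for inst in instances:
        by_value[inst[ATTR]][inst[CLASS]] += 1
    return {value: counts.most_common(1)[0][0] for value, counts in by_value.items()}

def classify(model, instance):
    return model[instance[ATTR]]

model = construct_model(sample)
correct = sum(classify(model, i) == i[CLASS] for i in sample)
print(model, correct / len(sample))  # {'urban': 'slow', 'open': 'fast'} 0.8
```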
The most prominent use of classification and clustering techniques for command and control has been in the area of information fusion, where low-level sensor data are to be converted into high-level concepts, according to some accepted
standards [15]. As in the case with the creators of ontologies for the support of plan
construction, machine learning researchers tend to define problems such as sensemaking in instrumental, game-theoretic terms. By assuming that there
are crisply defined expected utilities and probabilities for success for all actions,
Arnborg et al. describe how situation awareness in command and control can be
reduced to game theory. With such assumptions, they go on to state that quality in
decision making can be crisply defined in terms of the extent to which objectively relevant information is used. The justifications for making such assumptions in the light
of research on the nature of command and control are not discussed. Apart from
their use in information fusion, machine learning approaches have been suggested
for war-gaming, that is, the process of simulating a battle between a country’s own
forces and those of an opponent [47]. Such games rely on game-theoretic, rationalistic assumptions of decision-making and on the notion that concepts such as
hostility can be modeled as numerical entities.
Machine learning techniques have also been devised for the problem of information management, both as an automated approach to process e-mail [175, 222]
and as a support for navigating document collections (e.g., [73, 19]). Machine
learning techniques have been deployed both as parts of e-mail clients for classifying messages, either into two classes (spam/non-spam) or into a fixed number of classes related to the workflow of the person using the system [157], and as components of agent-based e-mail assistants that organize messages into contextually relevant categories for users (e.g., [158, 23, 253]).
2.10.1 Information extraction
Depending on the domain, contextually relevant information may be found in the structure, the contents, or the usage of messages and documents. Mining for relevant information from large heterogeneous corpora became popular and
increasingly relevant with the advent of the World Wide Web [152], and with
applications such as personalization [85] and terminology mining [223]. Some
applications have relied on the ability to extract semantically relevant concepts
from the structure of machine-generated documents (e.g., [156, 17]) or workflows
(e.g., [105, 264]). Other projects have studied how to extract contextually relevant
terms from messages as support for finding similar conversations, by using some
metrics of similarity [160, 56]. Tracking and gathering data on user behavior has
been applied to personalization, and to this end the user interface of web services
can be adapted to the navigation behavior of a user [85].
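One such similarity metric can be sketched as cosine similarity over term-frequency vectors. This is a common choice, though the cited projects may well use other metrics; the messages and the tokenization are deliberately simplified.

```python
import math
from collections import Counter

# Sketch of a similarity metric for finding similar conversations:
# cosine similarity between term-frequency vectors of two messages.
# Tokenization is naive (lowercase, whitespace split); messages invented.

def term_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    va, vb = term_vector(a), term_vector(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

m1 = "move supplies to the northern bridge"
m2 = "supplies needed at the northern bridge"
m3 = "weather report for tomorrow"
print(cosine(m1, m2) > cosine(m1, m3))  # True
```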
Machine learning applications that require user interaction, such as e-mail tagging systems that rely on user-supplied data to complement their input and make
predictions about the classes to apply to messages, are usually not labeled mixed-initiative as planning systems are. However, similar to planning systems, machine learning systems can be categorized along two dimensions: as autonomous (rule-based) or interactive along one dimension, and as systems for supporting navigation (clustering) or systems for the classification of instances along the other.
In Figure 2.9, we present two aspects of machine learning systems: the application mode and the interaction mode of the system. Information fusion systems
[219] create clusters autonomously from low-level sensor data, with the intention
of capturing meaningful entities. Clustering sensor data can also be performed
                Application
  Interaction   Clustering            Classification
  Autonomous    Information fusion    Junk e-mail filters
  Interactive   Navigation systems    E-mail assistants

Figure 2.9: A categorization of machine learning systems according to the interaction mode and application mode of the system.
by providing training examples, in which case information fusion becomes similar
to junk e-mail filters [222], which work autonomously by utilizing large banks of
examples that provide a basis for classification as junk or non-junk e-mail. Navigation systems provide clustering capabilities in an interactive manner, so that users
can, for instance, browse large collections of documents or texts (e.g., [246, 218,
73]). E-mail assistants are examples of interactive classification systems that ask
a user for classifications and can suggest categorizing or acting on new messages
with a precision that increases with the number of messages the system
has trained on [74].
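The way a bank of labeled examples provides a basis for junk/non-junk classification can be sketched with a minimal naive Bayes classifier, one common technique for this task. The training messages below are invented, and real filters train on far larger corpora and richer features.

```python
import math
from collections import Counter

# Minimal sketch of an example-based junk filter: multinomial naive
# Bayes with add-one (Laplace) smoothing, trained on a tiny invented
# "bank" of labeled messages.

train = [
    ("buy cheap pills now", "junk"),
    ("cheap offer buy now", "junk"),
    ("meeting agenda for monday", "ok"),
    ("monday planning meeting notes", "ok"),
]

word_counts = {"junk": Counter(), "ok": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(w for counts in word_counts.values() for w in counts)

def classify(text):
    """Pick the class with the highest (log) posterior probability."""
    scores = {}
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            # Smoothing keeps unseen words from zeroing the probability.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("cheap pills offer"))        # 'junk'
print(classify("notes for monday meeting")) # 'ok'
```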
3 Method
In this chapter, we describe the methods that we used when we studied the properties of decision support systems in command and control settings in the three case
studies of this thesis. Our work builds on the scientific traditions from several disciplines, which is why we begin with a general overview of scientific methods and
conceptualizations of knowledge in Sections 3.1 and 3.2, respectively. The theoretical background governing information systems research and the study methods
employed in our work are described in Sections 3.3, 3.4, and 3.5. In Section 3.6,
we describe the methodological components of each case study in relation to the
general methodological framework presented.
3.1 Scientific methods
Scientific results stem from successfully persuading others of an argument, where
the argument in the case of natural sciences relates to properties of natural phenomena in this world. Exactly how a method for persuading others is established,
how we can reason about the concept of method and whether or not methods in
natural science can be considered to be strictly rational and logical over the course
of history has been the subject of extended philosophical arguments [57]. Of the
different scientific disciplines, natural science in general, and physics in particular,
have been most closely studied with regard to the nature of science. Physics has
historically been viewed as a purportedly rational system of thought that, when applied to any aspect of this world, yields insights that are unavailable through other
means. Early characterizations of the natural sciences, which have been suggested
as archetypical of any rational inquiry of any state of affairs, claimed that the main
virtue of natural science lay in the inductivist stance that a scientist has. An inductivist is not (or at least should not be) biased towards favoring any particular
explanation of observed phenomena, but should have an open mind for any relationship that emerges from a set of observations. This view was challenged by,
among others, Karl Popper, who suggested that science be characterized not so
much by the absence of theories prior to observations, because theoretical frameworks frame possible objects of study in the first place, but by a willingness to
subject theories to tests. He went on to postulate that any theories not passing the tests should be abandoned [57, p. 39]. There are, however, several examples of scientists who have not abandoned theories that seemed to fail to make correct predictions, and who have afterwards been judged correct in doing so. An incorrect experimental setup may be a more probable explanation of a failed prediction than a false theory, so the naïve falsificationist stance is not consistent with scientific work.
Not only Popper’s, but also later approaches to rationalize progress in natural science, have been subjected to the criticism that they do not capture the true
nature of scientific work [98]. In particular, Feyerabend, who advocates an anarchistic methodology where any means should be allowed for individual scientists
to pursue a higher understanding of our world, denies the possibility of any inherent rationality in the history of the natural sciences. In his sharp critique of the
purported rationality of Galileo and other historically important figures in natural
science, he does not attempt to explain why some results find a better resonance both with experimental findings and with other scientists [98]. However, his
critiques of how traditional scientific work has been conducted stand essentially
unchallenged, even if he does not make any claims of a prescriptive, restrictive
model for scientific progress.
Explaining scientific advances, whether in the social sciences, the natural sciences, or in design sciences such as computer science, challenges us
to understand not only the social foundations of science, but also the concept of
knowledge itself.
3.2 Epistemology
As part of the reasoning about science, philosophers have considered different
methods of characterizing knowledge. The study of knowledge is called epistemology, and we will briefly outline two contrasting views about knowledge that have
implications both for the method employed in this thesis and the representation
of knowledge in the systems we have created as part of the thesis work. The first
view is called the objectivist view, which dates back to Plato [57, chapter 10]. Plato
stated that there is an external, ideal world, which exists in parallel to our own; a
world in which universal truths hold and abstract ideas exist. Such a world does not depend on the subjects (humans) who conceive of it, but exists independently of them.
Through observing the world and reasoning about relationships that are visible,
the ideal world can be revealed to us. Both inductivism (observation should precede reason) and rationalism (reason should precede observation) can be deduced
as methods of inquiry, based on the view that observations and reason can guide us
to true knowledge. The other view of knowledge is that it is intrinsically connected
to us as subjects, and is used primarily as a vehicle of communication between one
person and another. We cannot be sure what other people actually think, but that
we can use our own knowledge to engage in conversations with others. There is no
way to actually model true and universal knowledge because, by the very definition
of knowledge as an attribute of someone knowing, it cannot be made universal (see
Section 2.8.2). Everything we see must thus be interpreted and given meaning by
ourselves. The study of such interpretations is called hermeneutics.
3.2.1 Hermeneutics
In this work, we adopt the view of hermeneutics proposed by Heidegger and later developed by Gadamer. Most relevant in the thinking of Heidegger and Gadamer
to this dissertation is the essence of interpretation. Heidegger originally suggested
that we cannot view ourselves as individuals separate from the world we live in,
but we are all part of a larger context, through which we act and even think. As
a consequence, concepts that we formulate are also situated, that is, dependent
on the context of those who name and frame the concepts [84]. Gadamer posits
that thinking, such as it occurs in the search for truth in various guises, is tantamount to conversation between people. Although Gadamer acknowledges that much
thinking can occur without it being the direct result of verbal, direct discussions,
conversations always occur in a context where previous questions are answered in
a dialogue-like manner, even if one participant of the conversation is only represented in the form of a text written long ago [125, p. 226]. All statements about
the world, all plans that are constructed and all theses presented must therefore be
understood as answers to questions, asked in a specific discursive context of some
specific human beings. One cannot make claims about them without reference to
this frame, and specifically, systems designed to assist human beings must take into
account the role of conversation in human problem solving.
Both Winograd and Flores [266] and Fonseca and Martin [99] apply
Gadamer’s and Heidegger’s philosophy of thinking as concrete vehicles for understanding the role of information systems and presenting designers of such systems
with a new orientation. In the view presented on cognition and information systems by Winograd and Flores, to understand cognition, we must understand that
the representations we use are media for managing problems, through which
we may end up resolving problems by treating them as puzzles that we solve theoretically, or by restating them in other terms that make the problem irrelevant or
trivial. Philosophically, we can regard this as a dichotomy between true statements
about the world and practical statements about the world [125], where Winograd
and Flores argue that knowledge is fundamentally about having (more or less)
practical vehicles for communication. Further, our knowledge and understanding
is shaped by the environment we find ourselves in, which restricts our reasoning about exactly how technology can be useful to us [131].
The research in this thesis has been conducted through a series of decision
support system case studies [86], in which we have analyzed a domain of work in
command and control, built a support system for the domain and evaluated its
1. purpose and scope,
2. constructs,
3. principles of form and function,
4. artifact mutability,
5. testable propositions,
6. justificatory knowledge (kernel theories),
7. principles of implementation, and
8. an expository instantiation.
Figure 3.1: Structural components of information systems design theories according to
Gregor and Jones [116].
utility. The knowledge production process of this research is therefore
characterized both by case-study research and research through the design and
development of information systems [132].
3.3 Theory in information systems research
When designing and evaluating information systems, we operate within the theoretical frameworks provided by both design science and natural science. Sometimes, the focus of study is human behavior when using the systems we construct,
and sometimes it is the system itself. With different stakeholders and different
views on the role and nature of a theory, defining unified criteria for theory is challenging, if even desirable. Basing her ideas on the traditions of both natural and
social sciences, Gregor [115] provided a classification of information systems theories. According to Gregor’s classification, theory types are ordered according to
what information they provide and the types of statements they make about the
world:
1. Analysis theories provide names to concepts and describe events taking place
in relation to the construction or use of information systems.
2. Explanation theories explain relationships between concepts but provide no
falsifiable or quantifiable propositions.
3. Explanation and prediction theories provide testable propositions about relationships concerning information systems per se or in relationship to their use.
4. Design and action theories present guidance, criteria or principles concerning
the construction and use of information systems.
Gregor and Jones have also provided a set of structural components that they
argue should form the basis of information systems design theories [116] (Figure
3.1). Some of these components are closely related to the original four causa, or
explanations of things, given by Aristotle in the 4th century BC: the causa finalis
(“purpose and scope”), the causa materialis (“the constructs”), the causa formalis
(“principles of form and function”) and causa efficiens (“principles of implementation”). However, there are also components among those proposed that are derived
from contemporary natural science and design science traditions, such as “testable
propositions” and “expository instantiations” (implementations of systems).
The contributions in this thesis can, using this framework, be described as the
analysis (theory type 1) and the design (theory type 4) of intelligent decision support systems. Based on the list of structural components proposed by Gregor and
Jones, we outline the structural components of our studies in Tables 3.2, 3.3 and
3.4.
3.4 Case Study research
Case study research is often performed in social sciences to study complex, organizational phenomena [86]. Case studies in information systems research can
relate to situated studies of information systems use [30] or analytical case studies
of technical systems regarding properties that are relevant to a domain (e.g., [9]).
We can also regard the analyses conducted in case studies 1 and 2 as primitive
forms of action research, in which the researcher engages in changing some aspect of
a work situation and reflecting on the results [58], even though the changes have not
been propagated throughout an organization and back to the researcher in these
cases.
3.5 Information Systems research
The research presented in this thesis concerns the utility of intelligent decision support systems for command and control. We explore what intelligent decision support
systems can do currently that is of relevance to command and control and what those support systems require to function (research question 1) and, given what intelligent decision
support systems can do and their functional requirements, how they should be implemented in
command and control settings (research question 2).
Information systems research combines two main research traditions: those of
the natural sciences and those of the design sciences. As a framework for combining the knowledge-building traditions of both natural science and design science
in one coherent framework, March and Smith [177] suggested that the activities
and research outputs may be organized as in Table 3.1. One important activity
in design research is to build systems for the purpose of evaluating their utility. To
understand utility, we need to make sure that systems are not just internally sound,
but relevant to the domain of work. In one characterization of how design science
can be made relevant to practice, Hevner et al. [132] outline a research process
shaped by an environment in the form of people, organizations and current technology, and a knowledge base in the form of theoretical foundations and methodologies for conducting research.

Table 3.1: A framework for information technology research, adapted from March and Smith [177].

                       Research Activities
  Research outputs     Build    Evaluate    Theorize    Justify
  Constructs
  Model
  Method
  Instantiation

According to this model, information systems research is
driven by the construction of theories and artifacts that are justified or evaluated
through one of the methodologies available in the knowledge base, and in which
the researcher operates. The degree to which the environment is taken into account defines the relevance of the research, and the degree to which methodologies
and formal knowledge bases are used defines rigor. In this thesis, the three case
studies place different emphasis on relevance and rigor, although they
all attempt to strike a balance between being qualitatively new systems, which offer insights into criteria for building intelligent decision support, and being systems
that are of high relevance to the most pressing tasks for commanders. In several
studies of decision support systems, the nature of decision making has not been
given as much consideration as the technical aspects of systems devised [16]. In
this thesis, however, we attempt to combine traditional design science methods of
iteratively developing prototype systems with a consideration of what contemporary characterizations of command and control work imply for the design criteria.
3.5.1 Prototyping and iterative development
The first two case studies were built and evaluated as prototype systems [111]. In
each case, the domain descriptions in written form were given as scenarios that had
either been produced during, or been the basis of, command and control exercises.
We conducted the first case study in two stages to produce a prototype of a combined plan authoring support tool, ComPlan. ComPlan can also be regarded as a
provotype, that is, a prototype intended to stimulate discussions and improve our understanding of some phenomenon rather than to provide the basis for a product [186]. In developing decision support systems, it is imperative to adopt iterative development
methods, where system prototypes successively lead to more refined approaches,
and where evaluations of prototypes are adapted to the stage in development to
which they belong [16].
The third case study was conducted in a domain where performance metrics
were available, as researchers had already analyzed the material for which we created analysis tool support. Therefore, we performed both quantitative and qualitative
evaluations of the tool approach presented. The quantitative evaluation of machine
learning approaches was produced to delineate the feasibility of machine learning
as an approach to supporting communication analysis.
3.5.2 Evaluations
Our research questions concern the relevance that intelligent decision support systems have for command and control. To that end, it is imperative that the evaluations conducted have practical relevance to the stakeholders we address in our
work [31]. The first two studies, concerning planning and document management,
were based on studies of existing scenarios, documents and technological environments. Our justifications for the models in those two case studies came from
observations made on workflows, concrete scenarios, and documents produced in
command and control settings. The evaluations of both those systems were analytical in the sense that the qualities of both systems were compared to the features
of the domains of work, described by the scenarios and documents used. No objective quality measures for support systems in planning were available, due to
the challenge of establishing objective functions for measuring team performance,
similar to measuring quality of team sensemaking (see Section 2.1.5).
Parikh, Fazlollahi, and Verma [204] demonstrate one example of how to measure the effectiveness of a decision support system for sensemaking. In their study, individual users were to interpret historical data sets using either no decision support at all, or informative, suggestive, predefined, or dynamic decisional guidance. Given a pre-defined correct classification of data, researchers
could then measure the increased decision performance, improved learning and
reduced time required to reach decisions given each type of decision support. Although no task descriptions or details of the support systems were provided, the
general characterization of the decision task only remotely resembles tasks in military C2 , which makes direct methodological comparisons difficult.
When reasoning about the properties of systems for supporting commanders,
we would ideally like to have measurements that fulfill the evaluation criteria proposed by Hollnagel with respect to system evaluation in cognitive systems engineering [135, p. 51]: (1) they must be possible, (2) they must be reliable, and (3)
they must be meaningful and possibly also valid. However, recognizing that the
study of computer and information systems is tightly integrated with the study of
human psychology, organizations, culture, and attitudes towards technology, the
research field must adopt qualitative methods suitable for the interpretive analysis
required to elicit cognitive and social phenomena related to the use of computers
and information systems [58].
Our third study allowed a more rigorous analysis. We first established a baseline for the precision of machine learning approaches in message
classification of C2 communications through the use of previous human evaluations of command and control communications. We followed up with an interview
study, where the participants had extensive experience in conducting the tasks the
designed system was intended to support and could provide valuable qualitative
evaluations of the use-cases and system design when presented with them.
  Research activity                                  Research outputs
  Development and study of a plan critiquing         Knowledge representation issues,
  system based on an automated planning system       user interaction issues
  Development and study of the ComPlan system        Criteria for planning support systems

Figure 3.2: The two stages in study 1 and the products attained in each stage.
3.6 Our research method
The three studies explore two dimensions in support tool design: the general/task-specific purpose of the support tools and the general/task-specific purpose of the
analysis performed by the tool (see Section 1.1). We began the work presented in
this thesis with a task-specific tool which had task-specific analysis mechanisms.
The conclusions from the first study indicated in part that task-specific tools are
difficult to employ and study, due to the need for standard desktop tools in planning and the restrictions of planning support systems with respect to the extensive
formal domain modeling required for the system to be functional. The second study
built on the lessons from the first, and studied task-specific augmentations of general document management tools in a desktop setting. The third study explored
another variation: general analysis and support mechanisms for a task-specific scenario. Here, we investigated tool support for a meta-problem in command and
control: studying communications and other data from command and control scenarios to understand and improve team performance.
3.6.1 Case Study 1: Planning
Case study 1 consisted of two stages of systems development and analysis. These
stages are described in more detail in chapter 4, but for an overview, Figure 3.2
lists the two main activities involved and the outputs produced.
Figure 3.3 describes the work conducted in the first study, through the framework suggested by Hevner et al. [132]. The relevance of the study results from the
relationship between the people and organization that provide the environment of
the study and the developed system. The rigor is defined, according to Hevner
et al., as a function of the theoretical foundations and methodologies employed
during the study. As a scientific contribution, case study 1 provides three design criteria for intelligent planning support systems, which were used as a hypothesis for designing intelligent decision support systems in case studies 2 and 3. Based on
the categorization of Gregor and Jones [116], these results of the case study can
be structured as shown in Table 3.2.
[Figure: the design-science framework of case study 1: the environment (professional commanders and logistics experts; collaborative work with iterative plan refinements) lends relevance to the develop/build (ComPlan) and justify/evaluate (domain modeling, prototype analysis) activities, while the knowledge base (cognitive systems engineering, decision theory, mixed-initiative planning, critiquing systems; prototyping, iterative development) provides rigor.]
Figure 3.3: The relevance and rigor of the first study, as defined by the considerations taken of professional commanders' work, the collaborative nature of planning and refining plans, the theoretical foundations from decision theory, mixed-initiative planning and critiquing systems, and methodologies such as prototyping and iterative development.
[Figure: the design-science framework of case study 2: the environment (professional commanders, government officials and NGO staff; document-driven workflows; standard desktop tools) lends relevance to the develop/build (the Planning Desktop) and justify/evaluate (domain modeling, prototype analysis) activities, while the knowledge base (Semantic Desktops, formal ontologies, information management; provotyping) provides rigor.]
Figure 3.4: The relevance and rigor of the second case study.
3.6.2 Case Study 2: Information Management
In the second case study, we moved from studying a task-specific system for planning support to a more general computing environment, in which commanders conduct their tasks through general computer desktop artifacts, such as office documents and e-mail messages, and through special-purpose systems, such as command and control, intelligence, and weapons systems.
Due to the connection to existing document management systems (Figure 3.4), the relevance of the second study was a function of all three environmental components referred to by Hevner et al. [132]. The results of the study, which
Table 3.2: Structural components of the contributions from the first study of this thesis.

- purpose and scope: The study concerns affordances and constraints of planning support systems.
- constructs (planning support systems), principles of form and function: Planning support systems need crisp environment models, including plan operators with their effects and preconditions, as well as domain objects, to reason about plans, both for the generation and critical analysis of plans.
- artifact mutability: Planning support systems need to allow plan constraints to be relaxed or made compulsory, depending on how well the environment model represents reality.
- testable propositions: To be useful for military tactical planning, planning support systems should be based on the principles of transparency, graceful regulation and event-based feedback.
- justificatory knowledge: The design is derived from studies of other planning support systems, from cognitive systems engineering literature and from scenarios of how military commanders work when planning and monitoring operations.
- principles of implementation: A planning support system needs interconnected plan views and must give users control over how constraints are managed by the system, and how the system model is visualized.
- expository instantiation: The ComPlan system is described.
are outlined in Table 3.3, were a set of propositions and principles of implementation for Semantic Desktop environments as a basis for intelligent information management. These propositions and principles form the second contribution of the thesis: A model for extending Semantic Desktop systems with domain-specific knowledge for the purpose of supporting information management in planning.
3.6.3 Case Study 3: Understanding C2
With an appreciation, gained from case study 2, of the challenges involved in describing staff work processes and in providing crisp metrics of the utility of intelligent decision-support systems targeted towards staff sensemaking, the third case study addressed these methodological issues by using available data sets for comparison.
In the study, we investigated the construction of a general analysis tool for staff work for which earlier human analyses were available for the evaluation of intelligent decision support. Specifically, we studied the analysis of C2 communications.
Table 3.3: Structural components of the contributions from the second study in this thesis.

- purpose and scope: The study concerns affordances and constraints of Semantic Desktop-based document management systems.
- constructs (semantic desktops), principles of form and function: Semantic Desktops use an ontological framework of semantic desktop objects, actions, and the relations between them to reason about them.
- artifact mutability: Semantic Desktops need domain-specific knowledge to reason about and visually represent relevant domain objects in interfaces that users are familiar with.
- testable propositions: Semantic Desktop systems should be extended with harvesting components and domain-specific visualizations that make use of representations used in the domain.
- justificatory knowledge: Planning systems and doctrine reason about plan objects, not document structure. Documents are intentionally designed not to be custom-tailored to represent plan knowledge. Therefore, the Semantic Desktop needs to extract information from documents and present it.
- principles of implementation: For semantic objects that are present in the structure and contents of documents, harvesting components should be added to populate an ontology with relevant information, and the corresponding ontology concepts should be added. Visualization components should be developed to represent objects in documents or relationships between documents.
- expository instantiation: The Planning Desktop system, with a map view describing Secure Points of Debarkation and a timeline view describing task assignments to force components, is provided.
[Figure: the four research stages of study 3 and their outputs: a technical evaluation, yielding affordances and constraints of machine learning methods (Paper IV); an interview study, yielding critical tasks in communication analysis (Paper V); the design of the Workflow Visualizer, yielding use cases for an analysis tool based on clustering; and a workshop evaluation, yielding criteria for analysis tools (Paper VI).]
Figure 3.5: The research stages in study 3 and the products attained in each stage.
[Figure: the design-science framework of case study 3: the environment (command and control researchers and staff coordination officers; exploratory analysis and structured workflows; free-text communication, observation tools and simulation environments) lends relevance to the develop/build (the Workflow Visualizer) and justify/evaluate (feasibility study, interview study, design study, workshop) activities, while the knowledge base (machine learning, text clustering, Exploratory Sequential Data Analysis, role-playing simulations; stratified cross-validation, semi-structured interviews, prototyping, workshop) provides rigor.]
Figure 3.6: The relevance and rigor of the third case study.
The study of communication analysis was performed in four steps. The first step, a feasibility study regarding the use of machine learning techniques for message classification, is reported in Paper IV. The second, an interview study with
C2 researchers, is reported in Paper V, and the last two, including the design and
construction of a support tool and a workshop evaluation, are reported in Paper
VI. Figure 3.5 describes each of the stages in the study. In Figure 3.6, we relate
the relevance and rigor of the third case study to the environment and knowledge
base pertaining to the study of command and control.
The contributions from this case study were an empirical evaluation of machine learning approaches for classifying communication in command and control, and a model for supporting communication analysis in command and control, which are structured according to Gregor and Jones [116] in Table 3.4.
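The empirical evaluation relied on stratified cross-validation (listed among the methodologies in Figure 3.6), which keeps each message class proportionally represented in every fold. The following is a minimal sketch of the fold construction only; the message classes and counts are hypothetical, not data from the study:

```python
import random
from collections import defaultdict

def stratified_folds(labels, k, seed=0):
    """Split example indices into k folds, preserving label proportions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, y in enumerate(labels):
        by_label[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_label.values():
        rng.shuffle(idxs)
        for j, i in enumerate(idxs):
            folds[j % k].append(i)  # deal each class round-robin across folds
    return folds

# Hypothetical corpus: 6 "order" messages and 9 "report" messages
labels = ["order"] * 6 + ["report"] * 9
folds = stratified_folds(labels, 3)
# each fold now holds 2 "order" and 3 "report" messages
```

Each fold can then serve once as the held-out test set while a classifier is trained on the rest, so every class is represented in both training and test data in every round.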
Table 3.4: Structural components of the contributions from the third study in this thesis.

- purpose and scope: The study concerns affordances and constraints of machine learning as a basis for communication analysis in command and control.
- constructs (ESDA tools), principles of form and function: Communication analysis is performed as part of Exploratory Sequential Data Analysis (ESDA) in research to understand command and control. In communication analysis, researchers use ESDA tools to organize and explore patterns in communication as well as in other data.
- artifact mutability: ESDA tools need to assist researchers in navigating more quickly, using machine-detectable patterns.
- testable propositions: Text clustering offers a viable option for exploring patterns, including clusters of messages and important terms, in communication data. Support systems based on text clustering need to be transparent with respect to the model for clustering, and open-ended, to the extent that the clustering model should be available for automatically making inferences or highlighting important terms and word co-occurrences.
- justificatory knowledge: Empirical evaluations of machine learning algorithms provide a basis for suggesting text clustering as opposed to other learning approaches, interview data stress the importance of efficient exploration of possible patterns, and a workshop evaluation warrants the criteria of open-endedness and transparency.
- principles of implementation: Flexible management of data sources, with direct manipulation and selection, a unified timeline of events, and options for configuring tool components for different types of research questions and explorations are important for ESDA support systems.
- expository instantiation: The Workflow Visualizer, with multiple views for navigating, clustering, coloring, grouping, and selecting communications and events from command and control scenarios, is provided.
4 ComPlan
In this chapter, we present the first case study of this thesis. The study examined different planning scenarios and how critiquing mechanisms could contribute to a team's planning process in those scenarios. The case study comprises two stages, the second of which is presented in Paper I. We begin with a brief overview of the material used in the study, followed by a description of both stages, structured according to the method described in Section 3.6.1.
4.1 Material
At the beginning of study 1, we had access to documents from a medical evacuation (MEDEVAC) scenario [70], which was subsequently used to reason about
the working procedures of a tactical command staff in charge of managing tactical
operations of a similar nature. In Figure 4.1, we describe such a MEDEVAC scenario as consisting of eight steps: 1) Preparation, 2) Request for MEDEVAC, 3) Dispatch, 4) Insertion flight, 5) On-scene stabilization, 6) In-flight care, 7) Transfer
of patients to higher care, 8) Recovery and mission documentation. At a command
center charged with medical evacuation as well as with other mission types, there
are typically many active, ongoing, preparing, and requested missions. Commanders collaborate in this environment to manage a large set of common resources,
under much time pressure and with uncertain information. Their goal is to ensure
the safe and timely transport and treatment of all those reported as injured in the
field, whether civilians or their own personnel. All stages listed in Figure 4.1 occur
in any particular MEDEVAC task, but there may be several such tasks conducted
simultaneously, and commanders must make sure that available resources—which
[Figure: the eight MEDEVAC steps, numbered 1-8, arranged as a cycle from the base out to the mission area and back.]
Figure 4.1: MEDEVAC scenario used in study 1, adapted from [70].
may include helicopters but also ground vehicles or marine vessels, depending on
the environment—are used to the best possible effect, given the requirements of
the operation.
4.2 Study
The study was conducted in two stages (see Figure 3.2), of which the second stage
is reported as part of this thesis in Paper I. During the first stage, we identified a
set of fundamental knowledge representation issues that directed us to the development of the ComPlan planning support system.
4.3 First Stage: Critiquing system
During the first stage of the study, we studied how to represent the MEDEVAC scenario in the domain language of an automated planning system, SHOP2 [199], with the goal of creating a system that provides planners with feedback on the feasibility of suggested plans. SHOP2 is a Lisp-based, open-source planning system capable of representing complex, hierarchically ordered behaviors, and it allows arbitrary Lisp expressions in the body of operators. This means that any programmatically representable behavior can be encoded as part of the effects of an operation, even though only effects that manipulate domain objects will be visible to operators that come later in the sequence of operators that, in total, form a plan. SHOP2 also has a graphical user interface that can visually represent the final plan generated from a domain description and a problem description (Figure 4.3).
(:method (medevac ?locations)
  ;; No locations to pick up from
  ((same ?locations ()))
  ((!dummy-op))
  ;; At least one location left to plan for
  (and (first ?locations ?first-location)
       (rest ?locations ?rest-locations)
       ;; Only plan for helicopters that are available at the base
       (helicopter ?helicopter)
       (at ?helicopter ?base)
       (base ?base))
  (:ordered (:task get-patients ?helicopter ?first-location)
            (:task medevac ?rest-locations)))

Figure 4.2: The main composition method in the MEDEVAC scenario, represented in the language used by the SHOP2 planner.
Figure 4.2 describes the main method in the domain, medevac. The method
medevac is composed of an ordered set of get-patients tasks, which in turn
may be composed of other tasks. The most primitive building block of a SHOP2
plan is called an operator, which modifies the world state. In Figure 4.4, an
operator !retrieve is described in terms of preconditions, effects and the cost
of using the operator.
Our intention in modeling the medical evacuation scenario using the SHOP2 formalism was to support the evaluation of user plans and proposals with traditional critiquing systems (see Section 2.7), so that if a user were to suggest a certain resource for an evacuation, the system would indicate whether the resource would be applicable and, if the resource were inapplicable or otherwise unsuitable, declare which operator preconditions were not met. We identified a set of problems with this first approach to formalizing MEDEVAC planning:
1. The representation is suitable for generating plans, but not for feedback: program code would be needed to explain why a certain operator cannot be performed in a given context, or to provide other forms of support where appropriate (such as using a map to visualize the effects of moving units).
2. Human statements of problems, goal states and methods are usually not as
crisply defined as would be necessary for automated planning. Also, the
process of creating complete domain models in advance as a prerequisite
for solving problems in the domain may have fundamental problems with
respect to how humans treat the problems and domain knowledge that are
used to manage problems in the first place (see Section 2.8.2).
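The intended critiquing behavior, checking a user's suggestion against operator preconditions and reporting those that do not hold, can be sketched as follows. This is a toy stand-in using ground facts; SHOP2's actual precondition evaluation also unifies variables and is considerably more involved:

```python
def unmet_preconditions(preconds, state):
    """Return the operator preconditions that do not hold in the current state.

    Facts are ground tuples, e.g. ("at", "heli1", "base"); this is a toy
    stand-in for SHOP2's precondition evaluation, which unifies variables.
    """
    return [p for p in preconds if p not in state]

# Hypothetical world state and preconditions of a !retrieve operator:
state = {("helicopter", "heli1"), ("at", "heli1", "base")}
retrieve_pre = [("helicopter", "heli1"), ("at", "heli1", "loc3")]
print(unmet_preconditions(retrieve_pre, state))
# -> [('at', 'heli1', 'loc3')]: heli1 is not at the pick-up location
```

Even this minimal form shows the first problem listed above: the failing precondition is reported as a bare fact, and extra program code would be needed to turn it into a helpful explanation.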
[Figure: screenshot of the graphical planning interface that accompanies the SHOP2 planner, showing a hierarchically structured plan of a helicopter rescue mission in a hierarchical view.]
Figure 4.3: A graphical representation of a plan created by the SHOP2 planner for the MEDEVAC domain.

(:operator (!retrieve ?helicopter ?patients ?location)
  ;; Precondition
  (and (helicopter ?helicopter)
       (at ?helicopter ?location)
       (patients ?patients)
       (at ?patients ?location)
       (timestamp ?helicopter ?time)
       (time-to-pick-up ?pickuptime)
       (assign ?new-time (+ ?time ?pickuptime)))
  ;; Delete list
  ((at ?patients ?location)
   (timestamp ?helicopter ?time))
  ;; Add list
  ((in ?patients ?helicopter)
   (timestamp ?helicopter ?new-time))
  ;; Cost
  (calculate-cost ?new-time :criterion 'time))

Figure 4.4: An example operator in the MEDEVAC scenario, represented in the language used by the SHOP2 planner.

Given that program code, possibly written in a language other than the one native to the planner, would be needed to create feedback mechanisms in a critiquing application for planning, we saw that it was hard to justify a planning formalism separate from the programming language of a critiquing application. Also, if human
planners have difficulties eliciting exactly what the preconditions, costs, effects and hierarchical orderings of plan operators are, then the knowledge that is encoded must be useful even if it does not amount to a complete domain model, in the sense of one from which a plan can be generated. The usefulness of such a system could be defined as the degree to which humans understand constraints, used to reason about the feasibility and desirability of a plan, better when manipulating constraints encoded in the tool than when not using the tool for constraint manipulation.1
As many earlier systems had focused on collecting as much static problem-solving knowledge as possible to enhance system utility (see Section 2.8), while few had explored how to use this knowledge for different forms of feedback in military planning, we decided to use the second stage to study interaction techniques with
constraint knowledge encoded in a planning support tool.
4.4 Second stage
The second stage of our study did not use a formal planning language to express
details about properties of plans and actions. The implementation of the ComPlan support tool for planning accentuated the issue of how the representation of
domain knowledge in planning affected the interaction logic.
4.4.1 Knowledge representation
In Section 2.8 we presented some of the approaches used to extract and use knowledge in military mission planning. In the decision support projects EXPECT and
HICAP, domain knowledge was encoded using the formal ontology languages
Loom [251], and Hierarchical Task Networks [192], respectively. In both cases,
the ontology language used was well suited for expressing relationships like inheritance, part-whole relations, sequencing of actions and, to a limited extent, numerical constraint properties, such as fuel levels which cannot be negative. When we
considered how to implement domain knowledge in the first stage of this study,
these types of description-logic-based languages were our choice. They offer formally sound structures, good expressiveness and are widely used in the knowledge-based systems community. However, we also realized that this approach has some
clear limitations. Ontology languages imply declarative descriptions, but offer little when it comes to computational or procedural descriptions. This is similar to the
relationship between formal descriptions of computer programs (such as object diagrams in UML2) and the actual implementation of the program. In its implementation, the program contains not only an object structure, but also an algorithmic
description of how the program behaves.

1 Although, as explained in Section 2.1.5.1, that may not be straightforward to evaluate.
2 Unified Modeling Language

We found that, when modeling critiquing support that not only analyzes task structures but also task behaviors over time, a pure declarative formalism for describing a task domain is restrictive, for the following reasons:
1. When describing task types using, for example, the formalism of SHOP2,
you cannot integrate a description of how task types should be manipulated
in the planning tool together with definitions of their effects and preconditions. When describing a transportation task, we could, for example, use
the name, description, set of agents involved in the activity, preferred start time, preferred end time, starting location, and destination. Start and end locations may be
best viewed and modified through a map-like view presenting real geographical data, whereas time information may be best handled through a time-line
view. Moreover, some of the parameters may need special routines for specifying how they are presented, and may be modified in each particular view,
if the planning application offers multiple views of the same planning situation (e.g., [172]). A location may be presented as an outlined area in the geographical view but as text in another view, and the set of agents assigned to a task may be given a similar color in a geographical view, or be represented by visually connecting graphical representations of selectable agents to tasks, or by some other method.
2. When relationships such as inheritance or sequential ordering are introduced between tasks, parameters in the related tasks can be affected nondeterministically, which makes it hard to declaratively state the exact effects
of performing a task and require that the effects of performing one operation
constitute hard constraints on other operations. For example, if a commander decides that he needs to transport a number of units to a location (task
A) at which he subsequently needs to establish an emergency shelter for
refugees (task B), this course of action ought to introduce a relationship between the destination of task A and the location of task B. How should this
property be defined using a standard ontology language? In propositional logic, which forms the underpinning of both HTNs and Loom, we could
probably unify the values of the two locations to solve this. But what if we
only want to introduce a relationship such as that the end time of task A
should not be preceded by the start time of task B? Then again, perhaps task
B can start before all units involved in task A have arrived, and we may not
want to ban such activities from commencing before the latest time of completion of some sub-activity of A. So, one human planner may have one opinion as to the effects of transportation and how transportation tasks may be
used in planning, whereas another planner may have another view. Therefore, in a support tool for planning, we may want to visualize the relationship
between activities A and B as interpreted by the tool, and not require that it
be enforced if the human planner disagrees with the interpretation.
3. To provide useful advice, we would have to implement a tutoring strategy
that appropriately calls attention to the possible problems and how to manage them. Implementing such strategies as attributes of concepts in the domain, as suggested by Silverman and Wenig [234], would entail that operators or methods in the formalization of the planning domain include information about possible human biases or errors. We may not really know if
constraint violations in a plan, as interpreted by the planning tool, are the
result of inadequate information supplied by the human, or of an incomplete
model of how operators in a plan can be connected. Thus, we may need to
make explicit what tutoring strategy is active and make it possible for human
planners to evaluate whether certain advice is called for.
The examples above illustrate some of the problems encountered using ontological descriptions of tasks, constraints and resources for supporting human-centered
planning. Due to these problems, we decided not to incorporate a special-purpose
knowledge representation language as part of ComPlan. Hierarchical descriptions
of activities and resources are represented in an object-oriented programming language, along with procedural definitions of activities and constraints.
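As an illustration of this design choice (not the actual ComPlan source; all names below are hypothetical), a task with procedurally defined, advisory constraints might look as follows in an object-oriented language:

```python
class Task:
    """A plan task whose constraints are procedural checks, not hard rules."""
    def __init__(self, name, location=None, destination=None):
        self.name = name
        self.location = location        # where the task takes place
        self.destination = destination  # where the task ends, if it moves units
        self.constraints = []           # callables returning a warning or None

    def critique(self):
        """Collect advisory warnings rather than rejecting the plan outright."""
        return [msg for check in self.constraints
                if (msg := check(self)) is not None]

def follows(predecessor):
    """Advisory link: a task should start where its predecessor ends."""
    def check(task):
        if (predecessor.destination is not None
                and task.location != predecessor.destination):
            return (f"{task.name} starts at {task.location}, "
                    f"but {predecessor.name} ends at {predecessor.destination}")
    return check

transport = Task("transport units", destination="area Alpha")
shelter = Task("establish shelter", location="area Bravo")
shelter.constraints.append(follows(transport))
print(shelter.critique())
# -> a warning the planner may act on or ignore
```

Because the constraint is an ordinary procedure returning a warning, the tool can visualize a mismatch between tasks A and B without enforcing it when the human planner disagrees with the interpretation.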
There is another issue with the use of formal ontology languages for specifying domain knowledge: they require special expertise. Although there has been a lot
of research into how to automate or streamline the knowledge elicitation process,
at some level in the process of describing domain concepts there are still requirements for special knowledge engineering competence. This competence may be
harder to find than programming competence, and must therefore be justified by
some considerable and measurable advantages. As both types of competence are required to develop domain models for a decision support tool that uses a formal ontology language3, it may become difficult to justify the extra resources needed to implement the domain model in Loom or a Hierarchical Task Network, compared to modeling it directly in a programming language.
However, in situations where the ontology used by a tool is supposed to be shared
by many others and system designers are required to make use of pre-defined
knowledge structures, it makes sense to use a shared schema for information exchanges. Especially for larger systems, the ability to communicate intermediate
results through the use of standardized protocols can be crucial to manage the inherent complexity in building large software systems.
As discussed in Section 2.8.2, ontologies may have other affordances than only
those related to automated reasoning and deduction. Also, humans use knowledge structures (ontologies) for communication, and not primarily for automated
deduction. Although automated reasoning may be a powerful mechanism in planning support tools, it imposes great restrictions on the conditions for its use [25]. In
this study, we aimed at supporting a wider range of use cases with the knowledge
base in our planning tool than those pertaining to the creation of action sequences.
Therefore, we decided to build our support tool around the concepts of interconnected plan views to manage different plan aspects, flexible constraint management
to support different interpretations of constraints, interactive simulation to allow
the study of plan effects, and collaboration support to allow several planners to work
concurrently on the same problem.
3 As there may be procedural knowledge that needs to be represented in a programming language.
[Figure: dependency graph among the organization, resource, task, timeline and geographical views, with edges marked as improvements or requirements.]
Figure 4.5: Dependencies between the different views. For example, the time view only becomes useful when information is supplied in the task view on the tasks we are about to perform. As more information becomes available, the time view is able to provide more feedback, such as simulation-based feedback after agents, locations and other parameters are set.
4.4.2 Views
A core design feature of ComPlan is that it uses different interconnected views of a planning situation, much like the O-P3 planning system [172]. Each view is used to illustrate and manipulate one aspect of the plan, such as resource organization and allocation, task structure, scheduling or geographical plan properties. These are the four particular views we have implemented in ComPlan, but they are by no means an exclusive or exhaustive list of the aspects relevant to mission planning. They do, however, represent a sufficiently diverse set of visualization and interaction mechanisms to illustrate how such plan views can assist a human planner in developing a better understanding of a situation.
In our design of ComPlan, we considered various interaction modes of critiquing systems and studied what these modes imply for users. In Section 2.4
we provided an overview of planning support systems that differed with respect
to, among other things, initiative. Initiative describes what a user is allowed to do.
It determines whether a user is allowed to ignore restrictions implemented in the
knowledge base of the support tool, whether he may choose the order in which
information is provided and what type of feedback he wants.
Although our aim was for the user to work as unrestricted by the knowledge
available in the tool as possible, it was apparent that the tool would not be very useful unless certain information was provided. If plans are made for an emergency
evacuation of hundreds of people, the time frame for planning is important. However, before a user can manipulate timing information in the ‘time view’, at least
one task must be created in the ‘task view’. We could of course allow for tasks to be
created in any view, using shortcut commands or menu options as in any application, but the reason for using views is really that a plan includes separate concerns
that one should be able to address separately.
In Figure 4.5, we illustrate how the views in ComPlan depend on each other. For the time view to be at all useful, the tasks that are part of the mission must be determined first, so that there is something to manipulate. However,
without any parameters for resources and geography, the time view can provide
little feedback on the constraints that are being violated and the problems that
may arise from the current mission plan. The dependencies between plan views
Figure 4.6: The organization view where properties of units that are part of an organization
can be inspected and modified.
are mostly related to increased feedback options, and do not restrict the user from
manipulating a plan in almost any order.
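The idea that dependencies widen feedback rather than gate interaction can be sketched as a function from the plan aspects specified so far to the feedback the time view can offer. The aspect names and feedback levels below are hypothetical simplifications of Figure 4.5:

```python
def time_view_feedback(plan):
    """Feedback available in the time view, given which aspects are specified.

    `plan` maps aspect names to the information entered so far; missing
    aspects reduce feedback but never block editing (names are hypothetical).
    """
    feedback = []
    if plan.get("tasks"):
        feedback.append("constraint checks on task timing")
        if plan.get("resources") and plan.get("locations"):
            feedback.append("simulation-based feedback")
    return feedback

print(time_view_feedback({"tasks": ["evacuate loc3"]}))
# -> ['constraint checks on task timing']
```

With no tasks defined, the function returns an empty list, mirroring how the time view has nothing to say before the task view is populated, while a fully parameterized plan additionally enables simulation-based feedback.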
The organizational view, for example, is used to navigate and determine the
properties of available forces (Figure 4.6). It can be manipulated in parallel with
any of the other views, so that one can test the effects of assuming that a rescue helicopter can transport more patients than stated by default, or that the range or speed of some other resource differs from the usual values. All parameters encoded as
part of the resources can be displayed in this way using representations that are
suitable for each type of parameter.
The task view (Figure 4.7) shows a graph of the group of tasks planned for a
particular mission. This group of tasks together forms a mission to be carried out by
a subordinate commander. Tasks can have relationships such as being part of other
tasks. This is similar to how the HTN representation in Figure 4.2 describes the
medevac method as being composed of get-patients tasks. Tasks can also
be part of a sequence of tasks, which in many circumstances is similar to stating
the preconditions of tasks as in Figure 4.4.
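The two kinds of task relationships described above can be captured in a small data structure. The following is an illustrative sketch only; the class and method names are hypothetical, not ComPlan's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class Task {
    final String name;
    final List<Task> subtasks = new ArrayList<>();     // part-of hierarchy (HTN-style)
    final List<Task> predecessors = new ArrayList<>(); // sequence ordering

    Task(String name) { this.name = name; }

    Task addSubtask(Task t) { subtasks.add(t); return this; }
    Task after(Task t) { predecessors.add(t); return this; }

    // A task can start once every task sequenced before it is done,
    // which corresponds to stating its preconditions.
    boolean ready(List<Task> done) { return done.containsAll(predecessors); }

    public static void main(String[] args) {
        Task medevac = new Task("medevac");
        Task getPatients = new Task("get-patients");
        Task transport = new Task("transport-patients");
        medevac.addSubtask(getPatients).addSubtask(transport);
        transport.after(getPatients);
        System.out.println(transport.ready(List.of()));            // before get-patients is done
        System.out.println(transport.ready(List.of(getPatients))); // after it is done
    }
}
```

The part-of list mirrors how the medevac method is composed of get-patients tasks, while the predecessor list expresses sequencing.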
The resource view allows the planner to allocate resources to tasks. For simple scenarios, the representation in Figure 4.8 is appropriate and allows graphical
representations of tasks as symbols under the task structure, so that tasks can be
combined with resources by means of dragging and dropping. The suitability of
resources can be indicated with a critiquing strategy of visualizing constraint violations, in which case the resources are connected to tasks with colored lines. Red
4. ComPlan
Figure 4.7: The task view showing a number of tasks ordered hierarchically as well as
sequentially.
lines indicate that the resource has been declared inappropriate for the task, and
green indicates that it is suitable.
As more information is provided, more calculations and simulations can be performed in the time view to help the human planner see potential problems. Thus,
the time view will receive improved functionality as more information is provided
in the other views. In Figure 4.9 we can see how the user has initiated a simulation
of the transportation of patients and receives feedback that one of the allocated
rescue helicopters may run out of fuel in mid-flight. The critic that checks and
notifies the user of this constraint, the FuelLowCritic, displays information about all
constraint violations of that type at the time in the plan selected by the user (by
dragging a slider along the timeline of the scenario). Temporal restrictions arising from the relationships defined in the task view are used with an automatic update policy in Figure 4.9 that forces them to be in effect at all times during the mission. Therefore, missions are ordered automatically according to time, and all tasks are re-ordered automatically when a user changes their order in the task view.
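A critic of this kind can be sketched as a simple check over resource parameters in the plan. The names below, including the constant burn-rate fuel model, are illustrative assumptions rather than the actual ComPlan code:

```java
import java.util.ArrayList;
import java.util.List;

public class FuelLowCritic {
    // Fuel remaining after flying for the given number of minutes.
    static double fuelLeft(double capacity, double burnPerMinute, double minutes) {
        return capacity - burnPerMinute * minutes;
    }

    // Report a violation when a task's duration exhausts the resource's fuel.
    static List<String> check(String resource, double capacity,
                              double burnPerMinute, double taskMinutes) {
        List<String> violations = new ArrayList<>();
        if (fuelLeft(capacity, burnPerMinute, taskMinutes) < 0) {
            violations.add(resource + " may run out of fuel in mid-flight");
        }
        return violations;
    }

    public static void main(String[] args) {
        // 100 fuel units, 2 units/minute, a 60-minute transport task.
        System.out.println(check("Helicopter 1", 100, 2.0, 60));
    }
}
```

A real critic would draw on the full plan model, but the shape is the same: inspect, detect, and report rather than forbid.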
4.4.3 Constraints
In automated planning, all planners use strict formalisms for describing planning
domains, permitted operations, and resulting effects of operations. Constraints in
the context of automated planning can best be defined as the prerequisites necessary for atomic plan operations. If an automated planner is used as part of a mixed-initiative system for mission planning, the constraints put on a user are likely to be
mandatory, that is, they may not be violated by a user during planning. It may be
Figure 4.8: The resource view showing a number of tasks together with resources available. Resources associated with tasks are denoted by colored lines.
troublesome to maintain such hard restrictions if the world model of the planning
tool is not sufficient. Another view on how to use constraints comes from the Cognitive Systems Engineering field of research, where researchers have studied how
human operators with time-critical real-time monitoring tasks, such as air traffic
controllers (ATCs), work and what role constraints play in that context [79]. In such settings, efficiently interpreting an information flow that describes ongoing events is critical. For ATCs, relevant constraints relate to, for instance, runway and airspace
capacities. Any support system for such an application area needs to make absolutely sure that all relevant information is presented in as efficient a manner as
possible. Two airplanes that occupy the same altitude segment may not necessarily
collide, but in such cases it may be more important to present information that they
are almost at the same altitude (and about to cross each others’ paths) than if they
are several miles apart. Thus, “constraints” in real-time control applications need
to be properly presented, but controllers are not always in a position to preemptively
ensure that they are not violated.

Figure 4.9: The timeline view showing a set of tasks planned over time. The FuelLowCritic is activated (A) and shows that it has detected a violation in the plan. When selected, it displays a message in the lower left pane (B) indicating that one of the resources, Helicopter 1, will run out of fuel in the middle of one of the tasks.

In their 1999 article on ATC support systems that provide “management by exception”4, Dekker and Woods note the following [79]:
Management by exception traps human controllers in a dilemma: intervening early provides little justification for restrictions (and compromises larger air traffic system goals). But intervening late leaves
little time for actually resolving the problem, which by then will be
well under way (thereby compromising larger air traffic system goals).
So, many constraints in automated or mixed-initiative planning systems must
be respected by the human operator, but constraints in real-time control systems
should first and foremost be well presented. As we see it, in crisis management situations it may be necessary to use constraints for both purposes. Some aspects of a
plan may be more amenable to the policy of automatic enforcement used by project
management tools and some mixed-initiative planning tools. Other aspects may
more resemble the uncertain constraints in the ATC scenario, where visualization
as support is preferred to a thorough computer-based analysis of possible errors.
In our ComPlan design, we offer users the option of selecting between mandatory enforcement and visualization, through the use of constraint policies that can
change the behavior of the constraint checking engine.
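Such a policy switch can be sketched minimally as follows, under the assumption that a constraint object can either repair the plan itself or merely record a notice for display; the names are illustrative, not ComPlan's actual API:

```java
import java.util.ArrayList;
import java.util.List;

public class PrecedenceConstraint {
    enum Policy { ACTIVE, PASSIVE }

    int startA, startB; // start times in minutes; task A should precede task B
    final List<String> notices = new ArrayList<>();

    PrecedenceConstraint(int startA, int startB) {
        this.startA = startA;
        this.startB = startB;
    }

    // ACTIVE: repair the plan so the constraint holds.
    // PASSIVE: leave the plan unchanged and record a notice for display.
    void apply(Policy policy) {
        if (startB >= startA) return; // constraint already satisfied
        if (policy == Policy.ACTIVE) {
            startB = startA;
        } else {
            notices.add("task B starts before task A");
        }
    }

    public static void main(String[] args) {
        PrecedenceConstraint c = new PrecedenceConstraint(30, 10);
        c.apply(Policy.PASSIVE);
        System.out.println(c.startB + " " + c.notices);
    }
}
```

The same constraint object thus serves both the project-management style of enforcement and the ATC style of presentation, depending on the selected policy.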
4.4.3.1 Automatic enforcement
If a mission planner has stated that task “transport patients to rendez-vous location A” should precede another task that takes place at rendez-vous location A, he
may want this ordering property to hold even if he makes changes to when tasks
4 Management by exception means that the computer system is in control of the management situation but can relinquish control in case of exceptional circumstances. However, the system may be
interrupted at any time by a human operator.
Figure 4.10: The timeline view, showing resource usage over time as part of a user-driven
simulation. By pulling a slider, the user can simulate various simple aspects of a plan to
better appreciate the effects of resource allocation for example.
should begin or how much time they should take. By automatically monitoring
and enforcing constraints, the planner can rely on certain properties of his plan.
On the other hand, such a policy may not be suitable if it is later discovered that
these restrictions are unnecessary or misleading.
4.4.3.2 Visualization
Instead of automatically enforcing constraints on behalf of the user, it may be that
the system only visualizes constraint information and possibly presents critique
based on this information. Humans are good at interpreting graphical patterns and
if provided with relevant and well-presented information on tangible constraints,
they may be better at interpreting such representations than most AI algorithms
(e.g., [11]). Critique and visualization are still closely related however, because a
critiquing engine can make a user aware of potential problems, and illustrate these
problems using representations that make the reasoning of the critiquing engine
more transparent. For example, the critiquing engine may assume that there is a
problem with the relation between the fuel range of a vehicle and its current location. Following this, it may choose to inform the user of this problem by graphically
representing the fuel range and distance to refueling facilities and not just tell the
user that ‘the vehicle may run out of gas’. This representation has also been considered for illustrating constraints by Woltjer [269]. In ComPlan, information about
constraints can be represented in graphs and charts in the interactive simulation
available through the time view (Figure 4.10).
4.4.4 Interactive simulation
Simulation can be a powerful mechanism for illustrating connections and constraints that are not visible in static information displays. Simulations are also used
by many decision-support tools to predict likely outcomes of different courses of
action. Predictions are potentially challenging for the same reason critiquing and
option evaluation may be hazardous, namely the world model may not be accurate
enough, and some parts of reality may just be very difficult to simulate. However,
simulation offers good prospects for efficiently presenting time-related information. For this purpose, we included the option of simulating events in the time
view. Based on specific information such as fuel consumption estimation, the simulation engine simulates the behavior of each resource in the plan over time, as
the resource (helicopter, ambulance) is part of executing one or several tasks in
parallel.
This kind of simulation can be useful not only for visualizing the development
of simple resource parameters, such as the expected fuel consumption in units over
time, but also for illustrating constraints and informing the user of potential problems. By “problems”, we mean not only constraint violations but also how fragile a
plan is with respect to constraints over time. For example, how much freedom will
a given set of resources provide later during the mission? Since information may
be scarce at the beginning, it is important that initial planning allows for adapting
to new circumstances and new information. Therefore, sensitivity analysis of the
constraint model is an important instrument for planners. In ComPlan, such analysis is conducted by presenting available resources at each point in time and by
displaying messages whenever constraints are violated.
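The slider-driven analysis can be approximated by evaluating each resource parameter as a function of the selected time point. This sketch assumes a constant consumption rate, a deliberate simplification of what a full simulation engine would compute, and its names are hypothetical:

```java
public class PlanSimulator {
    // Remaining fuel at time t for a resource that flies from 'start'
    // for 'duration' minutes at a constant burn rate. A negative value
    // at the slider's position signals a constraint violation.
    static double fuelAt(double capacity, double burnPerMinute,
                         double start, double duration, double t) {
        double flown = Math.max(0, Math.min(t, start + duration) - start);
        return capacity - burnPerMinute * flown;
    }

    public static void main(String[] args) {
        // Slider positions at 30 and 70 minutes into a 60-minute flight.
        System.out.println(fuelAt(100, 2.0, 0, 60, 30)); // mid-flight level
        System.out.println(fuelAt(100, 2.0, 0, 60, 70)); // negative: violation
    }
}
```

Evaluating such functions across the whole timeline also supports the sensitivity analysis discussed above: one can see not just where a constraint is violated, but how close to violation the plan runs at each point.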
4.4.5 Collaboration support
Planning at higher levels of military command is rarely an activity for single commanders. Rather, groups of staff members work independently on various aspects of a plan, planning for sub-tasks or processing intelligence information, and their combined work forms the basis of planning. In such a setting, the coordination of multiple planners becomes a major issue. When deploying ComPlan, individual ComPlan planners on a network automatically discover each other and propagate changes over the network, so that participants are notified when changes are made to a plan that clash with changes made by other planners, similarly to the CODA system by Myers, Jarvis, and Lee [195] (see Section 2.5.3).
The mechanism for propagating changes is the same as for propagating changes
on a local plan, and all participants may subscribe to updated information from all
other planners.
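The propagation mechanism can be illustrated with a minimal publish-subscribe channel. This conveys the general idea only and is not ComPlan's actual network protocol:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class PlanChannel {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    // A peer planner registers interest in plan changes.
    void subscribe(Consumer<String> peer) { subscribers.add(peer); }

    // An edit made by one planner is pushed to every subscribed peer,
    // which can then compare it against its own pending edits.
    void publish(String change) {
        for (Consumer<String> peer : subscribers) peer.accept(change);
    }

    public static void main(String[] args) {
        PlanChannel channel = new PlanChannel();
        List<String> inbox = new ArrayList<>();
        channel.subscribe(inbox::add);
        channel.publish("task 'transport patients' moved to 09:30");
        System.out.println(inbox);
    }
}
```

Using the same mechanism for local and remote changes, as described above, means a clash with another planner's edit surfaces through exactly the same feedback path as a local constraint violation.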
4.5 Results
The results from the ComPlan study were derived from an analysis of the properties of constraint management and propagation, visualization, and the representation of knowledge. The first stage had provided us with insights into how a MEDEVAC problem could be modeled using a planning formalism. Considering the issues most central to humans in planning, those of managing an incomplete formal domain model and of collaborating on managing a situation, the ComPlan system resulted in three principal design criteria:
• Transparency,
• Graceful regulation, and
• Event-based feedback
Transparency relates to the degree to which the model used by the system is
made available for inspection and modification. The organization view, which describes all available resources and their properties, presents all resource parameters to users and allows modifications according to the parameter types. In the
timeline view (Figure 4.10), one of the activated critics that display violations is
the TooLongTime critic. This critic overlays a red time frame with an arrow,
indicating that a task should be extended in time to fit the suggested time frame in
order to conform to the internal model. In ComPlan, that internal model is based
on resource properties and geographical properties such as maximum speed and
available routes.
By graceful regulation, we mean the capability of regulating how the internal
model is used to provide support. In ComPlan, the system can be configured with
respect to both the general operating mode of the constraints and their appearance
to the user. The first operating mode is the active mode, which causes constraints to
alter the properties of activities as other activities are changed, so that constraints
between activities hold. The second mode is the passive mode, in which only visual
or textual notifications are given with regard to possible constraint violations.
Event-based feedback means that the planning system provides feedback based
on direct user actions and not as a result of batch processing. An incomplete plan
may have many potential errors, but only some of them may be interesting enough
to keep active. However, if plan analysis is postponed until the end of a planning
cycle, errors may have worse consequences than they would have had otherwise.
Each editing event may introduce conflicts with the intentions of other planners or
with the internal model, and it is important to provide a mechanism for propagating information from the planning engine to the user. Direct and concrete feedback relies on a mechanism for immediately propagating all the interactions in the user interface back to the planning engine. With such direct feedback, presented using visual elements in
the interface, the user can see the consequences of plan modifications earlier, and
possibly have a better understanding of the planning model.
These three tentative design criteria were later considered in the two following
case studies, in which we investigated Semantic Desktop environments and Text
Clustering as two techniques to support the reasoning about data in command and
control.
5 Information Management
The second study of this thesis concerned the information management issues involved in operational planning in military command centers. The purpose of the study was to examine task-specific analysis mechanisms in a framework adapted for more general tasks than in case study 1, with the intention of revealing how semantic desktops could improve the decision-making process of collaborative command and control.
This chapter presents the material used during the study in Section 5.1, the
technical study of the options for using ontology-based semantic desktop systems
for information management in operational planning (Section 5.2) and the results
regarding whether semantic objects in documents can be managed in tools constructed with a Semantic Desktop foundation.
5.1 Material
Material from two military planning scenarios for staff training was used in the
study: DRESDEN and RAVEN. DRESDEN was an exercise focused on the targeting process and the management of indirect fire, in which participants engaged
in prioritizing and assigning resources for the elimination of strategic targets during
an operation. RAVEN was a training exercise at the tactical level, where commanders had received an operational order and were to implement the directives on a
tactical level.
5.1.1 DRESDEN
The DRESDEN scenario was studied from an information management perspective, to learn about various aspects of staff work during planning at the operational level
of command. During the DRESDEN operation, participants trained specifically
on assembling a list of important military targets and deciding on how to eliminate the threats that these might pose to the operation. The planning process in
which this particular activity, targeting, takes place is very complex, and involves
many participants at various levels of the military organization. In Figure 5.1, the
planning process in which targeting takes place is described, and the Joint Prioritized Target List (JPTL) is depicted as a product in one of the steps performed
before the execution of an operation. The Joint Target Working Group (JTWG),
which consists of members from several branches of the military, negotiates the
JPTL and continually re-assesses targets during an operation. In fact, the targeting process is best described as a continual process conducted before and during an operation, with planning performed for each week of the operation. Each JPTL is the result of
both directives from the strategic level (the JCB) and information from the tactical
planning process at the Corps level.
The DRESDEN exercise was conducted over the course of five days, including
initial systems training, briefing and debriefing. There were 32 participants who
played the various parts of the organization described in Figure 5.1. They were
divided into two groups and worked with roles in a virtual organization where
their individual backgrounds were matched against the profiles of those who were
expected to operate at each level of command. The DRESDEN training operation was conducted as a role-playing simulation, in which lower-level forces were
simulated and the staff being trained sat in one room, although they acted as commanders belonging to different parts of a military organization. DRESDEN was
preceded by several other command exercises in which command methods and
systems were evaluated for use in command and control. During the DRESDEN
operation, two specific support system concepts were evaluated: common document
stores and a system for a common battlefield view.
The common document stores were introduced as a way to provide commanders with direct access to all the documents produced by others without needing to request specific information, and were implemented as an online file manager.
The common battlefield view was similarly designed, in that free access to battlefield information was believed to remove bottlenecks in the communication among
commanders. The common battlefield view was provided by a system integrating
both the intended plan fragments from several commanders, and simulated vehicles and weapons systems in one framework. A screenshot of the system can be
seen in Figure 5.2.
We participated as observers during the DRESDEN exercise and had access
to all the written material produced during the exercise, as well as to previous evaluations of the system for producing a common battlefield view. In the evaluation
material, there were interviews with previous participants about their experiences
with both the special-purpose battlefield view system and the document management via a common document store.
[Figure 5.1 diagram: a timeline running from 120 hours before execution day to 24 hours after, with rows for the JCB, CINC, JTWG, AIR/CAOC, WING, JCC and Corps/Division/Brigade-Battalion planning levels. Along the timeline, periodic D&G leads to a draft JCO/JPTL, a refined draft, the built JCO/JPTL and the JCO release; the AOD is developed and the ATO/ACO built for E-Day, with TNLs and COM guidance flowing from the JCC and Corps planning. Abbreviations: JCB = Joint Coordination Board; JTWG = Joint Targeting Working Group; D&G = Direction and Guidance; JCO = Joint Coordination Order; JPTL = Joint Prioritized Target List; AOD = Air Operations Directive; ATO = Air Tasking Order; ACO = Airspace Coordination Order; TNL = Target Nomination List.]
Figure 5.1: Part of the Operational Planning Process during which Targeting is performed
by members of the Joint Targeting Working Group. Each row in the timeline describes one
hierarchical level using NATO military abbreviations.
5.1.2 RAVEN
The RAVEN training operation was conducted in 2007 as an exercise for training commanders at a lower level of command than those in the DRESDEN scenario, and many of the documents pertaining to operational planning had already been prepared by the course staff in advance. The documents included in the RAVEN scenario followed the NATO Guidelines for operational planning (GOP 99) [121] and consisted of a set of documents that were in place at the beginning of the scenario and others that were developed by the participants as part of their planning. The included
documents are described in Table 5.1, and those developed during the scenario
have highlighted names in the left column.
Figure 5.2: The battlefield information management system used during the DRESDEN
training operation.
Table 5.1: Documents included in the RAVEN scenario

Main body: An overall mission statement, in which a force called COMSVFOR will enforce a UN embargo and ensure an end to hostilities in a fictitious administrative area in present-day Sweden. There was also background information and an analysis of centers of gravity and decisive points during the operation.

Conduct of Operations: An outline of how the operation will be conducted at the operational level. This document was given as part of the initiating directive.

Task organization and Command Relationships: An overview of the forces involved in the mission, along with their commanders and organization.

Forces Missions Tasks: A timeline and detailed description of phases, force compositions and responsibilities.

Intelligence: Intelligence requirements during the operation, along with intelligence tasks assigned to different parts of the command chain.

Rules of Engagement: Rules for how units may engage in combat or use force according to the laws and agreements regulating the operation. The three policy indicators set specific goals and stated the means available to achieve those goals as depending on the developments during the scenario: de-escalation, returning the region to the force balance prior to the operation, and taking offensive initiative with a risk of escalation.

Maritime Operations: Regulates maritime operations.

Land Operations: Regulates land-based operations.

Air Operations: Regulates air operations.

App 1 Service Support: A description of prior agreements for how to support the operation with fuel supplies, food and other necessities.

App 4 Code of Conduct: A description of personal rules of conduct for individual soldiers.

App 1 Risk Assessment: Background information on the state of affairs in the region that has resulted in the current mission being planned.

Logistics: A support plan developed in parallel with the Forces, Missions and Tasks document. It detailed the assumptions, regulating documents, and execution of logistical operations during operation RAVEN.

App 2 Movement and Transportation Infrastructure: An outline of military and civilian infrastructure such as airports, sea ports and roads that would be of strategic use during the operation.

Movement and Transportation: A description of the options available for transporting troops and supplies during the operation, given the infrastructure of the scenario.
1. Situation
2. Mission
3. Execution
a) Concept of operations
b) Tasks
c) Coordinating instructions
Figure 5.3: Document structure in operational planning
These documents contained cross-references to one another, and many had the
same general structure with regard to tasks to be performed and coordination
with others (see Figure 5.3). Cross-references in documents were usually inserted
at the document level and were unidirectional, so that the Logistics document could
refer to the commander’s intent in the Main body. However, there were also sections
describing the commander’s intent in both the Conduct of Operations and the Main
Body documents. Important concepts in planning such as the military locations,
defined in App 2 Movement and Transportation Infrastructure, are only referred to using
free text, not by citing specific document sections in which they are defined, and
there is no index of where they are used. The commanders in the RAVEN exercise
spent much time organizing information through the use of whiteboards, formal
meetings, and informal discussions on the use of files.
During the operation, participants were specifically requested to develop plans
for their respective military units and combine these in a synchronization matrix, in
which the responsibilities of each unit in each phase of the scenario were to be
described.
5.2 Study
In the study, we used the tentative design criteria elicited during the first case
study and applied them to the design of a Semantic Desktop-based support tool,
the Planning Desktop, for the document management scenario described above. In
this work, we describe why and how a Semantic Desktop needed to be extended
to support the specific needs of groups such as the commanders in our scenarios
DRESDEN and RAVEN. From studying their documents, work procedures and
technical systems used during planning, we observed that:
• It is difficult to integrate specialized systems for information management in
the document-centric workflow of staff. Typically, the information entered
in the specialized command and control systems does not relate directly to
what is available in the planning documents, and vice versa. In specialized
systems such as the one used during the DRESDEN scenario, information
overload quickly becomes a problem, which can be seen clearly in Figure
[Figure 5.4 diagram: the IRIS environment, built around a knowledge store with an ontology and events, hosts existing domain-independent components (e-mail client, web browser, chat, calendar, data browser, file and e-mail harvesting) together with implemented and planned domain-specific additions: content-based and structure-based harvesting, event-based information collection, and information views such as the task view, task editor, map view and timelines.]
Figure 5.4: The Semantic Desktop IRIS [60] with domain-specific components to extract
and present concepts pertaining to command and control.
5.2 and which was evident from the evaluation forms from earlier exercises
with similar software. Although commanders had access to several functions
for reducing the amount of information seen, they had to do so pro-actively
by setting up filters, layers and role assignments, which was perceived to
interfere with their primary duties.
• Documents are typically well-structured with respect to content. They use
templates that define a standard for formatting and contain mostly unambiguous references to commanded units, important locations and other information relevant to the planners' situation.
• However, the amount of information that has to be managed, and the discrepancy between the representations available in specialized C2 systems and those in text documents, make it challenging for commanders to know whether all participants act according to the same plan. Additionally, they have to make sure that information on the scenario, planned actions and intelligence on enemy behavior is treated in the same way by all actors.
Guided by these observations, we studied how the unified information management model proposed by Semantic Desktops would be useful. Semantic Desktops in general provide mechanisms for harvesting information and representing
semantic concepts using formal representations such as OWL (see Section 2.9).
The particular Semantic Desktop project we chose to study, IRIS [60], used an
ontology that was specifically geared towards providing support for collaboration
in scientific research where concepts such as authors, references, and conferences
are of central interest for users. However, all those concepts were generally found
in locations that were already processed automatically, such as sender and recipient fields in e-mail message headers, title information in PDF files, or locations in
VCal calendar entries. In our case, we had a well-defined set of documents (see Table 5.1) that had a well-defined structure (see Figure 5.3) and well-defined terms,
but all the structure came from the use of human conventions rather than from
programs that generated the structure.
Two specific questions thus emerged from our study of the material provided
by the DRESDEN and RAVEN scenarios:
1. Can we, with sufficient coverage and precision, extract semantic elements
that are relevant for the production of synchronization matrices or joint prioritized target lists directly from plan documents?
2. Provided that we can extract information, how can we present that information in a manner that matches the representations staff use themselves when
working with both general office documents and specialized tools during the
planning process?
We addressed the first question by incorporating components for harvesting
information into the Semantic Desktop environment IRIS (see Figure 5.4). In
Paper II, we propose a set of extensions to the formal ontology in IRIS and the
harvesting mechanisms, to represent semantic entities from planning documents
in a Semantic Desktop environment. The proposed mechanisms for information
extraction and graphical presentation were described in Paper III.
5.2.1 Design criteria
When designing the Semantic Desktop components for extracting information and
visualizing semantic entities, we evaluated the utility of two of the design criteria
from the first study: Transparency and event-based feedback. The third criterion, open-endedness, described the flexibility of using a reasoning engine such as an
automated planner. In a Semantic Desktop, there is an inference mechanism that
can be used by components for determining whether ontology classes are described
in a logically sound manner and how new objects should be classified. By using the
inference engine in IRIS, it was possible to automate the act of inferring whether
two objects (e.g., people or events) were the same, whether they were related to one
another and so on. All inferences were based on the assumption that the ontology
would be internally correct and that there would be no way to state contradictory
facts in the ontology. Therefore, there was no way in which the ontology engine
could manage ambiguities or contradictions such as how constraints are managed
in ComPlan. Also, there was no possibility of offering open-ended mechanisms for
manipulating and using ontology information. We therefore decided to restrict this
study to the technical aspects of extracting and visualizing information, which are
tasks that do not concern aspects of constraint management in the same manner as
in ComPlan.
The transparency criterion, however, was considered highly applicable. In designing the visualization component, we wanted to expose exactly how the internal
domain model was used to extract information. To make the user aware of how
the domain ontology was populated with named entity elements from documents
during information extraction, we created a graphical interface for selecting how
to map classes of named entities to classes in the ontology domain. Then, by visualizing the connection between ontology objects in the graphical components, we
made a direct translation of ontology information into graphical elements adapted
for the users’ domain in an effort to make the ontology representation more transparent and useful.
All updates of the visual representations were made based on ontologically defined events, and were therefore directly visible in the interface. We considered
event-based feedback to be a necessary criterion for this application as well, because the representations on the desktop were only supposed to reflect the work
conducted through documents, and were not intended to impose any additional
work on the user. We argued that, if we had imposed requirements for additional
actions for syncing the ontology information visible on the semantic desktop and
the contents of documents, that would probably have been confusing and nullified
any benefits of visualizing information in the first place.
5.2.2 Extraction and Visualization
When extracting information from documents, we explored two approaches:
structure-based harvesting and content-based harvesting. In our case, structure-based harvesting does not rely on machine-generated patterns, but instead on patterns in document structure. In Figure 5.5, we describe how a set of nodes in an
Open Document [4] file, recognized as part of a synchronization matrix, is used
to create objects in the application-specific ontology. For the purpose of using
OWL ontology classes and objects conveniently in the Semantic Desktop environment, IRIS maps OWL classes to Java wrapper classes called POJOs¹ that
can be used when searching for ontology objects or creating new ones. On lines 4
and 16, we use a method called findOrCreate, which executes SPARQL [211]
queries to search for objects with the specified properties in the ontology, and if
none are found, inserts new objects. Here, we use the assumption that in a given
plan, MilitaryUnit objects have unique names, as do Task objects. The code
shown in Figure 5.5 is triggered whenever files recognized by a combination of
file name and content structure are encountered. In IRIS, all plugins may subscribe to events according to a task ontology that describes all user interactions
and desktop events. Our plugin subscribes to file management events to receive
information about all actions taken by users that relate to plan documents.
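The find-or-create pattern used in the harvesting code can be illustrated in isolation. The sketch below is not the IRIS API: the class and method names are invented, and a plain in-memory map keyed on semantic class and display name stands in for the SPARQL query against the ontology store. It relies on the same uniqueness assumption, namely that within a given plan each MilitaryUnit and each Task has a unique name.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for the IRIS ontology store: a map keyed on
// (semantic class, display name) replaces the SPARQL lookup.
public class FindOrCreateSketch {
    /** A stand-in for an ontology object with a per-class unique name. */
    public static class SemanticObject {
        public final String semanticClass;
        public final String displayName;
        SemanticObject(String semanticClass, String displayName) {
            this.semanticClass = semanticClass;
            this.displayName = displayName;
        }
    }

    private final Map<String, SemanticObject> store = new HashMap<>();

    /** Return the existing object with this class and name, or insert a new one. */
    public SemanticObject findOrCreate(String semanticClass, String displayName) {
        // Uniqueness of names within a plan makes the composite key sufficient.
        String key = semanticClass + "#" + displayName;
        return store.computeIfAbsent(key,
                k -> new SemanticObject(semanticClass, displayName));
    }

    public static void main(String[] args) {
        FindOrCreateSketch kb = new FindOrCreateSketch();
        SemanticObject first = kb.findOrCreate("MilitaryUnit", "4th Brigade");
        SemanticObject again = kb.findOrCreate("MilitaryUnit", "4th Brigade");
        // The second lookup must return the same object, not a duplicate.
        if (first != again) throw new AssertionError("duplicate created");
        System.out.println(first == again); // prints true
    }
}
```

Because the key makes repeated harvesting runs idempotent, re-processing the same document cannot create duplicate ontology objects, which is the property the real findOrCreate provides for IRIS.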
¹ An acronym for "Plain Old Java Objects", coined by Martin Fowler.

 1  Node header = headerNode(node);
 2  NodeList stages = header.getChildNodes();
 3  String unit = node.getChildNodes().item(0).getTextContent();
 4  IMilitaryUnitPojo currentUnit = (IMilitaryUnitPojo) findOrCreate(
 5      kbc.getSemanticClass(IMilitaryUnitPojo.URI), kbc
 6          .getProperty(corePlusOfficeURI
 7          + "displayNameIs"), unit);
 8  ArrayList<com.sri.iris.store.pojo.ITaskPojo> unitTasks =
 9      new ArrayList<com.sri.iris.store.pojo.ITaskPojo>();
10
11  for (int i = 1; i < stages.getLength(); i++) {
12      Node stage = stages.item(i);
13      final String stageString = "Skede: "
14          + stage.getTextContent();
15      com.sri.iris.store.pojo.ITaskPojo stageTask =
16          (com.sri.iris.store.pojo.ITaskPojo) findOrCreate(
17              kbc.getSemanticClass(com.sri.iris.store.pojo.ITaskPojo.URI),
18              kbc.getProperty(corePlusOfficeURI
19                  + "displayNameIs"), stageString);
20      stageTask.setSubTaskOf(project);
21      stageTask.setAgent(currentUnit);
22      stageTask.setDescription(unit);
23      unitTasks.add(stageTask);
24  }

Figure 5.5: Extraction of tasks for military units from a synchronization document

The content-based harvesting relied on the GATE [72] framework for named
entity recognition. We added grammatical rules for recognizing particular military
locations to GATE and also incorporated end-user configurability in determining
the mapping between ontology concepts in IRIS and the named entity classes in
GATE. The code in Figure 5.6 adds a two-way mapping with the OWL properties
referredIn and containsReferenceTo, which, using the POJO classes
that map to the ontology, are modified with the addReferredIn and addContainsReferenceTo methods, respectively. Changes to POJO objects are
translated to ontology change events, which in turn trigger an ontology-backed
map panel (Figure 5.7) to reflect ontology changes that relate to geographical locations. The panel contains both locations, marked on a map of the region of operations, and links to documents in which these objects appear.
Figure 5.4 presents an overview of the model proposed for using a domain-specific semantic desktop for information management, and is based on the IRIS
Semantic Desktop framework. One of the parts of the model that is listed as
“planned”, involves a Communication View. The study of communications in command and control was conducted as part of the third case study
of this thesis and is described in chapter 6. The Task editor component, which was
suggested as an additional application from which plan information could be extracted, was initially conceived to be a second stage of the ComPlan plan authoring tool. ComPlan, re-implemented as an IRIS plugin, was envisioned as a planning tool to be used alongside standard plan documents so that information about
tasks, units, locations and capabilities could be extracted from the text documents
in which these properties are described.

SemanticClass cl = mappings.get(type);
Long startOffset = annotation.getStartNode().getOffset();
int start = (int) info.getOriginalPos(startOffset);
Long endOffset = annotation.getEndNode().getOffset();
int end = (int) info.getOriginalPos(endOffset);

String str = originalContent.substring(start, end);
log.debug("Found an instance of " + cl + ": " + str);
KnowledgeBaseConnection kbc = planAid.getObjectStore()
        .getKnowledgeBaseConnection();
SemanticObject semObject = retrieveSemanticObject(str, cl, kbc);
triggerChange();
if (semObject instanceof ICLibThingPojo) {
    ICLibThingPojo thing = (ICLibThingPojo) semObject;
    thing.setDisplayNameIs(str);
    if (!Arrays.asList(thing.getAllReferredIn()).contains(filePojo)) {
        thing.addReferredIn(filePojo);
    }
    if (!Arrays.asList(filePojo.getAllContainsReferenceTo()).contains(thing)) {
        filePojo.addContainsReferenceTo(thing);
    }
}

Figure 5.6: Extraction of content elements using the named entity recognition engine in GATE [72].

Figure 5.7: The Map Panel, which brings a map-based interface to geographical locations that occur in plan documents in the RAVEN scenario. [Figure callout: the SPOD location, mentioned in the document "F Maritime Operations RAVEN", is marked on the map.]
5.3 Results
Our results on the affordances and limitations of the Semantic Desktop were based
on an analysis both of the technical properties of the system developed and of
the planning and document management process that we had observed and
collected information on. Specifically, we studied whether we could (1) harvest
semantic elements that were relevant for the production of synchronization matrices
and joint prioritized target lists and (2) use the extracted information to present
useful visual representations of how semantic entities such as tasks, locations and
units relate to one another.
In developing our domain-specific semantic desktop for planning, we concluded that the documents were structured well enough to allow a high precision
in recognizing both content-based and structural patterns. Our data samples were
not large enough to conduct a more thorough analysis, but initial tests did confirm
that both the military locations and the military units and tasks used when
reasoning about JPTLs, as well as synchronization matrices, could be found
with fairly general patterns for matching such entities in the particular documents
we had access to.
The visualization of the elements of the ontology was made using two common
representations in military planning: a timeline chart and a map. Due to the nature
of military operations, specialized systems for C2 often use map-based representations, but, as noted both by us and by the participants of the training exercise in
the DRESDEN scenario, they tend to become very challenging to use. One of the
reasons stated was the visual representation of all possible types of information
in one and the same interface, where several user actions were required specifically to avoid having the screen cluttered with the visual symbols and annotations
created by members of staff. If specialized systems could send their information
to a common semantic desktop environment, visualization components on the semantic desktop for aspects of a plan could be re-used between applications, so that
user actions would not be required for extensive filtering of information in every
C2 system, and information could be consistently presented irrespective of origin.
We made some other observations about the technical structure of Semantic Desktops that warranted caution. The architecture and frameworks required by ontology-based middleware such as IRIS may
result in disproportionately complicated systems for simple tasks, which, in turn, may
prevent the adoption of technologies that could prove useful. Even if they could
be argued to be useful for advanced reasoning and information merging tasks,
their adoption could fail if simple tasks such as entering and retrieving information, which tend to be more common, are more difficult with the system compared
to without it.
Moving forward, we recognized that a more thorough appreciation of the work
process of staff would be necessary for support systems that aspire to support the work process in military command and control, and we hypothesized
that empirical investigations of the structure of data produced as part of staff training could yield insights into the tasks staff are involved in.
6 Communication Analysis
In the third study, we based our work on findings from the previous two studies.
First, from the ComPlan study we had elicited design criteria regarding the use of
an internal reasoning model in a decision support tool. The ComPlan tool was designed to support planning and resource management and the Planning Desktop
was intended to support information management tasks when planning in the technical environment used currently by staff, but we needed to understand the validity
of our criteria in another domain. Second, our work in studies 1 and 2 had made
us aware that work processes in command and control are difficult to describe in
great detail. To describe the work processes of command and control is an active
area of research and not the primary focus of this thesis, but to devise technical
support systems for the process in which commanders are involved, we decided to
study how a support system for the analysis of command and control work could be designed from the lessons learned in studies 1 and 2. The rationale for this choice of
domain was that, for us to support C2 work effectively, we need to understand the
work process involved, and understanding the work process requires researchers
or practitioners (commanders) to study data available from environments in which
commanders operate. We therefore decided to study a set of exercises conducted
as role-playing simulations in computer-simulated environments, where most of
the communications between members of staff, or between staff and their superior
command (simulated by researchers or exercise managers) were available as text.
In two of the scenarios studied, supplementary material was also available in the
form of observers’ logs and in one of the scenarios, in the form of simulation data
and also other material.
Figure 6.1: Participants in one of the role-playing exercises analyzed. To the left, researchers, training staff and technical personnel monitor the events that unfold. To the
right, participants play against human and simulated enemies in an information warfare
scenario. Picture courtesy of the Swedish Defense Research Agency, FOI.
6.1 Material
The material used in study 3 was assembled from role-playing simulation exercises
in military and civilian crisis management. One exercise (LKS) was conducted
with military commanders as part of a series of information warfare training exercises with the Swedish Defense Research Agency (Figure 6.1). The second and
third scenarios were played as emergency management scenarios concerning wildfires and had students and professional emergency managers as participants, respectively. We labelled the study in which students participated C3Fire-05 and the
study with professional operators ALFA-05. In all settings, the participants played
against other human counterparts or simulated systems through computer-assisted
training and simulation systems such as those in Figure 6.1.
These scenarios had been studied previously, or were in the process of being
studied to understand how commanders performed their duties and how well they
did in the situations studied. In the search for well-defined performance indicators
in military command and control, researchers used communication and other data
collected from C2 scenarios to study command and control based on the hypothesis
that those data would be of relevance for describing work in a command staff.
Because of the existing hypothesis that communications indicate staff performance,
we chose to study affordances and limitations of an intelligent decision support
system designed for analyzing communications in command and control, with both
researchers and possibly staff coordination officers as prospective users (see Figure
3.6).
6.2 Study
Provided with the communication material and other related data from
C2 scenarios, we conducted a study in four stages, as outlined in Figure 3.5.
6.2.1 Technical evaluation
First, we used a set of machine learning algorithms to evaluate whether communication patterns that researchers had described manually were detectable with automated approaches (Paper IV). To successfully create a support tool for the analysis of communication patterns and staff behavior, we wished to establish whether
human-produced categorization schemes depended on machine-detectable features in the datasets or not. If the precision of emulating human-produced categorizations through automated methods were very high¹, then the task of
categorizing messages could be reduced to, for instance, a set of propositional rules
as represented in a rule-based classifier.
If, however, the precision proved to be lower than that, but still strictly better than
chance, we expected that the utility of using machine learning would not be based
merely on the precision attained from classification, but on the representation of
the patterns the algorithm is able to extract. The first step in the study gave us clear
indications that, in one of the data sets, text-based classification was able to detect
a known transition in a staff workflow. However, all the classification approaches
demonstrated precision results that would prevent the classification of messages
from being used automatically as a support method in C2 research.
6.2.2 Interview study
As a result of this, the second stage in the study consisted of a set of interviews
conducted with active researchers who had been involved in either the scenarios
from which we had collected the material, or other, similar scenarios (Paper V).
The purpose of the interview study was to gain an understanding of how material is
used, from exploration to hypothesis generation to hypothesis testing. Many studies were data-driven, in the sense that, although some included initial hypotheses
regarding teamwork performance, many used the broad research question “how
does the team work under condition X?” to frame reasoning about team behavior
and guide data collection. As a part of either discussions with senior practitioners
or researchers, based on previous experiences or experiences from attending the
exercises as observers or through initial analyses, researchers narrowed their focus to specific issues to be studied. In this process of narrowing research questions
and searching for critical episodes in a team workflow, we saw an opportunity for using
automated clustering of information as a possible support option.
When explaining how they used specialized analysis tools, several interview
participants noted that some tools used in analysis are easy to misuse and that,
¹ Note that the threshold precision required for classification is not defined crisply in this context.
As a conservative approximation, though, a precision on par with that of an e-mail filtering agent could
probably be considered high enough.
apart from domain expertise in command and control, a firm mathematical understanding is required. When using Multi-Dimensional Scaling (MDS) [139], researchers can calculate the correlations between factors that come from estimates
provided by human observers, such as confidence in data or confidence in leadership. A researcher searching for patterns and correlations between such factors has to
properly understand the foundations and mathematical requirements of the model to obtain useful information. In particular, there is no
way for researchers to weigh observational parameters differently, and no means
of calibrating for differences between observers. In the case of LISREL equation
modeling for finding dependencies between observers’ categorizations, researchers
have to be aware that the method requires measurements to feature multinormality
and that factors may not be completely determined (factor indeterminacy) [101].
These constraints on analysis tools are usually not visible to researchers using them.
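As a point of reference for the discussion above, the first step of such observer-based analyses amounts to computing correlations between factor estimates. The sketch below computes a Pearson correlation between two hypothetical observer ratings; note that nothing in the computation reflects observer weighting or calibration, which is exactly the limitation discussed:

```java
// Pearson correlation between two observer-estimated factors. The ratings are
// invented illustrations; the formula is the standard sample correlation.
public class ObserverCorrelation {
    public static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        // Hypothetical per-session ratings on a 1-7 scale from one observer;
        // the model treats them as exact, with no calibration between observers.
        double[] dataConfidence = {3, 4, 5, 5, 6};
        double[] leadershipConfidence = {2, 4, 4, 6, 6};
        System.out.println(pearson(dataConfidence, leadershipConfidence));
    }
}
```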
6.2.3 Workflow Visualizer
In the third stage of the study, we designed the Workflow Visualizer support tool.
In designing the tool, we used two of the criteria from the first study presented in
this thesis: graceful regulation (in Paper VI called open-endedness) and transparency.
Those two criteria were considered most valid in the context of communication
analysis, based both on the precision results of automatic classification of messages
and the interview results from the first part of the study, in which participants
stated the difficulties in using specialized support tools that incorporated advanced
models that were easily misused or could not be communicated easily to others.
The Workflow Visualizer was constructed as an exploration tool in communication analysis. When we began the third case study, we conjectured that there
could be at least three different outcomes from the initial stage, in which we used
the annotated research data from the command and control scenarios to evaluate
machine learning algorithms:
1. The precision results would be close to those attained by random classification, so there would be no correspondence between machine classification
and human classification of messages.
2. The precision results would be significantly better than chance, but still not
good enough for automatic detection of a team workflow, given a description
of a workflow.
3. The precision attained when classifying annotated messages would be high
enough to allow us to automate the process of recognizing a team workflow from team communications.
The results from the first stage of the study (evaluation of automatic message
classification) were inconclusive with respect to the workflow classification. On
the one hand, a known transition in a simple workflow (in the LKS dataset) could
be detected by successively classifying a dataset with fictive categorizations and
94
6.2. Study
Figure 6.2: A version of the Workflow Visualizer preceding the one reported in Paper VI.
This version was intended to detect how messages sent between members of staff would correspond to a formal workflow specification, similar to how Workflow Management Systems
[1] work.
selecting the categorization that yielded the best precision relative to random classification. On the other hand, when classifying messages as pertaining to one of
the four classes that had been used in the manual categorization schemes, all machine learning approaches showed very low precision scores (approximately 50%).
The development of the Workflow Visualizer prototype began in parallel with the
evaluations and was initially targeted towards helping teams understand the current status of a joint, known workflow, where the team uses free-text messaging
for communication. However, as the evaluations gave indications that there was a
large discrepancy between automated text-based classification of workflow-related
message features and human classifications, and as the interviews provided information that there is no well-defined, detailed and accepted workflow
model (to which researchers assumed members of staff to adhere), the support
tool was made into a navigation support tool for research on command and control.
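The transition-detection procedure mentioned above, which successively imposes fictive two-phase categorizations on a message log and keeps the split whose classification accuracy most exceeds the chance baseline, can be sketched as follows. The messages and the single-keyword stand-in classifier are invented for illustration; the actual evaluation in Paper IV used the real datasets and machine learning classifiers:

```java
import java.util.List;

// Sketch of detecting a workflow transition via fictive categorizations:
// each candidate split point defines a two-phase labeling of the log, a
// trivial classifier is scored against it, and the split with the largest
// gain over the chance baseline marks a detectable transition.
public class TransitionDetection {
    /** Accuracy of predicting "second phase" from the presence of a keyword. */
    static double accuracy(List<String> messages, int split, String keyword) {
        int correct = 0;
        for (int i = 0; i < messages.size(); i++) {
            boolean predictedSecond = messages.get(i).contains(keyword);
            boolean actuallySecond = i >= split;
            if (predictedSecond == actuallySecond) correct++;
        }
        return (double) correct / messages.size();
    }

    /** Chance baseline: always guessing the larger fictive class. */
    static double baseline(int n, int split) {
        return (double) Math.max(split, n - split) / n;
    }

    public static int bestSplit(List<String> messages, String keyword) {
        int best = 1;
        double bestGain = Double.NEGATIVE_INFINITY;
        for (int split = 1; split < messages.size(); split++) {
            double gain = accuracy(messages, split, keyword)
                    - baseline(messages.size(), split);
            if (gain > bestGain) { bestGain = gain; best = split; }
        }
        return best;
    }

    public static void main(String[] args) {
        // An invented log whose wording shifts after the fifth message.
        List<String> log = List.of(
                "plan route north", "plan supply lines", "plan recon",
                "plan bridgehead", "plan timing",
                "execute phase one", "execute river crossing", "execute recon",
                "execute resupply", "execute withdrawal");
        System.out.println(bestSplit(log, "execute")); // prints 5
    }
}
```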
Figure 6.2 displays the initial iteration of the Workflow Visualizer Tool, in
which e-mail-related workflows between multiple parties can be tracked through
the ontology-based notification and harvesting mechanisms of the IRIS Semantic Desktop (see Section 5.2). In other projects, such workflow monitoring for
the purpose of supporting individuals usually imposed extra tasks on users (e.g.,
[82]), whereas the Workflow Visualizer was intended to automatically match
workflow descriptions to user communications through e-mail harvesting in a Semantic Desktop. E-mails were interpreted as possible transitions between tasks
performed by the actors in the workflow, and e-mails could denote either the initiation or (partial) completion of tasks.

(a) The cluster selection panel of the Workflow Visualizer implemented to support communication
analysis. [The panel lists clusters 1–6 for selection.]

(b) Clusters represented as colored sets of messages along a timeline of events.

(c) The timeline view of the Workflow Visualizer, in which several data sources are displayed using
different colors to indicate their type. [The legend distinguishes communication interferences with major and minor impact.]

Figure 6.3: The Workflow Visualizer re-designed to support communication and workflow
analysis instead of workflow adherence.
This initial tool assumed an underlying formal workflow model in XPDL [65].
As a result of our interview study and our understanding of how researchers conceptualize workflows in command and control (see Paper V for details), we modified the Workflow Visualizer to support researchers in their analysis of communications and other data obtained from C2 exercises. The resulting tool consisted
of several interconnected views of scenario data, much as the ComPlan planning
96
6.3. Results
support tool did. In one of the views, we implemented a text clustering selection
option, where clusters of text messages could be selected, colored and plotted in
a timeline for comparison with other events and with important scenario features
(Figure 6.3).
6.2.4 Workshop evaluation
During the concluding workshop evaluation, which formed the last stage of this
case study, a number of participants from the interview study and one external
participant were presented with a set of scenarios in which the Workflow Visualizer could be used as a support tool for exploring patterns in communication data.
After a walkthrough of the system, which used concrete scenarios the participants were already familiar with, the participants were asked questions regarding the possible utility of the tool in their normal work settings. They
were also asked to comment on the features of Workflow Visualizer presented.
In their discussions during the workshop, they generally framed their remarks by
relating to the two design criteria open-endedness and transparency. The underlying
text clustering technology, which was used to present clusters of possibly related
messages in the communication flow, was considered difficult to understand, but
they expressed appreciation of the flexible manner in which the system allowed
manipulation of data. Although the user interface for presenting and manipulating
clusters in the Workflow Visualizer was considered opaque at first sight, the workshop did not feature a direct comparison of the interface of Workflow Visualizer to
those of other cluster management tools, such as the Infomat cluster exploration
tool [218], which provides general clustering and exploration capabilities. Infomat is representative of clustering tools that offer general capabilities but use
interfaces adapted for expert use, whereas the Workflow Visualizer
had an interface adapted for C2 data analysis.
6.3 Results
Our results from the third case study came from both the empirical evaluation of
machine learning approaches and the subsequent interviews, support tool design
and workshop evaluation.
6.3.1 Results from classifier evaluations
The classification of messages in the first stage provided us with some data points regarding the precision of classification. However, the precision results alone did not
tell us how to use the classification mechanism to support users in
exploring communication and other data from C2 scenarios. Therefore, we studied
the models constructed, which provided valuable information about the
constraints of machine classification.
The features that a classifier can recognize and expose to the user as part of the
model constructed depend on how the classifier constructs its model. A classifier
typically builds an internal model by receiving a set of instances (messages) with
a limited, fixed number of attributes and a single classification attribute [267]. A
rule- or tree-based classifier may reveal combinations of attribute values or ranges
of values that correspond to decision classes in a way that a human could understand and interpret. However, some features of a message flow may be inaccessible
to such a classifier if the features concern the relationship between messages (for example, the timing between messages, or the relation between the last recipient of a
message and the current sender) or the relationships between attributes of a single
message (for example, the relation between the roles of the sender and the recipient). Also, classifying with respect to text tends to be considered a problem in
itself, requiring methods for classification other than those dealing with data types
for which there exists a natural, total ordering. Texts can be analyzed with respect to string distances, keyword occurrences, grammatical structures et cetera,
although there is no universal, domain-independent, language-independent definition of similarity. This, in turn, means that it is difficult to combine message text
and other discriminating attributes of messages for classification purposes without explicitly describing how text features can be combined with other attributes,
which would probably be a domain-specific description. In our evaluation of the
precision of a combined classifier, we concluded that the text-based classifier was
much more important to classifying messages than the non-text-based one, but
none of the classifiers had access to both text and other attributes. This means that
patterns that emerge when submitting an order² can elude the classifier.
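One way to work around this constraint is to precompute relational properties into ordinary per-message attributes before classification. The sketch below derives two such features from a message sequence, time since the previous message and whether the sender is replying to the previous message; the field and role names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: relational features between messages, which a per-instance
// classifier cannot see, are precomputed into fixed per-message attributes.
public class RelationalFeatures {
    public record Message(String sender, String recipient, long timestamp) {}
    public record Features(long secondsSincePrevious, boolean replyToPrevious) {}

    public static List<Features> derive(List<Message> log) {
        List<Features> out = new ArrayList<>();
        for (int i = 0; i < log.size(); i++) {
            if (i == 0) {
                out.add(new Features(0, false));
            } else {
                Message m = log.get(i);
                Message prev = log.get(i - 1);
                out.add(new Features(
                        m.timestamp() - prev.timestamp(),
                        // Is this sender answering the previous message?
                        m.sender().equals(prev.recipient())));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Invented staff roles and timestamps (seconds into the exercise).
        List<Message> log = List.of(
                new Message("J3", "J2", 0),
                new Message("J2", "J3", 45),     // a reply, 45 s later
                new Message("J4", "J3", 200));   // unrelated sender
        List<Features> f = derive(log);
        System.out.println(f.get(1).replyToPrevious()); // prints true
        System.out.println(f.get(2).replyToPrevious()); // prints false
    }
}
```

The derived attributes can then be appended to each instance, so that an ordinary attribute-value classifier gains indirect access to the message-to-message structure.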
6.3.2 Results from the Workflow Visualizer design and evaluation
The workshop discussions indicated that it is challenging to adopt the design criterion transparency and translate it into more concrete system features. With a tool
that uses a complex internal mechanism such as Latent Semantic Analysis [77] to
build an internal model of texts and produce message clusters, the participants
indicated that the presentation of how the clusters are formed and how similarity
between texts is compared would be critical for adoption of the technique. Although the Workflow Visualizer may have presented a representation of clusters
and their defining features that was potentially more transparent to users than interfaces of other clustering systems such as Infomat, it was clearly not enough.
² E.g., the combination of certain phrases in the text, submitted from an individual who operates at a certain level in the organizational hierarchy.
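For contrast with the LSA-based clustering discussed above, the sketch below shows the simpler term-vector cosine similarity from which such text-clustering methods start. LSA additionally factors the term-document matrix with a singular value decomposition, which is precisely the step that is hard to present transparently; plain cosine similarity over word counts, by comparison, can be explained directly. The messages are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Cosine similarity between term-frequency vectors: the surface-level notion
// of text similarity on which methods such as LSA build.
public class TextSimilarity {
    static Map<String, Integer> termCounts(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String term : text.toLowerCase().split("\\W+")) {
            if (!term.isEmpty()) counts.merge(term, 1, Integer::sum);
        }
        return counts;
    }

    public static double cosine(String a, String b) {
        Map<String, Integer> ca = termCounts(a), cb = termCounts(b);
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Integer> e : ca.entrySet()) {
            dot += e.getValue() * cb.getOrDefault(e.getKey(), 0);
            na += e.getValue() * e.getValue();
        }
        for (int v : cb.values()) nb += v * v;
        return dot == 0 ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        String m1 = "request helicopter support at the bridge";
        String m2 = "helicopter support approved for the bridge";
        String m3 = "weather report for tomorrow";
        // Shared terms make m1 and m2 similar; m1 and m3 share none.
        System.out.println(cosine(m1, m2) > cosine(m1, m3)); // prints true
    }
}
```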
7 Summary and discussion
In chapter 1, we stated that we would characterize the conditions for incorporating
AI techniques as support tools for command and control. Our work has consisted
of three separate case studies of command and control decision support systems for
planning, information management and communication analysis. In this chapter,
we summarize the results reported in chapters 4, 5, and 6 on the intelligent decision
support systems studied: planning support systems, semantic desktop systems and
machine learning systems respectively. We do this by summarizing the assumptions made by each class of decision support system and the human functions in
each work context that it is critical to support. In Section 7.4 we examine the general aspects of our studies such as the relationship between the technical systems
studied here and other approaches to decision support.
7.1 Requirements for planning support systems
Planning support systems can be designed to support either the automation of plan
construction or the analysis of plans, or combinations thereof. From the ComPlan
study, we have made some observations about the relationship between the domain
of work and the assumptions when building planning support systems.
7.1.1 Assumptions in planning support systems
Planning support systems are often applications of automated planning systems,
where the paradigm employed is that of automatically sequencing operations so as
to form a maximally useful plan with respect to some utility function.
Irrespective of whether the purpose of a planning support system is to create a
sequence of actions (e.g., [191, 241, 27]), to present high-level options for action
(e.g., [29, 68, 254]) or to evaluate users’ selected actions (e.g., [38, 148]), planning
support systems need to have well-defined plan representations. Specifically, there
must be a set of operators that can be assembled in a partially ordered sequence
that, in turn, forms a plan, in addition to a metric for establishing plan utility based
on the sequence of operators. The existence of such metrics is a central requirement of the paradigm of automated planning.
For operators to be assembled, they must have well-defined effects that modify
a world state and that can be interpreted as the preconditions for other actions.
There must be no ambiguity in what the effects of actions are, or whether they are
achieved as a result of performing an action. As a corollary requirement, all objects
that can be manipulated (e.g., moved, observed, destroyed) must be represented
as part of the world description, along with their capabilities for action (e.g., being
movable, visible, or destructible). In the ComPlan Study, we first created a formal
description of a planning scenario using the formal language of a hierarchical task
network planner (Section 4.3) and later found that representation to be unsuitable
for providing support relevant to human planners, given the functions they perform. Constraint management may be a central task for human planners, but they
do not consider constraints in the same way that automated planners do, which
makes it critical to allow a flexible utilization of constraint mechanisms.
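The planning paradigm characterized above, with operators that have well-defined preconditions and effects over an explicit world state, can be made concrete with a minimal STRIPS-style sketch. This illustrates the paradigm's assumptions; it is not ComPlan's actual representation, and the propositions are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal STRIPS-style fragment: an operator is applicable when its
// preconditions hold in the state, and applying it deletes and adds
// propositions, producing the state against which later operators are tested.
public class OperatorSketch {
    public record Operator(String name, Set<String> preconditions,
                           Set<String> addEffects, Set<String> deleteEffects) {
        public boolean applicable(Set<String> state) {
            return state.containsAll(preconditions);
        }
        public Set<String> apply(Set<String> state) {
            Set<String> next = new HashSet<>(state);
            next.removeAll(deleteEffects);
            next.addAll(addEffects);
            return next;
        }
    }

    public static void main(String[] args) {
        Operator move = new Operator("move(unit1, bridge)",
                Set.of("at(unit1, base)", "passable(bridge)"),
                Set.of("at(unit1, bridge)"),
                Set.of("at(unit1, base)"));
        Set<String> state = Set.of("at(unit1, base)", "passable(bridge)");
        // The effect of one operator becomes the precondition of the next;
        // nothing outside the state description can influence the outcome.
        if (move.applicable(state)) {
            System.out.println(move.apply(state).contains("at(unit1, bridge)")); // prints true
        }
    }
}
```

Everything the planner can reason about must appear as a proposition in the state; an ambiguity about whether an effect was actually achieved simply cannot be expressed, which is the mismatch with human constraint management noted above.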
7.1.2 Human functions to support
The group of commanders involved in planning an operation are tasked with determining what actions they should take to achieve the given overarching goal and
what physical resources they are supposed to employ to achieve the desired effects. Furthermore, they need to understand how much they do and do not know
about a situation, the basis for making assumptions about constraints, the capacities and availability of resources, the intentions of potential hostile forces and how
to manage uncertainties during an operation.
The plan document in itself may be a fragile (by-)product of the process of understanding the constraints involved in planning, and evaluating performance in command and control is challenging, if not for the technical reasons involved in determining outcomes then due to the number of perspectives offered on the nature of command and control (see Section 2.1.5). However, no perspective on command and control disregards the activity of understanding the environment, regardless of how that activity is weighed against other functions of command and control. In most models of human command and control, understanding the environment precedes the activities of formulating and communicating intent, and of controlling and monitoring the outcome of activities performed by subordinate units.
As one part of understanding the environment, commanders have to make sure
that they synchronize their actions with those of other commanders. In the original
view of Network-Centric Warfare [5], it was considered beneficial to operations
if commanders could synchronize their actions among themselves autonomously,
without interventions from higher staff. Although self-synchronization is challenging to achieve, and the exact conditions under which such strategies for coordination work better than others have not been well studied [141, 90], the concept gained
political appeal in Sweden [127] and was used as a guiding principle during the
transformation of the Swedish Armed Forces from a territorial defensive force into
a rapid expeditionary force. One of the challenges involved in self-synchronization
is the difficulty of understanding the plans and intentions of other commanders. Coordinating actions successfully among commanders is critical when
conducting an operation, irrespective of the command strategy employed [69].
7.2 Requirements for semantic desktop systems
Semantic Desktop systems are intended to support the management of information
in desktop applications by providing a unified, underlying ontology model, which
can be used to reason about objects in the domain of work.
7.2.1 Assumptions in semantic desktop systems
To create a semantic desktop system capable of making inferences about objects in
the application domain, one has to produce a formally sound ontological description of a domain in the language of the desktop ontology. The OWL language of
the IRIS semantic desktop system requires that objects belong to a specific class and
that classes are specified using necessary and sufficient conditions for membership.
The simplest form of ontological description of objects consists of classes with no
required properties. Without object properties, an inference engine cannot classify
objects automatically, but the ontology could still be used for filing purposes in the
same way as folders in filesystems. With more elaborate class descriptions, where
objects must have attribute values that make their classification unique, inference
engines can determine the relationships between objects and classes, and between
classes and other classes, to support the organization of knowledge.
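As a rough illustration of classification against necessary and sufficient conditions (not the actual IRIS/OWL machinery; the class names and conditions below are invented):

```python
# Sketch of classification against necessary-and-sufficient conditions:
# an object belongs to a class exactly when its attribute values satisfy
# all of that class's conditions. The class names and conditions are
# invented; a real semantic desktop would express them in OWL.

classes = {
    # An "Order" is any object with a recipient unit and a deadline.
    "Order": lambda o: "recipient_unit" in o and "deadline" in o,
    # A "Report" is any object with an author and an observed event.
    "Report": lambda o: "author" in o and "observed_event" in o,
}

def classify(obj):
    """Return every class whose membership conditions the object satisfies."""
    return {name for name, condition in classes.items() if condition(obj)}

doc = {"recipient_unit": "4th Bn", "deadline": "2011-06-01"}
print(classify(doc))  # {'Order'}

# An object with no matching attribute values cannot be classified
# automatically and would have to be filed manually, as in a folder.
print(classify({"title": "untitled"}))  # set()
```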
Harvesting objects from the products (e.g., documents, e-mail or browser data)
or interactions (e.g., sending e-mail, opening documents, clicking on links) in a
semantic desktop requires that all domain-specific semantic objects are available
in machine-readable format. Interactions with a desktop environment are defined
by the environment, and objects in the form of files are naturally opened, saved and
sent through the semantic desktop system, which means that they are accessible
for inspection at a low level. However, to perform reasoning tasks related to the
domain of work relevant to a user, there must be a mapping between those objects
that are relevant to a desktop system and those objects relevant to a user. In chapter
5, we described how concepts and objects that were relevant to reason about in a
C2 context could be found in documents. In general, to capture the semantics
behind user-produced interactions or documents would require either:
1. a restricted, domain-specific workflow management system for communication or a document production system for documents that directs users to
use only expressions that are machine-readable,
2. an annotation mechanism for everything that is to be machine-translated
(documents, messages), embedded in the desktop environment, or
3. well-structured documents or messages that follow typographical, machine-comprehensible standards that map semantic objects to the structure of a document or to well-defined patterns within the texts of documents or messages.
We also described a system for managing documents that had not been authored
using dedicated authoring tools, and that had no annotation procedures. Due to
the types of documents and the characteristics of the semantic objects contained
in these documents, we were able to extract objects of relevance for the planning
process and demonstrate how they can be manipulated in representations that are
dedicated to each type of object (unit synchronization orders and military locations).
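The third option above, extraction from typographically regular documents, can be sketched with simple patterns. The document conventions assumed below (a "Unit:" prefix, MGRS-like grid references) are invented for illustration; real orders follow their own domain-specific templates:

```python
import re

# Sketch of harvesting semantic objects from well-structured text.
# The patterns assume an invented convention in which units appear after
# a "Unit:" prefix and locations as MGRS-like grid references; real
# military documents follow their own domain-specific templates.

TEXT = """
Unit: 2nd Marine Battalion moves at 0600 to 38SMB4595.
Unit: Engineer Coy supports at 38SMB4710.
"""

# A unit name is whatever follows "Unit:" up to the verb of the sentence.
UNIT_RE = re.compile(r"Unit:\s+(.+?)(?=\s+(?:moves|supports))")
# A location is a grid reference such as 38SMB4595.
LOCATION_RE = re.compile(r"\b\d{2}[A-Z]{3}\d{4}\b")

units = UNIT_RE.findall(TEXT)
locations = LOCATION_RE.findall(TEXT)
print(units)      # ['2nd Marine Battalion', 'Engineer Coy']
print(locations)  # ['38SMB4595', '38SMB4710']
```

Such patterns are brittle in exactly the way discussed above: they only hold for documents whose typographical structure is stable.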
7.2.2 Human functions to support
When planning, human planners collaborate to develop courses of action by creating and exchanging a set of documents. In these documents, there are cross-references to units, locations, targets, timing and background information found
in other documents. Some information may be duplicated in several documents to
make them easier to read, and parts of each document are based on information in
other documents. As several actors independently make changes to their respective
documents, misunderstandings and errors are almost inevitable over the course of
a few days. One planner may make assumptions about the status of a synchronization matrix (that it is nearly finalized) and decide to work on a plan to implement
the tasks assigned to his or her units. A few hours later, changes are made to the
synchronization matrix that are not propagated properly and which lead to confusion later on. Alternatively, a secure point of debarkation may be marked as only tentative and not definitely decided, but be misunderstood by naval officers who start
planning for debarkation at the suggested location only to discover later in their
planning that the tentative location has been changed by others.
One compounding cause of the problems in the planning scenario we have described is the use of office documents per se for structuring information. However, in an open environment where high demands are put on interoperability and explicit demands are made on using off-the-shelf products that are likely to continue to be maintained and upgraded, standard tools for document preparation are likely to prevail. This makes it crucial for decision support systems to make use of existing systems and data.
7.3 Requirements for machine learning systems
In our third case study of support systems for command and control research, we
investigated how machine learning approaches to classifying and clustering information could support the understanding of C2 workflows. As part of our work, we also constructed the Workflow Visualizer tool to help in the exploration and analysis of C2 data sets.
7.3.1 Assumptions in machine learning systems
Machine learning works in either supervised or unsupervised mode. In supervised mode, a machine learning classifier classifies instances as belonging to one of a fixed number of classes, given a set of already classified instances. In unsupervised mode, a machine learning clusterer groups instances, given a proximity metric. In supervised mode, a classifier can produce a model from a set of observations, which is a condensed, and in most cases generalized, representation of the patterns in data that correspond to the classifications made.
There are many heuristics which can be applied regarding exactly how to generalize patterns from the given training examples and represent the condensed
model, such as propositional rules, decision trees, decision networks, decision tables et cetera. The patterns that can be detected by machine learning systems
concern relationships among attribute values of single instances in a dataset. Most
machine learning approaches use only a fixed number of attributes, and do not
consider patterns that involve sequences of several observations. Patterns that
can be found as combinations of patterns in text and patterns in other attributes
are usually not considered. For a pattern to be detected, there has to be a set of
examples of classified instances in supervised learning, with examples of all the
possible classes that may be found, and which the supervised learning algorithm
uses to create a classification model. Alternatively, in unsupervised learning, there
has to be a well-defined distance metric for comparing instances with one another
so that clustering can be performed on all instances.
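The two modes can be contrasted in a deliberately minimal sketch, using a nearest-centroid classifier and a greedy threshold clusterer; the data and labels are toy examples, and real systems use far more elaborate models:

```python
# Supervised: labeled examples yield a condensed model (here, per-class
# centroids) that classifies new fixed-length attribute vectors.
# Unsupervised: only a distance metric is given, and instances are
# grouped by proximity. All data below is toy data for illustration.

def dist(a, b):
    """Euclidean distance: the well-defined proximity metric."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_centroids(examples):
    """Supervised mode: build a generalized model from labeled instances."""
    by_label = {}
    for vec, label in examples:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def classify_nearest(model, vec):
    """Assign the class whose centroid is closest to the instance."""
    return min(model, key=lambda label: dist(model[label], vec))

def cluster(instances, threshold):
    """Unsupervised mode: greedy grouping using only the distance metric."""
    groups = []
    for vec in instances:
        for group in groups:
            if dist(group[0], vec) < threshold:
                group.append(vec)
                break
        else:
            groups.append([vec])
    return groups

model = train_centroids([((0, 0), "calm"), ((1, 0), "calm"),
                         ((9, 9), "crisis"), ((10, 9), "crisis")])
print(classify_nearest(model, (8, 8)))                        # crisis
print(len(cluster([(0, 0), (1, 1), (10, 10)], threshold=3)))  # 2
```

Note that both modes rest on the assumptions discussed above: a fixed number of attributes per instance, labeled examples of every class in supervised mode, and a well-defined distance metric in unsupervised mode.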
The performance of a machine learning system is usually measured as a function of the precision attainable. In some instances, however, precision alone is not enough to determine the utility of a classifier. For example, when filtering junk e-mail, error requirements are not symmetric, as the acceptance of false positive classifications (type 1 errors, classifying non-junk messages as junk) is typically much lower than of false negative classifications (type 2 errors, classifying junk e-mails as non-junk messages). Thus, the evaluation of e-mail classification systems would need to consider the entire confusion matrix.
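The asymmetry can be made concrete in a small sketch; the cost weight of 10 for false positives is an invented illustration, not an empirical figure:

```python
# Sketch of a full confusion matrix for junk-mail filtering, with an
# asymmetric cost: losing a legitimate message (false positive, type 1
# error) is weighted far more heavily than letting junk through (false
# negative, type 2 error). The weight of 10 is purely illustrative.

def confusion(actual, predicted, positive="junk"):
    """Count true/false positives and negatives for one positive class."""
    m = {"tp": 0, "fp": 0, "tn": 0, "fn": 0}
    for a, p in zip(actual, predicted):
        if p == positive:
            m["tp" if a == positive else "fp"] += 1
        else:
            m["fn" if a == positive else "tn"] += 1
    return m

def asymmetric_cost(m, fp_weight=10, fn_weight=1):
    """Weight the two error types differently instead of summing them."""
    return fp_weight * m["fp"] + fn_weight * m["fn"]

actual    = ["junk", "junk", "ham", "ham", "ham"]
predicted = ["junk", "ham",  "junk", "ham", "ham"]
m = confusion(actual, predicted)
print(m)                   # {'tp': 1, 'fp': 1, 'tn': 2, 'fn': 1}
print(asymmetric_cost(m))  # 11: the one lost ham message dominates
```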
7.3.2 Human functions to support
In the third case study presented, we described the work of exploring relationships
in data obtained from C2 scenarios. The analysis of how researchers explore data
sets of communications in command and control is a meta-analysis of the work
in command and control. In the workflow presented in paper V, the researchers
describe how they selected research material for closer study and how they drew
conclusions from the material they had processed. They described an iterative,
explorative process, in which the selection of data sets and the visualization of
different kinds of patterns were central to their work. In some instances, when faced with the amount of data to analyze, they were forced to change their research questions and to restrict their studies of team communications due to the time required to create high-level patterns of communication episodes and responsibilities.
7.4 General discussion of our results
In the following sections, we discuss some general implications of advances in research into decision-making for the development of intelligent decision support
systems to put our results in context. Decision support systems that are based
on classic decision-making research (the research into rationally choosing between options) risk becoming irrelevant as a result of the naturalistic paradigm
of decision-making research [53]. For problem instances in command and control
that we consider to be more wicked than tame (see Section 2.2.1), it seems as if intelligent decision support needs to become something other than systems for automating the generation of solutions, given the subordinate role of solutions (such as plans) when managing wicked problems. Despite this, much decision support
research still considers classical utility-optimization support as the primary aim for
DSS research [230].
Some of the recent advances in decision-making research may be difficult to
operationalize as guidelines for decision-support systems (see Sections 2.1.3 and
2.1.4), and new appreciations of the products of work in command and control
may be challenging to transform into the metrics for performance (see Section
2.1.5). Similarly, ontologies as a basis for intelligent support systems risk becoming
irrelevant if they are based on restricted domains and are restricted to formalisms
that are primarily computable, and do not represent the concept of knowledge as
used by human beings (see Section 2.8.2).
7.4.1 Critiquing and Naturalistic Decision Making
The first stage of our first case study involved the creation of a critiquing system, that is, an application for counteracting possible errors in human decision-making. The concept of critiquing systems relies on the classical decision-making
notion of choosing rationally between alternatives [235] and the possible human
errors that can arise from our limitations in perception and reasoning (see Section
2.7). However, with the advent of naturalistic decision making, decision making
studies no longer considered individual decision makers’ biases in choosing between options as relevant for understanding the conditions for, and outcomes of,
decision processes [53]. Although the research field concerned with intelligent decision support may not use the term critiquing with the same meaning as it was
originally intended, it refers to a manner in which a specific class of decision support systems—expert systems—were modified and extended to suit the needs of
decision makers based on theories of human cognition. In this thesis, we advocate that other types of intelligent decision support systems must undergo the same kind of adaptation.
7.4.2 Design Criteria
The design criteria for intelligent decision support that we have discussed in this
thesis are neither new nor sufficient for describing the necessary properties of support systems based on the techniques examined. For example, Jones and
Jacobs [145] propose to use the acronym SOFIA for describing desirable features
of systems for cooperative problem-solving, where SOFIA stands for Situated,
Open, Flexible, Interactive Automation. Substituting transparency for Flexible,
open-endedness for Open, and event-driven feedback for Interactive Automation,
our criteria match those proposed by Jones and Jacobs well. Also, their requirement for support systems to be Situated is intended to mean “aware of the context
of work”, which is specifically what we advocate for Semantic Desktop systems
in study 2. The design criteria are not entirely sufficient for the production of intelligent support systems for command and control as they do not, for example,
consider general usability criteria [201], something that is outside the scope of this
thesis.
However, the contribution of the design criteria put forth and tested in our
studies comes from their use in the context of reasoning about affordances and
about limitations of intelligent support systems for command and control. Transparency can be understood as the degree to which a user is able to grasp the affordances and limitations of the underlying support technology; open-endedness defines the mechanism by which a system manages the inherent limitations of the reasoning engine it is built on; and event-driven feedback is argued to make the user aware of what the system is able to react on, thereby improving the user's understanding of the system's reasoning process. We maintain that only by properly understanding the contextually relevant affordances and limitations of each particular support technology will we be able to produce support tools that demonstrate transparency.
Apart from the technical systems having relevant affordances, one could also
argue that there would have to be a good fit between tasks, individual competencies and technology, as proposed by Goodhue and Thompson [114]. One potential
issue with fit as defined by Goodhue and Thompson would be that their concept
of fit did not relate any evaluation factors positively to technology for non-routine
tasks. One would then either have to assume that less structured, non-routine
tasks are ill-suited for decision support (see Section 2.3), or one could merely note
that the concept of fit as specifically described by Goodhue and Thompson may
be difficult to use as guidance in decision support systems design until positively correlated factors of fit between non-routine, complex decision-making tasks and
intelligent decision support systems can be established.
7.4.3 Methodological issues
In case studies 1 and 2, the evaluation method used was analytical, in the sense
that we reasoned about system properties by comparing descriptions of work in
the domain to the technical properties of the systems used and developed. An
evaluation of the effects tools have on work practices would require an appreciation of what effects can be, what they should be, and how they can be measured.
In Section 2.1.5.1, we describe the general problems inherent in measuring team
performance in knowledge-intensive tasks such as decision making at higher levels
in an organization. As a compromise between emphasizing the internal validity at
the expense of external validity in laboratory experiments, and emphasizing the
external validity at the expense of internal validity in field studies [51], researchers
have used hybrid environments for simulating events and monitoring user actions
while retaining organizational structures and social interactions (e.g., [221, 259]).
One of the challenges in combining design science methods and empirically
driven studies of qualitatively new command and control systems lies in understanding the interplay between tasks and technology. The most common methods
for evaluating fit, as proposed by Goodhue and Thompson [114] and Zigurs and
Buckland [270], stand in contrast to the needs of systems designers when they
develop systems intended to replace entire organizational structures in command.
Cummings [71] suggests that the design of decision support systems for revolutionary command and control1 presents a challenge best met by augmenting traditional task analysis methods (e.g., [123, 22, 39]) with an analysis of that which can
be assumed stable, even after the introduction of new technology—social norms,
organizational culture and ethical issues—and further, with that which can be simulated in advance. The second part, simulation of new domain aspects, could be
conducted through simulated environments similar to those used in the training exercise in case study 2 (chapter 5) and the simulated information warfare scenario
studied in case study 3 (chapter 6). Qualitatively new tools for the domains studied in this thesis could in the same way be introduced in simulated environments
to study them under controlled conditions.
However, evaluating the impact of technology on performance would still
present a challenge, as user evaluations of performance are the most common
forms of performance evaluation in command and control, and as user-evaluated
measurements of fit with respect to task performance do not substitute for objective
measures of the quality of decision outcomes even for simple tasks [113].
7.4.4 Alternative approaches
One can approach the design problem of creating intelligent decision support either from the domain of work, selecting existing technologies that best fit the domain as characterized by task descriptions [270], or from current technical systems, reasoning about how their features may be developed and incorporated into organizational settings [230].
Information management presents an example domain in which one could consider alternative approaches to support tasks performed by people. For example,
if the task is to find related information in a collection of documents and maintain an overview of the most current information in them, then "smart folders" that act as live searches and present a set of documents based on content filters could probably be suitable target technologies. If the task is to have
an intuitive interface for performing common tasks that involve searching several
databases, web sites or applications, commercial personal assistant software such
as Siri [238] could probably be useful in many such workflows.
Technologies for supporting commanders may be conceptualized using an ontology from computer science. According to such an ontology, systems can be
1 I.e., domains where both command structures, user competencies, hardware, and sensors are in
the process of being defined
described as agent-based systems [258], Markov decision processes [2], multiple decision criteria analysis systems [29], or case-based reasoning systems [173].
However, one could also relate to other properties of systems, such as the ability to bring information to others directly, to share information and collaborate at the same physical location, or to provide at least some functionality in the absence of electrical power.
In Weiser’s vision of ubiquitous computing [262], computers play a part in
our lives, not as central artefacts that we must consider consciously, but as objects
that are ready-at-hand without any conscious thought [131]. In his critique of
technologically obtrusive environments such as virtual reality, he made the argument that the best kind of support is the one we do not even have to think about
to understand [263]. One way of supporting planners, using Weiser’s ideas, would
therefore be to augment simple, paper-based tools, which are easy to understand
and already display many qualities essential for command work, with additional
features found in online C2 systems (see, e.g., [252]).
Some issues with command and control systems may not even be related to
the underlying technological concepts employed, but rather to general usability
issues, or cultural, practical, organizational and social aspects related to general
system acceptability [201]. To properly understand how a system contributes to
a context of work may require analysis at the high level of organizational culture and user attitudes towards technology, and at lower technical levels concerned with system responsiveness, integration into users' workflows and a system's fit with
user tasks. Although such analyses may be unfeasible in many system development projects, an appreciation of the existence and importance of these factors
that shape acceptance and utility may help system designers consider alternative
options for how to use existing support technologies and how to devise and deploy
qualitatively new ones.
7.5 Future work
All three case studies provide opportunities for further studies. The ComPlan combined planning tool could be used as a conceptual design vehicle in constructing
plan support tools for overcoming the practical difficulties involved in using current information systems for planning. For example, the system depicted in Figure 5.2 offered such a plethora of graphical representations in one single view that
the participants observed during the training exercise struggled to make sense of
where individual units were, and of the units' constraints on movement and action
that were encoded in the support tool. The flexible management of constraints in
ComPlan could provide inspiration for considering alternative options for using
constraints in planning systems.
The Planning Desktop would have to be integrated in a technically more mature environment, such as NEPOMUK [120], to be properly evaluated. In addition, the mapping between visual components and ontological objects would have
to be made accessible to and configurable by users for realistic use scenarios. Finally, the text clustering technology demonstrated as part of the Workflow Visualizer in case study 3 could be integrated as a component in ESDA tools already
in use to provide us with a better understanding of how hypothetical message and
observation clusters, and other filtering tools for team workflows that were embedded in the Workflow Visualizer, contribute to the understanding of teamwork.
7.5.1 Future research questions
In the design of intelligent decision support, we have described how researchers
from different traditions2 have reasoned about the desired properties of intelligent
decision support systems, decision support systems, and information systems in
general, from the perspective of their respective methodological and epistemological traditions.
Although these different traditions stipulate different validations of knowledge
and indeed different definitions of what constitutes knowledge, at least as material
for scholarly papers, we recognize the merits of each approach in advancing our
understanding of intelligent decision support, and we maintain that there is
a wealth of opportunities for cross-pollination in the particular case studies analyzed. With an improved agility in design and ability to reflect on the systems we
build, we argue that the art of constructing intelligent systems can significantly
further our understanding of how we interact with, understand, and may best use
intelligent decision support. When balancing requirements for relevance and rigor in determining the performance of intelligent support techniques, both positive and negative results for the theoretical performance of autonomous problem-solving systems can be valuable as foundations for the design of decision support systems or cognitive models of decision making.
There are several possible research questions appropriate when applying design science-oriented research to study the role of AI techniques in complex decision
domains such as military command and control:
• How can the existing technology for automatically solving problems be
leveraged, modified, or extended to fit as support for humans managing complex problems?
Critiquing systems designers identified new modes for using knowledge-based problem-solving agents to compensate for incomplete and brittle domain knowledge in expert systems. Similarly, we might ask:
• How can the design space for other AI technologies such as learning systems
and automated planners be extended to maximize the benefits of each technology for complex, naturalistic decision-making settings while managing
the inherent technological constraints?
There is also a need to address issues that have implications for design-science-oriented research and which could be studied using methods from cognitive science and information systems research:
2 design science-oriented, system-building computer scientists, or social science-oriented
information-systems researchers.
• Do design criteria for intelligent decision support systems discriminate between
different approaches to using specific kinds of intelligent decision support or
are all uses of a certain technology considered to be similar according to
the criteria, and do the criteria make particular recommendations for decision making in complex, naturalistic settings?
• Can intelligent agents that encode cognitive models of human behavior be
used reliably as instrumentation of external validity with respect to how well
they represent human reasoning, or do they merely demonstrate the internal
validity of models, that is, that the computational models underlying them are
computationally sound in some formal sense? Furthermore, do they actually
translate into a suitable basis for constructing support to human decision
makers? The lure of artificial intelligence may, in the words of Hutchins,
“lead us to attempt to create more and more intelligent agents rather than
more powerful task-transforming representations.” [140, p. 171].
8 Conclusions
Implementations of intelligent decision support systems for command and control
are of great importance to a variety of stakeholders, but the outcome of implementing and deploying each system is uncertain due to the multiplicity of purposes for
which systems are built, methods used to design and evaluate them, assumptions
regarding their function, and possible impacts that are implicitly or explicitly defined. Intelligent decision support may be considered as
• technological investments that may lead to commanders being efficient, less
prone to errors, and capable of assessing a wider array of intelligence data, or
that may lead to commanders being overloaded with information and bogged
down with the intrinsics of information systems instead of communicating
intent and coordinating actions;
• systems that change the nature of command and control in that they enable commanders at the same level of command to communicate and coordinate with one another directly and efficiently through a complete situational
overview, or systems that introduce anarchy and confuse commanders by
muddling accountability in command; or
• instruments to implement models of human cognition, which can serve as
verification and concretization mechanisms of cognitive models of human
behavior, or instruments acting as means to substitute intransigent, incomplete and potentially flawed algorithms for the richness of human thought.
The relationship between statements about human cognition and guiding principles for intelligent decision support systems is at the heart of this thesis. We have attempted to do two things: (1) to bring current definitions of human reasoning and problem management to bear on the design and concrete implementation of three intelligent decision support systems, and (2) to maintain a close relationship with authentic use scenarios in which these systems are to be deployed.

Table 8.1: The affordances and constraints of the three intelligent decision support categories studied in this thesis.

ComPlan
  Affordances: Management of hard constraints and graphical presentations of violations of soft constraints on tasks in a user plan.
  Constraints: Hard constraints must be stated for the propagation of constraints, and precise goal formulations are required for critiquing of task structure.

The Planning Desktop
  Affordances: Semantically coupled event-based feedback and visualization of the use of domain-specific entities in documents.
  Constraints: Machine-identifiable patterns are required in the structure or content of desktop products. A Semantic Desktop architecture with knowledge representation in both ontology languages and program code is required.

The Workflow Visualizer
  Affordances: Visualization and exploration of patterns in communication data.
  Constraints: Requirements on dataset size and composition to make reliable predictions; unknown precision and relevance of classification and clustering.
Specifically, we stated two research questions at the beginning of this thesis:
• What affordances and constraints do decision support systems based on AI techniques
have that are relevant for the support of commanders?
• Based on these affordances and constraints, how can AI techniques be incorporated as
support tools for commanders to maximize the utility of support tools?
In Table 8.1 we summarize the affordances and constraints of the three intelligent decision support systems studied. These are the main features of the three types of support systems that have a direct relevance to user tasks in dynamic command and control settings.
The first case study elicited three design criteria that were based on the characterization of affordances of AI planning systems in relation to managing military
operations and civilian crisis management. In planning such operations, the most
central human task is to understand the environment, including the constraints
and possibilities for action. AI planners are generally good at making use of a formal domain model of preconditions, effects and task decompositions, but require
a formally sound and complete description of the domain to produce action sequences. Based on those observations and taking into account the characteristics
of a concrete, tactical planning scenario, we argued that the three design criteria
transparency, graceful regulation and event-based feedback could be used to direct the development of planning support for command and control. We claim that planning
support systems based on AI techniques can contribute significantly to planners’
appreciation of a situation while overcoming limitations in their formal model requirements if the system primarily adheres to the design criteria suggested. We
also provide an instantiation—ComPlan—as a demonstration of how these design
criteria can be interpreted.
In case study 2, we studied information management in a semantic desktop environment, guided by the same basic research questions regarding affordances
and constraints. The affordances of a Semantic Desktop for staff officers pertain
to the use of desktop information—for example, plan documents produced in word processors—and its presentation to human planners in representations familiar to them. We
demonstrate in practice how domain-specific semantic desktop applications can be
developed to support the management of the domain-specific concepts used during
planning, when these concepts are scattered across several desktop resources that are
concurrently accessed and edited by multiple members of staff. Taken together, semantic desktop systems with domain-specific harvesting and interaction components can provide significant benefits through unified access to the domain concepts used in military
planning.
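The harvesting idea can be illustrated with a minimal sketch that indexes domain concepts across plain-text "desktop resources", giving unified access regardless of which document mentions a concept. A real semantic desktop would obtain its vocabulary from an ontology and parse actual document formats such as OpenDocument; the concept names and file names below are hypothetical.

```python
import re
from collections import defaultdict

# Invented domain vocabulary; a real harvester would read it from an ontology.
CONCEPTS = ["MEDEVAC", "Bravo Company", "Objective Lion"]

def harvest(resources):
    """Map each domain concept to the set of resources that mention it,
    so planners get one entry point per concept rather than per file."""
    index = defaultdict(set)
    for name, text in resources.items():
        for concept in CONCEPTS:
            if re.search(re.escape(concept), text, re.IGNORECASE):
                index[concept].add(name)
    return index

# Hypothetical desktop resources edited by different staff members.
docs = {
    "oplan.odt": "Bravo Company secures Objective Lion at H-hour.",
    "log.txt":   "MEDEVAC requested near Objective Lion.",
}
index = harvest(docs)
print(sorted(index["Objective Lion"]))
# → ['log.txt', 'oplan.odt']
```

Re-running the harvester after each edit keeps the concept index consistent even when several documents are changed concurrently, which is the unified-access benefit argued for above.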
The last study, which examined machine learning approaches to analyzing C2
communications, gave insights into how clustering techniques can support communication analysis. Based on a characterization of C2 research and an
empirical investigation of classification precision in C2 scenarios, we argued that
the critical affordance of machine learning techniques was to expose the features extracted
during pattern building, rather than to produce autonomous classifications
for participants. The evaluation identified several key use cases for the
Workflow Visualizer in C2 studies, which suggest that text clustering for exploring
hypothetical patterns in communications is a viable approach to adapting
automatic machine learning to the needs of C2 researchers.
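The affordance of exposing extracted features rather than finished classifications can be sketched as follows: a greedy bag-of-words clustering of C2-style messages that reports each cluster's most frequent terms for a researcher to inspect. This is a deliberate simplification under invented example data, not the Workflow Visualizer's actual algorithm.

```python
import math
from collections import Counter

def vectorize(msg):
    """Bag-of-words term counts for one message."""
    return Counter(w.lower().strip(".,") for w in msg.split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(messages, threshold=0.3):
    """Greedy single-pass clustering: each message joins the first cluster
    whose centroid it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (centroid Counter, member indices)
    for i, msg in enumerate(messages):
        v = vectorize(msg)
        for centroid, members in clusters:
            if cosine(v, centroid) >= threshold:
                centroid.update(v)
                members.append(i)
                break
        else:
            clusters.append((v.copy(), [i]))
    return clusters

# Invented messages, loosely styled after C2 radio traffic.
msgs = [
    "request medevac at grid 123456",
    "medevac request grid 654321",
    "enemy contact north of bridge",
]
for centroid, members in cluster(msgs):
    # Expose the most frequent terms as the cluster's inspectable "features".
    print(members, [w for w, _ in centroid.most_common(3)])
```

The printed term lists are the point: instead of asserting "message 2 is a contact report", the tool surfaces the evidence (the dominant terms per cluster) and leaves the interpretation to the C2 researcher.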
In general, the dynamics and interplay between the science of human cognition
and basic and applied technological advances present design-oriented research
with significant challenges. As a result, we maintain that the evaluation of intelligent decision support systems should include those critical affordances and limitations that relate to prospective users and are grounded in relevant theory on the
application domain, and we provide three concrete examples of affordances and
limitations for the intelligent support technologies studied in this thesis. With
better knowledge of the affordances and limitations relevant to application domains,
support system designers will be in a better position to construct tools that can
make commanders significantly better at their job of leading military as well as
crisis management operations.
Bibliography
[1]
Wil van der Aalst and Kees Max van Hee. Workflow management: models,
methods, and systems. Cambridge, MA, USA: MIT Press, 2002.
[2]
Douglas Aberdeen, Sylvie Thiébaux, and Lin Zhang. “Decision-theoretic
military operations planning”. In: Proceedings of ICAPS 2004. 2004.
[3]
Mark S. Ackerman. “The Intellectual Challenge of CSCW: The Gap
Between Social Requirements and Technical Feasibility”. In: HumanComputer Interaction 15 (2000), pp. 179–203.
[4]
Advancing open standards for the global information society (OASIS).
Open Document Format for Office Applications (OpenDocument) v1.1. Feb. 2007.
url: http://www.oasis-open.org/specs/#opendocumentv1
.1.
[5]
D. S. Alberts, J. J. Gartska, and F. P. Stein. Network Centric Warefare: Developing and Leveraging Information Superiority. Washington, DC: National Defense University Press, 2000.
[6]
David S. Alberts and Richard E. Hayes. Understanding Command and Control. CCRP Publication Series, 2002.
[7]
D.S. Alberts and R.E. Hayes. Power to the Edge: Command, Control in the Information Age. CCRP Publication Series, 2003.
[8]
Pär-Anders Albinsson and Johan Fransson. “Representing Military Units
Using Nested Convex Hulls – Coping with Complexity in Command and
Control”. In: Proceedings of the First Swedish-American Workshop on Modeling
and Simulation. Orlando, USA, Oct. 2002.
115
BIBLIOGRAPHY
[9]
J. Allen et al. “The TRAINS Project: A Case Study in Building a Conversational Planning Agent”. In: Journal of Experimental and Theoretical AI 7.1
(1995).
[10]
James Allen and George Ferguson. “Human-Machine Collaborative Planning”. In: Proceedings of the Third International NASA Workshop on Planning and
Scheduling for Space. Houston, Texas, 2002.
[11]
David Anderson et al. “Human-Guided Simple Search”. In: Proceedings
of the Seventeenth National Conference on Artificial Intelligence. Austin, Texas,
2000.
[12]
Stephen J. Andriole. Handbook of Decision Support Systems. TAB Books Inc.,
1989.
[13]
Stephen J. Andriole and Stanley M. Halpin. “Information Technology for
Command and Control”. In: IEEE Transactions on Systems, Man & Cybernetics 16.6 (Dec. 1986), pp. 762–765.
[14]
Michael Argyle. The Social Psychology of Work. London, UK: The Penguin
Press, 1972.
[15]
Stefan Arnborg et al. “Information Awareness in Command and Control:
Precision, Quality, Utility”. In: Proceedings of the International Conference on
Information Fusion. 2000.
[16]
David Arnott and Graham Pervan. “A Critical Analysis of Decision Support Systems Research”. In: Journal of Information Technology 20.2 (2005),
pp. 67–87.
[17]
Anders Arpteg. “Intelligent Semi-Structured Information Extraction:
A User-Driver Approach to Information Extraction”. PhD thesis.
Linköping, Sweden: Department of Computer and Information Science,
Linköping university, 2005.
[18]
John Arquila and David Ronfeldt. Swarming: The Future of Conflict. Tech.
rep. RAND, National Defense Research Institute, 2000.
[19]
Ching man Au Yeung, Nicholas Gibbins, and Nigel Shadbolt. “Userinduced links in collaborative tagging systems”. In: Proceeding of the 18th
ACM conference on Information and knowledge management (CIKM’09). Hong
Kong, China: ACM, 2009, pp. 787–796. isbn: 978-1-60558-512-3.
[20]
Paolo Avesani, Anna Pereni, and Francesco Ricci. “Interactive Case-Based
Planning for Forest Fire Management”. In: Applied Intelligence 13 (2000),
pp. 41–57.
[21]
Franz Baader et al., eds. The Description Logic Handbook. Cambridge University Press, 2003.
[22]
Chris Baber. Cognitive Task Analysis: Current use and practice in the UK Armed
Forces and elsewhere. Tech. rep. Human Factors Integration Defence Technology Centre, 2005.
116
BIBLIOGRAPHY
[23]
Olle Bälter and Candace L Sidner. “Bifrost inbox organizer: giving users
control over the inbox”. In: NordiCHI ’02: Proceedings of the second Nordic conference on Human-computer interaction. Aarhus, Denmark: ACM Press, 2002,
pp. 111–118. isbn: 1-58113-616-1.
[24]
Jakob E. Bardram. “Plans as Situated Action: An Activity Theory Approach to Workflow Systems”. In: Proceedings of the fifth European Conference
on Computer-Supported Cooperative Work. 1997.
[25]
John A. Bateman. “On the relationship between ontology construction
and natural language: a socio-semiotic view”. In: International Journal of Human–Computer Studies 43.5/6 (1995), pp. 929–944.
[26]
Allen W. Batschelet. Effects-based operations: A New Operational Model? Tech.
rep. US Army War College, 2002.
[27]
Marcel A. Becker and Stephen F. Smith. “Mixed-Initiative Resource Management: The AMC Barrel Allocator”. In: Proceedings of the 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS-2000).
Breckenridge, Colorado, USA, 2000.
[28]
P. van Beek, R. Cohen, and K. Schmidt. “From plan critiquing to clarification dialogue for cooperative response generation”. In: Computational
Intelligence 9.2 (1993), pp. 132–154.
[29]
Micheline Bélanger and Adel Guitouni. “A Decision Support System for
CoA Selection”. In: Proceedings of the 5th International Command and Control
Research and Technology Symposium. 2000.
[30]
Izak Benbasat, David K. Goldstein, and Melissa Mead. “The Case Research Strategy in Studies of Information Systems”. In: MIS Quarterly 11.3
(Sept. 1987), pp. 369–386.
[31]
Izak Benbasat and Robert W. Zmud. “Empirical Research in Information
Systems: The Practice of Relevance”. In: MIS Quarterly 23.1 (Mar. 1999),
pp. 3–16.
[32]
T. Berners-Lee, J. Hendler, and O. Lassila. “The Semantic Web”. In: Scientific American 284.5 (2001), pp. 28–37.
[33]
Ludwig von Bertalanffy. “An Outline of General System Theory”. In: The
British Journal for the Philosophy of Science 1.2 (Aug. 1950).
[34]
Meurig Beynon, Suwanna Rasmequan, and Steve Russ. “A new paradigm
for computer-based decision support”. In: Decision Support Systems 33.2
(2002), pp. 127–142.
[35]
Marie Bienkowski. “Demonstrating the Operational Feasibility of New
Technologies”. In: IEEE Expert 10.1 (Feb. 1995), pp. 27–33.
[36]
Tristan Blanc-Brude and Dominique L. Scapin. “What do people recall
about their documents?: implications for desktop search tools”. In: IUI ’07:
Proceedings of the 12th international conference on Intelligent user interfaces. New
York, NY, USA: ACM Press, 2007, pp. 102–111.
117
BIBLIOGRAPHY
[37]
Harvey Blume. “The Digital Philosopher”. In: The Atlantic Monthly (Dec.
1998).
[38]
Jim Blythe and Yolanda Gil. “A Problem-solving method for Plan Evaluation and Critiquing”. In: proceedings of the International Knowledge Acquisition
Workshop. 1999.
[39]
C.A. Bolstad et al. “Using goal directed task analysis with Army brigade
officer teams”. In: Proceedings of the 46th Annual Meeting of the Human Factors
and Ergonomics Society. 2002, pp. 472–476.
[40]
George Boolos, John P. Burgess, and Richard C. Jeffrey. Computability and
logic. Cambridge University Press, 2002.
[41]
A. Boukhtouta et al. “A survey of military planning systems”. In: Proceedings of the 9th International Command and Control Research and Technology Symposium (ICCRTS). 2004.
[42]
John Boyd. “A discourse on winning and losing”. Maxwell Air Force Base,
AL: Air University Library Document No. M-U 43947 (Briefing slides).
1987.
[43]
Berndt Brehmer. “Dynamic decision making: Human control of complex
systems”. In: Acta Psychologica 81.3 (Dec. 1992), pp. 211–241.
[44]
Berndt Brehmer. “The Dynamic OODA Loop: Amalgamating Boyd’s
OODA Loop and the Cybernetic Approach to Command and Control”.
In: Proceedings of the 2005 Command and Control Research and Technology Symposium. 2005.
[45]
Leonard A. Breslow and David W. Aha. NaCoDAE: Navy Conversational
Decision Aids Environment. Tech. rep. AIC-97-018. Washington, DC, USA:
Navy Center for Applied Research in Artificial Intelligence (NCARAI),
1998.
[46]
R. Brown. Group Processes - Dynamics Within and Between Groups. Cambridge,
Massachusetts: Blackwell, 1993.
[47]
Joel Brynielsson and Stefan Arnborg. “Bayesian Games for Threat Prediction and Situation Analysis”. In: Seventh International Conference on Information Fusion. 2004.
[48]
Mark H. Burstein and Drew V. McDermott. “Issues in the Development of Human-Computer Mixed-Initiative Planning Systems”. In: Cognitive Technology: In Search of a Humane Interface. Ed. by B. Gorayska and
J.L.Mey. Elsevier Science B.V., 1996, pp. 285–303.
[49]
Vannevar Bush. “As we may think”. In: Atlantic Monthly (July 1945).
[50]
Yuhan Cai et al. “Personal information management with SEMEX”. In:
SIGMOD ’05: Proceedings of the 2005 ACM SIGMOD international conference on
Management of data. Baltimore, Maryland: ACM Press, 2005, pp. 921–923.
isbn: 1-59593-060-4.
118
BIBLIOGRAPHY
[51]
Donald T. Campbell and Julian C. Stanley. Experimental and QuasiExperimental Designs for Research. Boston: Houghton Mifflin Company,
1963.
[52]
Janis A. Cannon-Bowers, Eduardo Salas, and Sharolyn Converse.
“Shared Mental Models in Expert Team Decision Making”. In: Individual
and group decision making: current issues. Ed. by N. John Castellan. Hillsdale,
N.J.: Lawrence Erlbaum Associates, 1993, pp. 221–246.
[53]
Janis A. Cannon-Bowers, Eduardo Salas, and John S. Pruitt. “Establishing the Boundaries of a Paradigm for Decision-Making Research”. In: Human Factors 38.2 (1996), pp. 193–205.
[54]
Rafael Capurro. “Hermeneutics and the phenomenon of information”. In:
Metaphysics, Epistemology and Technology. Research in Philosophy and Technology.
Vol. 19. Elsevier Science Publishers Ltd., 2000, pp. 79–85.
[55]
Arthur K. Cebrowski and John J. Garstka. “Network-Centric Warfare:
Its Origin and Future”. In: U.S.Naval Institute Proceedings 124.1 (1998),
pp. 28–35.
[56]
Anup Chalamalla et al. “Identification of class specific discourse patterns”.
In: CIKM ’08: Proceeding of the 17th ACM conference on Information and knowledge
management. Napa Valley, California, USA: ACM, 2008, pp. 1193–1202.
isbn: 978-1-59593-991-3.
[57]
Alan F. Chalmers. What Is This Thing Called Science? second edition. Open
University Press, 1982.
[58]
Peter Checkland and Sue Holwell. Information, Systems and Information Systems—making sense of the field. Chichester, UK: John Wiley & Sons, 1998.
[59]
Jim Q. Chen and Sang M. Lee. “An exploratory cognitive DSS for strategic decision making”. In: Decision Support Systems 36 (2003), pp. 147–160.
[60]
Adam Cheyer, Jack Park, and Richard Giuli. “IRIS: Integrate. Relate.
Infer. Share.” In: Proceedings of the Semantic Desktop and Social Semantic Collaboration Workshop (SemDesk-2006). Athens, GA, USA, Nov. 2006.
[61]
Aaron V. Cicourel. “The Integration of Distributed Knowlede in Collaborative Medical Diagnosis”. In: Intellectual Teamwork: Social and Technological
Foundations of Cooperative Work. Ed. by J. Galegher, R.E. Kraut, and C.
Egido. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1990.
[62]
William J. Clancey and Reed Letsinger. “NEOMYCIN: reconfiguring a
rule-based expert system for application to teaching”. In: Proceedings of the
7th international joint conference on Artificial intelligence. 1981.
[63]
Delwyn N. Clark. “A literature analysis of the use of science management
tools in strategic planning”. In: The journal of the operational research society
43.9 (1992), pp. 853–870.
[64]
Carl von Clausewitz. On War. Hertfordshire, England: Wordsworth Editions, 1997.
119
BIBLIOGRAPHY
[65]
Workflow Managment Coalition. Workflow Process Definition Interface - XML
Process Definition Language (XPDL). 2002. url: http://www.wfmc.org/
xpdl.html.
[66]
Robin Cohen et al. “What is Initiative?” In: User Modeling and User-Adapted
Interaction 8 (1998), pp. 171–214.
[67]
Nancy J. Cooke et al. “Measuring Team Knowledge: A Window to the
Cognitive Underpinnings of Team Performance”. In: Group Dynamics: Theory, Research and Practice 7 (2003), pp. 179–199.
[68]
G. Cortellessa et al. “User Interaction with an Automated Solver - The
Case of a Mission Planner”. In: PsychNology 2.1 (2004), pp. 140–162.
[69]
Martin van Creveld. Command in War. eleventh edition. Cambridge, Massachusetts: Harvard University Press, 2003.
[70]
Mona J. Crissey et al. “How Modeling and Simulation Can Support
MEDEVAC Training”. In: Proceedings of the 2nd Swedish-American Workshop
on Modeling and Simulation. 2002.
[71]
Mary L. Cummings. “Designing Decision Support Systems for Revolutionary Command and Control Domains”. PhD thesis. School of Engineering and Applied Science, University of Virginia, 2004.
[72]
H. Cunningham et al. “GATE: A framework and graphical development
environment for robust NLP tools and applications”. In: Proceedings of the
40th Anniversary Meeting of the Association for Computational Linguistics. 2002.
[73]
Douglas R. Cutting et al. “Scatter/Gather: A cluster-based approach to
browsing large document collections”. In: Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information
Retrieval. 1992.
[74]
Laura A. Dabbish et al. “Understanding email use: predicting action on a
message”. In: CHI ’05: Proceedings of the SIGCHI conference on Human factors in
computing systems. Portland, Oregon, USA: ACM, 2005, pp. 691–700. isbn:
1-58113-998-5.
[75]
Jim Davidson and Alex Pogel. “Tactical Agent Model Requirements for
M&S-based IT→C2 Assessments”. In: The International C2 Journal (2010).
[76]
Mike Dean and Guus Schreiber, eds. OWL Web Ontology Language Reference.
Feb. 2004. url: http://www.w3.org/TR/2004/REC- owl-ref20040210/.
[77]
Scott Deerwester et al. “Indexing by latent semantic analysis”. In: Journal
of the American Society for Information Science 41.6 (1990), pp. 391–407.
[78]
S. W. A. Dekker and D. D. Woods. “MABA-MABA or Abracadabra?
Progress on Human-Automation Co-ordination”. In: Cognition, Technology
& Work 4.4 (2002), pp. 240–244.
[79]
Sidney W. A. Dekker and David D. Woods. “To Intervene or not to Intervene: The Dilemma of Management by Exception”. In: Cognition, Technology & Work 1.2 (Sept. 1999), pp. 86–96.
120
BIBLIOGRAPHY
[80]
Stephen Dill et al. “SemTag and seeker: bootstrapping the semantic web
via automated semantic annotation”. In: WWW ’03: Proceedings of the 12th
international conference on World Wide Web. Budapest, Hungary: ACM, 2003,
pp. 178–186. isbn: 1-58113-680-3.
[81]
Xin (Luna) Dong and Alon Halevy. “A Platform for Personal Information
Management and Integration”. In: Proceedings of the Second Biennial Conference on Innovative Data Systems Research. 2005.
[82]
Paul Dourish et al. “Freeflow: mediating between representation and action in workflow systems”. In: CSCW ’96: Proceedings of the 1996 ACM conference on Computer supported cooperative work. Boston, Massachusetts, United
States: ACM, 1996, pp. 190–198. isbn: 0-89791-765-0.
[83]
Hubert Dreyfus. “Why Heideggerian AI Failed and how Fixing it would
Require making it more Heideggerian”. Memo on the state of AI. 2006.
[84]
Hubert Dreyfus and Harrison Hall, eds. Heidegger: A Critical Reader. Basil
Blackwell, 1992.
[85]
Magdalini Eirinaki and Michalis Vazirgiannis. “Web mining for web personalization”. In: ACM Transactions on Internet Technology 3.1 (2003), pp. 1–
27.
[86]
Kathleen M. EisenHardt. “Building Theories from Case Study Research”.
In: Academy of Management Review 14.4 (1989), pp. 532–550.
[87]
Mica R. Endsley. “Situation Awareness Analysis and Measurement”. In:
ed. by Mica R. Endsley and D. J. Garland. Lawrence Erlbaum Associates,
2000. Chap. Theoretical Underpinnings Of Situation Awareness: A Critical Review.
[88]
Mica R. Endsley. “Toward a Theory of Situation Awareness in Dynamic
Systems”. In: Human Factors 37.1 (1995), pp. 32–64.
[89]
Max D. Engelhart et al. Taxonomy of Educational Objectives: Book 1 Cognitive
Domain. Ed. by Benjamin S. Bloom. Longman, 1956.
[90]
Elliot E. Entin et al. Inducing Adaptation in Organizations: Concept and Experiment Design. Tech. rep. A768564. Naval Postgraduate School, California
Graduate School of Operational and Information Sciences, June 2004.
[91]
Henrik Eriksson. “An Annotation Tool for Semantic Documents”. In:
The Semantic Web: Research and Applications. Vol. 4519. Innsbruck, Austria:
Springer Berlin/Heidelberg, June 2007, pp. 759–768.
[92]
Henrik Eriksson. “The Semantic Document Approach to Combining Documents and Ontologies”. In: International Journal of Human–Computer Studies 65.7 (2007), pp. 624–639.
[93]
Kutluhan Erol, James Hendler, and Dana S. Nau. “UMCP: A Sound and
Complete Procedure for Hierarchical Task-Network Planning”. In: Proceedings of the 2nd International Conference on AI Planning Systems (ICAPS).
1994, pp. 249–254.
121
BIBLIOGRAPHY
[94]
Oren Etzioni et al. “Open information extraction from the web”. In: Commun. ACM 51.12 (2008), pp. 68–74. issn: 0001-0782.
[95]
Juan Fdez-Olivares et al. “Bringing users and planning technology together: Experiences in SIADEX”. In: Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling (ICAPS 2006). 2006.
[96]
George Ferguson and J. Allen. “Mixed-Initiative Dialogue Systems for
Collaborative Problem Solving”. In: Proceedings of the AAAI Fall Symposium
on Mixed-Initiative Problem Solving Assistants (FS-05-07). Arlington, Virginia,
2005, pp. 57–62.
[97]
George Ferguson, James Allen, and Brad Miller. “TRAINS-95: Towards a
Mixed-Initiative Planning Assistant”. In: Proceedings of the Third Conference
on Artificial Intelligence Planning Systems (AIPS-96). Edinburgh, Scotland,
1996, pp. 70–77.
[98]
Paul Feyerabend. Against method : outline of an anarchistic theory of knowledge.
Verso, 1993.
[99]
Frederico T. Fonseca and James E. Martin. “Toward an Alternative Notion of Information Systems Ontologies: Information Engineering as a
Hermeneutic Enterprise”. In: Journal of the American Society for Information
Science and Technology 56.1 (2005), pp. 46–57.
[100]
K. Forbus, J. Usher, and V. Chapman. “Sketching for Military Courses
of Action Diagrams”. In: Proceedings of Intelligent User Interfaces Conference.
Miami, Florida, 2003.
[101]
Claes Fornell and Fred L. Bookstein. “Two Structural Equation Models:
LISREL and PLS Applied to Consumer Exit-Voice Theory”. In: Journal
of Marketing Research 19.4 (1982), pp. 440–452.
[102]
Johan Fransson and Pär-Anders Albinsson. “Communication Visualization - An Aid to Military Command and Control Evaluation”. In: Human
Factors and Ergonomics Society Annual Meeting Proceedings. 2001, pp. 590–594.
[103]
Thomas Franz, Steffen Staab, and Richard Arndt. “The X-COSIM integration framework for a seamless semantic desktop”. In: K-CAP ’07: Proceedings of the 4th International Conference on Knowledge Capture. Whistler, BC,
Canada: ACM, 2007, pp. 143–150. isbn: 978-1-59593-643-1.
[104]
Jolene Galegher. “Technology for Intellectual Teamwork: Perspectives
on Research and Design”. In: Intellectual Teamwork: Social and Technological Foundations of Cooperative Work. Ed. by J. Galegher, R.E. Kraut, and C.
Egido. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 1990, pp. 1–
20.
[105]
Liqiang Geng et al. “Discovering Structured Event Logs from Unstructured Audit Trails for Workflow Mining”. In: Foundations of Intelligent Systems. Ed. by Jan Rauch et al. Vol. 5722. Lecture Notes in Computer Science. Springer Berlin / Heidelberg, 2009, pp. 442–452.
122
BIBLIOGRAPHY
[106]
Melinda T. Gervasio, Wayne Iba, and Pat Langley. “Case-Based Seeding
for an interactive Crisis Response Assistant”. In: Case-Based Reasoning Integrations: Papers from the 1998 Worksop. AAAI Technical report WS-98-15. AAAI
Press, 1998, pp. 61–66.
[107]
Gerd Gigerenzer and Daniel G. Goldstein. “Reasoning the Fast and Frugal Way: Models of Bounded Rationality”. In: Psychological Review 103.4
(1996), pp. 650–669.
[108]
Yolanda Gil. “Plan Representation and Reasoning with Description Logics”. Course literature in CS541 - Artificial Intelligence Planning. 2004.
[109]
Yolanda Gil and Jim Blythe. “PLANET: A shareable and reusable ontology for representing plans”. In: AAAI 2000 workshop on representational issues
for real-world planning systems. Austin, Texas, 2000.
[110]
Yolanda Gil et al. “A Knowledge Acquisition Tool for Course of Action
Analysis”. In: Proceedings of the Fifteenth Annual Conference on Innovative Applications of Artificial Intelligence. Acapulco, Mexico, Aug. 2003, pp. 43–50.
[111]
Hassan Gomaa and Douglas B.H. Scott. “Prototyping as a Tool in the
Specification of User Requirements”. In: Proceedings of the 5th International
Conference on Software Engineering (ICSE). 1981.
[112]
Daniel R. Gonzalez. Evolution of CTAPS and the Air Campaign Planning Process. RAND, 1996.
[113]
Dale L. Goodhue, Barbara D. Klein, and Salvatore T. March. “User evaluations of IS as surrogates for objective performance”. In: Information &
Management 38.2 (Dec. 2000), pp. 87–101.
[114]
Dale L. Goodhue and Ronald L. Thompson. “Task-Technology Fit and
Individual Performance”. In: MIS Quarterly 19.2 (June 1995), pp. 213–
236.
[115]
Shirley Gregor. “The Nature of Theory in Information Systems”. In: Management Information Systems Quarterly 30.3 (2006), pp. 611–642.
[116]
Shirley Gregor and David Jones. “The Anatomy of a Design Theory”. In:
Journal of the Association for Information Systems 8.5 (2007), pp. 312–335.
[117]
Dik Gregory. “Delimiting Expert Systems”. In: IEEE Transactions on Systems, Man & Cybernetics 16.6 (Dec. 1986), pp. 834–843.
[118]
Larry Ground, Alexander Kott, and Ray Budd. “A Knowledge-Based Tool
for Planning of Military Operations: the Coalition Perspective”. In: Proceedings of the Second International Conference on Knowledge Systems for Coalition
Operations. 2002.
[119]
Tudor Groza et al. “SALT - Semantically Annotated LaTeX for scientific publications”. In: Proceedings of the 4th European Semantic Web Conference
(ESWC 2007). Innsbruck, Austria, 2007.
[120]
Tudor Groza et al. “The NEPOMUK Project - On the way to the Social
Semantic Desktop”. In: Proceedings of I-SEMANTICS 2007. Graz, Austria,
2007.
123
BIBLIOGRAPHY
[121]
Guidelines for Operational Planning (GOP) – Guideline document for NATO countries. North Atlantic Treaty Organisation. 2000.
[122]
Volker Haarslev, Ralf Möller, and Michael Wessel. “Querying the Semantic Web with Racer + nRQL”. In: Proceedings of the Workshop on Description
Logics 2004. 2004.
[123]
JoAnn T. Hackos and Janice C. Redish. User and Task Analysis for Interface
Design. Wiley, 1998.
[124]
Sture Hägglund. “Introducing expert critiquing systems”. In: The knowledge
engineering review 8.4 (1993), pp. 281–284.
[125]
Lewis Edwin Hahn, ed. The Philosophy of Hans-Georg Gadamer. Open Court
Publishing Company, 1997.
[126]
Stanley M. Halpin. The Human Dimensions of Battle Command: A Behavioral
Perspective on the Art of Battle Command. Research Report 1696. U.S. Army
Research Institute for the Behavioral and Social Sciences, 1996.
[127]
Ulf Hamberg. NBF- Förmågan att se på andra sidan kullen eller ”Kejsarens nya
kläder” (”Network Based Defense” (NBD): The ability to see the other side of the
hill, or ”The Emperor’s New Clothes”). Tech. rep. Försvarshögskolan, Militärvetenskapliga institutionen, Krigsvetenskapliga avdelningen, 2010.
[128]
Joseph Hassell, Boanerges Aleman-Meza, and I. Budak Arpinar. “The
Semantic Web – ISWC 2006”. In: Lecture Notes in Computer Science.
Springer Berlin/Heidelberg, 2006. Chap. Ontology-Driven Automatic Entity Disambiguation in Unstructured Text.
[129]
C. C. Hayes, A. D. Larson, and U. Ravinder. “Weasel: A Mixed-Initiative
System to Assist in Military Planning”. In: Proceedings of the IJCAI-2005
Workshop in Mixed-Initiative Planning and Scheduling. Edinburgh, Scotland,
2005.
[130]
Barbara Hayes-Roth and Frederick Hayes-Roth. “A Cognitive Model of
Planning”. In: Cognitive Science 3 (1979), pp. 275–310.
[131]
Martin Heidegger. The question concerning technology and other essays. Torchbooks, 1969.
[132]
Alan R. Hevner et al. “Design Science in Information Systems Research”.
In: MIS Quarterly 28.1 (2004), pp. 75–105.
[133]
Erik Hollnagel. Barriers and accident prevention. Ashgate, 2004.
[134]
Erik Hollnagel and Andreas Bye. “Principles for modelling function allocation”. In: International Journal of Human-Computer Studies 52 (2000),
pp. 253–265.
[135]
Erik Hollnagel and David A. Woods. Joint cognitive systems : foundations of
cognitive systems engineering. Boca Raton, Florida, USA: CRC Press, 2005.
[136]
Wolfgang Holzinger, Bernhard Krüpl, and Marcus Herzog. “Using ontologies for extracting Product features from Web Pages”. In: Proceedings
of the 5th Internation Semantic Web Conference. 2006.
124
BIBLIOGRAPHY
[137]
Ian Horrocks. “DAML+OIL: a Description Logic for the Semantic Web”.
In: IEEE Data Engineering Bulletin 25.1 (2002), pp. 4–9.
[138]
Ian Horrocks, Peter F. Patel-Schneider, and Frank van Harmelen. “From
SHIQ and RDF to OWL: The Making of a Web Ontology Language”. In:
Web Semantics: Science, Services and Agents on the World Wide Web 1.1 (2003),
pp. 7–26.
[139]
Jih-Jeng Huang, Gwo-Hshiung Tzeng, and Chorng-Shyong Ong. “Multidimensional data in multidimensional scaling using the analytic network
process”. In: Pattern Recognition Letters 26.6 (May 2005), pp. 755–767.
[140]
E Hutchins. Cognition in the Wild. MIT Press, 1995.
[141]
Susan G. Hutchins et al. Enablers of Self-Synchronization for Network-Centric
Operations: Design of a Complex Command and Control Experiment. Tech. rep.
Monterey, CA: Naval Postgraduate School, 2001.
[142]
Northrup Fowler III, Stephen E. Cross, and Chris Owens. “The ARPARome Knowledge-Based Planning and Scheduling Initiative”. In: IEEE
Expert 10.1 (Feb. 1995), pp. 4–9.
[143]
Antoine Henri (Baron) de Jomini. The Art of War. Greenhill, 1996.
[144]
P. M. Jones. “Cooperative Support for Distributed Supervisory Control:
Issues, Requirements, and an Example from Mission Operations”. In: Proceedings of the ACM International Workshop on Intelligent User Interfaces. Orlando, USA, 1993, pp. 239–242.
[145]
Patricia M. Jones and James L. Jacobs. “Cooperative problem solving
in human-machine systems: theory, models, and intelligent associate systems”. In: IEEE Transactions on Systems, Man & Cybernetics: Part C: Applications and Reviews 30.4 (2000), pp. 397–407.
[146]
Josiah R. Collens Jr. and Bob Krause. Theater Battle Management Core System Systems Engineering Case Study. Tech. rep. Lockheed Martin Integrated
Systems and Solutions, 2005.
[147]
Alexander Kalloniatis and Iain Macleod. “Formalization and Agility in
Military Headquarters Planning”. In: The International C2 Journal 4.1
(2010).
[148]
Jihie Kim and Jim Blythe. “Supporting Plan Authoring and Analysis”.
In: Proceedings of the 2003 International Conference on Intelligent User Interfaces.
Miami, FL, USA, 2003.
[149]
Arthur E. Kirkpatrick, Bistra Dilkina, and William S. Havens. “A Framework for Designing and Evaluating Mixed-Initiative Optimization Systems”. In: Proceedings of the IJCAI-2005 Workshop in Mixed-Initiative Planning
and Scheduling. Edinburgh, Scotland, 2005.
[150]
Gary A. Klein et al., eds. Decision Making in Action: Models and Methods. Ablex
Publishing corporation, 1993.
[151]
Richard Klimoski and Susan Mohammed. “Team Mental Model: Construct or Metaphor?” In: Journal of Management 20.2 (1994), pp. 403–437.
125
BIBLIOGRAPHY
[152]
Raymond Kosala and Hendrik Blockeel. “Web mining research: a survey”.
In: SIGKDD Explorations 2.1 (2000), pp. 1–15. issn: 1931-0145.
[153]
Alexander Kott, Robert Rasch, and Kenneth D. Forbus. “AI on the Battlefield: An Experimental Exploration”. In: Proceedings of the 14th Conference
on Innovative Applications of Artificial Intelligence. 2002.
[154]
Alexander Kott, Victor Saks, and Albert Mercer. “A new technique enables
dynamic replanning and rescheduling of aeromedical evacuation”. In: AI
Magazine 20.1 (1999), pp. 43–53.
[155]
Alexander Kott et al. “Toward practical knowledge-based tools for battle
planning and scheduling”. In: Proceedings of the Eighteenth National Conference
on Artificial Intelligence. Edmonton, Alberta, Canada, July 2002, pp. 894–
899.
[156]
N. Kushmerick. “Wrapper Induction for Information Extraction”. PhD
thesis. University of Washington, 1997.
[157]
N. Kushmerick et al. “Activity-centric email: A machine learning approach”. In: Proc. American Nat. Conf. Artificial Intelligence. 2006.
[158]
Nicholas Kushmerick and Tessa Lau. “Automated email activity management: An unsupervised learning approach”. In: Proceedings of the Conference
on Intelligent User Interfaces. 2005.
[159]
Curtis B Langlotz and Edward H. Shortliffe. “Adapting a consultation system to critique user plans”. In: Developments of Expert Systems (1984), pp. 77–
94.
[160]
Patrik Larsson and Arne Jönsson. “Automatic handling of Frequently
Asked Questions using Latent Semantic Analysis”. In: Proceedings of the IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems. 2009.
[161]
Joel S. Lawson Jr. “Command control as a process”. In: IEEE Control Systems Magazine (1981).
[162]
Paul E. Lehner. “On the Role of Artificial Intelligence in Command and
Control”. In: IEEE Transactions on Systems, Man & Cybernetics 16.6 (Dec.
1986), pp. 824–833.
[163]
Ola Leifler. “Combining Technical and Human-Centered Strategies for
Decision Support in Command and Control — The ComPlan Approach”.
In: Proceedings of the 5th International Conference on Information Systems for Crisis Response and Management. Brussels, Belgium, May 2008.
[164]
Ola Leifler and Henrik Eriksson. “A Model for Document Processing in
Semantic Desktop Systems”. In: Proceedings of I-KNOW ’08, The International
Conference on Knowledge Management. Graz, Austria, Sept. 2008.
[165]
Ola Leifler and Henrik Eriksson. “Analysis tools in the study of distributed
decision-making: a meta-study of command and control research”. In: Cognition, Technology & Work (2011).
BIBLIOGRAPHY
[166]
Ola Leifler and Henrik Eriksson. “Domain-specific knowledge management in a Semantic Desktop”. In: Proceedings of I-KNOW ’09, The International Conference on Knowledge Management. Graz, Austria, Sept. 2009.
[167]
Ola Leifler and Henrik Eriksson. “Message Classification as a basis for
studying command and control communications - An evaluation of machine learning approaches”. In: Journal of Intelligent Information Systems
(2011).
[168]
Ola Leifler and Henrik Eriksson. “Text-based Analysis for Command and
Control Researchers — The Workflow Visualizer Approach”. In: Cognition,
Technology & Work (2011).
[169]
Ola Leifler and Johan Jenvald. “Critique and Visualization as decision
support for mass-casualty emergency management”. In: Proceedings of the
Second International ISCRAM Conference. Ed. by Bartel Carle and B. van de
Walle. Brussels, Belgium, Apr. 2005.
[170]
Ola Leifler and Johan Jenvald. “Simulation as a tool for problem detection in rescue operation planning”. In: Proceedings of the Conference on Modeling and Simulation for Public Safety: SimSafe 2005. Ed. by Peter Fritzson. The
Programming Environments Laboratory, Department of Computer and
Information Science, Linköping University. May 2005.
[171]
Ola Leifler et al. “Developing Critiquing Systems for Network Organizations”. In: Proceedings of IFIP 13.5 Working Conference on Human Error, Safety
and Systems Development. Ed. by Philippe Palanque. Toulouse, France, Aug.
2004.
[172]
John Levine, Austin Tate, and Jeff Dalton. “O-P3: Supporting the Planning Process using Open Planning Process Panels”. In: IEEE Intelligent Systems 15.5 (2000), pp. 52–62.
[173]
Shu-hsien Liao. “Case-based decision support system: Architecture for
simulating military command and control”. In: European Journal of Operational Research 123.3 (2000), pp. 558–567.
[174]
William M. Mace. “James J. Gibson’s Strategy for Perceiving: Ask Not
What’s Inside Your Head, but What Your Head’s Inside of”. In: Perceiving,
Acting, and Knowing: Toward an Ecological Psychology. Ed. by Robert Shaw
and John Bransford. 1977.
[175]
Pattie Maes. “Agents that reduce work and information overload”. In:
Communications of the ACM 37.7 (1994), pp. 30–40. issn: 0001-0782.
[176]
John C. Mallery, Roger Hurwitz, and Gavan Duffy. “Hermeneutics”. In:
Encyclopedia of Artificial Intelligence. New York, NY, USA: John Wiley &
Sons, 1987.
[177]
Salvatore T. March and Gerald F. Smith. “Design and natural science research on information technology”. In: Decision Support Systems 15 (1995),
pp. 251–266.
[178]
Christofer J. Matheus, Mieczyslaw M. Kokar, and Kenneth Baclawski. “A
Core Ontology for Situation Awareness”. In: Proceedings of the 6th International Conference on Information Fusion. Cairns, Queensland, Australia, July
2003.
[179]
Luke K. McDowell and Michael Cafarella. “Ontology-driven Information
Extraction with OntoSyphon”. In: Proceedings of the 5th International Semantic
Web Conference. 2006.
[180]
Raul Medina-Mora et al. “The action workflow approach to workflow
management technology”. In: CSCW ’92: Proceedings of the 1992 ACM conference on Computer-supported cooperative work. Toronto, Ontario, Canada: ACM,
1992, pp. 281–288. isbn: 0-89791-542-9.
[181]
David J. Mendonça. “Decision Support for Improvisation in Response
to Extreme Events: Learning from the response to the 2001 World Trade
Center attack”. In: Decision Support Systems 43.3 (2007), pp. 952–967.
[182]
David J. Mendonça and David A. Wallace. “A Cognitive Model of Improvisation in Emergency Management”. In: IEEE Transactions on Systems,
Man & Cybernetics 37.4 (2007), pp. 547–561.
[183]
Perry L. Miller. “Medical Plan-Analysis: The ATTENDING System”. In:
Proceedings of the 1983 International Joint Conference on Artificial Intelligence.
1983.
[184]
Thomas E. Miller, Laura G. Militello, and Jennifer K. Heaton. “Evaluating Air Campaign Plan Quality in Operational Settings”. In: ARPI 1996
Proceedings. 1996.
[185]
Tom Mitchell. Machine Learning. McGraw Hill, 1997.
[186]
Preben Mogensen. “Towards a Provotyping Approach in Systems Development”. In: Scandinavian Journal of Information Systems 4 (1992), pp. 31–
53.
[187]
Susan Mohammed, Richard Klimoski, and Joan R. Rentsch. “Measurement of Team Mental Models: We Have No Shared Schema”. In: Organizational Research Methods 3.2 (2000), pp. 123–165.
[188]
Oskar Morgenstern and John von Neumann. Theory of Games and Economic
Behavior. Princeton University Press, May 1944.
[189]
Magnus Morin. “Multimedia Representations of Distributed Tactical Operations”. PhD thesis. Institute of Technology, Linköpings universitet,
2002.
[190]
Bonnie M. Muir. “Trust between humans and machines, and the design
of decision aids”. In: International Journal of Man-Machine Studies 27 (1987),
pp. 527–539.
[191]
Alice M. Mulvehill and Joseph A. Caroli. “JADE: A Tool for Rapid Crisis
Action Planning”. In: Proceedings of the 1999 Command and Control Research and
Technology Symposium. 1999.
[192]
Héctor Muñoz-Avila et al. “HICAP: An interactive case-based planning
architecture and its application to noncombatant evacuation operations”.
In: Proceedings of the Ninth Conference on Innovative Applications of Artificial Intelligence. Orlando, USA: AAAI Press, 1999, pp. 879–885.
[193]
Karen L. Myers. “Abductive Completion of Plan Sketches”. In: Proceedings
of the Fourteenth National Conference on Artificial Intelligence (AAAI97). AAAI
Press, 1997.
[194]
Karen L. Myers. “Domain metatheories: Enabling User-Centric Planning”. In: Proceedings of the AAAI-2000 Workshop on Representational Issues for
Real-World Planning Systems. 2000.
[195]
Karen L. Myers, Peter A. Jarvis, and Thomas Lee. “CODA: Coordinating
Human Planners”. In: Proceedings of the 6th European Conference on Planning
(ECP-01). Toledo, Spain, 2001.
[196]
Karen L. Myers et al. “A Mixed-initiative Framework for Robust Plan
Sketching”. In: Proceedings of the 13th International Conference on Automated
Planning and Scheduling. Trento, Italy, 2003.
[197]
Karen L. Myers et al. “PASSAT: A User-centric Planning Framework”. In:
Proceedings of the Third International NASA Workshop on Planning and Scheduling
for Space. Houston, Texas, Oct. 2002.
[198]
Danish Nadeem and Leo Sauermann. “From Philosophy and Mental Models to Semantic Desktop research: Theoretical Overview”. In: Proceedings of I-Semantics ’07. Ed. by Tassilo Pellegrini and Sebastian Schaffert.
2007, pp. 211–220.
[199]
D.S. Nau et al. “SHOP2: An HTN Planning System”. In: Journal of Artificial Intelligence Research 20 (2003), pp. 379–404.
[200]
Ian M. Neale. “First generation Expert systems: a review of knowledge
acquisition methodologies”. In: The knowledge engineering review 3.2 (1988),
pp. 105–145.
[201]
Jakob Nielsen. Usability Engineering. San Francisco, CA, USA: Morgan
Kaufmann Publishers Inc., 1993. isbn: 0125184050.
[202]
Joseph D. Novak and Alberto J. Cañas. The Theory Underlying Concept Maps
and How to Construct Them. Tech. rep. Cornell University, 1982.
[203]
J. Orasanu and T. Connolly. “The Reinvention of Decision Making”. In:
Decision Making in Action. Ed. by G. A. Klein et al. Norwood, New Jersey:
Ablex Publishing corporation, 1993.
[204]
Mihir Parikh, Bijan Fazlollahi, and Sameer Verma. “The Effectiveness of
Decisional Guidance: An Empirical Evaluation”. In: Decision Sciences 32.2
(2001), pp. 303–331.
[205]
Mats Persson. “Visualization of Information Spaces for Command and
Control”. In: ROLF 2010 – The Way Ahead and The First Step. Stockholm:
Gotab Erlanders, 2000.
[206]
Ross Pigeau and Carol McCann. “Re-conceptualizing command and control”. In: Canadian Military Journal 3.1 (2002), pp. 53–64.
[207]
Ross Pigeau and Carol McCann. “Taking Command of C2”. In: Towards
a Conceptual Framework for Command and Control. Ed. by Ross Pigeau and
Carol McCann. Defence and Civil Institute of Environmental Medicine,
2000.
[208]
Suzanne D. Pinson, Jorge Anacleto Louçã, and Pavlos Moraitis. “A distributed decision support system for strategic planning”. In: Decision Support Systems 20 (1997), pp. 35–51.
[209]
Mike Pool et al. “Evaluating Expert-Authored Rules for Military Reasoning”. In: Proceedings of K-CAP 03. 2003.
[210]
Stephen Potter, Austin Tate, and Gerhard Wickler. “Using I-X Process
Panels as Intelligent To-Do Lists for Agent Coordination in Emergency
Response”. In: Proceedings of the Third International ISCRAM Conference. Ed.
by Bartel Van de Walle and Murray Turoff. Newark, NJ, USA, 2006.
[211]
Eric Prud’hommeaux and Andy Seaborne, eds. SPARQL Query Language for
RDF. W3C. 2008. url: http://www.w3.org/TR/2008/REC-rdf-sparql-query-20080115/.
[212]
Gregg Rabideau et al. “Iterative Repair Planning for Spacecraft Operations using the ASPEN System”. In: Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation for Space. 1999.
[213]
Dnyanesh Rajpathak and Enrico Motta. “An Ontological Formalization
of the Planning Task”. In: Proceedings of the International Conference on Formal
Ontology in Information Systems. 2004.
[214]
Robert Rasch, Alexander Kott, and Kenneth D. Forbus. “Incorporating
AI into Military Decision Making: An Experiment”. In: IEEE Intelligent
Systems 18.4 (2003), pp. 18–26.
[215]
Jens Rasmussen. “Deciding and Doing: Decision-Making in Natural Contexts”. In: Decision Making in Action: Models and Methods. Ed. by Gary A. Klein
et al. Ablex Publishing corporation, 1993.
[216]
L. Reeve and H. Han. “Survey of semantic annotation platforms”. In: Proceedings of the 2005 ACM symposium on Applied computing (2005), pp. 1634–
1638.
[217]
Horst W. J. Rittel and Melvin M. Webber. “Dilemmas in a General Theory
of Planning”. In: Policy Sciences 4.2 (1973), pp. 155–169.
[218]
Magnus Rosell. “Text Clustering Exploration - Swedish Text Representation and Clustering Results Unraveled”. PhD thesis. KTH School of Computer Science and Communication, 2009.
[219]
Arun Ross and Anil Jain. “Information Fusion in Biometrics”. In: Pattern
Recognition Letters 24.13 (Sept. 2003), pp. 2115–2125.
[220]
Karol G. Ross et al. “The Recognition-Primed Decision Model”. In: Military Review (Aug. 2004), pp. 6–10.
[221]
Robert C. Rubel. “War-Gaming Network-Centric Warfare”. In: Naval War
College Review 54.2 (2001), pp. 61–74.
[222]
Mehran Sahami et al. “A bayesian approach to filtering junk e-mail”. In:
Proceedings of the AAAI-98 Workshop on Learning for Text Categorization. 1998.
[223]
Magnus Sahlgren and Jussi Karlgren. “Terminology mining in social media”. In: Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM’09). Hong Kong, China: ACM, 2009, pp. 405–414. isbn:
978-1-60558-512-3.
[224]
Leo Sauermann et al. “Semantic Desktop 2.0: The Gnowsis Experience”.
In: Proceedings of the International Semantic Web Conference (ISWC 2006). Ed.
by I. Cruz. Vol. 4273. Lecture Notes in Computer Science. Springer Verlag, 2006, pp. 887–900.
[225]
Donald A. Schön. Educating the Reflective Practitioner. Jossey-Bass, 1987.
[226]
Charles R. Schwenk. “Cognitive Simplification Processes in Strategic
Decision-Making”. In: Strategic Management Journal 5.2 (1984), pp. 111–
128.
[227]
Yuval Shahar, Silvia Miksch, and Peter Johnson. “Artificial Intelligence
in Medicine”. In: vol. 1211/1997. Lecture Notes in Computer Science.
Springer Berlin/Heidelberg, 1997. Chap. A task-specific ontology for the
application and critiquing of time-oriented clinical guidelines, pp. 51–61.
[228]
Yuval Shahar, Silvia Miksch, and Peter Johnson. “The Asgaard Project:
a task-specific framework for the application and critiquing of time-oriented clinical guidelines”. In: Artificial Intelligence in Medicine 14.1–2
(1998), pp. 29–51.
[229]
Lawrence G. Shattuck and David D. Woods. “Communication Of Intent
In Military Command and Control Systems”. In: The Human in Command:
Exploring the Modern Military Experience. Ed. by C. McCann and R. Pigeau.
241 Borough High Street, London: Kluwer Academic/Plenum Publishers,
2000, pp. 279–292.
[230]
J.P. Shim et al. “Past, present, and future of decision support technology”.
In: Decision Support Systems 33.2 (2002), pp. 111–126.
[231]
Edward H. Shortliffe et al. “Computer-based consultations in clinical therapeutics: Explanation and rule acquisition capabilities of the MYCIN system”. In: Computers and Biomedical Research 8.4 (1975), pp. 303–320.
[232]
Barry G. Silverman. Critiquing Human Error – A Knowledge-Based Human-Computer Collaboration Approach. London: Academic Press, 1992.
[233]
Barry G. Silverman. “Survey of expert critiquing systems: Practical and
theoretical frontiers”. In: Communications of the ACM 35.4 (1992), pp. 106–
127.
[234]
Barry G. Silverman and Gregory Wenig. “Engineering expert critics
for cooperative systems”. In: The knowledge engineering review 8.4 (1993),
pp. 309–328.
[235]
Herbert A. Simon. “A Behavioral Model of Rational Choice”. In: The Quarterly Journal of Economics 69.1 (1955), pp. 99–118.
[236]
Herbert A. Simon. “Rational Decision-Making in Business Organizations”. In: The American Economic Review 69.4 (1979), pp. 493–513.
[237]
Herbert A. Simon and Allen Newell. Human Problem Solving. Prentice Hall,
1972.
[238]
Siri: Your Virtual Personal Assistant. Siri, Inc. 2010. url: http://siri.com/.
[239]
Evren Sirin et al. “Pellet: A practical OWL-DL reasoner”. In: Journal of
Web Semantics 5.2 (2007).
[240]
Benjamin D. Smith et al. “Automated Planning for the Modified Antarctic Mapping
Mission”. In: IEEE Aerospace Conference. Mar. 2001.
[241]
Stephen F. Smith, David W. Hildum, and David R. Crimm. “Comirem: An
Intelligent Form for Resource Management”. In: IEEE Intelligent Systems
20.2 (2005), pp. 16–24. issn: 1541-1672.
[242]
Stephen F. Smith, Ora Lassila, and Marcel Becker. “Configurable, Mixed-Initiative Systems for Planning and Scheduling”. In: Advanced Planning
Technology (1996).
[243]
Diane H. Sonnenwald and Linda G. Pierce. “Information behavior in dynamic group work contexts: interwoven situational awareness, dense social networks and contested collaboration in command and control”. In:
Information Processing and Management 36 (2000), pp. 461–479.
[244]
Steffen Staab and R. Studer, eds. Handbook on Ontologies. International
Handbooks on Information Systems. Springer Berlin/Heidelberg, 2004.
[245]
Pieter Jan Stappers and John M. Flach. “Visualizing cognitive systems:
Getting past block diagrams”. In: Proceedings of the 2004 IEEE Conference on
Systems, Man & Cybernetics. 2004.
[246]
Paul Stockwell et al. “Use of an automatic content analysis tool: A technique for seeing both local and global scope”. In: International Journal of
Human-Computer Studies (2009).
[247]
Lucy A. Suchman. Human-Machine Reconfigurations: Plans and Situated Actions. 2nd edition. Cambridge University Press, 2007.
[248]
Lucy A. Suchman. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge University Press, 1987.
[249]
Claes Sundin and Henrik Friman, eds. ROLF 2010 – The Way Ahead and The
First Step. Stockholm: Gotab Erlanders, 2000.
[250]
Claes Sundin and Henrik Friman, eds. ROLF 2010 – A Mobile Joint Command
and Control Concept. Gotab Erlanders, 1998.
[251]
William Swartout and Yolanda Gil. “EXPECT: Explicit Representations
for Flexible Acquisition”. In: Proceedings of the Ninth Knowledge Acquisition for
Knowledge-Based Systems Workshop (KAW’95). AAAI. Banff, Canada, Feb.
1995.
[252]
Tomas Sylverberg et al. “Drawing on Paper Maps: Robust On-line Symbol
Recognition of Handwritten NATO Symbols using Digital Pen and Mobile Phone”. In: Proceedings of The Second International Conference on Pervasive
Computing and Applications (ICPCA07). 2007.
[253]
Juha Takkinen. “From Information Management to Task Management in
Electronic Mail”. PhD thesis. Linköping studies in Science and Technology, 2002.
[254]
Austin Tate, Jeff Dalton, and John Levine. “Generation of Multiple Qualitatively Different Plan Options”. In: Artificial Intelligence Planning Systems.
1998, pp. 27–35.
[255]
Austin Tate et al. Co-OPR: Design and Evaluation of Collaborative Sensemaking
and Planning Tools for Personnel Recovery. Tech. rep. Open University Knowledge Media Institute, 2006.
[256]
George Tecuci. “Training and Using Disciple Agents: A Case Study in the
Military Center of Gravity Analysis Domain”. In: AI Magazine 24.4 (2002),
pp. 51–68.
[257]
Peter Thunholm. “Military Decision Making and Planning: Towards a
New Prescriptive Model”. PhD thesis. Stockholms universitet, 2003.
[258]
David Traum et al. “Negotiations over Tasks in Hybrid Human-Agent
Teams for Simulation-Based Training”. In: Proceedings of the Second Joint
Conference on Autonomous Agents and Multiagent Systems. Melbourne, Australia, July 2003, pp. 441–448.
[259]
Jiri Trnka and Johan Jenvald. “Role-Playing Exercise – A Real-Time Approach to Study Collaborative Command and Control”. In: The International Journal of Intelligent Control and Systems 11.4 (2006), pp. 218–228.
[260]
A. Valente, Yolanda Gil, and William Swartout. INSPECT: An Intelligent
System for Air Campaign Plan Evaluation based on EXPECT. Tech. rep. USC –
Information Sciences Institute, 1996.
[261]
Martin L. van Creveld. The Art of War: War and Military Thought. London: Cassell, 2000.
[262]
Mark Weiser. “Hot Topics: Ubiquitous Computing”. In: IEEE Computer
(Oct. 1993).
[263]
Mark Weiser. “The world is not a desktop”. In: Interactions (Jan. 1994),
pp. 7–8.
[264]
Lijie Wen, Jianmin Wang, and Wil M. P. van der Aalst. “A novel approach
for process mining based on event types”. In: Journal of Intelligent Information Systems 32 (2009), pp. 163–190.
[265]
Norbert Wiener. “Cybernetics”. In: Bulletin of the American Academy of Arts
and Sciences 3.7 (1950), pp. 2–4.
[266]
Terry Winograd and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Norwood, New Jersey, USA: Ablex Publishing
corporation, 1986.
[267]
Ian H. Witten and Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques. 2nd edition. Morgan Kaufmann Series in Data Management Systems. Morgan Kaufmann Publishers Inc., 2005.
[268]
Rogier Woltjer. “Functional Modeling of Constraint Management in Aviation Safety and Command and Control”. PhD thesis. Linköping, Sweden:
Linköping studies in Science and Technology, 2009.
[269]
Rogier Woltjer. “On how constraints shape actions”. MA thesis. SE-58183
Linköping, Sweden: Graduate School for Human-Machine Interaction,
Division of Quality and Human-Systems Engineering, Department of Mechanical Engineering, Linköpings universitet, 2005.
[270]
Ilze Zigurs and Bonnie K. Buckland. “A Theory of Task/Technology Fit
and Group Support Systems Effectiveness”. In: MIS Quarterly 22.3 (Sept.
1998), pp. 313–334.