This paper describes a prototype model for exploring counterterrorism issues related to the recruiting effectiveness of organizations such as al Qaeda. The prototype demonstrates how a model can be built using qualitative input variables appropriate to representation of social-science knowledge, and how a multiresolution design can allow a user to think and operate at several levels - such as first conducting low-resolution exploratory analysis and then zooming into several layers of detail. The prototype also motivates and introduces a variety of nonlinear mathematical methods for representing how certain influences combine. This has value for, e.g., representing collapse phenomena underlying some theories of victory, and for explanations of historical results. The methodology is believed to be suitable for more extensive system modeling of terrorism and counterterrorism.
KEYWORDS: Defense and security, Information technology, Medicine, Databases, Weapons, Strategic intelligence, Space operations, Chemical elements, Biological research, Chemical analysis
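The nonlinear combining of influences described in the abstract above can be sketched with a toy combining rule. This is an illustration only, not the paper's actual model: the function names, the "weakest link" rule, and the factor values are hypothetical.

```python
def combine_linear(factors, weights):
    """Linear weighted sum: overall effectiveness degrades gracefully
    as individual factors weaken."""
    return sum(w * f for w, f in zip(weights, factors)) / sum(weights)

def combine_weakest_link(factors):
    """Nonlinear combining: the weakest critical factor dominates, so
    effectiveness can collapse even when other factors remain strong."""
    return min(factors)

# Three notional influences on recruiting effectiveness, scored 0..1,
# with one (say, perceived legitimacy) having collapsed:
factors = [0.9, 0.8, 0.1]
linear = combine_linear(factors, [1, 1, 1])  # 0.6: misleadingly healthy
collapse = combine_weakest_link(factors)     # 0.1: collapse is captured
```

The contrast is the point: a linear combiner smooths over the failing factor, while the nonlinear rule reproduces the kind of collapse phenomenon mentioned in the abstract.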
Looking at the battlespace as a system of systems is a cornerstone of Effects-Based Operations and a key element in the planning of such operations, and in developing the Commander's Predictive Environment. Instead of a physical battleground to be approached with weapons of force, the battlespace is an interrelated super-system of political, military, economic, social, information and infrastructure systems to be approached with diplomatic, informational, military and economic actions. A concept that has proved useful in policy arenas other than defense, such as research and development for information technology, addressing cybercrime, and providing appropriate and cost-effective health care, is foresight. In this paper, we provide an overview of how the foresight approach addresses the inherent uncertainties in planning courses of action, present a set of steps in the conduct of foresight, and then illustrate the application of foresight to a commander's decision problem. We conclude that the foresight approach we describe is consistent with current doctrinal intelligence preparation of the battlespace and operational planning, but represents an advance in that it explicitly addresses the uncertainties in the environment and in planning, in a way that identifies strategies that are robust over different possible ground truths. It should supplement other planning methods.
High-level decision makers face complex strategic issues, and decision support for such individuals needs to be top-down and to use representations natural to their level and particular styles. Decision support should focus on objectives; uncertainties, which are often both large and deep; risks; and how to do well despite the uncertainties and risks. This implies that decision support should help identify flexible, adaptive, and robust strategies (FAR strategies), not strategies tuned to particular assumptions. Decision support should also have built-in zoom capability, since decision makers sometimes need to know the underlying basis for assessments in order to review and alter assumptions, and to communicate a concern about details that encourages careful work. These requirements apply to both strategic planning (e.g., force planning in DoD or the Services) and operations planning (e.g., a commander's war planning). This paper discusses how to meet the requirements and implications for further research and enabling technology.
This paper is the summary of a recent RAND study done at the request of the U.S. Defense Modeling and Simulation Office (DMSO). Commissioned in recognition that the last decade's efforts by DoD to achieve model "composability" have had only limited success (e.g., HLA-mediated exercises), and that fundamental problems remain, the study surveyed the underlying problems that make composability difficult. It then went on to recommend a series of improvement measures for DMSO and other DoD offices to consider. One strong recommendation was that DoD back away from an earlier tendency toward overselling composability, moving instead to a more particularized approach in which composability is sought within domains where it makes most sense substantively. Another recommendation was that DoD needs to recognize the shortcomings of standard software-engineering paradigms when dealing with "models" rather than pure software. Beyond this, the study had concrete recommendations dealing with science and technology, the base of human capital, management, and infrastructure. Many recommendations involved the need to align more closely with cutting-edge technology and emerging standards in the private sector.
Analytical organizations have long recognized the desirability of hierarchical families of models. Although good model hierarchies have existed from time to time, practice has often fallen short of the ideal. Further, given changes in the nature of warfare, as well as the advent of new theories and technologies, it is time to rethink the entire issue of model families, to set fresh ambitions, and to go about constructing good mutually informed families of both models and analytically structured human war games. The paper offers initial suggestions about how to do so. It is intended, however, as a starting point for a fresh look by the community, i.e., as a stimulus to a more extended debate.
As we increase our reliance on mediated communication, it is important to be aware of the media's influence on group processes and outcomes. A review of more than 40 years of research shows that all media (videoconference, audioconference, and computer-mediated communication) change the context of the communication to some extent, reducing the cues used to regulate and understand conversation, indicate participants' power and status, and move the group toward agreement. Text-based computer-mediated communication, the "leanest" medium, reduces status effects, domination, and consensus. This has been shown useful in broadening the range of inputs and ideas. However, it has also been shown to increase polarization, deindividuation, and disinhibition, as well as the time to reach a conclusion. For decision-making tasks, computer-mediated communication can increase choice shift and the likelihood of more risky or extreme decisions. In both videoconference and audioconference, participants cooperate less with linked collaborators, and shift their opinions toward extreme options, compared with face-to-face collaboration. In videoconference and audioconference, local coalitions can form in which participants tend to agree more with those in the same room than with those on the other end of the line. There is also a tendency in audioconference to disagree with those on the other end of the phone. This paper is a summary of a much more extensive forthcoming report; it reviews the research literature and proposes strategies to leverage the benefits of mediated communication while mitigating its adverse effects.
Over the past half-century, the study of human decision making has evolved from dry philosophy into a diverse set of experimentally tested, behavior-centered theories. However, the sheer volume of disciplines and sub-disciplines, and the often-esoteric debates that divide them, threatens to obscure the very real advances that have been made in modeling human decision making. This paper, giving preliminary analysis from a longer study,[1] begins to address the "so-what" factor in decision-making theory, specifically as related to Air Force modeling, simulation, and decision-support needs. While a general consensus is forming on how humans make decisions (descriptive), there are still major conflicts over how humans should make decisions (normative) and, by extension, how human decision making can be improved (prescriptive). The first half of this paper surveys modern decision science, focusing on two of the most influential sub-disciplines: the heuristics-and-biases paradigm (HBP) and the naturalistic paradigm (NP). The second half attempts to sketch a normative/prescriptive synthesis between the two schools and to chart implications for the design of decision support.
KEYWORDS: Decision support systems, Probability theory, Surgery, Detection and tracking algorithms, Weapons, Systems modeling, Cognitive modeling, Defense and security, Data modeling
Human decisionmaking does not typically fit the classical analytic model, and the heuristics employed may yield a variety of biased judgments. These biases are often considered inherently adverse, but may be functional in some cases. Decision support systems can mitigate some biases, but often introduce others. “Debiasing” decision support systems entails designing DSS to address expected biases, and to preclude inducing new ones. High-level C2 decisionmaking processes are poorly understood, but these general principles and lessons learned in other fields are expected to obtain. A notional air campaign illustrates potential biases in a commander’s judgment during planning and execution, and the role of debiasing operational DSS.
KEYWORDS: Defense and security, Control systems, Systems modeling, Data modeling, Computer simulations, Mathematical modeling, Computing systems, Weapons of mass destruction, Logic, Complex systems
The advent of concepts such as effects-based operations and decision dominance has led to renewed interest in the modeling of adversaries. This think-piece discusses some of the issues involved in conceiving and implementing such models. In particular, it addresses what behaviors may be of interest, how models might be used in high-level decision support, alternative conceptual models, and possible simple implementations. It also touches on issues of multiresolution, multiperspective modeling (MRMPM), modularity, and reusability.
Models of adversaries' reasoning can be constructed to inform the development of adaptive strategies, including strategies for effects-based operations. Such models can apply to individual leaders or to groups that one seeks to influence. This paper describes an approach to building such models. The results are top-down, highly structured, driven by theory, designed with multiresolution methods that permit zooming in on issues, and suitable for use in high-level decision meetings. The models have been qualitative and non-automated, but the methodology could usefully be incorporated into a more general computer-supported decision-support environment, where it would supplement other tools for decision support.
A metamodel is a relatively small, simple model that approximates the behavior of a large, complex model. A common and superficially attractive way to develop a metamodel is to generate data from a number of large-model runs and then use off-the-shelf statistical methods without attempting to understand the model's internal workings. This paper describes research illuminating why it is important and fruitful, in some problems, to improve the quality of such metamodels by using various types of phenomenological knowledge. The benefits are sometimes mathematically subtle but strategically important, as when one is dealing with a system that could fail if any of several critical components fail. Naive metamodels may fail to reflect the individual criticality of such components and may therefore be quite misleading if used for policy analysis. Naive metamodeling may also give very misleading results on the relative importance of inputs, thereby skewing resource-allocation decisions. By inserting an appropriate dose of theory, however, such problems can be greatly mitigated. Our work is intended as a contribution to the emerging understanding of multiresolution, multiperspective modeling (MRMPM), as well as to interdisciplinary work combining the virtues of statistical methodology with those of more theory-based work. Although the analysis we present is based on a particular experiment with a particular large and complex model, we believe the insights are more general.
KEYWORDS: Data modeling, Visualization, Systems modeling, Visual process modeling, Visual analytics, Mathematical modeling, Statistical analysis, Process modeling, Excel, Statistical modeling
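The criticality problem described in the abstract above can be made concrete with a toy example (hypothetical functions and values, not the paper's experiment): a weakest-link system compared with the additive surrogate that a naive symmetric regression fit around the nominal operating point would produce.

```python
def true_system(f1, f2, f3):
    # Weakest-link phenomenology: capability is limited by the worst
    # of several critical components.
    return min(f1, f2, f3)

def naive_metamodel(f1, f2, f3, base=0.9):
    # Additive surrogate of the kind a symmetric statistical fit near
    # the nominal point (all components at `base`) would yield:
    # equal weights on every input.
    return base + ((f1 - base) + (f2 - base) + (f3 - base)) / 3

# Near the nominal point the surrogate tracks the system, but when one
# critical component collapses it badly overstates overall capability:
truth = true_system(0.9, 0.9, 0.1)          # 0.1
surrogate = naive_metamodel(0.9, 0.9, 0.1)  # about 0.63
```

A theory-informed metamodel that builds in the min() structure would avoid this error, which is the "appropriate dose of theory" the abstract argues for.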
Multi-resolution/multiple-perspective modeling (MRMPM) is a powerful methodology for developing flexible, adaptive models that can be applied in diverse areas of study. It is useful for addressing the needs of decision makers at different organizational levels, as well as of users who think about phenomenological processes at varied levels of resolution or from different points of view. The desire for MRMPM has many implications for modeling environments. We discuss attractive attributes of modeling environments and how they enable MRMPM.
This paper describes mission-system planning (MSP) and mission-system analysis (MSA). It relates their needs to two frontier subjects: multiresolution, multiperspective modeling (MRMPM) and exploratory analysis. After a brief explanation of mission-system planning, I describe an application: the mission of halting a mechanized invasion force with long-range fires such as fighter and bomber aircraft. The application involves defining the relevant system, decomposing it analytically, and assessing overall system effectiveness over a relevant scenario space. The appropriate decomposition depends on one's point of view and responsibilities, and may have both hierarchical and network aspects. The result is a need for multiple levels of resolution in each of several perspectives. Evaluation of system capabilities then requires models. Strategically useful mission-system evaluation requires low-resolution (highly abstracted) models, but the validity and credibility of those evaluations depend on deeper work and are enhanced by the ability to zoom in on components of the system problem, to explore underlying mechanisms and related capabilities in more detail. Given success in such matters, the remaining challenge is to find reductionist ways to display and explain analysis conclusions and motivate decisions. This also requires abstraction, the soundness of which can be enhanced with appropriate tools for analyzing the results of exploratory work across the scenario space.
We have used detailed, entity-level models to simulate the effects of long-range precision fires employed against an invader. Results show that these fires are much less effective against dispersed formations marching through mixed terrain than against dense formations in open terrain. We expected some loss of effectiveness, but not as much as observed. So we built a low-resolution model (PEM, or PGM Effectiveness Model) and calibrated it to the results of the detailed simulation. PEM explains analytically how various situational and tactical factors, which are usually treated only in complex models, can influence the effectiveness of these fires. The variables we consider are characteristics of the C4ISR system (e.g., time of last update), missile and weapon characteristics (e.g., footprint), maneuver pattern of the advancing column (e.g., vehicle spacing), and aggregate terrain features (e.g., open versus mixed terrain).
KEYWORDS: Calibration, Data modeling, Stochastic processes, Systems modeling, Monte Carlo methods, Statistical analysis, Defense and security, Mathematical modeling, Weapons, Lithium
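A toy version of such a low-resolution model (illustrative functional form and parameter names only, not the actual PEM equations) shows how the abstract's four factor classes can interact multiplicatively:

```python
def expected_kills_per_shot(footprint_m, spacing_m, open_fraction,
                            update_age_s, speed_mps, p_kill):
    # Toy sketch: expected vehicle kills for one area weapon fired at a
    # moving column. All names and forms are assumptions for illustration.
    vehicles_in_footprint = footprint_m / spacing_m
    # Stale C4ISR data: the column drifts away from the aimpoint.
    drift_m = update_age_s * speed_mps
    coverage = max(0.0, 1.0 - drift_m / footprint_m)
    # In mixed terrain, only the fraction of vehicles in the open
    # is targetable.
    return vehicles_in_footprint * coverage * open_fraction * p_kill

# Dense column, open terrain, fresh targeting data:
dense_open = expected_kills_per_shot(300, 30, 1.0, 0, 10, 0.8)
# Dispersed column, mixed terrain, 10-second-old data:
dispersed_mixed = expected_kills_per_shot(300, 100, 0.3, 10, 10, 0.8)
```

Because the factors multiply, modest degradations in spacing, terrain exposure, and data freshness compound into the large effectiveness loss the entity-level simulations exhibited.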
Exploratory analysis examines the consequences of uncertainty, not merely by standard sensitivity methods but more comprehensively. It is particularly useful for gaining a broad understanding of a problem domain before dipping into details. Although exploratory analysis can be accomplished with models of many types, it is facilitated by multiresolution, multiperspective modeling (MRMPM) structures. Moreover, a knowledge of related design principles facilitates the characterization of more conventional models in terms that permit exploratory analysis. This paper describes these connections and notes that, with current and emerging personal-computer tools, MRMPM methods are becoming practical.
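In code terms, exploratory analysis amounts to evaluating an outcome over the full factorial grid of uncertain inputs rather than one-at-a-time excursions from a base case. The outcome function, coefficients, and parameter values below are hypothetical:

```python
from itertools import product

def outcome(force_ratio, warning_days, allied_help):
    # Hypothetical scalar measure of campaign outcome (> 0: defense holds).
    return 0.1 * warning_days + 0.5 * allied_help - 0.3 * force_ratio

# Sweep the full scenario space, not just excursions from one base case:
cases = [(r, w, a, outcome(r, w, a))
         for r, w, a in product([1.0, 1.5, 2.0],  # attacker:defender ratio
                                [2, 5, 10],        # days of warning
                                [0.0, 1.0])]       # allied participation

# Identify the region of scenario space where the strategy succeeds:
robust_cases = [c for c in cases if c[3] > 0]
```

Low-resolution MRMPM models make such sweeps affordable: a fast abstraction can be run over thousands of cases where a detailed simulation could cover only a few.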
We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated Analytica™ environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way toward facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools, especially for use with personal computers. We conclude with some lessons learned from our experience.
In this paper we review motivations for multilevel resolution modeling (MRM) within a single model, an integrated hierarchical family of models, or both. We then present a new depiction of consistency criteria for models at different levels. After describing our hypotheses for studying the process of MRM with examples, we define a simple but policy-relevant problem involving the use of precision fires to halt an invading army. We then illustrate MRM with a sequence of abstractions suggested by formal theory, visual representation, and approximation. We milk the example for insights about why MRM is different and often difficult, and how it might be accomplished more routinely. It should be feasible even in complex systems such as JWARS and JSIMS, but it is by no means easy. Comprehensive MRM designs are unlikely. It is useful to take the view that some MRM is a great deal better than none and that approximate MRM relationships are often quite adequate. Overall, we conclude that high-quality MRM requires new theory, design practices, modeling tools, and software tools, all of which will take some years to develop. Current object-oriented programming practices may actually be a hindrance.
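A minimal illustration of a cross-level consistency criterion (hypothetical functions and data, not drawn from the paper): a low-resolution model reproduces a high-resolution model's output exactly when its aggregate input is derived by the right aggregation relation.

```python
def detailed_kills_per_day(sorties, kills_per_sortie):
    # High-resolution view: each shooter type is tracked separately.
    return sum(s * k for s, k in zip(sorties, kills_per_sortie))

def aggregate_kills_per_day(total_sorties, mean_kills_per_sortie):
    # Low-resolution view: a single aggregate shooter.
    return total_sorties * mean_kills_per_sortie

# Hypothetical data: three shooter types with different effectiveness.
sorties = [60, 30, 10]
kps = [0.5, 2.0, 4.0]

# The consistency relation: the aggregate input must be the
# sortie-weighted mean effectiveness, not a simple average of types.
total = sum(sorties)
weighted_mean = detailed_kills_per_day(sorties, kps) / total

det = detailed_kills_per_day(sorties, kps)
agg = aggregate_kills_per_day(total, weighted_mean)
# det and agg agree; using the unweighted mean of kps (about 2.17)
# in the aggregate model would not.
```

Even this trivial case shows why MRM is hard: the correct aggregation relation depends on the detailed state (the sortie mix), so the levels must be calibrated against each other rather than designed independently.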