
Emergent Tendencies in Multi-Agent-based Simulations using Constraint-based Methods to Effect Practical Proofs over Finite Subsets of Simulation Outcomes

OSWALDO RAMON TERAN VILLEGAS

A thesis submitted in partial fulfilment of the requirements of the Manchester Metropolitan University for the degree of Doctor of Philosophy

Centre for Policy Modelling the Manchester Metropolitan University 2001

Table of Contents

List of Figures, x
List of Tables, xii
Abstract, xiii
Declaration, xiv
The Author, xv
Acknowledgements, xv

1 Chapter 1 - Introduction, 1
1.1 Background, 1
1.2 Aims of the Thesis, 3
1.3 Limitations of the Study, 5
1.4 Outline of the Thesis, 7

2 Chapter 2 - Simulation and Modelling, 9
2.1 Introduction, 9
2.2 Fundamental Notions, 10
2.2.1 Object, 10
2.2.2 System and Process, 10
2.2.3 Subject, 11
2.2.4 Agent, 12
2.2.5 Model, 13
2.2.6 Synthesis, 13
2.3 Zeigler’s Formal Representation of a System, 14
2.3.1 Zeigler’s Levels of Specification of Systems, 16
2.3.2 Example: MAS-based Simulation, 19
2.3.3 Morphism between Systems. Simplifying, 19
2.4 Homomorphism and the Idea of Metarules, 24
2.5 Systems Exhibiting Structural Change, 26
2.6 Approximation: Loosening the Morphism Criterion for Validation, Aggregation, and Alignment of Models, 27
2.6.1 In Traditional Simulation, 27
2.6.2 In Simulation of Structural Change, 32
2.7 Verification and Validation of a Simulation, 33
2.8 Event-Driven Simulation, 33
2.9 Simulation of Systems with Variable Structure, 34
2.10 Towards Alternative Methods for Analysing the Dynamics of Simulations of Complex Systems, 37

3 Chapter 3 - Proving Theorems in Computational Models, 40
3.1 Introduction, 40
3.2 Proving: an Ancient Problem, 42
3.2.1 Basic Concepts and (Logical) Model Search/Construction, 44
3.3 Simulation, Logic-programming, and Theorem-proving, 45
3.4 Seminal Work: Towards a Procedure for Checking Unsatisfiability of a Set of Clauses in First-Order Logic, 48
3.4.1 Interpretations, 49
3.4.2 Herbrand Universe and Herbrand’s Theorem, 49
3.5 Approaches to Theorem-proving: Syntactic (clausal)- and Semantic (interpretation)-based Searches, 50
3.5.1 Syntactic (clausal)-based Inference Procedures, 50
3.5.1.1 Robinson’s Principle for First-Order Logic (Chang and Lee), 51
3.5.1.2 Semantic Resolution, 52
3.5.1.3 Hyperresolution, 53
3.5.1.4 The Set of Support Strategy, 54
3.5.1.5 Linear Resolution, 54
3.5.1.6 Strategies or Heuristics, 55
3.5.1.7 Forward-chaining Search, 58
3.5.2 Semantic (interpretation)-based Theorem-provers, 59
3.5.2.1 Tableaux, 60
3.5.2.2 Prolog-based Logical Models, 62
3.6 Classification of Theorem-provers, 63
3.6.1 Criteria, 63
3.6.2 Bonacina’s Taxonomy of Theorem-prover Strategies, 64
3.6.3 Additional Considerations about Bonacina’s Taxonomy: Classifying Generation of Trajectories in a Simulation, 65
3.7 Constraint-based Search, 66
3.8 Meta-Level Reasoning and Proving Tendencies, 68

4 Chapter 4 - Understanding Phenomena and Simulation Dynamics, 69
4.1 Introduction, 69
4.2 Subject’s Bounded Rationality and the Subject-Object Dichotomy, 70
4.3 Objective Causes of Complexity: Levels of Complex Systems, 73
4.3.1 Level 1: Matter, Inanimate Systems, 76
4.3.2 Level 2: Adaptive Systems: Living Organisms, 76
4.3.3 Level 3: Self-aware Systems or Systems able to Reason, 78
4.3.4 Level 4: The Level of Meta-beings, 80
4.4 Subjective Notion of Complexity, 81
4.5 Emergent Tendencies, 82
4.5.1 Objectivistic Notion of Emergence of Tendencies, 82
4.5.2 Subjectivist Notion of Emergence of Tendencies, 83
4.5.3 Emergence of Tendencies: a Trade-off between Subjective and Objective Factors, 88
4.6 Tendencies in a Simulation, 88

5 Chapter 5 - Mapping the Envelope of Simulation Trajectories via a Constraint-based Exploration, 89
5.1 Introduction, 89
5.2 Factors Driving the Dynamics of a Simulation, 91
5.3 Enveloping Outputs in Simulation Trajectories, 91
5.4 (Logical) Model-Constrained Exploration of Simulation Trajectories, 95
5.5 (Logical) Model Exploration for Proving the Necessity of a Tendency, 96
5.6 Modeller Beliefs and Proving a Tendency, 100
5.6.1 A Review of the Concepts of Explaining, Understanding, and Proving, 100
5.6.2 A Proof of a Tendency in a Simulation Theory and a Subject’s Knowledge, 103
5.6.3 Interaction between a Subject’s Cognitive Model and a Simulation Model, 103
5.7 Sources of Constraints: Bounds of the Searched Space of Trajectories, 105
5.8 Towards an Efficient Constraint-based Search in MAS: Transformation of MAS into (Logical) Model Constraint-based Models, 105
5.9 Implementation of a Method for a Constraint-based Search of Tendencies in MAS, 107
5.9.1 Using SDML and the Declarative Programming Paradigm, 107
5.9.2 SDML’s Inference Mechanism, 108
5.9.3 An Efficient Translation of a MAS-based Model into a Single Database-Rulebase (DB-RB) Pair, 118
5.9.3.1 First Step: Revealing Dependencies, 118
5.9.3.2 A Problem Appears: The Growth of the Space of Searched Data, 121
5.9.3.3 Dealing with this Difficulty: Unwrapping the Rules, 122
5.9.4 Overview of the System, 124
5.9.5 Speeding up of the Simulation, 125
5.10 Morphism and Valid Translation of Simulation Models, 125
5.10.1 Translating a MAS-based Model into a (Logical) Model Constraint-based Model, 125
5.10.2 Translating Models from an Architecture Offering a (Logical) Model-based Exploration into another Architecture Characterised by a Syntactic-based Exploration, 127

6 Chapter 6 - Transforming MAS to Improve Efficiency of Constraint Logic-programming, 129
6.1 Introduction: A Hierarchy of MAS Architectures, 129
6.2 Programming and Experimenting in MAS to Gain Advantage of the High-Level Simulation Paradigm – Higher Architectural Level, 131
6.3 Translation to a Constraint-based Paradigm for Systematic Exploration of Possible Logical Models – Intermediate Architectural Level, 131
6.4 Possible further Translation for Attempting Syntactic Proofs – Lower Architectural Level, 132
6.4.1 Comparing the Architectures, 134
6.5 Modelling Process using Architectural Transformations, 135

7 Chapter 7 - A Case Study: A Simple Trader-Distributor Model, 137
7.1 Introduction, 137
7.2 MAS Model using the Strictly Declarative Simulation Language (SDML), 139
7.2.1 List of the Most Relevant Predicates used in the Model, 140
7.2.2 Outline of the Most Relevant Rules used in the Model, 142
7.3 Exploration of Simulation Trajectories using the MAS-based (Simulation) Model, 147
7.4 Envelopes of a Tendency in a Subspace of the Simulation Trajectories, 148
7.5 A First Attempt to Prove in OTTER, 149
7.6 Facilities for Proving Tendencies in SDML, 152
7.7 A First Attempt to Prove in SDML: Resembling the Experiments in OTTER, 152
7.8 Drawbacks of this Implementation: A More Efficient Implementation is Needed, 153
7.9 An Efficient Implementation in SDML: a (Logical) Model Constraint-based Architecture, 153
7.9.1 Towards an Automatic Translation of a MAS-based Model into a Constraint-based Model, 154
7.10 Comparing the Traditional and the Efficient MAS-based Implementations, 155
7.11 Proving Necessity and Understanding of an Emergent Tendency, 156
7.12 Comparison with other Approaches, 159
7.12.1 Theorem-provers: OTTER, 159
7.12.2 Proving in MAS: DESIRE, 160
7.12.3 Constraint Logic-programming, 160
7.12.4 MetateM, 161

8 Chapter 8 - Some Implications of this Research, 163
8.1 Introduction, 163
8.2 Discussion about the Conditions of how this Thesis works in relation to other Methods, 164
8.2.1 How this Methodology would work in Simulation Platforms Different from SDML, 164
8.2.2 How Realistic is it to Implement an Automatic Platform for Translating and Proving Tendencies in a Simulation Model at Present?, 164
8.2.3 Trade-off between Complexity and Usefulness of the Techniques Proposed in this Thesis, 165
8.2.4 Complexity of a Constraint Exploration of Simulation Trajectories in Some Applications, 166
8.2.4.1 Robot Agents (as different from computational agents), 166
8.2.4.2 ALife and Microsimulation, 166
8.2.4.3 Event-driven Simulation, 167
8.2.5 A More Practical Notion of Emergence: Considering Subjective Aspects in Addition to the Objective ones, 167
8.2.6 Enveloping Tendencies: A New Approach for Characterising Simulation Outputs, 168
8.2.7 Enveloping Outputs in Simulations of Chaotic Systems, 168
8.3 Implications for the Modelling of Complex Systems, 169
8.4 Implications for the Social Simulation Community, 170
8.5 Implications for the MAS Community, 172
8.6 Bringing Ideas from other Areas of Research: The Conception of Emergent Tendencies, 173
8.7 Implications for Policy Analysis, 176
8.8 Proving Tendencies in MAS-based Models and Constraint Logic-programming, 177

9 Conclusion, 178
9.1 Further Work, 180

10 Appendix 1 – Code of the Programmes, 181
10.1 OTTER Code, 181
10.2 SDML Code, after Unwrapping Rules, 192
10.2.1 Module Model, 192
10.2.2 Module Meta, 209
10.2.3 Module Prover, 218

11 Appendix 2 - Dependency Graphs, 246
11.1 For Module Model (after Splitting), 246
11.2 For Module Meta (after Splitting), 249
11.3 For Module Prover (after Splitting), 250
11.4 For the whole Model before Splitting, 258

12 Appendix 3 - Set of Rules before and after Splitting, 261
13 Appendix 4 - Runs/Result Tables, 265
14 Appendix 5 - Estimation of Speeding-up Gained from Unwrapping Rules, 267
15 Appendix 6 - Complexity of the Search, 269
16 Appendix 7 - Mapping the Envelope of Social Simulation Trajectories, 274
17 Appendix 8 - Determining the Envelope of Emergent Agent Behaviour via Architectural Transformation, 290
18 References, 305


List of Figures

Figure 2.1. Basic notions in Zeigler’s formalism: input values x1, x2; system states q1, q2; output values y1, y2; output function λ; and transition function δ, changing initial state q1 into a new state q2, 15
Figure 2.2. Graphic representation of a morphism for example 2.1, 16
Figure 2.3. Graphic representation of a homomorphism, 20
Figure 2.4. Morphism between systems S and S’ under a similar observation frame, 21
Figure 2.5. Homomorphism between systems S and S’ (likewise A, B, A’, and B’; also H is a real matrix), 23
Figure 2.6. Graphical representation of several control levels where structural change takes place, 36
Figure 3.1. Graphic representation of the relation among the concepts: Language, Signature, Theory, Axioms of a theory, and Theorems in a theory, 45
Figure 3.2. A tableau for example 3-6, 61
Figure 4.1. Increasing objectivism in subjects’ beliefs, 72
Figure 4.2. Emergence of tendencies for a subject observing a system, 85
Figure 4.3. A new language L3 is used by the subject and the emergent tendency is no longer considered emergent, 86
Figure 5.1. Theory given by simulation trajectories, 96
Figure 5.2. Representation of a simulation theory in terms of the simulation trajectories, and of these in terms of agents’ choices (for a single parameter-setting and assuming there are two agents), 97
Figure 5.3. A model constraint-based exploration of the dynamics of a simulation, 98
Figure 5.4. Interaction between a subject’s cognitive model and a simulation model, 104
Figure 5.5. Transformation of a MAS into a single database-rulebase pair, 106
Figure 5.6. Rulebase dependencies for the 2nd example, 111
Figure 5.7. Rulebase dependencies for the 3rd example (Case 1), 112
Figure 5.8. Database for the 3rd example (Case 1), 113
Figure 5.9. Rulebase dependencies for the 3rd example (Case 2), 114
Figure 5.10. Rulebase dependencies for the 3rd example (Case 3), 115
Figure 5.11. SDML’s inference mechanism, 117


Figure 5.12. Illustrating how agents and time levels become explicit in the new architecture, 119
Figure 5.13. Showing the origin of rule dependencies for the rule for prices. Dependencies are due only to the iterative character of the rule, 120
Figure 5.14. Revealing dependencies, e.g., agent ?T price-setting at time ?i in accordance with the rule in the right side of Figure 5.12 (?T and ?lastIter create new dependencies), 121
Figure 5.15. The growth of the space of searched data. Notation: price i, j denotes the price of Trader i at iteration j, 122
Figure 5.16. Splitting of the rule for prices, 123
Figure 5.17. Above, ‘unwrapping’ of dependencies is shown. Below, the data-space searched by the rule-setting for a trader is illustrated, 123
Figure 5.18. Overview of the efficient implementation, 124
Figure 6.1. Sequence of modelling architectures, 130
Figure 6.2. Interactive use of the architectures by a modeller, 136
Figure 7.1. An overview of the model, 140
Figure 7.2. The whole procedure of revealing dependencies and unwrapping rules, 154
Figure 7.3. Tendency observed in a trajectory, 158
Figure 8.1. The proposed platform for a hierarchy of simulation architectures that can be used for different purposes, e.g., for generating alternative simplified models, 171
Figure 8.2. Declarative programming and social simulation communities’ tools, and recent moves towards common platforms, 177
Figure 15.1. Boolean circuit for the target problem, 270


List of Tables

Table 4.1. A summary of part of Heylighen’s hierarchy of systems’ complexity, 75
Table 6.1. A Comparison of the Architectures, 134
Table 12.1. Comparing the number of rules in the MAS-based and in the constraint-based architectures, 264
Table 13.1. Runs/Result Tables, 265


Abstract

This thesis suggests a methodology for studying complex systems. The method is intended to be particularly useful for searching for and proving tendencies (whether considered emergent or not) in systems whose dynamics seem to depend strongly on the interaction of the system’s components (such as social systems). Such systems are commonly simulated as Multi-Agent Systems (MAS).

The thesis begins by examining the formal notions of simulation, modelling, and theorem-proving. It then reviews some notions of complexity and proposes a notion of the emergence of tendencies based on the trade-off between subjective and objective factors of complexity. It next investigates the dynamics of a system via a platform consisting of a (logical) model constraint-based exploration of the dynamics of a simulation. This platform is suggested for systematically exploring the subspace of simulation trajectories associated with a range of parameters of the model, a range of choices in the processes (e.g., agents’ choices), and the logic of the simulation program. Following this, we suggest using this architecture together with the higher architectural level given by a MAS and an even lower level, a syntactic constraint-based architecture, as complementary means to investigate aspects of the dynamics of a MAS simulation.

The proposed methods are compared with other approaches for exploring the dynamics of a simulation. In particular, differences in terms of the notions of morphism among models, the generality of the conclusions, and the measures of behaviour that each approach allows are emphasised. In addition, enveloping the simulation outputs is proposed as an alternative to statistical summaries. This seems especially convenient for studying complex systems and for analysing outputs when applying theorem-proving techniques.

This model constraint-based architecture is applied to a MAS-based model exemplifying a typical trader-distributor interaction. A tendency is identified in the MAS-based model and a constrained proof is then performed in the model constraint-based architecture. Afterwards, some implications of this thesis for related areas of science are reviewed. Finally, the appendices include an analysis of the complexity of this model constraint-based exploration of trajectories and two papers particularly relevant to the social simulation and MAS communities.


Declaration

No portion of the work contained in this thesis has been submitted in support of an application for another degree or qualification of this or any other university or other institution of learning.


The Author

The author obtained a first degree in Systems Engineering at the University of the Andes in Venezuela in 1992. He then began lecturing at the Department of Operations Research of the Systems Engineering School and took a Master’s degree in Applied Statistics at the same university. He is a member of the Centre for Simulation and Modelling (CESIMO: Centro de Simulación y Modelado) (http://cesimo.ing.ula.ve/) of that university. He has been involved in the areas of Simulation and Numerical Analysis. His research at CESIMO includes simulation methodologies and the simulation of structural change following Carlos Domingo’s ideas.

Acknowledgements

I would like to thank my director of studies, Bruce Edmonds, for his tolerance, patience, and wise guidance; my supervisor Scott Moss for his assistance and stimulating discussions; and the other members of the CPM team: Steve Wallis, Juliette Rouchier, Richard Taylor, and Olivier Barthelemy, for their support in many ways. I would also like to acknowledge the encouragement and assistance received from CESIMO’s members, especially Giorgio Tonella, Carlos Domingo, and Jacinto Dávila. Similarly, I would like to thank CONICIT (the Venezuelan governmental organisation for promoting science), the University of Los Andes in Venezuela, and the Faculty of Management and Business of the Manchester Metropolitan University for supporting the research reported in this thesis.

For my parents, brothers, sister, nephews, and for Jasmina.


1

Chapter 1 - Introduction

1.1 Background

Computer simulation has been widely used for studying and understanding systems, and the analysis of the dynamics of a simulation is of particular importance. For example, discovering emergent phenomena and demonstrating their existence in social simulations have been recognised as key factors in the understanding of social systems (Nigel et al., 1998; Conte et al., 1997; Edmonds et al., 1998). Simulation tools such as Multi-Agent System (MAS)-based simulation have already proved valuable in the analysis of socio-economic domains where the agents have bounded rationality (Edmonds et al., 1998; Edmonds, 1999b, 1999c). Different simulation architectures, methodologies, and techniques have been used in these studies.

Some simulation communities have been interested in studying ‘complex systems’ and ‘composite complex systems’. A MAS-based simulation consists of generating in a computer the dynamics of a complex system as resulting from the interaction of simpler agents. The idea is to reproduce complex behaviour observed in empirical systems as the product of the interaction of individuals. For example, a social modeller might be interested in generating certain aspects of overall behaviour observed in a society via the interaction of simple models of individuals (e.g., people). The social simulation community is now going beyond bounded rationality to explore how ‘social artefacts’, such as roles, norms, and organisational design, place bounds on social cognition (Carley et al., 1998). This experimentation is potentially useful, for example: for detecting gaps, inconsistencies, or errors in organisational theories (Moss, 1998; Axtell, 1996); for providing insight into organisational behaviour (Carley et al., 1998); for testing, verifying, and improving models of artificial social agents (Moss, 1998; Moss et al., 1998a); for investigating the interdependence between social factors and agents’ cognitive models (Carley et al., 1998); and for aligning models (Mihavics et al., 1996; Axtell, 1996).

Apart from the social simulation community, other research communities for which studying and analysing the dynamics of a simulation are important include ALIFE (Langton, 1989; http://alife.org/) and microsimulation applications in traffic (Nagel et al., 1998 and 2000).

Usually a MAS-based simulation exhibits very complex dynamics – in this fact lies its promise, but it also means that its analysis might be difficult. Behaviour that is difficult for a modeller to capture, but key to understanding empirical complex systems, is reproduced

in the simulation in the hope of obtaining, from the simulation experiments, hints that improve such an understanding. Nevertheless, it is commonly still little understood how such significant dynamics arise from the agents’ interaction. Developing methodologies that help in understanding the dynamics of a MAS-based simulation thus becomes a crucial task.

As will be seen in Chapter 3, there are two means by which a modeller can seek to understand the dynamics of computer simulations: through design, and through observation of the dynamics in a post hoc analysis. In MAS, careful design procedures based on well-understood formal models of agent behaviour can help a modeller to understand the behaviour of individual agents and, in special cases, of larger parts of a MAS. However, understanding the behaviour of interacting groups of autonomous agents by formal design methods has its limitations, and even the most carefully designed MAS can exhibit behaviour unforeseen by its designers, as the simulation dynamics become subtle over the state transitions and the large number of data manipulations. Further factors associated with this difficulty are that the systems simulated in a MAS are of a high level of complexity and that a modeller (in this case, the observer) has bounded rationality (see Chapter 3). An observer’s bounded rationality is associated with limited capabilities for capturing and processing information from the observed system (e.g., the target system or the simulated one). Because of this, the second way in which a simulation can be analysed, namely by inspecting the dynamics in a post hoc analysis, has become the most used approach for studying MAS.

Among the existing methodologies for such post hoc analysis are scenario analysis (Domingo et al., 1996b), where a single MAS trajectory at a time is examined and analysed, and the Monte Carlo approach (Zeigler, 1976), where the MAS is repeatedly run and statistics are collected about general trends over a sample of trajectories. However, neither of these traditional ways of analysing computer dynamics is satisfactory for MAS-based Simulations (MABS). On the one hand, it is usually prohibitive in terms of computational resources to explore all simulation trajectories using single-scenario analysis. Moreover, its use is restricted to certain ‘arbitrarily’ and subjectively chosen trajectories. Scenario analysis can help by exposing possibilities for the dynamics of the simulated system. Its value for training the modeller or the policy analyst about ‘possible future worlds’ has been recognised in simulation applications related to policy analysis (Axtell et al., 1996; Wack, 1985; Domingo, 1996b). Nevertheless,


the fact that arbitrarily unexplored trajectories are left aside makes it difficult to generalise conclusions and to apply the analysis in a wider theory. On the other hand, in Monte Carlo techniques the analysis of the simulation is made using a sample of the trajectories. Consequently, an arbitrary (in this case random) exploration of trajectories is also carried out. The use of statistics supports only probabilistic generalisations, not mathematically (or logically) valid proofs. Hence the usefulness of these techniques in theoretical applications is limited: they give not definitive answers but only probabilistic ones. Consequently, there is a need to search for new computational methodologies and techniques that allow a more rigorous exploration of the dynamics of simulations of complex systems.

1.2 Aims of the Thesis

The main aim of this thesis is to help fill the gap identified in the last section: the need for better computational methods for studying MAS dynamics. It is of particular interest to develop a methodology for proving emergent tendencies in Multi-Agent Systems. The following sub-goals of the thesis can be stated as part of an effort to develop methodologies and techniques that help in the exploration and proof of emergent tendencies in simulations of Multi-Agent Systems.

o To develop a methodological approach for exploring and proving emergent tendencies in computational models

The intention is to show that it is possible to prove theorems about social simulation models. The idea is to prove that emergent tendencies are (or are not) general under certain conditions when varying certain factors of the simulation dynamics. This methodology will allow a modeller to draw conclusions about the behaviour observed in the dynamics of a simulation that are theoretically stronger than those that traditional explorations of simulation trajectories are able to support (e.g., Monte Carlo techniques permit one to infer only probabilistically valid conclusions; for more details about traditional approaches and their drawbacks, see Chapter 2 and especially sections 2.6 and 2.10).

o To develop some techniques for implementing such a methodology

The purpose is to apply the methodology in a particular simulation example and to give an indication of how it could be automatically implemented. Appropriate techniques for implementing the methodology in a particular simulation language would be developed.
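The difference between the probabilistic conclusions of Monte Carlo exploration and the universal conclusions sought here can be made concrete with a deliberately tiny sketch. The toy dynamics, the claimed ‘tendency’, and all names below are invented for illustration; the thesis’s actual method works via a constraint-based search in SDML rather than the brute-force enumeration shown, but the logical point is the same: a universal check over the finite space of trajectories constitutes a proof, while a sample supports only a probabilistic claim.

```python
import itertools
import random

# Toy stand-in for a simulation: at each of STEPS time steps, two agents
# each make a binary choice; a trajectory is the sequence of joint choices.
STEPS = 4
CHOICES = (0, 1)

def final_gap(trajectory):
    """Toy dynamics: accumulated difference between the agents' choices."""
    return abs(sum(a - b for a, b in trajectory))

def tendency(trajectory):
    # Claimed (invented) tendency: the final gap never reaches STEPS.
    return final_gap(trajectory) < STEPS

# Monte Carlo exploration: a random sample of trajectories. Whatever it
# reports supports only a probabilistic generalisation.
rng = random.Random(0)
sample = [tuple((rng.choice(CHOICES), rng.choice(CHOICES))
                for _ in range(STEPS)) for _ in range(50)]
sample_ok = sum(tendency(t) for t in sample)

# Exhaustive exploration: every combination of the agents' choices.
# Over this finite space, the universal check is a proof, and any
# failures are explicit counterexamples.
space = itertools.product(itertools.product(CHOICES, repeat=2), repeat=STEPS)
counterexamples = [t for t in space if not tendency(t)]
print(len(counterexamples))   # prints 2: the all-(0,1) and all-(1,0) runs
```

Note the cost that motivates the constraint-based approach: the trajectory space grows as |choices|^(agents × steps), so a naive enumeration like the one above quickly becomes infeasible, and pruning via constraints is essential.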

The language SDML for writing MAS is chosen for this implementation because of the relevance of MAS-based models for studying complex systems (e.g., social and economic systems), and because the language is suitable for implementing formal proofs (e.g., it is a declarative language with a well-grounded underlying logic). These features give SDML advantages over many other existing simulation languages, where usually only traditional explorations of simulation trajectories are possible (e.g., traditional event-driven simulation languages such as SLAM; see Pritsker, 1995, for more details on this point; the argument is developed further in Chapter 2).

o To investigate the trade-off between complexity and the usefulness of these techniques

Factors and criteria for this trade-off will be identified relative to a conception of complexity. This conception will be strongly linked to a notion of what counts as an emergent tendency in complex systems. In addition, the computational complexity of the task which the approach in this thesis suggests, i.e., proving in a computational model, will be investigated and presented (see Appendix 6 - Complexity of the Search). This investigation allows a discussion of the trade-off between computational complexity and factors related to the usefulness of the technique. The difficulties of implementing the technique and modelling approach (MAS-based and constraint-based simulation and proving) in a particular language (SDML) will also be examined, as will the question of how different such difficulties would be when modelling in other simulation languages. In particular, for analysing simulation outputs, the enveloping of tendencies will be proposed as an alternative to central measures and other statistical summaries. The idea of using an envelope is to present an alternative to statistical summaries and to overcome some of their drawbacks for studying complex systems (for example, that pointed out by Crutchfield, 1992, p. 35; aspects of this trade-off will be discussed in section 8.2).

o To develop a particular case study

An abstract simulation based on a ‘need problem’ will be the focus. The idea is to apply the developed methodology and techniques in a typical model of an empirical system. The intention is to use this typical example to show that the developed methodology and techniques effectively work. Consequently, it must be possible to arrive at stronger conclusions about behaviour in the dynamics of the model than those possible using traditional methods; i.e., conclusions must find stronger support in

the theory of the simulation model than in those allowed by existing techniques (this case is presented in Chapter 7).

o To review the conception of emergent tendencies

Conceptions of emergent phenomena useful for understanding emergent tendencies in computational models will be reviewed. Of special interest will be criteria for testing whether certain tendencies can be considered emergent by an observer (e.g., by a modeller). The intention is to achieve a more ‘practical’ notion of the emergence of tendencies, one more practically useful for modelling complex systems. Existing modelling approaches like those named above, as well as modelling in different areas of research (e.g., in physics), traditionally consider the emergence of tendencies only from an objectivist point of view. They are usually linked to objective notions of the complexity of a system. However, it also seems convenient, especially when modelling certain complex systems such as human systems, to examine the relevance of subjective factors in a tendency being considered emergent by a modeller. These factors seem to stem from a modeller’s bounded rationality. The idea is to consider both the objective and the subjective factors and to reflect on their trade-off (Chapter 4 is specially dedicated to examining these aspects; a further discussion of the usefulness of this concept is presented in section 8.2.5).

1.3 Limitations of the Study

This thesis is focused on (and limited to) the computational modelling of computer systems. The terminology and concepts are intended with respect to this domain. This bias applies throughout the thesis (except in the discussion in sections 4.2-4.3 and 8.6) and should be kept in mind when reading the discussion involving highly polemical terms such as ‘complexity’ and ‘emergence of tendencies’.

This thesis is born out of the following disciplines:

Social simulation. This is one of the areas where a need for better methodologies for analysing and understanding the dynamics of computational programs has been identified (Conte et al., 1997; Edmonds et al., 1999).

Multi-agent systems (MAS), in particular their use for the simulation of complex systems conceived as made up from the interaction of sub-systems with a certain autonomy (Weiss, 1999).


Automatic reasoning and theorem-proving, specifically their application for analysing the dynamics of computational programs. In the application to be considered in this thesis, it will be useful for studying the dynamics of simulation programs; in particular, the interest is in model-based exploration of simulation programs (Wos, 1988). Artificial Intelligence. This discipline of research includes modelling using MAS, Automatic Reasoning, and other sub-disciplines of research to be assessed in this thesis (Stuart, 1995). Computational modelling of complex systems. It includes areas of research relevant to this thesis, such as modelling of social systems and, in general, Multi-Agent-based Simulation (Casti, 1992; Holland, 1998). Methodologies to be developed in this thesis will be useful not only in social systems but also in different sub-areas of computational modelling of complex systems. Systems theory, more specifically notions of systems from an evolutionary perspective, as is conceived by the group working in Principia Cybernetics at the Center Leo Apostol in Belgium (http://pespmc1.vub.ac.be/CLEA/). Heylighen’ s ideas will be accorded special importance (Heylighen, 1992 and 2000c; http://pespmc1.vub.ac.be/HEYL.html). Broadly speaking, the example model will be a representation of a trader-distributor relationship. However, in order to facilitate the development of a technique for exploring and proving in the dynamics of the simulation, a more abstract and limited model will be considered. The case study to be developed will be typical but not necessarily representative of a particular empirical system. This will remove unnecessary timeconsuming activities during the research, for example, data collection and validation of the model. traders and distributors in the model will be simplified to agents whose main activities are price- and sales-setting, and price-imitating, in the first case; and order-setting in the second case. 
It is hoped to apply this technique in future studies using more elaborate examples. It is foreseen that the methodology and techniques developed in this thesis will be applicable to other areas of MAS-based Simulation, especially for studying complex systems. It is hoped they can be implemented automatically in some MAS. For this to be possible, these systems must have certain capabilities for exploring the dynamics of computational programs, such as those found in theorem-proving and constraint logic-programming (Moss et al., 1997; Abdennadher et al., 1995; Abdennadher, 1999; Frühwirth et al., 1992; Rainer et al., 1988; Marriott, 1998).


1.4 Outline of the Thesis

First, in Chapter 2, a review of basic concepts and the formal aspects of simulation and modelling is presented. We begin by considering basic concepts in classical systems theory in order to set a first 'world view' for the thesis. Then Zeigler's foundational theory of simulation and modelling will be examined. Afterwards, other aspects of simulation which Zeigler's theory does not consider, such as structural change and modelling in Multi-Agent Systems, will be discussed. Following this, concepts related to the formal use of simulation results, including validation, verification, and simplification of models, are examined. At the end of the chapter, the main drawbacks of traditional post hoc analysis of simulation outputs for studying complex systems are pointed out, and theorem-proving is suggested as a promising area for alternative approaches.

Next, in Chapter 3, formal aspects of theorem-proving are reviewed, including the most common approaches: the model-based (semantic) approaches and the resolution-based (syntactic) ones. A comparison of these methods is presented. Chapter 3 also includes references to OTTER as an example of a successful theorem-prover. Its most relevant characteristics are described later in the chapter (OTTER is used in the implementation of the case study in Chapter 7).

Chapter 4 offers a review of some conceptions relevant to understanding simulation dynamics. Special attention is given to the concept of emergent tendencies. Some definitions of this concept, using different ideas of complexity and the notion of bounded rationality, are presented. Special emphasis is placed on the trade-off between objective aspects (grounded in an observed system) and subjective factors (grounded in an observer's language).

Chapters 5 and 6 present a methodological approach to the study of complex systems using Multi-Agent Systems.
Chapter 5 suggests a model constraint-based exploration of tendencies in a subspace of simulation trajectories. Compared to existing approaches, this method allows a modeller to apply stronger notions of morphism when comparing models, to reach more general conclusions about the dynamics of a model, and to use measures of behaviour other than statistical summaries. This chapter also discusses how an analysis of the simulation outputs can help a modeller better understand aspects of the simulation and, by extrapolation, of the empirical system. In addition, it proposes enveloping tendencies as an alternative to traditional methods for studying simulation outputs.

Then, Chapter 6 proposes a platform consisting of a hierarchy of architectures for understanding MAS-based models. The platform comprises: at the first level, the MAS-based model; at the intermediate level, the model constraint-based architecture proposed in Chapter 5; and, at the lowest level, a syntactic constraint-based architecture. This platform is intended to assist a modeller in understanding the simulated complex system by showing complementary aspects of the dynamics of the model.

Chapter 7 offers a case study where the methodology suggested in Chapter 5 is applied. The model is a MAS-based model of a typical trader-distributor interrelationship.

Chapter 8 discusses some implications of this research for related areas such as modelling and simulation. It includes a discussion of how the methodology and techniques developed in the thesis may be implemented on other platforms, and of the trade-off between the usefulness of such techniques and the complexity of their implementation.

Conclusions are offered in Chapter 9. Finally, the appendices and a bibliographic review are presented. The appendices include a review of the speed-up achieved in the case study when the model is translated from the MAS to the constraint-based architecture (both in SDML), an examination of the computational complexity of the search proposed in this constraint-based architecture, and two papers relevant to the social simulation and MAS research communities.


Chapter 2 - Simulation and Modelling

2.1 Introduction

The aim of this thesis is to develop tools and a methodology helpful for understanding the simulation of social systems and other complex systems. It is of special interest to apply such a methodology in the simulation of social systems using MAS. Given the difficulty of understanding a MAS-based model from its design, the method commonly used is to inspect the dynamics of the simulation in a post hoc analysis. Understanding a simulation is closely related to the simplification, validation, and alignment of models. These rest on a post hoc analysis of simulation trajectories and on a mathematical notion of homomorphism. Because of the difficulties involved in evaluating the formal notion of homomorphism in simulation, weaker notions are used in the existing techniques. As will be seen, to circumvent these difficulties, the idea of approximation is used instead of that of strict morphism. Among the widely used techniques based on this notion of approximation are scenario analysis and Monte Carlo studies. However, as pointed out in the Introduction to this thesis, both have serious drawbacks. It is our intention in this chapter to review existing formal simulation approaches and the notion of homomorphism used there, in order to make the drawbacks of these techniques more explicit and then illustrate the advantages of the approach to be proposed in this thesis. In particular, we seek to develop a methodology that allows a modeller to prove emergent tendencies in computer simulations of complex systems. This will support the implementation of a stronger notion of morphism than the weak one commonly used in the named existing techniques. As a starting point, in this chapter basic notions of simulation and modelling will be presented, to help, as said, in clarifying, for example, the drawbacks of the existing techniques and the desirable properties of alternative techniques.
Then, in the next chapter, concepts related to the proving of theorems in computer models will be reviewed, in order to establish a background helpful for developing a methodology for proving tendencies in a simulation. In Chapter 4, before a methodology for proving a theorem in a simulation is presented in Chapter 5, notions related to the understanding of emergent tendencies in the dynamics of a simulation will be discussed.

In the first part of this chapter, basic aspects of systems theory will be described as part of the framework ('world view') of this thesis (§ 2.2). This will be used throughout the whole thesis, but it will be of particular value as a basic reference for discussing notions of understanding and emergence in Chapter 4, where this discussion and the development of a 'world view' will be extended. Then, in the second part of the chapter, the fundamental theoretical aspects of simulation and modelling, starting from Zeigler's formalism and progressing to MAS, will be reviewed. In addition, the simplification, integration, and alignment of models will be discussed. These are important considerations when modelling complex systems. Finally, in section 2.10, some drawbacks of traditional simulation for modelling complex systems will be discussed.

2.2 Fundamental Notions

The basic notions for the 'world view' taken in this thesis will be presented as understood in classical systems theory (Heylighen, 1992; Ashby, 1964; Checkland, 1981; Checkland et al., 1999; Churchman, 1968; Domingo et al., 2000; Fuenmayor, 1986).

2.2.1 Object

The basic entity identified in reality is called an object. Its basic characteristics or attributes, e.g., colour and size, are called properties. In this view, objects are one of the basic semantic units. This is one of the more basic notions of object, one widely used in computer science and logic. It is useful for describing more abstract concepts as made up of objects. However, its value is quite limited because processes (e.g., evolution) cannot readily be represented. So, a move towards a higher level of abstraction is necessary in order to identify a basic entity useful for abstracting and modelling processes. Such an entity is the system. More elaborate notions of 'object' will be considered in Chapter 4.

2.2.2 System and Process

In classical systems theory, a system can be thought of as a kind of object with additional properties. Among the special attributes a system presents are:

- A distinction between the whole, its components, and the interrelations among these components. A component can be a sub-system (a system in itself) or an object. For example, consider a model of a trader-distributor-consumer interaction. The model will represent the whole. Among the components are traders, distributors, and consumers; among the interrelations are orders from consumers to distributors and from distributors to traders; sales from traders to distributors and from distributors to consumers; and traders' and distributors' price-settings.

- Overall properties of the system different from the components' properties. Such system properties enjoy some degree of independence from the components' properties, even though they are difficult to explain apart from these. For example, among the overall properties of a model such as that described above are the dynamics of prices. Generally, tendencies in prices appear that are difficult to understand from the individual behaviour of the components (e.g., traders, distributors, and consumers). The term 'properties' of a system will be used instead of 'end' or 'purpose' of a system (as used in some studies) in order to avoid vagueness in the discussion, as it is a less polemical concept.

- A dynamical structure, a function given by some laws of change over time, which specifies a system's behaviour. Dynamics introduces the fundamental notion of process. This is significant as it allows a modeller to describe processes as a system's dynamics. For example, the process of evolution as given in nature (a natural system) has been useful in different areas of research, most obviously in biology, but also in complex systems such as social systems. Moreover, evolutionary theories have been elaborated in ways intended to make them useful for approaching systems in general (Heylighen, 1992). Considering the example of a trader-distributor-consumer interaction, each component has a certain behaviour given by its 'goals' in that environment (in such a sort of 'market'). The components have certain 'rules' for buying, selling, and price-setting. These give the laws of behaviour of each component. The laws of behaviour of each component, plus some additional laws such as 'total distributor sales are equal to total consumer purchases', are useful for generating the behaviour of the whole system. In this example, the additional rules might be in accordance with 'marketing theory' or might simply have the purpose of achieving 'consistency' of the whole system.

Other notions of systems found in the literature will be reviewed below.

2.2.3 Subject

A subject is defined as a system with a vicarious mechanism (Heylighen, 1991a).
A vicarious mechanism selects among alternative actions in order to satisfy a particular goal of the system. This goal might be, e.g., surviving, or perhaps increasing economic benefits. In some subjects, e.g., human beings, this vicarious mechanism rests on genetics and internal models. The genetics provide information useful for surviving, as they are passed from individuals to their descendants. This process is actuated by an internal mechanism, which is outside the rational control of the subject. Some subjects enjoy a second mechanism at a higher level of complexity: internal models of their environment allowing them to make choices. Other subjects, e.g., animals, benefit primarily from the first kind of mechanism. The vicarious mechanism is important as it allows the system to take actions in advance of other actions in the environment, in order to adapt to some degree by itself and decrease the likelihood of risky 'corrective' actions from its surroundings (see section 4.3).

2.2.4 Agent

In this thesis, the term 'agent' is used for a kind of system with certain characteristics. Different notions of agent can be found in the literature. For example, in research on biological and social systems, an agent may be defined as an object with a vicarious mechanism, i.e., a subject; while for people working in physics, an agent may be any entity able to influence others in its surroundings. For instance, when studying the solar system, the earth, as a body attracting and repelling other bodies in the solar system, will be an agent. This notion is shared by some modelling computer programs such as the simulation language SDML (Moss et al., 1998a), where an object might be any entity with a rulebase and a database. An agent in SDML is just an object interacting with other objects, or an object useful for providing the context (environment) where other objects interact. It seems useful to check in a dictionary the most common conceptions of agent (Merriam-Webster; http://www.m-w.com/):

Main Entry: agent
Pronunciation: 'A-j&nt
Function: noun
Etymology: Middle English, from Medieval Latin agent-, agens, from Latin, present participle of agere to drive, lead, act, do; akin to Old Norse aka to travel in a vehicle, Greek agein to drive, lead
Date: 15th century
1 : one that acts or exerts power
2 a : something that produces or is capable of producing an effect : an active or efficient cause  b : a chemically, physically, or biologically active principle
3 : a means or instrument by which a guiding intelligence achieves a result

Here, the notion of agent is linked to the notion of influencer. An influencer might be thought of as an object influencing elements in a broader system, but this object does not necessarily have to be considered a component of the system. While an agent is defined in terms of its actions towards its surroundings, a sub-system is defined as part of the interacting elements defining a system. The conception of agent is more closely linked to the idea of an active entity influencing the environment a subject is studying, but it is not necessarily a key element for helping a subject's understanding. On the other hand, a system is more closely related to the notion of an evolving entity, which is of interest for a subject, as it helps him to model and understand relevant parts of his surroundings.

2.2.5 Model

In general, a model is a representation of an object ('realist position') or of an idea ('idealist position') that a subject has created. In the case of modelling physical systems, a realist position is taken and there will be a relation of correspondence between the original entity in the environment and the model an agent has elaborated. However, when modelling social systems, which are of particular interest in this thesis, such a physical reference is not clear and the modelling process becomes more subtle, as ideas from theoretical and abstract grounds are also appropriate for building the models. Nevertheless, in any case, a model is considered helpful (in practical terms) only if it is simpler, or at least easier to manipulate, than the corresponding entity it is modelling. So, a model might be an object, for example a picture, or something as elaborate as a system. People working in social simulation are interested in computational models of the internal representations or models a subject has of its environment in order to reason, make decisions, act, and adapt. Here, models are basically a representation of the subject's perceiving phenomena, taking decisions, and then acting. Moreover, a model might be a representation of a good action. A subject with an internal model might have several models of certain aspects of its environment, and will take actions in accordance with a chosen model. It is of interest in this thesis also to consider a subject's internal model, which may be conceived as a population of sub-models of its surroundings evolving along with perceived phenomena (experience) and reasoning.

2.2.6 Synthesis

So far, a world view where a subject is a particular kind of object has been developed. It is an object that perceives and reasons about its environment.
It does this to achieve its goals. It is also a system where 'evolutionary processes' are going on. These processes may involve a population of developing models of its environment. A subject's internal model is useful for abstracting and modelling processes going on in its environment. On the other hand, the environment is also considered as a particular kind of object: an evolving object, e.g., an object with behaviour where not only quantitative but also qualitative changes take place. Finally, an agent is conceived of as a system with special properties. An agent can be seen either as a subject or as a sub-system forming part of the environment. This is in line with MAS-based Simulation, where the model of the whole system emerges from the interaction of agents. Each of these agents is a component of the model (system), but each agent also has an internal model of its environment and, consequently, can be conceived of as a subject.

2.3 Zeigler's Formal Representation of a System

In this section Zeigler's formalism will be recapitulated, though the examples and related discussion are not his. The idea is to provide in this introductory chapter a formal description of a system as seen in simulation. Zeigler's seems to be one of the first and more successful descriptions of a system in simulation. A description of both a system's structure and a system's dynamics will be given. Originally, this basic formalism was intended to describe simulation models whose structure is fixed, which are common in simulations of systems in industry, e.g., queue systems. However, its ideas can be extended easily to MAS. In fact, each agent in a MAS can be described as a system in Zeigler's formalism, and the entire MAS as a composite of a hierarchy of basic and composite agents. The main parts of his formalism are represented in Figure 2.1. There an input value x, an output value y, an internal state q at two time steps, and the transition from time1 to time2 are identified. The output is a function defined by an observer, e.g., something that the observer is interested in. The output function's domain is the set of internal states of the system. It is supposed there is a function λ generating the output y from the state of the system q. In addition, there is change in the system over time (e.g., there is a process): the system's state and the input change over time. Time is introduced as the independent variable. More formally, Zeigler's (1976) notation for a system S is:

S = <T, X, Ω, Q, Y, δ, λ>

where:
T: time base (T = Reals or T = Integers).
X: input value set.
Ω: input segment set, a subset of (X, T), Ω = {ω | ω: <0, τ> → X, τ ∈ T}.
Q: state set.
δ: state transition function, δ: Q × Ω → Q.
λ: output function, λ: Q → Y.
Y: output value set (there should exist a set of output segments {ρ | ρ: <0, τ> → Y, τ ∈ T}).
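For a discrete time base, Zeigler's tuple can be rendered concretely as a pair of functions δ and λ driven over an input segment. The following sketch is purely illustrative (the names and the accumulator example are not from the thesis or from SDML):

```python
# A minimal, illustrative rendering of Zeigler's system tuple
# S = <T, X, Omega, Q, Y, delta, lambda> for an integer time base.
# All names here are assumptions for illustration only.

def run_system(delta, lam, q0, input_segment):
    """Apply the transition function delta over an input segment
    (a sequence of input values indexed by integer time) and
    collect the output lam(q) at each step."""
    q = q0
    outputs = []
    for x in input_segment:
        outputs.append(lam(q))   # output observed at time t
        q = delta(q, x)          # state transition to time t+1
    outputs.append(lam(q))       # output for the final state
    return q, outputs

# Example: a one-dimensional accumulator system.
delta = lambda q, x: q + x       # state transition
lam = lambda q: 2 * q            # observer-defined output function
final_q, ys = run_system(delta, lam, 0, [1, 2, 3])
# final_q == 6, ys == [0, 2, 6, 12]
```

Note that the output function λ is chosen by the observer, independently of the transition function, which is the role the experimental frame will later formalise.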


Figure 2.1. Basic notions in Zeigler's formalism: input values x1, x2; system states q1, q2; output values y1, y2; output function λ; and transition function δ, changing initial state q1 into a new state q2.

Example 2.1. Let us consider a linear system:

q: a real (state) vector of size n,
x: a real (input) vector of size l,
y: a real (output) vector of size k.

The set of instances of the vectors x and y gives a sequence of values over time. The state at time t2 is given by the application of δ at time t1 (t2 > t1):

δ(q^t, ω^t) = q^(t+1) = A × q^t + B × x^t
λ(q^t) = C × q^t = y^t

where t represents t1, t+1 represents t2, and A, B, C are real matrices of sizes n×n, n×l, and k×n respectively. More explicitly (see also Figure 2.2):

q_1^(t+1) = a_11 q_1^t + ... + a_1n q_n^t + b_11 x_1^t + ... + b_1l x_l^t
...
q_n^(t+1) = a_n1 q_1^t + ... + a_nn q_n^t + b_n1 x_1^t + ... + b_nl x_l^t

y_1^t = c_11 q_1^t + ... + c_1n q_n^t
...
y_k^t = c_k1 q_1^t + ... + c_kn q_n^t
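The linear transition and output functions of Example 2.1 can be evaluated directly. The following sketch uses only plain Python lists; the particular matrices are invented for illustration:

```python
# Illustrative evaluation of the linear system of Example 2.1:
# q(t+1) = A*q(t) + B*x(t),  y(t) = C*q(t).
# The particular matrices A, B, C are invented for illustration.

def mat_vec(M, v):
    """Multiply matrix M (a list of rows) by column vector v."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def vec_add(u, v):
    return [u_i + v_i for u_i, v_i in zip(u, v)]

A = [[1, 0], [0, 2]]   # n x n state matrix (n = 2)
B = [[1], [1]]         # n x l input matrix (l = 1)
C = [[1, 1]]           # k x n output matrix (k = 1)

def delta(q, x):
    return vec_add(mat_vec(A, q), mat_vec(B, x))

def lam(q):
    return mat_vec(C, q)

q = [1, 1]
y = lam(q)             # y = [2]
q = delta(q, [3])      # q = [4, 5]
```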

Where: n = number of states q_i; l = number of inputs x_j; k = number of observation variables or outputs y_p.

The set <T, X, Ω, Q, Y, δ, λ> is called the system structure. The subset <X, Ω, Q, Y> gives the static structure, and the rest of the specification, δ and λ, the dynamic structure. The dynamic structure gives the laws of change of states. Notice that there are no laws of change either for the static or for the dynamic structure; that is, this formalism does not consider systems with variable structure.

Experimental Frame: Zeigler calls the experimental frame 'the limited set of circumstances under which the real system is to be observed or experimented with' (Zeigler, 1976, p. 30). He associates an experimental frame with a subset of the input-output behaviour. More precisely, he formalises an experimental frame E as the subset <Ω, Y, λ, V>_E, where the first three components are as given above and V is a subset of Y determining the range of validity of the experimental frame. This subset of Y (i.e., V) contains the control variables of the experiment. The remaining variables in Y are those of interest in the experimentation (Zeigler, 1976, p. 298). Constraints are placed in accordance with the modeller's particular interest. In the example, a frame would be given by <Ω, y, λ, y>, where Y = y and λ are given as in the example, Ω is (y, T), where T is defined as the positive integers (e.g., 1, 2, ...), and V = y, i.e., there are no constraints.

Figure 2.2. Graphic representation of a morphism for example 2.1 (system S, with state q and input segment ω, mapped to a new state between times t1 and t2 over the time base).

2.3.1 Zeigler's Levels of Specification of Systems

The idea is to have a hierarchy of system descriptions at increasing levels of elaboration, in the sense that the higher the level, the more detail there is in the specification of the system (e.g., more aspects of its structure will be known). This is valuable for comparing model descriptions and for knowing the degree of specification a model has.


At the first level (level 0), a collection of inputs and outputs is known over time, but there is no idea about the relationship between them. At the second level, an input-output relation is known: different outputs have been associated with the same input and, vice versa, different inputs have been related to the same output. A third level of specification is useful for associating an input with an output; that is, there is a function relating an input to an output in a unique manner. This level is the most commonly used in science, e.g., in mathematics and physics. Notice that the notion of process is still poor - the weak notion of process is in the output, which can be specified as a function over time. At this level simulation is not an important tool, as the output can be generated only as far as the input and the input-output function are known. This level is not rich enough to help in those cases where the behaviour of the system is difficult to model via a function, but depends subtly on internal variables (e.g., the state) of the system (e.g., in a social system, or in a queue system). It is in these cases that simulation becomes helpful. At the fourth level (level three in the specification), it is supposed that a function for changing the internal state of the system is known. The specification of the change over time of a system's state clearly brings in the notion of process. Other aspects of a system, such as the output, can be defined in terms of the system's internal state. At the highest level, e.g., at level four, a system is defined as a composite of interacting systems specified at level three. This level is useful to describe, for example, an agent in a MAS composed of more elemental sub-agents. In fact, it can be used to describe a hierarchy of agents, e.g., a MAS. More formally, from Zeigler (1976):

Level 0: Observation Frame, S = <T, X, Y>. The sets of inputs and outputs are distinguished, but it is not known how they interrelate.
Level 1: Input/Output (I/O) Relation Observation (IORO), S = <T, X, Ω, Y, R>, where R ⊆ Ω × (Y, T) and (ω, ρ) ∈ R implies dom(ω) = dom(ρ). The relation between the input and output sets is distinguished, but it is not possible to differentiate among the different outputs associated with one input, and vice versa.

Level 2: I/O Function Observation (IOFO), S = <T, X, Ω, Y, F>, where each f ∈ F is a function, f ⊆ Ω × (Y, T), and if f = (ω, ρ) then dom(ω) = dom(ρ). At this level, there is a function between the input and output sets which permits one to differentiate between the different outputs associated with an input. It is granted by knowledge of the initial state. Up to this level, a move from a lower level to a higher one allows more predictability about the output given the input, but still there is no knowledge of the system's states.


Level 3: I/O System Specification, S = <T, X, Ω, Q, Y, δ, λ>. At this stage, the state set and the transition and output functions are known, but still there is no distinction of the system's components.

Level 4: Coupling of Systems. Here a system is specified as a coupling of several sub-systems or components. Four elements are given: the names of the sub-systems, the specification of each component at level 3, the influencers of each one (other components), and the interface function specifying the input of each component as a function of the outputs of its influencers and the input to the system.

In Zeigler's formalism, it is supposed that the transition function is invariant over time (e.g., a system's dynamic structure is invariant); that is, whenever it is applied over the same initial state and input segment, the same output segment will be generated. Also, notice that the specification at level zero only gives a pair of sets of inputs and outputs plus the idea that they are changing over time. There is no specification of structure. This seems to be one of the simplest specifications of a system that can be found in the literature. Heylighen also formulates specifications of very simple (plain) systems, for example, systems for which it is not necessary to specify their components or even their states (Heylighen, 1995). He brings in the example of an electron. An electron is considered a system even though no structure has been identified and only the notion of a certain behaviour gives it identity. However, 'It has states given by positions, energy or momenta ... and their evolution can be expressed by the Schrödinger equation' (Heylighen, 1995, http://pespmc1.vub.ac.be/papers/MST-ConVar.pdf, p. 4). In fact, this model will be at least at level three, as the transition function is known.
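The level-4 idea of coupling can be sketched as the composition of two level-3 components, with an interface function routing one component's input from its influencer's output. The components and their wiring below are invented purely for illustration:

```python
# Illustrative level-4 coupling of two level-3 components.
# Component a receives the external input; the interface function
# feeds component b's input from component a's output.
# All names and the wiring are invented for illustration.

def step_coupled(qa, qb, external_x):
    # component a: output is its state; it accumulates the external input
    ya = qa
    qa_next = qa + external_x
    # interface: b's input is a's output; b accumulates it
    yb = qb
    qb_next = qb + ya
    return qa_next, qb_next, (ya, yb)

qa, qb = 1, 0
for x in [1, 1]:
    qa, qb, outputs = step_coupled(qa, qb, x)
# After two steps: qa == 3, qb == 3
```

The coupled pair is itself a level-3 system whose state is the tuple (qa, qb), which is why the hierarchy can be iterated to describe a MAS of agents containing sub-agents.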
Heylighen considers that a system is an entity undergoing internal change while keeping stability, in the sense that it shows differentiation with respect to the environment as an individual unit. He expands this notion, summarising a system as a 'constrained variety or constraint on variety'. This idea can be used for describing systems of order zero: 'The total set of connections (Cartesian product of the set of possible inputs and set of possible outputs) might be interpreted as a maximal possible variety, the subset defining the system as a limited variety, to which actually occurring input-output transitions are constrained' (idem, p. 5).

When modelling in traditional economics, the economic system is usually represented at level 4 but the economic agents (e.g., organisations or people) are simplified to level 2. In social simulation this system is modelled at a higher specification level, as the relevant aspects of an economic subject with bounded rationality, namely its internal (cognitive) model, are modelled at level 4. One interesting question is whether the simplification made in traditional economics is well justified in the dynamic economic environment of today.

2.3.2 Example: MAS-based Simulation

MAS have been successfully applied to the modelling of complex systems described in terms of components. A MAS consists basically of a hierarchy of agents. Each agent can be described as a system according to Zeigler's formalism, as was briefly shown in the example given in section 2.6.1. The lowest level of agents in a MAS, that of agents without sub-agents, will be described at level 3 in Zeigler's formalism (see paragraph 2.3.1). The other, higher levels (the containers) can be described at level 4. Agents change as their rulebases change over time. A relevant example is the evolution of their cognitive models, given via a set of rules. The change of rules over time allows more creative and subtle processes than those happening when only the elimination and introduction of agents are implemented. In declarative programming, laws of behaviour are given as a set of rules, which are 'atomic' components that can be modified, eliminated, created, and replaced. This is appropriate for evolving the dynamic structure of agents.

2.3.3 Morphism between Systems. Simplifying

Now a criterion for comparing models will be reviewed. This comparison is based on the concept of the experimental frame: 'equivalence' among models is always relative to an experimental frame. This conception of equivalence is based on the mathematical concept of morphism. A weaker notion of model equivalence will be presented in section 2.6.1. This will be useful for the simplification, validation, and alignment of models. Evaluating a morphism consists in checking whether the same input-output set a modeller is interested in (e.g., the experimental frame) is observed in two models.
It might also consist in checking whether a subset of the output a model offers under certain conditions (this gives an experimental frame) coincides with the corresponding outputs another model generates under the same conditions; in such a case it is said that there is a morphism between these two models. As a system's output depends on the system's state, 'equivalence' between the compared systems' states also has to be checked. In fact, a system S will be called 'equivalent' to a second system S′ (that is, a morphism exists from the second to the first system) if the state and output of system S can be accessed via a mathematical transformation (the morphism function) from the state and output, respectively, of system S′ at any time and under certain conditions of experimentation (e.g., under an experimental frame). The notions discussed in this section can be used to classify and assess homomorphisms appropriately when using the methodology to be developed in Chapter 5 for comparing models. Notions of homomorphism associated with this methodology will be discussed in section 5.10.

Consider now the notion of morphism as given by Zeigler. There is a morphism between S = <T, X, Ω, Q, Y, δ, λ> and S′ = <T′, X′, Ω′, Q′, Y′, δ′, λ′> if there exist functions g, h, k such that (Zeigler, 1976):

1. g : Ω′ → Ω
2. h : Q̄ → Q′, onto, where Q̄ ⊆ Q
3. k : Y → Y′, onto

and, for all q ∈ Q̄ and all ω′ ∈ Ω′:

4. h(δ(q, g(ω′))) = δ′(h(q), ω′)   (transition function preservation)
5. k(λ(q)) = λ′(h(q))   (output function preservation)
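For small finite systems, conditions 4 and 5 can be checked by brute force over the whole state set. The following sketch illustrates this; the two toy systems and the maps g, h, k are invented for the illustration and are not part of the thesis model:

```python
# Sketch: checking Zeigler's homomorphism conditions (4) and (5) by brute
# force over small *finite* state and input sets. The toy systems below are
# hypothetical; real systems S, S' would supply their own delta, lam, g, h, k.

# Base system S: states 0..3, input segments {'a', 'b'}
Q = [0, 1, 2, 3]
OMEGA = ['a', 'b']

def delta(q, w):          # transition function of S
    return (q + (1 if w == 'a' else 2)) % 4

def lam(q):               # output function of S ('lambda' is reserved in Python)
    return q % 2

# Lumped system S': states 0..1, same input segments
OMEGA2 = ['a', 'b']

def delta2(q2, w):        # transition function of S'
    return (q2 + (1 if w == 'a' else 0)) % 2

def lam2(q2):             # output function of S'
    return q2

g = lambda w2: w2         # g: OMEGA' -> OMEGA (identity here)
h = lambda q: q % 2       # h: Q -> Q' (onto: lumps states by parity)
k = lambda y: y           # k: Y -> Y' (identity here)

def is_homomorphism():
    for q in Q:
        for w2 in OMEGA2:
            # condition 4: h(delta(q, g(w'))) = delta'(h(q), w')
            if h(delta(q, g(w2))) != delta2(h(q), w2):
                return False
        # condition 5: k(lambda(q)) = lambda'(h(q))
        if k(lam(q)) != lam2(h(q)):
            return False
    return True

print(is_homomorphism())  # -> True
```

This exhaustive check is only feasible for finite (and small) Q and Ω; for the infinite state spaces typical of simulation models, weaker criteria are needed, as discussed later in this chapter.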

Each evaluation of δ takes the state q and the input at time t and generates the new state q for the next time step t+1. The output function λ takes the state at time t and generates the output, also at time t. This is represented in Figure 2.3.

[Figure 2.3 shows S mapped onto S′: g carries ω′ to ω, h carries q to h(q), and k carries y to y′, so that q′ = h(δ(q, g(ω′))) = δ′(h(q), ω′) and y′ = k(λ(q)) = λ′(h(q)).]

Figure 2.3. Graphic representation of a homomorphism


If S and S’have the same observation frame, that is: T = T’ , X = X’ , Y= Y’ , ? = ?’ , and Q is equal to Q, then g and k are the identity functions. The graph in Figure 2.2 becomes as shown in Figure 2.4, and equations 4 and 5 turn out to be:

4a. h(δ(q, ω)) = δ′(h(q), ω)
5a. λ(q) = λ′(h(q))

S’ is an image morphism of S, and the function h is called a homomorphism. If in addition h is one to one, it is said that there is an isomorphism between S and S’(or that S and S’are isomorphic). δ

S:

(q ,

w)

q

q λ y’ =λ (q) =λ ’ (h(q))

h

x =g(x) δ ’

h

h λ ’ λ ’

S’ : (h(q) = q’, w’)

q’= = h (δ (q ,w)) =δ ’( h ( q), w)

q’

Figure 2.4. Morphism between systems S and S′ under the same observation frame

Zeigler (1984, p. 251) explains a valid simplification in these terms: 'We say that a lumped model [for us, S′] is a valid simplification of a base model [for us, S] in frame E [for us, an observation function λ] if the two models are equivalent in E [e.g., a morphism from S to S′ exists] and the lumped model is simpler than the base model with respect to some measure of complexity.'

Example 2.3. Continuing with the example of the linear system, we define H as:

H = [ h11 ... h1n ]
    [ ... ... ... ]
    [ hm1 ... hmn ]

q ’is the state of the system S’given as a vector of size m. It is supposed that the number of

21

states in S’is smaller than the number of states in S (that is m < n). So, at any t:

q′ = H × q =
[ h11 ... h1n ]   [ q1 ]
[ ... ... ... ] × [ ...]
[ hm1 ... hmn ]   [ qn ]

Or, more explicitly:

[ q1 ]        [ a11 ... a1n ] [ q1 ]     [ b11 ... b1l ] [ x1 ]
[ ...]      = [ ... ... ... ] [ ...]   + [ ... ... ... ] [ ...]
[ qn ] t+1    [ an1 ... ann ] [ qn ] t   [ bn1 ... bnl ] [ xl ] t

[ y1 ]     [ c11 ... c1n ] [ q1 ]
[ ...]   = [ ... ... ... ] [ ...]
[ yk ] t   [ ck1 ... ckn ] [ qn ] t

And for S’ :

t

δ ' ( q ' t , w ' t ) = q ' t + 1 = A '×q ' t + B '×x ' t

λ' ( q 't ) = C '×q ' t = y 't

The two systems have the same number of outputs and inputs. However, system S′ has a smaller number of states. Now, we can look for conditions that satisfy transition and output function preservation (4a and 5a; see also Figure 2.5). Here it is assumed the time-varying quantities q, ω, and x are evaluated at time t:

4a. h(δ(q, ω)) = δ′(h(q), ω) becomes
    H × (A × q + B × x) = δ′(H × q, ω)
    H × (A × q + B × x) = A′ × H × q + B′ × x
    H × A × q + H × B × x = A′ × H × q + B′ × x

5a. λ(q) = λ′(h(q)) becomes
    C × q = C′ × H × q


[Figure 2.5 shows the homomorphism for the linear systems: in S, the pair (qt, ωt) is taken by A, B to qt+1 and by C to y = C × qt = C′ × H × qt; H maps each state down to S′, where A′, B′ give q′t+1 = H × A × qt + H × B × xt = A′ × H × qt + B′ × xt.]

Figure 2.5. Homomorphism between systems S and S′ (A, B, A′, B′, and H are real matrices)

As is known from the previous equations:

δ(qt, ωt) = qt+1 = A × qt + B × xt
λ(qt) = C × qt = yt

Vectors q, x, and ω are evaluated at time t. For any t and arbitrary vectors x and q, 4a and 5a are true if:

6. H × A = A′ × H
7. H × B = B′
8. C = C′ × H

Example 2.1. A case where n = 3, m = 2, k = 3, and l = 2 is:

A = [ 1 1 1 ]   B = [ 1 0 ]   C = [ 1 0 0 ]
    [ 0 1 0 ]       [ 0 1 ]       [ 0 1 1 ]
    [ 0 1 2 ]       [ 1 1 ]       [ 0 1 1 ]

A′ = [ 1 1 ]   B′ = [ 1 0 ]   C′ = [ 1 0 ]
     [ 0 2 ]        [ 1 2 ]        [ 0 1 ]
                                   [ 0 1 ]

H = [ 1 0 0 ]
    [ 0 1 1 ]
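Conditions 6 to 8 can be verified numerically. The following sketch does so with plain-Python matrix multiplication; the matrix values are as read from Example 2.1 (they should be treated as a reconstruction of the typeset original):

```python
# Numerical check of conditions 6-8 (H.A = A'.H, H.B = B', C = C'.H) for the
# matrices of Example 2.1, reconstructed from the example.

def matmul(P, Q):
    """Multiply two matrices given as lists of rows."""
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A  = [[1, 1, 1], [0, 1, 0], [0, 1, 2]]
B  = [[1, 0],    [0, 1],    [1, 1]]
C  = [[1, 0, 0], [0, 1, 1], [0, 1, 1]]
A2 = [[1, 1], [0, 2]]          # A'
B2 = [[1, 0], [1, 2]]          # B'
C2 = [[1, 0], [0, 1], [0, 1]]  # C'
H  = [[1, 0, 0], [0, 1, 1]]

print(matmul(H, A) == matmul(A2, H))  # condition 6 -> True
print(matmul(H, B) == B2)             # condition 7 -> True
print(C == matmul(C2, H))             # condition 8 -> True
```

The same check applied to Example 2.2 (same A, B, and H, different C and C′) also succeeds.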

Example 2.2. Again a case where n = 3, m = 2, k = 3, and l = 2:


A = [ 1 1 1 ]   B = [ 1 0 ]   C = [ 1 1 1 ]
    [ 0 1 0 ]       [ 0 1 ]       [ 1 0 0 ]
    [ 0 1 2 ]       [ 1 1 ]       [ 0 1 1 ]

A′ = [ 1 1 ]   B′ = [ 1 0 ]   C′ = [ 1 1 ]
     [ 0 2 ]        [ 1 2 ]        [ 1 0 ]
                                   [ 0 1 ]

H = [ 1 0 0 ]
    [ 0 1 1 ]

It is easy to verify conditions 4 and 5. Systems S and S′ have the same number of inputs and outputs. The size of the space of states is different: three for S and two for S′. Notice the exhaustive partition that the function H makes in the space of states of S: the first component of q′ depends on the first component of q, and the second component of q′ depends on the second and third components of q. As the relation among states is not one to one, this is not an isomorphism (see Figure 2.5).

Assume a simplified system for S, S′ = <T′, X′, Ω′, Q′, Y′, δ′, λ′>, is given. S′ would be smaller in the sense that Q′ could be inferred from Q, but the opposite would not necessarily follow. Zeigler identifies at least four ways to simplify a model:

1. Dropping components, descriptive variables, or relations.
2. Replacing one or more deterministically controlled variables by random ones.
3. Coarsening the range set of one or more descriptive variables.
4. Grouping components into blocks and aggregating the descriptive variables into blocks as well.

2.4 Homomorphism and the Idea of Metarules

If S′ is a homomorphic image of S, the behaviour of S′ is a subset of the behaviour of S. In fact, it is possible to generate any state and any output of S′ from S, using the homomorphism function h. Should S′ be a valid simplification in a given experimental frame, either of the models generates the behaviour of interest, but it would be cheaper in computational terms to simulate system S′. Moreover, in this experimental frame, the behaviour of S′ is the behaviour of S constrained. In declarative programming, where functions are given in terms of rules, S′ might be seen as S plus some constraining 'metarules' (as the behaviour of S′ is accessible from S). This notion of metarules is used by


Holland (1998). In the example, these metarules constrain the space of states of S, eliminating a sort of redundancy. In this example a redundancy occurs in keeping both values q2 and q3, because a single value, the total q2 + q3, is sufficient for calculating the required output, as the simplified system S′ shows. The metarule that would have to be added to S in order to get S′ will imply this condition. Note that columns two and three of C are identical. Such a condition would be expressed mathematically as:

(c_{i2} = c_{i3}, ∀i) ⇒ (y_i^t = c_{i1} q_1^t + c_{i2} (q_2^t + q_3^t), ∀i) ⇒ (y_i^t = c_{i1} q_1^t + c_{i2} q_total^t, ∀i)

The resulting output, and the transition from two states differing only in the two components (q2, q3), will be the same (under the experimental frame defined above) if the sum of q2 and q3 gives the same result. In this particular example S′ is simpler than S: the systems are of a similar nature (linear systems), but less memory is necessary to keep information about the states of S′ and fewer calculations are necessary to generate a simulation state transition, as the matrices and vectors involved are smaller than those in S. If S′ and S were of different natures, such a comparison might not be so straightforward. Here the measure of complexity is defined in terms of computational resources, namely the amount of memory necessary to keep a system's state and the number of manipulations required for a simulation step transition.

Given an experimental frame (e.g., the sort of input-output behaviour of interest) and a transition function, the homomorphism might be defined in several ways, corresponding to alternative manners of simplifying. However, given a system and an experimental frame, it might not be easy or even possible to find the ideal simplest system S′ of S under such an experimental frame (assuming it exists). A notion of satisficing seems to be convenient. In addition to the experimental frame and the modeller's goal, available theories, methodologies, and modelling tools (e.g., for simulation) seem to be other factors involved in simplifying.

This formal definition of system homomorphism has been useful in simulation. However, in practice this formal (mathematical) notion of homomorphism generally cannot be applied, as no homomorphism as seen in Zeigler's formalism exists, and alternative weaker notions of homomorphism have to be used for comparing systems. This point will be addressed in more detail in section 2.6, where an illustrative example will be given.
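The redundancy the metarule expresses can be demonstrated numerically: two states of S that agree on q1 and on the sum q2 + q3 yield the same output and project, through H, to the same lumped state and lumped successor. The matrix values below are as reconstructed for Example 2.1:

```python
# Sketch of the metarule's redundancy: states agreeing on q1 and on q2 + q3
# are indistinguishable in the experimental frame. Matrices as reconstructed
# for Example 2.1.

def apply(M, v):
    """Multiply matrix M (list of rows) by column vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 1, 1], [0, 1, 0], [0, 1, 2]]
C = [[1, 0, 0], [0, 1, 1], [0, 1, 1]]
H = [[1, 0, 0], [0, 1, 1]]

qa = [5, 2, 7]   # q2 + q3 = 9
qb = [5, 6, 3]   # same q1, same q2 + q3

print(apply(C, qa) == apply(C, qb))                      # same output -> True
print(apply(H, qa) == apply(H, qb))                      # same lumped state -> True
print(apply(H, apply(A, qa)) == apply(H, apply(A, qb)))  # same lumped successor -> True
```

Since the input term B × x is identical for both states, equality of H × A × q for the two states suffices for the lumped transitions to coincide.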


2.5 Systems Exhibiting Structural Change

Call S(t) and S′(t) the structures of systems S and S′ respectively at time t. Extrapolating our notion of morphism for a system with static structure, S′ is a simplification of S if S′(t) is an image of S(t) under a suitable morphism. However, to require morphism equivalence at every time instant seems too strong, and even unrealistic in practical applications. A weaker notion of 'similar' evolution among structures might be more convenient, where evolution of the structures at different time instants is allowed. To account for this, more relaxed conditions for a valid simplification when the systems are under structural evolution would have to be looked for. The homomorphism function might change over time, e.g., h = h(t).

If the function h(t) is given in terms of rules, its change over time can be implemented by modifying only some rules rather than by redefining the whole function h(t), as would be done in a procedural program. This gives a sort of modularity and flexibility. Consequently, following the structural evolution of two systems and expressing a homomorphism between them seems to be easier in a declarative program than in a procedural one. MAS built in languages like SDML are an example. Structural change, such as elimination and/or introduction of components of a system specified at level 4 (as seen above), or modification of components' interrelations, can be made modularly. In addition, processes, e.g., the dynamic laws of systems, can be modified by changing, aggregating, and/or eliminating rules. Rules can also be shared among agents and processes. This gives a sort of modularity at the level of process implementation, useful for considering changes in the dynamical properties of a system. Rules can be seen as objects that might be created, shared, inherited, aggregated, and destroyed. Rules can also be grouped conveniently and reused in different modelling specifications (e.g., a module hierarchy in SDML).
This sort of structural change, though far beyond what is allowed by the simulation languages known in industry (e.g., SLAM II; Pritsker, 1995), is still very limited compared with structural change in empirical systems. In empirical systems structural change occurs at 'infinite' levels. Moreover, in highly evolved systems (or systems at a high level of complexity; see Chapter 4), this process might be creative (Domingo et al., 1996a).
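The idea of laws of behaviour as modifiable rule sets can be sketched as follows. This is a hypothetical illustration in Python, not SDML; all names and the toy rules are invented:

```python
# Illustration (hypothetical, not SDML): laws of behaviour as a set of rules,
# where structural change means adding, removing, or replacing individual
# rules rather than rewriting a monolithic transition function.

def rule_decay(state):
    """Toy rule: stock decreases by one each iteration while positive."""
    if state['stock'] > 0:
        state['stock'] -= 1

def rule_reorder(state):
    """Toy rule: an empty stock triggers a reorder of 10 units."""
    if state['stock'] == 0:
        state['stock'] += 10

class RuleBasedAgent:
    def __init__(self, rules):
        self.rules = list(rules)      # the rulebase

    def step(self, state):
        for rule in self.rules:       # fire every rule once per iteration
            rule(state)

    # structural change operators: the rulebase itself is modified
    def add_rule(self, rule):
        self.rules.append(rule)

    def remove_rule(self, rule):
        self.rules.remove(rule)

agent = RuleBasedAgent([rule_decay])
s = {'stock': 2}
agent.step(s)                         # stock: 2 -> 1
agent.step(s)                         # stock: 1 -> 0
agent.add_rule(rule_reorder)          # structural change at run time
agent.step(s)                         # decay does nothing at 0; reorder fires
print(s['stock'])                     # -> 10
```

The point of the sketch is that `add_rule` and `remove_rule` change the agent's law of behaviour without touching the other rules, which is the modularity claimed above for declarative implementations.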


2.6 Approximation: Loosening the Morphism Criterion for Validation, Aggregation, and Alignment of Models

Validation is a particular case of weak homomorphism between two systems. A sort of equivalence under a weaker criterion than the mathematical morphism defined above is applied to check agreement between two systems. A weak notion of homomorphism will be described in sub-section 2.6.1. It is supposed there is a bigger system S, e.g., reality, and the model or system to be validated, S′. S′ will be a valid model if it is a homomorphic image of S under the chosen weak homomorphism criterion.

Aggregation of models consists in integrating several models representing complementary aspects of a system, rather than modelling each component separately. Simulating each of these components usually requires a lot of computational resources, and so it is convenient to find a way to simplify them. Again, the idea of a weak homomorphism is applied, first to simplify the components, and then to validate the new, 'bigger' model resulting from the aggregation. Some researchers in social simulation have proposed creating a cascade of integrated models (Moss et al., 1998b). They describe it as a 'bottom-up' modelling procedure, in contrast to the 'top-down' modelling procedure used in classical models of economics and climate change. They also argue that validation might be done against data collected from reality or against 'domain experts'.

Finally, two models are aligned if there is a weak homomorphism among them, the idea being to compare models on different platforms, although one model is not necessarily simpler than the other. This notion is used in Axtell et al. (1996) for comparing two programs built independently for modelling a similar case. The criterion used in that work for checking homomorphism is weaker than the one given by Zeigler and discussed above (this criterion will be discussed in sub-section 2.6.2).
2.6.1 In Traditional Simulation

Several factors are involved in the difficulties associated with proving homomorphism, among them: lack of precision in measuring the phenomena, error propagation in the computer, and factors little understood in the empirical system and commonly modelled as random processes. Usually, it is not possible to remove these restrictions, and in practical applications the criterion of perfect agreement in a relation of homomorphism has to be replaced by one of approximation. In this section the relaxation of the validation procedure for the case of simulations of models with fixed structure will be reviewed (Zeigler, 1976).


The validation criterion becomes one of 'REASONABLE' agreement between the systems' outputs. The notion of reasonable agreement is formalised in a concept of tolerance. Basically, it is defined as a metric (|| . ||) over the difference between the outputs (the behaviour of interest) of the two systems, S and S′. This difference is required to be smaller than a predefined tolerance. One of the most popular metrics is the mean square. To calculate this measure of agreement exactly, it is usually necessary to sum over all outputs of interest. Consequently, this requires a post hoc analysis of all trajectories. However, it is not usually possible to generate all the outputs of a system via examination of the simulation trajectories. Alternative, looser simplification procedures are needed.

Techniques where only a limited number of trajectories are analysed are associated with post hoc inspection of trajectories where weak notions of morphism are applied. Among the existing techniques for doing this looser post hoc analysis are scenario analysis and Monte Carlo techniques. Given certain drawbacks of these existing techniques, additional methods are an open area of research (see section 2.10). It is the main aim of this thesis to propose alternative, yet practical, methods for analysing simulation dynamics, which rest on stronger notions of morphism than the weak ones to be described in this section (see Chapters 5 to 7).

Different weak homomorphism criteria can be defined. Taking a phrase from Zeigler (1976, p. 330) (he uses f for || . ||): 'f picks out those aspects of the output CONSIDERED TO BE IMPORTANT FOR DETERMINING ACCEPTABLE AGREEMENT'. As pointed out before, in Zeigler's simulation theory only quantitative change in the system's outputs is taken into account; this is also true when checking a weak homomorphism.

Example 2.4. A simplification using a weaker homomorphism criterion

Suppose there are two MAS-based models of a trader-distributor interaction.
There will be two principal types of agents: Trader and Distributor. Each model is defined as the interaction of several traders and distributors. A distributor's main task is to place orders with a trader. A trader's main tasks are price- and sales-setting, and price-imitating. The experimental frame will be to observe changes in prices, sales, and orders over time.

Assume now that in a first model the agent trader contains sub-agents warehouses and trucks, where trucks are used for delivering goods to distributors and warehouses are employed to keep control of the input-output of goods. Now suppose that in the second model warehouses and trucks do not exist as agents. Data and rules kept in warehouses in the first model are now kept in the trader's database


and the rulebase respectively (e.g., now the level of goods can be represented as goodLevel(Warehouse, Amount)). Trucks are simply obviated, and delivering is modelled using a delay function. The first model of trader needs to be specified at level four (as it has components), while the second model can be specified at level three (see the next diagram for an overview of the first model of a trader).


First Model of a Trader

Specification: <T, X, Ω, Q, Y, δ, λ>Trader

where: <T: time base, X: input value set, Ω: input segment set, Q: state set, Y: output value set, δ: transition function, λ: output function>

T: simulation time (iterations), given by a subset of the positive integers (e.g., 1, 2, 3, …, total number of iterations).
X: inputs at iteration i are, e.g., orders placed by Distributors at this agent at the last iteration, as well as prices and sales of other Traders at the last iteration.
Ω: (X, T).
Q: among its states we have price, accumulated value of orders Distributors have made at this agent, its purchases, and the state of its sub-agents, e.g., level in warehouses (from now on it is left implicit that this value is given at iteration i).
Y: price, sales.
δ: among them purchases and sales estimates, and price-setting.
λ: according to the given experimental frame, prices and orders and probably some statistics or graphs about their behaviour.

Sub-agent Warehouse

Specification: <T, X, Ω, Q, Y, δ, λ>Warehouse

T: shared with Trader.
X: Trader's purchases and the amount the trucks take for delivering (this is part of the trucks-warehouses interaction).
Ω: (X, T).
Q: level of goods.
Y: it has been assumed no aspect of Warehouse's state is of interest for a modeller, but the level of goods is accessed by the Trader.
δ: the new level of goods has to be calculated as a function of level, purchases, and sales at the last iteration.
λ: null function, as there is no interest in observing the behaviour of the level of goods (Warehouse's state).

Sub-agent Truck

Specification: <T, X, Ω, Q, Y, δ, λ>Truck

T: as given for Trader.
X: data about delivering assigned by Traders, e.g., amount of goods to be carried, destination (buyer), and level of goods at the Warehouse (part of the trucks-warehouses interaction).
Ω: (X, T).
Q: a truck might be delivering, waiting for an order, or out of service. If delivering, the amount of goods carried and the state of the trip are specified.
Y: there is no aspect of interest for the modeller.
δ: as in the other agents, its states depend on previous states. For example, if the truck is in service and a delivery has been assigned by the Trader in the last iteration, then its new state will be delivering.
λ: null function.


Other facts about this first model: the input of a sub-agent (truck or warehouse) depends on the output of some of the other sub-agents (the influencers). In this example, the interaction between a warehouse and the trucks serving it is simple: a truck takes product from a warehouse in accordance with the level of this product, and a warehouse updates its level of product in accordance with the amount taken by the trucks. The states of warehouses and trucks are considered accessible by their container (trader). Now, let us see how a trader is modelled in the simplified trader-distributor model:

Second Model of a Trader

Specification: <T, X, Ω, Q, Y, δ, λ>Trader

T, X, Ω, Y, λ are defined as in the previous model of Trader.
Q: among the variables giving the state of a Trader we have the good's price, the accumulated value of orders placed at it by distributors, the Trader's purchases, and variables representing states of its sub-agents in the first model, e.g., level of goods (this was previously kept in its sub-agents warehouses).
δ: this function not only has to generate the transition for data that was in the Trader's database in the previous model, but also has to update data that previously was updated by its sub-agents; e.g., now delivering is calculated using a delay function. That is, delivering is modelled as if a portion of the total amount to be delivered were sent each day. This might be implemented using a finite-difference equation like those employed for numerically approximating and solving differential equations (e.g., Euler's method). As previously noted, the data structure has to be modified in order to identify explicitly the missing agents (e.g., a field identifying each warehouse has to be introduced in the predicates for the warehouse's level of goods).
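A minimal sketch of delivering as a delay function, in finite differences, might look as follows. The function name and the delivery rate are illustrative assumptions:

```python
# Sketch of delivering modelled as a delay function: each iteration a fixed
# fraction of the outstanding amount is delivered, a first-order lag in
# finite differences (pending_{t+1} = pending_t - rate * pending_t).
# The name deliver_step and rate=0.5 are illustrative assumptions.

def deliver_step(pending, rate=0.5):
    """One iteration: return (amount delivered now, remaining pending)."""
    delivered = rate * pending
    return delivered, pending - delivered

pending = 100.0
for t in range(3):
    sent, pending = deliver_step(pending)
    print(t, sent, pending)
# t=0: sends 50.0 (50.0 pending); t=1: 25.0 (25.0); t=2: 12.5 (12.5)
```

This replaces the explicit Truck sub-agents of the first model with a single equation inside the Trader's transition function, which is precisely what allows the second model to be specified at level three.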

A simplification between these two models cannot be based on a homomorphism as originally defined, but only on a weaker criterion. Proofs of strict homomorphism might be more difficult than the simulation itself and quite unlikely to be achieved successfully. As previously stated, no strict homomorphism exists in practice. Let us assume that Monte Carlo techniques are chosen as the weak method for comparing the two models. Our experimental frame (see above) specifies interest in observing the behaviour of prices, sales, and orders. Consequently, the evaluation criterion has to be based on these outputs. The comparison may be made following these steps:

1. A random set of simulation trajectories for each model is generated. This can be implemented using different random seeds for calculating the random variables of different trajectories. Random variables are used for modelling certain little-known aspects of the simulation model. For example, assume that for price-setting a trader

imitates other traders' prices. Then a random variable can be introduced for modelling a trader's choice of another trader.

2. Data of interest are collected from the generated trajectories. In this case data about sales, orders, and prices are gathered.

3. Then a statistical test (here the weak criterion for comparing the two models) is applied to check the hypothesis that the two samples (e.g., each pair of samples of orders, sales, and prices) come from a similar statistical distribution and so are statistically similar. Among the methods for making this comparison is the non-parametric Rank Sum Test (Hoel, 1966).

Should the test prove successful, it would be said that the models are equivalent in accordance with this weak notion of homomorphism. As the second model is simpler than the first one (notice that it has fewer components and fewer variables), there would also be a valid simplification. Notice the weakness of this procedure: the conclusions valid for the explored subset of trajectories are extrapolated, by using probabilities, to the whole space of trajectories, even though unexpected things can happen in the trajectories left unexplored. As previously stated, it is the purpose of this thesis to propose alternative ways of searching for trajectories, and then, based on this, to define alternative notions of morphism.

2.6.2 In Simulation of Structural Change

In simulations where the structure of the model undergoes qualitative change, the experimental frame is generally defined not only in terms of quantitative outputs but also in terms of the structural change of the system; that is, qualitative changes are an aspect of interest. For example, in a MAS-based simulation where agents are eliminated and introduced, the behaviour of interest might be the number of agents of a certain type appearing at the end of a simulation. Notice that in this case quantitative measures can be defined over the qualitative change.
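The three-step Monte Carlo comparison of sub-section 2.6.1 can be sketched as follows. This is a toy illustration: both 'models' (reduced here to a random price walk), all parameter values, and the bare normal-approximation rank-sum statistic are invented stand-ins (in practice one would use a library routine such as SciPy's `ranksums`):

```python
# Sketch of steps 1-3: generate a random sample of trajectories from two
# stand-in 'models', collect an output of interest (final price), and compare
# the samples with a rank-sum test (normal approximation, no tie handling).
import math
import random

def run_model(seed, drift=0.0):
    """Stand-in for one simulation trajectory: returns the final price."""
    rng = random.Random(seed)
    price = 100.0
    for _ in range(50):
        price += drift + rng.uniform(-1.0, 1.0)
    return price

def rank_sum_z(xs, ys):
    """Rank-sum of xs within the pooled sample, as a normal-approximation z."""
    pooled = sorted([(v, 0) for v in xs] + [(v, 1) for v in ys])
    w = sum(rank + 1 for rank, (_, src) in enumerate(pooled) if src == 0)
    n, m = len(xs), len(ys)
    mean = n * (n + m + 1) / 2.0
    var = n * m * (n + m + 1) / 12.0
    return (w - mean) / math.sqrt(var)

# steps 1 and 2: random trajectories from each 'model', different seeds
sample1 = [run_model(seed) for seed in range(20)]
sample2 = [run_model(seed) for seed in range(100, 120)]

# step 3: |z| < 1.96 fails to reject, at the 5% level, the hypothesis that
# the two samples come from the same distribution
print(round(rank_sum_z(sample1, sample2), 3))
```

The weakness noted in the text is visible here: whatever the test says holds only for the 20 sampled trajectories per model, and is extrapolated probabilistically to the whole trajectory space.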
So, a way of doing the qualitative comparison is by defining quantitative measures over the qualitative behaviour. Illustrations already exist in the literature, for instance that given by Axtell et al. (1996) for the alignment of models. They compare the average number of stable regions observed in the simulation of a model of cultural transmission. In this experiment, the factors are the number of cultural attributes, the attributes' levels, and the size of the territory where the interaction happens. The statistical comparison is made at only one time instant of the simulation (at the end of it). Axtell et al. also mention an


alternative procedure using statistics: to observe whether certain internal relationships among variables hold in both models. They call this comparison 'relational' equivalence, and the statistical one 'distributional' equivalence.

2.7 Verification and Validation of a Simulation

In a simulation, processes of three types can be identified: (1) those programmed or explicit in the simulation design (which can be used for verification); (2) those not clearly given in the design but desirable in the model as well as being well known in the modelled system (which can be used for validation); and (3) those called open aspects, which are little understood in the target system and about which a simulation might give hints, helping a modeller hypothesise about the behaviour of the model and, extrapolating, about the behaviour of the target system.

For example, in an event-driven simulation of a queue system, the process of arrival of entities into the queue might have been programmed to follow a probabilistic distribution. This first sort of behaviour can be foreseen and so is not of interest for validation; it is an aspect to be verified. Verification can be achieved by comparing the design of the model with some formal specification. In fact, verifying a model consists in comparing the structure of the model with some specification given by a theory. On the other hand, in the event-driven simulation a modeller might notice certain behaviour of the size of the queues not directly given in the simulation design. Checking whether the corresponding behaviour in the target system - which may supposedly be measured in some way (e.g., via a statistical sample of the behaviour of interest) - coincides with the behaviour in the simulation is what is called validation of the model. Finally, there will be many other features a subject (a modeller) does not comprehend well in the dynamics of the modelled system and which he wishes to understand better by observing the simulation. For example, the special features called emergent tendencies are worthy of particular attention in social simulations.

2.8 Event-Driven Simulation

Strategies in event-driven simulation are based on the use of a future event list (FEL). Each event is associated with: the time when it will be activated (activation time), possibly additional conditions for the event to be activated once the time-condition is satisfied, and the consequences of the event (namely, changes to be made in the system's state and the scheduling of further events). Events in the list are ordered by their activation time. For example, in a queue system, the event 'arrival of an entity into the system' will be

associated with the arrival time (the activation time of this event), other conditions for such an arrival to happen, changes to be implemented in the state of the system (for example, placing the entity in the queue if the queue is smaller than a certain size), and, finally, the scheduling in the FEL of a new arrival. As can be seen from these examples, the FEL is used for guiding the state transitions over time. The size of a time step is taken as the difference between the activation times of each pair of consecutive events in the FEL.

Two of the most popular strategies in event-driven simulation are simulation by events and simulation by processes. In simulation by events, each event is programmed in a separate module and a main program executes the module corresponding to the next event in the FEL. In simulation by processes, the process an entity carries out is the key aspect. The program consists of a sort of net for the 'life-cycle' of the entity. For example, in a queue system, the entities are the units queuing, and their life-cycle is given by a succession of events: to arrive, to queue, to be served, to leave. The program consists of a net of nodes following the temporal order given by the entity's life-cycle, where the consequences of each event (as well as possible decisions the entity has to take, or decisions of the system, e.g., gates) are programmed. An event fires as the entity reaches the associated node(s). So nodes are used to represent: entrance to the system, queue, service, output of the queue, and exit (destruction of the entities). Examples of languages where these strategies have been implemented are SIMAN, SLAM, GLIDER, and GPSS. This sort of simulation is common in industry.

2.9 Simulation of Systems with Variable Structure

The basic aspects of a system with variable structure and its simulation will be reviewed following the ideas of Domingo et al. (1996, 2000), Testa et al. (1999), and Heylighen (1989, 1991, 1995). Then some speculation about simulating structural change according to Domingo's ideas and MAS will be presented.

Testa et al. consider it fundamental to recognise change at two levels: at the level of the static structure (what they call form) and at the level of the dynamic structure (function). They call this change 'fluctuation'; more precisely, they talk about 'fluctuation within a probabilistic range'. They argue that there is a mutual dependence between the two changes and note that such an interdependence cannot be ordered either hierarchically or causally. They describe the processes of dissolvence and emergence as two facets of the same coin, where a whole (system) arises from its components (sub-systems): emergence is the rise of new properties of the whole, and dissolvence the constraint of its components 'in a


synergic process of integration and re-organisation'. Examples from biological systems are given.

In Domingo et al. (2000), another concise idea of structural change is found, for here they affirm that 'in structural change systems change the behaviour, the components and the relations among them.' They identify other processes leading structural change in addition to the aggregation described by Testa et al., among them bottom-up and top-down changes. In the former case the parts are responsible for driving the process of structural change; in the latter case the process is driven from above - the whole system drives the constraint of, e.g., properties of the components, and the elimination or introduction of components. Moreover, they argue that the two processes can be present simultaneously. For example, a revolution can be a bottom-up change led by members (components or actors) of a state (e.g., groups, associations, people) creating changes in the whole system (the state). The opposite happens when there are economic associations among countries: the new rules for the economic interchange among the involved countries are elaborated at the top level and then 'imposed' upon the lower levels of the system (the economic actors). However, the systems these examples refer to can present both kinds of change over a time period of sufficient length. For example, in the case of a revolution, after the new political force takes power, it starts promoting changes from above (top-down) as it implements, for instance, its economic and educational policies.

Domingo identifies two main problems for the simulation of structural change (SC):
- First, those related to the laws of change, that is, aspects concerning when to change the structure and how the variation-selection of structures will be given.
- Second, that of the level of aggregation.
In nature, the aggregation of sub-systems seems to happen at many levels (perhaps infinitely many), starting from small atomic levels possibly still unknown in physics. Obviously, modelling all of these levels in a computer cannot be attempted. On a computer only certain levels of aggregation of structural change can be modelled (the relevant ones for certain purposes and in accordance with the available modelling tools). A graphical representation of several levels of control, where structural change is allowed, is given in Figure 2.6.


[Figure 2.6 shows a hierarchy of control levels:]
Other, higher meta-system levels
Meta-meta-system
Meta-system specification: specifies procedures about 'meta-changes', e.g., structural change. It allows one to find out when the SC happens, how to implement the process of variation selection of structures (e.g., cross-over, mutation, and propagation of rules in the case of evolution of the dynamical structure), and the level of aggregation of SC.
System specification. Zeigler's static specification: <T, X, Ω, Q, Y>; Zeigler's dynamic specification: <δ, λ>.

Figure 2.6. Graphical representation of several control levels where structural change takes place

Example 2.5. MAS-based Simulation: A Weak Case of Simulation of Structural Change. In MAS some levels of aggregation are at the level of agents. Agents can be created and destroyed, and their rulebases modified. There are several levels of aggregation, but the number of levels is finite. Moreover, these structures (agents) and levels of aggregation are fixed. Another level of aggregation is that of the cognitive model of the agent; here aggregation occurs at the level of rules. Examples of agents' internal models are presented in Moss et al. (1998a). In the following pages we will present examples of how the problems adverted to by Domingo have been handled in MAS. At the level of aggregation of agents, since structures are fixed, the first problem (namely the process of variation selection) is solved in an easy way: it is simply obviated. At the level of cognitive models more elaborate procedures have been used: evolutionary programming. Here rules are evolved using genetic algorithms or genetic programming (Edmonds, 1998). These two examples of simulation of SC are part of two strategies Domingo et al. propose to solve the named problems. Structural change in the case of an agent can be a decision either of the environment or of the agent itself. For instance, a trader in a market might be eliminated by the economic environment (e.g., if his profits are

too low) or he may choose to leave the market. A cognitive model can suffer structural change when the agent undergoes a new experience1. Both Heylighen and Domingo recognise the importance of a meta-model when modelling structural change. The meta-model, or controller, would be responsible for solving the problems Domingo has indicated: namely, to decide when the change happens, to implement the process of variation selection of structures, and to choose the aggregation level (see also Heylighen, 1991b; Domingo et al., 1996a). Several aspects of their proposal coincide with the strategy followed in a technique to be presented in Chapter 5 and implemented in the example contained in Chapter 7.

2.10 Towards Alternative Methods for Analysing the Dynamics of Simulations of Complex Systems

A post hoc analysis of a simulation consists in examining outputs obtained from the simulation (as has already been described in this chapter). This is the strategy used in many simulation studies to understand a simulation after it has occurred, owing to the difficulties of studying it from its design. Conclusions of this analysis can be extrapolated to the modelled 'real system' (if it exists) under certain conditions (e.g., the morphism criterion discussed in previous sections). In this thesis a post hoc analysis is of special interest for assisting in explaining emergent tendencies in a simulation. Usually it is not possible to include all possible simulation trajectories in a post hoc analysis of a model-based simulation, as might be suggested by a strong notion of morphism like that of Zeigler introduced in section 2.3.3. An exhaustive generation of simulation trajectories is generally impracticable owing to the limited computational resources available to a modeller. It might also produce too much information and make the analysis of the outputs difficult.
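The impracticability of exhaustive generation can be illustrated with a short sketch. The figures used below (numbers of agents, choices, and steps) are hypothetical, chosen only to show the rate of growth of the trajectory space:

```python
# Illustrative sketch: the number of trajectories an exhaustive post hoc
# analysis would have to examine. The figures (agents, choices, steps)
# are hypothetical and chosen only to show the growth rate.

def trajectory_count(agents: int, choices_per_agent: int, steps: int) -> int:
    """Each time step branches into choices_per_agent ** agents successor states."""
    branching = choices_per_agent ** agents
    return branching ** steps

print(trajectory_count(2, 2, 10))              # 4**10 = 1048576
print(len(str(trajectory_count(10, 3, 50))))   # a 239-digit number
```

Even a toy model of two agents with two choices over ten steps yields over a million trajectories; a modest MAS quickly exceeds any available computational resource.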
Because of this, alternative approaches have been developed along the weaker idea of morphism offered by the notion of approximation introduced in section 2.6. Among these alternative methods are scenario analysis and Monte Carlo techniques. Scenario analysis consists in analysing a modeller's selected subset of trajectories. A Monte Carlo analysis, on the other hand, consists in analysing a randomly chosen subset of trajectories. In a Monte Carlo study, conclusions from a 'big enough' sample of trajectories are probabilistically extrapolated to unexplored trajectories (see section 2.6.1).
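The Monte Carlo strategy can be sketched as follows. The toy model below (a drifting quantity driven by a randomly drawn setting) is invented for illustration; only the sampling logic reflects the method just described:

```python
import random

# Hedged sketch of a Monte Carlo analysis of trajectories. The toy model
# (a drifting "price" driven by a randomly drawn setting) is invented;
# only the sampling-and-extrapolation logic reflects the method above.

def run_trajectory(rng, steps=20):
    drift = rng.uniform(-0.5, 1.0)      # randomly chosen model setting
    price, trajectory = 100.0, []
    for _ in range(steps):
        price += drift + rng.uniform(-1.0, 1.0)
        trajectory.append(price)
    return trajectory

def monte_carlo(tendency, samples=1000, seed=1):
    """Estimate the probability that a tendency holds, from a random sample."""
    rng = random.Random(seed)
    hits = sum(bool(tendency(run_trajectory(rng))) for _ in range(samples))
    return hits / samples

upward = lambda trajectory: trajectory[-1] > trajectory[0]
print(monte_carlo(upward))   # an estimate in [0, 1], to be extrapolated probabilistically
```

The estimate is then extrapolated to the unexplored trajectories, which is valid only under the continuity assumption criticised below.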

1 For more about a typology of SC, see Domingo et al. (2000).


Both of these methods leave trajectories unexplored; neither exploration is exhaustive. In the first method the omitted trajectories depend on the 'subjective criteria' a modeller applies to choose the trajectories to be explored and, in the second, they are omitted at random. Because of this the analysis is limited, as the conclusions cannot be generalised and applied in wider theory; only probabilistic generalisations are allowed (in the second method). Moreover, probabilistic extrapolation is valid only if small changes in the settings of the simulation produce small changes in the outputs of the simulation, i.e., if outputs vary continuously with the settings of the model. As this assumption is not valid in complex systems (e.g., in systems with chaotic behaviour), a Monte Carlo analysis of a simulation of a complex system might omit trajectories likely to happen in the 'real system' where the behaviour of interest might be quite different from that observed in the explored ones. This arbitrariness in the chosen trajectories engenders difficulties for generalising conclusions when simulating complex systems.

The dynamics of complex systems are usually very sensitive to changes in the properties of the components and, in general, to the evolution of the structure of the system itself, since small changes in the properties of the system might result in big changes in its outcomes. For example, the behaviour of a chaotic system is very sensitive to changes in the initial conditions of the simulation; a slight change in the initial conditions may produce big alterations in the processes the system is undergoing (e.g., in weather forecasting, the system defined as the surroundings of a geographical area that determine the weather conditions of that area). Another example is a social system, in which chaotic behaviour is present because of its high level of complexity.
This complexity stems mainly from the large number of entities (humans) interacting in a society and from the complexity of each of these entities (see the discussion concerning modelling systems at high levels of complexity in Chapter 4, especially section 4.3.3). A similar susceptibility is expected to be present in the dynamics of a simulation of a complex system. For example, phenomena in a MAS-based simulation of a social system might be unexpectedly contingent on changes in the parameters of the model and the choices of the agents, so that small changes in parameter-settings or in agents' decisions might result in significant changes in the simulation outputs. The inappropriateness of modelling complex systems using classical modelling approaches based, e.g., on averages has already been discussed by many thinkers. One of them, Crutchfield (see, e.g., Crutchfield, 1992), argues that in these systems 'fluctuations dominate behaviour and

averages need not be centred around the most likely value of its behaviour. This occurs for high complexity processes … since they have the requisite internal computational capacity to cause the convergence of observable statistics to deviate from the Law of Large Numbers' (Crutchfield, 1992, p. 35, in the version of the paper available at http://www.santafe.edu/projects/CompMech/papers/SATTitlePage.html). In this statement, Crutchfield seems to be considering systems at a high level of complexity, e.g., those with a vicarious mechanism (whose ability and activity he refers to as 'internal computational capacity'), such as human systems. Aspects of these systems and their simulation are examined in section 4.3.3.

All this creates doubts about the appropriateness of techniques that arbitrarily leave simulation trajectories unexplored, as well as about the use of statistical summaries for studying simulation outputs. This thesis is aimed at developing alternatives to such existing methods for studying complex systems: on the one hand, notions stronger than that of approximation, for overcoming drawbacks in existing techniques, but more practical than Zeigler's original notion of morphism (for this, a 'constraint-based exploration of simulation trajectories' and a 'platform of simulation architectures' will be suggested); and, on the other hand, alternatives to the use of 'statistical summaries' for analysing simulation outputs (in this sense, the 'envelope of simulation trajectories' will be proposed). Ideas not commonly used in simulation, such as those implemented in theorem-proving and Automatic Reasoning (described in Chapter 3), will be introduced for exploring the dynamics of a system. A proof of a theorem in a simulation rests on a more exhaustive exploration of the dynamics of a simulation than that carried out in the traditional methods named above (see Chapter 5 and, in particular, section 5.5).
A proof will allow a modeller, on the one hand, to explore a fragment of a simulation theory and, on the other, to implement measures of behaviour other than averages over the explored fragment. As we are interested in studying emergent tendencies, we propose, as an alternative measure of behaviour, the envelope of the tendency in the explored fragment of the simulation theory. Hence, the methodology to be proposed should be a valid alternative to existing methods for analysing the simulation of a complex system. In addition, this methodology will be in line with Heylighen's call for computational mechanisms for helping a social agent (in this case a modeller) to cope with the complexity of the social system it inhabits (see section 4.3.4).
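The envelope measure can be sketched in a few lines: over an explored set of trajectories it records, at each step, the band within which the observed values stay. The sample trajectories below are invented for illustration:

```python
# Hedged sketch of the "envelope" measure: over an explored set of
# trajectories, record at each step the band [min, max] within which the
# tendency of interest stays. The sample trajectories are invented.

def envelope(trajectories):
    """Per-step (lower, upper) bounds over all explored trajectories."""
    return [(min(column), max(column)) for column in zip(*trajectories)]

explored = [
    [1.0, 1.5, 2.1, 2.8],
    [1.0, 1.2, 1.9, 3.0],
    [1.0, 1.7, 2.4, 2.6],
]
print(envelope(explored))
# [(1.0, 1.0), (1.2, 1.7), (1.9, 2.4), (2.6, 3.0)]
```

Unlike an average, the envelope asserts something about every explored trajectory: each one lies within the band at every step, so a tendency proved over the explored fragment holds without probabilistic qualification.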

3

Chapter 3 - Proving Theorems in Computational Models

3.1 Introduction

One of the main purposes of this thesis is to develop a methodology for analysing and understanding the dynamics of simulations in MAS. It is of interest to analyse regularities in the dynamics of a simulation that are not explicitly given in the simulation design and that are relevant for understanding the modelled system, despite being poorly comprehended by the modeller (these regularities will be called 'emergent'). As observed in section 2.10, existing methods for analysing the dynamics of simulations of complex systems have serious drawbacks, and this thesis aims at looking for alternative approaches. As was shown in the previous chapter, those existing methods for investigating simulation behaviour allow one to draw only limited conclusions about the dynamics of the simulation, in only very specific spaces of the simulation theory. Those methods explore only a single simulation trajectory at a time and use the notion of approximation for proving morphisms. It is the intention in this thesis to investigate more powerful methods for analysing such dynamics, which will permit one to reach more general conclusions and to define stronger notions of morphism than those allowed when using the notion of approximation. Concretely, this thesis aims at developing methods for proving theorems about the dynamics of a simulation. As it is usually difficult to explore the whole dynamics of a simulation (e.g., the whole simulation theory), this thesis seeks methods to prove theorems with respect to a sub-space of the space of the simulation dynamics and, correspondingly, with respect to a fragment of the simulation theory. Though in this thesis a methodology for proving theorems in a simulation will be developed, this work is also exploratory in the application of theorem-proving strategies to proving tendencies in the dynamics of a simulation. There have been no previous developments in this line, as far as is known.
Because of this, a review of a wide variety of existing methods, not only those directly related to the strategy to be proposed in this thesis, will be given in this chapter. This review might be used for classifying: the theorem-proving strategy followed in the theorem-prover to be used to implement an example (e.g., OTTER), existing methods for simulation, and the theorem-proving strategy for proving theorems about the dynamics of a MAS to be presented in the next two chapters. The introduction to simulation given in the previous chapter, together with the review of formal aspects of logic-programming and theorem-proving to be presented in this chapter,

will provide the theoretical background for a move towards the methodological developments to be given in the next chapters of this thesis. The main intention of this chapter is to provide a review of the different strategies for theorem-proving. First, in section 3.2, basic conceptual aspects of proving theorems using computational algorithms will be briefly addressed, along with the difficulties that have been found in devising a good and general computational algorithm for proving the satisfiability/unsatisfiability of a formula. Also in this section, a basic conceptual framework related to proving theorems in a theory, useful for the rest of the presentation, will be offered. Then (in section 3.3), similarities and differences between exploring the dynamics of a computational program in theorem-proving and in a simulation will be reviewed. The intention is to preview the most straightforward similarities between simulation runs and explorations in theorem-proving. Direct parallels will be found between the search strategy called 'model exploration' in theorem-proving and the generation of trajectories in simulation. Section 3.4 will briefly discuss seminal work on proving in computational models. The initial interpretation-based exploration of 'logical models' of a theory, namely Herbrand's universe and Herbrand's theorem, will be addressed. Herbrand's theorem is the basis of the more efficient methods for checking the falsity of a set of clauses in first-order logic, and it became the fundamental reference for developing resolution, the most widely used procedure for proving. This section provides the starting point for a more comprehensive review of theorem-proving strategies in section 3.5, whose methods are based on these two achievements of Herbrand. Having presented Herbrand's foundational work for theorem-proving in section 3.4, the main strategies for theorem-proving will be reviewed in section 3.5.
The review will include a consideration of the most popular search procedures and strategies in theorem-proving. Two main branches of theorem-proving approaches will be presented in accordance with the orientation of their inference procedure, which might be either predominantly semantic- or predominantly syntactic-oriented. At the same time, examples of theorem-provers using these procedures and strategies will be offered. First, the syntactic methods will be presented; these have been the most successful ones. In this group is the theorem-prover OTTER. OTTER will be used to program the case study of this thesis and so will be an important reference as a source of ideas for developing the methodology to be presented in Chapters 5 and 6. This review may be used to describe the main aspects of OTTER and other theorem-provers found in the literature. However, neither the simulation method to

be proposed in the next chapter nor the common simulation methods correspond well to a syntactically driven proof. Then the other group of strategies, the predominantly semantic-oriented ones, will be reviewed (in sub-section 3.5.2). In this group will be found the so-called model-based exploration approaches and, among them, tableaux. This sort of search has similarities with the method to be proposed in the next chapters, but again that method does not fit completely well into the sort of search described in this sub-section. However, the given taxonomy, as happens with any classification, is somewhat arbitrary, as most theorem-provers are in part semantically and in part syntactically oriented. Hence, it will be worth going further and analysing strategies for theorem-proving using additional criteria; section 3.6 is offered for this purpose. In sub-section 3.6.1 additional criteria for classifying theorem-provers are given. Then, using the criteria provided so far, the main aspects of Bonacina's (1998, 1999) classification are recapitulated in sub-section 3.6.2. As, once again, neither the proving strategy to be developed in this thesis nor other simulation methods fit well into the given (Bonacina's) taxonomy, a further discussion will be presented in sub-section 3.6.3. Having opened up the panorama of theorem-proving strategies, the main aspects of a new tendency in logic-programming will be considered: constraint logic-programming and its variant, rule-based constraint logic-programming (see, e.g., Frühwirth et al., 1992). This new area of research seems relevant to achieving efficient methods for exploring dynamics in simulations: there, inference procedures using a constraint-driven search as an alternative to unification are implemented, and novel and promising ideas for efficient implementations of the exploration of behaviour in a simulation can be found.
Finally, the usefulness of meta-reasoning for an efficient exploration of the dynamics of a computational program, and in particular for implementing context-driven inference procedures, will be highlighted. Context-oriented search seems to be convenient for exploring and proving theorems in the dynamics of a simulation.

3.2 Proving: an Ancient Problem

The problem consists in finding a proof procedure to verify the validity or inconsistency of a formula or a theorem with respect to a theory. In simulation, a theorem might represent a tendency in the dynamics of the simulation.


According to some authors (e.g., Chang and Lee, 1973), this problem was first attempted by Leibniz, then by Peano, and, in the twentieth century, by Herbrand's school. Nevertheless, a general procedure for proving theorems in a theory was shown to be impossible by Church and Turing in 1936 (Chang and Lee, 1973). In any sufficiently expressive formal system, there will always be invalid formulas for which such procedures never terminate with a decision. For example, 'in certain cases the generation of clauses never stops, because no finite database satisfies all clauses, but a contradiction does not arise either'; this is because of the 'undecidability of satisfiability' (Stickel, 1988). However, there are procedures that are useful for attempting to prove certain valid formulas. In logic-programming, a method or program able to provide an answer (in this case, to the dichotomic question of whether a formula is satisfiable or not) within a finite delay is called an algorithm, while those that can provide an answer when such an answer is positive but give no guarantee of an answer when it is negative are called semi-algorithms or procedures (Gochet et al., 1988). Similarly, a theory is said to be decidable if it admits a decision algorithm and semi-decidable, or partially decidable, if it only admits a decision procedure. Methods for proving the validity of a theorem in first-order logic (e.g., resolution) are procedures (Gochet et al., 1988). In consequence, in this chapter the aim is to discuss appropriate procedures for proving in particular situations. As a preliminary step towards this goal, the main ways to check the validity of a theorem with respect to a computational theory will now be reviewed. According to Chang and Lee (1973, p. 45), there is no guarantee of finding a procedure to check the invalidity of an invalid formula, and no general procedure exists for deciding the validity of an arbitrary formula.
However, it is always possible to find a specific procedure for checking the validity of a particular valid formula. The most important approach for proving the validity of formulas in theorem-proving was given by Herbrand in the 1930s. The first algorithm implementing Herbrand's method was given by Gilmore in 1960. This algorithm was subsequently modified, the most important improvement being that given by Robinson. These algorithms verify the validity of a formula by detecting the inconsistency of the negation of the formula. The important point here is that if the formula is valid, the program will halt after a finite number of steps; if it is not, such detection is in general not possible and the program might never halt (Chang and Lee, 1973).
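By contrast with the first-order case, in propositional logic the refutation strategy (valid if and only if the negation is unsatisfiable) is a genuine algorithm, since only finitely many interpretations exist. The following illustration is mine, not taken from Chang and Lee; formulas are encoded as Python predicates over truth assignments:

```python
from itertools import product

# Illustrative contrast (not from Chang and Lee): in propositional logic,
# refutation is a decision algorithm because the interpretations are finite.

def unsatisfiable(formula, atoms):
    """True iff no truth assignment to the atoms satisfies the formula."""
    return not any(formula(dict(zip(atoms, values)))
                   for values in product([False, True], repeat=len(atoms)))

def valid(formula, atoms):
    """Validity checked by detecting inconsistency of the negation."""
    return unsatisfiable(lambda a: not formula(a), atoms)

# (P -> Q) v (Q -> P) is valid; P & ~P is unsatisfiable; P alone is neither.
print(valid(lambda a: (not a["P"] or a["Q"]) or (not a["Q"] or a["P"]), ["P", "Q"]))  # True
print(unsatisfiable(lambda a: a["P"] and not a["P"], ["P"]))                          # True
print(valid(lambda a: a["P"], ["P"]))                                                 # False
```

In first-order logic the analogous enumeration (over ground instances) need not terminate, which is exactly the semi-decidability discussed above.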


3.2.1 Basic Concepts and (Logical) Model Search/Construction

In this thesis the phrase 'logical model' will be used to refer to the concept of model in the context of logic. This will help to prevent confusion, as the notion of model in logic and in logic-programming is different from the concept of model in modelling and simulation, which has been used until now in this thesis. First, some basic concepts will be defined in order to provide a context for the notion of model in logic-programming (Gochet et al., 1988). Only some notions will be covered; other more basic concepts, such as predicate, clause, literal, function, term, constant, proposition, and unification, will be assumed as known. Seven terms that will be frequently used below and have a special significance for understanding the notion of theory (e.g., that implicit in a simulation) and of proofs in a theory are now defined:

Signature: collection of predicate and functional constants.
Language: set of terms and formulas generated by the signature through the syntactical rules of predicate calculus.
Structure: relative to a language, an interpretation of this language.
Theory: relative to a language, a set of formulas of this language.
Axioms of a theory: the set of formulas in a language defining the theory.
Theorem in a theory: a logical consequence of the axioms defining the theory.
Logical model of a theory: a structure for which all formulas of the theory are valid.

A logical model of a theory in predicate logic might be given with reference to the ground terms of the language, that is, by specifying whether each ground term is true or false. Then, the definition of interpretation given in the semantic specification of the language for variables and formulas can be used to determine whether a variable or a formula is valid under a certain logical model or under some interpretation.
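These definitions can be made concrete in a small propositional setting. The encoding below is mine and purely illustrative: axioms are predicates on structures (truth assignments), a logical model is a structure validating every axiom, and a theorem is a formula true in every logical model of the theory:

```python
from itertools import product

# Hedged propositional rendering of the definitions above (encoding mine).

def logical_models(axioms, atoms):
    """All structures for which every formula (axiom) of the theory is valid."""
    return [structure
            for values in product([False, True], repeat=len(atoms))
            for structure in [dict(zip(atoms, values))]
            if all(axiom(structure) for axiom in axioms)]

def is_theorem(formula, axioms, atoms):
    """A logical consequence of the axioms: valid in every logical model."""
    return all(formula(model) for model in logical_models(axioms, atoms))

atoms = ["P", "Q", "R"]
theory = [lambda a: (not a["P"]) or (not a["Q"]) or a["R"],   # ~P v ~Q v R
          lambda a: a["P"] or a["R"],                          # P v R
          lambda a: a["Q"] or a["R"]]                          # Q v R
print(is_theorem(lambda a: a["R"], theory, atoms))   # True: R holds in every model
```

Here the signature is the set of atoms, the three lambdas are the axioms, and R is a theorem because every structure satisfying all three axioms makes R true.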
Alternative techniques for defining a logical model or a set of logical models are: specifying only the set of valid ground terms (the remaining ones are assumed to be false by default), or giving a formula. Figure 3.1 illustrates the interrelationship among these concepts.


[Figure 3.1 layout: a signature (a sort of frame) generates, by using syntactical rules, the terms and formulas of a language; a theory is a subset of the formulas of the language, each such formula being an axiom of the theory (Axiom 1: A1, …, Axiom n: An); theorems follow from {A1, …, An}.]

Figure 3.1. Graphic representation of the relation among the concepts: Language, Signature, Theory, Axioms of a theory, and Theorems in a theory

Example 3-1:
Signature: P( , ), Q( , ), R( ).
Language: any formula A ∘ B, where A and B can be P, Q, or R valued on certain variables, and ∘ can be any of the operators commonly used in the literature for defining a propositional first-order logic {∨, ∧, …}.
An example of a theory is S = {(1) ~P(x,y) ∨ ~Q(y,z) ∨ R(x), (2) P(x,y) ∨ R(y,z), (3) Q(x,y) ∨ R(x)}. A theorem for this theory is R(x).
Assuming a propositional language, S = {(1) ~P ∨ ~Q ∨ R, (2) P ∨ R, (3) Q ∨ R}: an interpretation would be {R = true, Q = true, P = false}. The formula R can be used to represent the set of logical models M = {R = true, Q = V1, P = V2}, where V1 and V2 are in the domain {false, true}.
In general, a logical model search procedure for proving is an interpretation-guided exploration of logical models. An example of a logical model search using backward-chaining is tableaux. Forward-chaining logical model procedures have also been implemented, for example event-driven simulation and partitioning of rules in some declarative MAS, as will be seen below.

3.3 Simulation, Logic-programming, and Theorem-proving

Logic-programming and theorem-proving can be applied to many research subjects. In this thesis the particular interest is in their use for proving tendencies in the dynamics of a

simulation. To make clear the relation between simulation on the one hand and logic-programming (and especially theorem-proving) on the other, the relevant aspects of the dynamics of a simulation and of inference procedures in logic-programming and theorem-proving will be reviewed. The main similarity is found between what in simulation is called a 'trajectory' and what in logic is called a logical model of a theory. A trajectory of a simulation corresponds to a logical model, or valid interpretation, of a theory in logic. In fact, the generation of trajectories in a simulation can be seen as a particular case of logical model generation: it is a generation of logical models for the theory represented by the simulation program. However, it is not always required to know all valid facts in a logical model, as is the case in tableaux, in order to check theorems in a trajectory. A simulation implemented using declarative programming will be expressed in terms of clauses, and there may exist alternative ways of generating the logical models corresponding to the different possible simulation trajectories. A formula will be a theorem of the simulation program if it is true in all simulation trajectories. A semantic proof of a certain regularity in a simulation can be implemented by generating all trajectories and observing whether the regularity holds in all of them (this is a 'logical model checking'). This assumes that, as is common in simulation, a finite-state structure representing all logical models exists. Simulation trajectories are usually generated following an event-driven scheme or at discrete, equally spaced intervals of the independent variables. This is the general case, for example, in MAS and in finite difference methods for solving differential equations. Their generation usually consists in a time- and/or space-driven generation of states starting from a certain initial state. This is a sort of forward-chaining inference.
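This correspondence between trajectory generation and forward-chaining model construction can be sketched as follows; the rule base is invented for illustration:

```python
# Hedged sketch (rule base invented): trajectory generation as forward-
# chaining logical model construction. Rules fire whenever their antecedents
# match facts in the database, and the growing set of derived, time-stamped
# facts is the trajectory (logical model) under construction.

def forward_chain(initial_facts, rules, max_steps=100):
    """Apply every rule to the database until no new facts are derivable."""
    facts = set(initial_facts)
    states = [frozenset(facts)]
    for _ in range(max_steps):
        derived = set().union(*(rule(facts) for rule in rules)) - facts
        if not derived:            # fixpoint reached: the model is complete
            break
        facts |= derived
        states.append(frozenset(facts))
    return states

# One toy rule: a rise observed at time t propagates to time t + 1, up to t = 3.
rules = [lambda f: {(name, t + 1) for (name, t) in f if name == "up" and t < 3}]
states = forward_chain({("up", 0)}, rules)
print(sorted(states[-1]))   # [('up', 0), ('up', 1), ('up', 2), ('up', 3)]
```

The sequence of database states is the trajectory; the final, quiescent database is a logical model of the rule set, and a regularity can be checked against each state in turn.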
For instance, in the simulation of a differential equation, the simulation might follow a temporal and/or a spatial order. Consider, for instance, the case of a two-space heated plaque whose temperature at the internal points changes over time and is different for different points. The differential equation might be: ∂T/∂t = α(∂²T/∂x² + ∂²T/∂y²), where T represents the temperature, x and y are the spatial variables, and t represents time. These sorts of equations are solved using finite differences: the partial derivatives are replaced by finite difference approximations, then a set of finite values in the range of interest is chosen for the independent variables x, y, and t, and, finally, T is obtained for the chosen points of the independent variables step-by-step,


starting from the known values of T on the boundaries of the plaque. Should this flux be null, time is no longer a variable of interest, although it may still be interesting to know the plaque's temperature at each point; then the simulation steps would only be over the spatial dimensions. In this case the simulation steps follow only a spatial sequence. In the case of the simulation of processes involving humans (for example, simulation of a queue system in industry, on a road, or in a bank, or simulation of social interaction in a market), the dynamics is usually generated following a temporal sequence.

Similarly to what happens in simulation, many logic-based computational systems follow a forward-chaining strategy: consequences are generated as the antecedents of inference rules match facts already existing in the database. This is a strategy also used in theorem-proving. Generation of trajectories over time, or temporal logical model exploration, has been formalised in Temporal Logics; see, for example, Fisher et al.'s proposal of a clausal resolution procedure for discrete temporal logic (Fisher et al., 2000). Obviously, theorem-provers that use forward-chaining inference have similarities with simulation. A common strategy for proving in computational models, which is extendible to simulation, is to add the negation of the theorem into the database and then show that this causes a contradiction. A drawback of this sort of search is that many facts irrelevant to the proof may be generated before the contradiction appears. To handle this problem, heuristic strategies have been developed; more details about these procedures will be given below.

In contrast to this, searches in logic-programming (e.g., in Prolog) are usually not forward-chaining but backward-chaining, or goal-driven. A goal is managed as a question to be answered with respect to new goals or facts. Assume this is a goal at level 1.
If this goal matches the head of a rule (consequent), the body (antecedent) of the rule (with the respective update of the variables according to the variable unifications) becomes the new goal in a next level (level 2). This new goal might consist in a conjunction of predicates. In this case, each of those predicates would be a sub-goal to be satisfied at the new level – thus, a new list of sub-goals is constructed. The next task is to check if each of these goals is satisfied. Then, one of them is taken as the goal to be treated. Once the head of a rule matches this sub-goal, then a new list of sub-goals at another level (level 3, given by the antecedent of the matching rule) is set. The search stops as soon as all resulting goals become true, that is, as soon as all the goals are shown to be facts.
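The goal-driven search just described can be sketched propositionally; the rule base and facts below are invented for illustration:

```python
# Hedged propositional sketch (mine) of goal-driven search: a goal matching
# the head of a rule is replaced by the rule's body as new sub-goals, and
# the search succeeds once every outstanding goal is shown to be a fact.

RULES = {                     # head: list of alternative bodies
    "grandparent": [["parent", "parent_of_parent"]],
    "parent": [["mother"], ["father"]],
}
FACTS = {"father", "parent_of_parent"}

def prove(goals):
    if not goals:                        # all goals shown to be facts
        return True
    goal, rest = goals[0], goals[1:]
    if goal in FACTS:
        return prove(rest)
    for body in RULES.get(goal, []):     # try each rule whose head matches
        if prove(body + rest):           # new sub-goals placed first (LIFO)
            return True
    return False

print(prove(["grandparent"]))   # True: via the alternatives father, parent_of_parent
```

Because new sub-goals are placed at the front of the goal list, this sketch treats them before the remaining target goals, which corresponds to the depth-first regime typical of Prolog.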


If during the search new sub-goals have priority over previous target goals to be tested (i.e., goals are listed in LIFO order: Last-In First-Out), the search is called depth-first. If, on the other hand, new sub-goals are considered only after previous target goals (i.e., goals are arranged in FIFO order: First-In First-Out), the search is called breadth-first. Generally a depth-first search generates less data than a breadth-first search; it is the search commonly implemented in logic-programming (e.g., in Prolog). Despite these differences in how the inference is driven, the inference procedures in both logic-programming and resolution-based theorem-provers are variants of the semantic and syntactic methods to be explored below.

To summarise, in some respects simulation is closer to logical model generation in logic-programming and theorem-provers than to logic-programming languages such as Prolog, as those procedures and simulation usually follow forward-chaining inference while Prolog's inference procedure works by backward-chaining. On the other hand, simulation also has similarities with certain backward-chaining search procedures like tableaux, as both follow a logical model generation strategy. Accordingly, in principle it seems convenient to try proving in simulation using procedures already implemented in forward-chaining theorem-provers and in logical model generation. Nevertheless, syntactic backward-chaining methods appear attractive for lessening certain problems (e.g., the huge amount of accumulated data) that appear when implementing logical model generation and forward-chaining methods. A recently developed approach is constraint logic-programming, which seems promising because of its flexibility for driving the search by exploiting semantic information in the data rather than using only traditional unification.
Aspects of this approach will be utilised in this thesis, as the methodology to be proposed in Chapter 5 consists in certain constraint-based explorations of trajectories.

3.4 Seminal Work: Towards a Procedure for Checking Unsatisfiability of a Set of Clauses in First-Order Logic

This section presents the first attempts at checking the unsatisfiability of a set of clauses, before going on, in the next section, to the procedures commonly used in theorem-proving. It is common to approach proving via unsatisfiability: a theorem is proved by showing that its (classical) negation, conjoined with the premises, is unsatisfiable.


3.4.1 Interpretations

The barest procedure for proving the unsatisfiability of a set of clauses S is to check whether the clauses are false under all possible interpretations of the theory. Evidently, there might be a huge number of possible interpretations, making this task impossible; it is realistic only in a few cases, and not in those of interest in this thesis. Notice that this procedure is not syntactic but semantic. Each interpretation under which a theory is valid is a logical model of the theory. Other approaches to proving based on interpretations will be reviewed in section 3.5.

3.4.2 Herbrand Universe and Herbrand's Theorem

In order to improve on the previous procedure, it seems convenient to find a sub-domain SD of all possible domains D of clause interpretations such that, to know whether a set of clauses S is unsatisfiable, it is necessary and sufficient to check the interpretations of S over SD. The first answer came from Herbrand, and the sub-domain is called the Herbrand universe of S (for more details, see Chang and Lee, 1973; Wos, 1988; Gochet et al., 1988). Chang and Lee (idem, p. 52) define it in this way:

'Let H0 be the set of constants appearing in S. If no constant appears in S, then H0 is to consist of a single constant, say H0 = {a}. For i = 0, 1, 2, …, let Hi+1 be the union of Hi and the set of all terms of the form fn(t1, …, tn) for all n-place functions fn occurring in S, where tj, j = 1, …, n, are members of the set Hi. Then each Hi is called the i-level constant set of S, and H∞, or limi→∞ Hi, is called the Herbrand universe of S.'

Examples: a) if S = {P(a)}, then H0 = {a}; b) if S = {P(a), P(f(x)), Q(g(y))}, then H0 = {a}; H1 = {a, f(a), g(a)}; H2 = {a, f(a), g(a), f(f(a)), f(g(a)), g(f(a)), g(g(a))}; H3 = {a, f(a), g(a), f(f(a)), f(g(a)), g(f(a)), g(g(a)), f(f(f(a))), …}. Instances of a clause C obtained by replacing the variables in C by members of the Herbrand universe are called ground instances.
In the second example above, P(f(a)) and P(f(g(f(a)))) are examples of ground instances of the clause P(f(x)). However useful the Herbrand universe is, it is usually hard to check all interpretations over it, because it is still too big (in general, infinite). Further steps towards more practical procedures were encouraged by Herbrand's theorem. Chang and Lee offer a version of it:
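Chang and Lee's level-by-level construction can be sketched directly. A small generator, using string terms for readability (function and variable names are illustrative):

```python
from itertools import product

def herbrand_levels(constants, functions, depth):
    """Generate H0, H1, ..., H_depth following Chang and Lee's
    definition. `functions` maps each function symbol to its arity.
    If S contains no constant, a single constant 'a' is invented."""
    levels = [set(constants) or {"a"}]  # H0
    for _ in range(depth):
        prev = levels[-1]
        new = set(prev)  # Hi+1 includes all of Hi
        for f, arity in functions.items():
            for args in product(sorted(prev), repeat=arity):
                new.add(f"{f}({','.join(args)})")
        levels.append(new)
    return levels
```

For example b) above, with constant a and one-place functions f and g, the levels reproduce H0, H1, and H2 as listed in the text.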


Herbrand's Theorem. A set of clauses S is unsatisfiable if and only if there is a finite unsatisfiable set S' of ground instances of clauses of S.

The importance of the theorem is that it suffices to find a finite unsatisfiable set S' of ground instances of S to know that the whole set S is unsatisfiable. This theorem has become fundamental for the development of resolution, the most widely used procedure for proving. The main difficulty in applying the theorem directly is its requirement of ground clauses. Resolution goes further: it can be applied not only over ground clauses but also over non-ground ones (Chang and Lee, 1973).

3.5 Approaches to Theorem-proving: Syntactic (clausal)- and Semantic (interpretation)-based Searches

In this section some of the main approaches in theorem-proving will be considered. First, those implementing syntax-driven procedures based on clause manipulation will be outlined, followed by those that are semantics-driven, since they are founded on logical model generation. Among the resolution methods and heuristics to be listed below, the following were used in the examples presented in this thesis (see Chapter 7): hyperresolution, the set of support strategy (a particular case of semantic resolution), subsumption, the fewest-literal preference strategy, and demodulators. Other, alternative methods currently used in theorem-proving or in declarative programming will also be presented.

3.5.1 Syntactic (clausal)-based Inference Procedures

The great advantage of resolution is that it can be applied over clauses containing variables. Therefore, the universe of clauses over which it works is usually smaller than the Herbrand universe, while the number of interpretations covered is still the same. Indeed, as resolution can be applied over clauses containing variables, many or even an infinite number of interpretations can be considered at a time.
Resolution-based theorem-proving has become a very important area of research since 1965, following the introduction of resolution by Robinson (Chang and Lee, 1973). Further refinements and search strategies have been developed, e.g. hyperresolution (Robinson himself), semantic resolution (Slagle), the set of support strategy (Wos, Robinson, and Carson), lock resolution (Boyer), and linear resolution (Loveland, 1970; Luckham, 1970); more details will be given below. Many successful resolution-based theorem-provers have been developed. The one to be used in this thesis is OTTER (see McCune, 1995; also see www-unix.mcs.anl.gov/AR/otter/).

3.5.1.1 Robinson's Principle for First-Order Logic (Chang and Lee)

The important point of Robinson's resolution principle is that it generates a new clause from two existing ones; the generated clause (the resolvent) is then added to the set of clauses. Robinson's resolution, also called binary resolution because it uses two clauses to generate the new one, can thus be seen as an inference rule (Wos et al., 1988). In fact, binary resolution can be applied in different forms; for example, it can be seen as a generalisation of the inference rule that yields (P v R) from (P v Q) ∧ (~Q v R). Moreover, it can be considered a generalisation of 'modus ponens' and 'syllogism' (Wos, 1988). Applying it iteratively, a chain of clauses can be generated. If the original set of clauses is inconsistent, then a contradiction eventually occurs in the chain; that this is guaranteed is called the completeness of the method, and binary resolution is complete. Obviously, completeness is a desirable attribute for resolution methods. If the set of clauses is satisfiable (there is no proof), binary resolution will halt once no more clauses can be generated. Nevertheless, complete proof methods are only available for a restricted range of relatively inexpressive logics.

The main advantage of binary resolution is that it is complete and that it places no restrictions on either of the parent clauses, so it is one of the most generally applicable resolution variants. However, it might be inefficient, in the sense that other resolution methods might accomplish certain proofs in less time or with less generated data. Moreover, other resolution procedures might be more advisable in certain situations, for instance, when better suited to the characteristics of the clauses over which the proof is going to be executed.
For example, when translating a simulation program into a theorem-prover, it may be convenient to treat the original data of the simulation program (e.g., that giving the initial state) as unit clauses and the rules of the simulation program as multi-literal clauses, and then to apply hyperresolution (see below) in the search, so that the multi-literal clauses operate as the inference rules (see below and Wos, 1988). Sometimes theorem-provers allow the user to choose the resolution method(s) to be applied. In the following, some alternative resolution methods will be informally reviewed, as some of them are used in the theorem-provers of interest in this thesis, e.g., in OTTER. Then the different strategies used alongside resolution for improving the search in a theorem-prover will be considered. These strategies are heuristics aiming to make the search shorter and computationally less expensive.
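Binary resolution itself can be sketched for propositional clauses. A toy saturation loop in which resolvents are simply added to the set; illustrative only, not an efficient prover:

```python
def resolve(c1, c2):
    """Return all binary resolvents of two propositional clauses.
    Clauses are sets of literals; '~p' is the negation of 'p'."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def refute(clauses):
    """Saturate under binary resolution; True iff the empty clause
    (a contradiction) is derived, i.e. the set is unsatisfiable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolve(a, b):
                    if not r:
                        return True      # empty clause derived
                    new.add(frozenset(r))
        if new <= clauses:
            return False                 # saturated, no refutation
        clauses |= new
```

On the unsatisfiable set of Example 3-3 below, S = {~P v ~Q v R, P v R, Q v R, ~R}, the loop derives R and then the empty clause.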


3.5.1.2 Semantic Resolution

Semantic resolution is one of the first improvements over binary resolution in the search for methods that decrease the number of redundant clauses. The main idea is to divide the set of clauses S into two subsets S1 and S2, preventing resolution among clauses within the same subset. This can be explained using an example from Chang and Lee:

Example 3-3. S = {(1) ~P v ~Q v R, (2) P v R, (3) Q v R, (4) ~R}

Applying binary resolution to (1) and (2) yields R1 = ~Q v R; from R1 and (3) we obtain R2 = R; and from R2 and (4) it is concluded that the set is unsatisfiable. Binary resolution has been applied using the clauses in the order (1), (2), (3), (4). The same result is achieved if binary resolution is applied in the order (1), (3), (2), (4). Though this second inference is clearly redundant, an automatic program might produce it if not prevented. Note that binary resolution might generate even more redundant information; for example, resolving (1) and (4) produces a resolvent that is useless for proving the unsatisfiability of S. Some of these redundant inferences are prevented by refinements of resolution procedures, as will be seen in this and the coming sections.

Using the interpretation I = {~P, ~Q, ~R}, the set of clauses can be separated into two subsets: S1, the subset of satisfied clauses, and S2, the subset of unsatisfied clauses. That is, S1 = {(1) ~P v ~Q v R, (4) ~R} and S2 = {(2) P v R, (3) Q v R}. This prevents (1) from being resolved against (4). However, (1) can still be resolved with (2) and (3), taking the clauses of S in different orders. To prevent resolving clauses in different orders when the result is the same resolvent, an additional tactic is used: an ordering of predicate symbols. Assume the order P > Q > R is used. Giving priority to predicates of higher order, binary resolution will be applied in the sequence (1), (2), (3), (4).
In fact, this ordering is arbitrary, and semantic resolution can be considered as an inference rule applied simultaneously over several clauses, in this case over (1), (2), and (3). As (1) is used to match both (2) and (3), it is called the nucleus, while (2) and (3) are called satellites. Semantic resolution is complete (Chang and Lee, 1973). Chang and Lee give the following formal definition of semantic resolution. It is reproduced to add clarity to the exposition and because of its usefulness for defining other resolution variants to be presented in the next sections. Note that Chang and Lee call the set of clauses involved a semantic clash (Chang and Lee, 1973):


'Let I be an interpretation. Let P be an ordering of predicate symbols. A finite set of clauses {E1, …, Eq, N}, q ≥ 1, is called a semantic clash with respect to P and I (or PI-clash for short) if and only if E1, …, Eq (called the electrons) and N (called the nucleus) satisfy the following conditions:
1. E1, …, Eq are false in I.
2. Let R1 = N. For each i = 1, …, q, there is a resolvent Ri+1 of Ri and Ei.
3. The literal in Ei which is resolved upon contains the largest predicate symbol in Ei, i = 1, …, q.
4. Ri+1 is false in I.
5. Rq+1 is called a PI-resolvent (of the PI-clash {E1, …, Eq, N}).'

Observe that if the nucleus is valid in the chosen interpretation, then the electrons (satellites) are false, and vice versa. It is also important to notice that an application of semantic resolution as an inference rule need not exhibit intermediate steps; that is, it can be thought of as applying simultaneously over the nucleus and the satellites. Its usefulness here lies in collapsing several inference steps into one, which may also mean a reduction in redundant data. Additional refinements have been proposed on top of semantic resolution, for example orderings of clauses. Some of them have turned into other variants of resolution, such as lock resolution. Nevertheless, some of those procedures are not complete (e.g., semantic resolution with ordering of clauses) or have disadvantages when implemented automatically. In the coming sections some of the more successful variants of semantic resolution will be examined.

3.5.1.3 Hyperresolution

Positive (negative) hyperresolution is a special case of semantic resolution where the interpretation is chosen as the one with all literals negative (positive). As a consequence, all satellites contain only positive (negative) literals, the nucleus contains at least one negative (positive) literal, and the inference yields (if successful) a clause with only positive (negative) literals.
Hyperresolution will be the resolution method chosen in OTTER for carrying out the exploration of the dynamics of the model.

Example 3-4. Consider S as given in example 3-3, i.e., S = {(1) ~P v ~Q v R, (2) P v R, (3) Q v R, (4) ~R}, where (1) is the nucleus, (2) and (3) are the satellites, and I = {~P, ~Q, ~R}. This is a case of positive hyperresolution: the literals in the interpretation are all negative, the nucleus has at least one positive literal (R), all literals in the satellites are positive, and the resolvent (R) is positive.

3.5.1.4 The Set of Support Strategy

Though this is usually understood as a strategy, it is presented here because it can be considered a special case of semantic resolution. The set of support strategy was proposed by Wos, Robinson, and Carson (Wos et al., 1965). It is a very successful strategy and has been adopted as the main strategy in many theorem-provers (e.g., OTTER). It consists in a division of the set of clauses into two groups, as in any other variant of semantic resolution: the first will be a set of valid clauses, and the second a set which, jointly with the first, is assumed unsatisfiable. There are several ways to make this division. For example, to prove that the theorem T follows from the theory given by the set of clauses {A1, A2, …, An}, the aim would be to prove that A1 ∧ A2 ∧ … ∧ An ∧ ~T is unsatisfiable. The two sets might then be chosen as S1 = {A1, A2, …, An} and S2 = {~T}. In theorem-provers such as OTTER, the set of valid clauses, called the set of support ('sos'), is chosen as the set of facts the user can easily recognise as valid, and the other subset, called the usable set, consists of the rest of the clauses, including the negation of the theorem. For example, the sos might contain the facts giving the initial state in a simulation, and the usable set might consist of the remaining clauses of the original set S together with the negation of the theorem. Clauses in the usable set may be used as the nucleus for hyperresolution and/or as inference rules. In OTTER no inference is allowed using only clauses from the usable list: at least one clause must come from the sos list. The set of support strategy is complete (Chang and Lee, 1973). However, when combined with hyperresolution, 'completeness is not guaranteed' (Wos et al., 1988, p. 436).
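The restriction that every inference must use at least one supported clause can be sketched propositionally. An illustrative toy loop, not OTTER's actual given-clause algorithm:

```python
def resolve(c1, c2):
    """All binary resolvents of two propositional clauses (frozensets;
    '~p' is the negation of 'p')."""
    out = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return out

def sos_refute(sos, usable):
    """Refutation search in which every inference has at least one
    parent in the set of support; new resolvents join the sos."""
    sos = {frozenset(c) for c in sos}
    usable = {frozenset(c) for c in usable}
    while True:
        new = set()
        for a in sos:                       # one parent always from sos
            for b in sos | usable:
                for r in resolve(a, b):
                    if not r:
                        return True         # empty clause: refuted
                    new.add(r)
        if new <= sos | usable:
            return False                    # saturated
        sos |= new
```

Here the sos holds the initial facts and the usable set holds the rules plus the negated theorem, as described for OTTER above; no two usable clauses are ever resolved with each other.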
In the example to be given in Chapter 7, the rules running the simulation will be translated as clauses and placed, along with the negation of the theorem to be proved, in the usable set. The data of the simulation will be placed as ground instances in the sos.

3.5.1.5 Linear Resolution

Linear resolution was proposed by Loveland (1970) and Luckham (1970). It later underwent several improvements (according to Chang and Lee, 1973): 'that given by Kowalski and Kuehner (1971) and that by Loveland (1972), which is more important for computational implementations.' Linear resolution starts with two clauses (C0 and B0) giving the resolvent C1. Then C1 is resolved against another clause B1 to get C2. Afterwards, the new clause is resolved against a clause B2 to get another resolvent, C3. This procedure continues until a contradiction, i.e., the empty clause, appears (if the proof is successful). The basic characteristic of the 'chain' of clauses generated by the linear resolution procedure is that the resolvent of one step is used as one of the clauses to be resolved upon in the subsequent step. Linear resolution is complete (Chang and Lee, 1973). The already described depth-first and breadth-first searches are particular cases of linear resolution (Chang and Lee, 1973). Linear-resolution-style search has been popular in logic-programming (e.g., depth-first search in Prolog) and in theorem-proving when using tableaux. It seems more closely related to backward-chaining than to forward-chaining search.

3.5.1.6 Strategies or Heuristics

This subsection discusses some of the more popular strategies for driving the inference procedure in (syntactic) theorem-provers, in particular those implemented in OTTER. The main role of these strategies is to choose, among the commonly huge number of possible deductions and search paths, those that will hopefully lead to the proof with less data generation, or at least to prevent, as far as possible, the generation of useless data.

• Deletion Strategy or Subsumption

The idea is to eliminate copies of the same clause, or clauses that are instances of other clauses. 'The clause A subsumes the clause B when there exists an instance of A that is a subclause of B' (Wos, 1988). Subsumption can be trivial or as complicated as may be wanted. A trivial case is P(x) subsuming P(a), where a is a constant and x is a variable.
A more complicated case is P(x) subsuming P(a) v Q(b), where a and b are constants and x is a variable: if P(x) is true for any x, then P(a) is true, and hence P(a) v Q(b) is true whatever happens with Q(b).

• Fewest-literal Preference Strategy

When using resolution procedures such as hyperresolution and UR-resolution, it is convenient for several reasons to have clauses with few literals. The main reason is that it is easier to resolve such a clause successfully with others. Another motive is that usually the theorem to be proved has few literals (generally one). The strategy's advantages are clear, for instance, when it is used along with hyperresolution and the set of support strategy. For example, in OTTER, when applying hyperresolution, clauses in the sos are used as satellites while those in the usable set are used as the nucleus; the fewer the literals of the clauses chosen as satellites, the easier it is to apply hyperresolution. Some theorem-provers (e.g., OTTER) implement this strategy by giving each clause a weight proportional to its number of literals, after which clauses are ordered and selected according to this weight.

• Heuristic-Evaluation Function

This consists in giving each clause a weight as a function of certain clausal features, such as the number of literals and the number of distinct variables. Usually the function is a weighted linear combination of these features. The difficulty with this procedure lies in calculating appropriate weights; because of this, it is rarely implemented in its general form in theorem-provers. As can be seen, the fewest-literal preference strategy is a particular case of a heuristic-evaluation function. Usually, theorem-provers implement a weighting heuristic, giving the user the option to specify the criterion for clause weighting.

• Demodulators

Demodulators are procedures 'to simplify and canonize information' (Wos et al., 1988). Clauses giving semantically redundant, though not necessarily syntactically redundant, information are discarded. The main idea is to rewrite terms in a clause. For example, SUM(1,2) and 3 are terms giving the same, though syntactically different, information. As 3 is 'simpler', it is a candidate for canonising this and other semantically equivalent numerical expressions (e.g., MULTIPLY(3,1)); it would be used to rewrite semantically equivalent expressions. Demodulators are very useful for numerical manipulation.
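The numeric canonisation performed by demodulators can be sketched as a rewriting loop. A toy illustration over a string representation of terms (real demodulation works on term structures, and the function names SUM/MULTIPLY follow the text's examples):

```python
import re

def demodulate(term):
    """Repeatedly rewrite SUM(a,b) and MULTIPLY(a,b) over integer
    literals to their canonical numeric value, innermost terms
    first, until no rewrite applies."""
    pattern = re.compile(r"(SUM|MULTIPLY)\((-?\d+),(-?\d+)\)")

    def step(m):
        a, b = int(m.group(2)), int(m.group(3))
        return str(a + b if m.group(1) == "SUM" else a * b)

    while True:
        term, n = pattern.subn(step, term)
        if n == 0:          # fixed point: term is canonical
            return term
```

Both SUM(1,2) and MULTIPLY(3,1) rewrite to the canonical form 3, so two clauses differing only in these syntactically distinct but semantically equal terms become identical (and one can then be deleted by subsumption).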
For example, a demodulator in automated reasoning languages and theorem-provers (e.g., OTTER) might be: SUM(x,y) = x + y. Here '+' has the meaning of the numerical operation of addition, while SUM is a predicate symbol. Once the predicate SUM is used in a term, the demodulator is called; a sub-program performing the numerical operation is then invoked, and, finally, the result of the addition is sent back to rewrite the term. The usefulness of demodulators in simulation is obvious, as numerical operations are common. However, there are few theorem-provers offering demodulators; OTTER is among them. In theorem-provers implemented over Prolog, such as SETHEO, numerical manipulations can be achieved using Prolog facilities. Demodulators will be used in the example (given in Chapter 7) for implementing functions originally written as backward-chaining rules in SDML.

Consider a more complicated example of demodulation, taken from Wos et al. (1988). It can be checked that the following two predicates carry the same information:
EQUAL(sum(sum(x,v),sum(y,z)), sum(sum(x,y),z))
EQUAL(sum(x,sum(y,z)), sum(sum(x,y),z))
EQUAL is a literal interpreted as meaning equality. In general, a demodulator has the form D: EQUAL(r,s), where r and s are terms. If a clause C has a term t matching r, and the result of the matching is r' (r' = t), then applying the demodulator D to C results in replacing C by C', given by the substitution of t by s', where s' results from the appropriate instantiation of the demodulator D': EQUAL(r',s'). The demodulator given above for SUM can be thought of as EQUAL(SUM(x,y), x + y).

• Paramodulators

Paramodulation has some similarities with demodulation, though it is quite different: paramodulation generates a new clause C from two clauses A and B, while demodulation replaces and subsumes an old term by a new one. Concretely, suppose we have a clause A with an equality literal, say EQUAL(r,s), and a clause B with a term t matching, for example, the left side of this literal, r. Then the matching, where r' = t, gives the instances A', B', and EQUAL(r',s') (of the clauses A and B and the EQUAL literal, respectively). Call K' the literal EQUAL(r',s'), and B'' the clause B' in which r' has been replaced by s'. Then the clause C resulting from applying paramodulation to A and B, using the EQUAL literal EQUAL(r,s) of A and the term t in B, is the disjunction of A' − K' and B''.
Example 3-5. Consider A = P(x) v EQUAL(PROD(x,0), 0) and B = Q(x) v R(PROD(2,0)), where r = PROD(x,0), s = 0, and t = PROD(2,0). Applying the procedure explained above gives:
r' = PROD(2,0),
A' = P(2) v EQUAL(PROD(2,0), 0),
B' = Q(2) v R(PROD(2,0)),
K' = EQUAL(PROD(2,0), 0),
B'' = Q(2) v R(0).
Finally, the disjunction of A' − K' and B'' gives: P(2) v Q(2) v R(0). If A were A = EQUAL(PROD(x,0), 0), the result would have been Q(2) v R(0).

3.5.1.7 Forward-chaining Search

As was stated above, forward-chaining search consists in applying resolution rules over the clauses in the database, generating consequences that are added to the database if they are not subsumed by clauses already in it. This process is repeated iteratively until a certain stopping condition is reached (for example, a contradiction, i.e., the empty clause) or no more consequences can be generated. Forward-chaining is the 'counterpart' of backward-chaining.

• Example: OTTER, a first-order logic theorem-prover

OTTER is the theorem-prover used in this thesis for implementing a MAS simulation case (see Chapter 7), and so it will be a source of ideas for the methodology to be presented in the next chapters. There are two reasons for using OTTER in this thesis: first, it is one of the most successful theorem-provers, and second, it offers facilities for data manipulation (e.g., demodulators). This is convenient for implementing cases like the example from social simulation given in Chapter 7, which concerns a trader-distributor interaction where numerical manipulation is needed, e.g., for calculating sales, orders, and prices. Other theorem-provers are restricted to symbolic manipulation and do not provide these sorts of facilities.

OTTER is a forward-chaining theorem-prover that uses the set of support strategy. That is to say, the original set of clauses is separated into two groups: a first one, containing only valid clauses, called the set (list) of support, and a second group of (assumedly) unsatisfiable clauses called the usable list (this set of clauses is unsatisfiable if the theorem is valid). The usable list contains the clauses to be used as inference rules and the negation of the theorem to be proved.
OTTER offers a wide range of inference rules and strategies, which can be turned on and off using commands. Among the inference rules OTTER offers are binary resolution, hyperresolution, unit resolution, and UR-resolution; among the strategies are weighting and subsumption. OTTER also offers demodulators and paramodulators, which are very helpful in certain applications; in simulation, numerical manipulation and rewriting are not only a matter of efficiency but become essential in practice.

In OTTER, demodulators are applied as backward-chaining rules whose intermediate or partial consequences are not written into the rulebase. They work like functions in imperative programming, in the sense that they are called with a set of parameters and return the corresponding result. For example, a demodulator could simply be a function to multiply two integers: MULT(a, b) = a * b (for more about demodulation and paramodulation, see above). OTTER's main inference procedure works as follows (McCune, 1995):

'The main loop for inferring and processing clauses and searching for a refutation operates mainly on the lists usable and sos:

While (sos is not empty and no refutation has been found)
1. Let given_clause be the `lightest' clause in sos;
2. Move given_clause from sos to usable;
3. Infer and process new clauses using the inference rules in effect; each new clause must have the given_clause as one of its parents and members of usable as its other parents; new clauses that pass the retention tests are appended to sos;
End of while loop.
…
Otter's main loop implements the set of support strategy, because no inferences are made in which all of the parents are from the initial usable list.'

In step 3, the retention test checks that the new clause is not subsumed by existing clauses. A difficulty when using most existing theorem-provers is their poor interaction with the user, and OTTER is no exception. In McCune's (1995) words: 'Although Otter has a primitive interactive feature … it is essentially a noninteractive program. On unix-like systems it reads from the standard input and writes to the standard output:

otter < input-file > output-file

No command-line options are accepted; all options are given in the input file.'

As has been said, for implementing the example in OTTER in Chapter 7, hyperresolution, the support and usable lists, subsumption, the fewest-literal preference strategy, and demodulators will be used. However, when re-implementing the example in SDML, the search will be model-driven, using automatic partitioning of the set of clauses and other strategies SDML offers, which are more common in simulation and in logic-programming languages than in theorem-provers (see the next section, especially subsection 3.6.3).

3.5.2 Semantic (interpretation)-based Theorem-provers

The inference procedures in this section are based on a search over interpretations, as opposed to the clausal manipulation studied in the previous subsection. Aspects of semantics-driven search were already considered briefly when discussing semantic resolution, but there the use of interpretations was restricted to separating the clauses into groups; that procedure is still predominantly syntactic, as it is based on clausal manipulation after the separation is made. In the methods to be reviewed in this section, interpretations are used more intensively to drive the search: each inference step is guided by the validity of certain terms. Recent theorem-provers have been developed using the notion of semantic search, particularly using tableaux. Compared with checking all possible interpretations, tableaux are more efficient, as they consider only those interpretations that might satisfy (literals in) certain clauses; in this sense the search is also guided by clauses.

3.5.2.1 Tableaux

Tableaux generate indirect proofs, as 'the premises of an argument form together with the negation of the conclusion are tested for joint satisfiability' (Gordon et al., 1992). '… In essence, a tableau is a survey of the possible interpretations of S, with each branch representing an interpretation' (Bonacina, 1999). Tableaux are an 'attempt to systematically construct a structure from which a logical model can be extracted for the negation' of the theorem to be proved (Fisher et al., 2000). The idea is to generate a branch for each possible satisfying interpretation. Once a contradiction appears in a branch, the branch is closed. If all children of a node are closed, the node is also closed. If a branch is not closed (there is no contradiction in it), it is said to be open. An open branch in which no more inferences can be made offers a logical model of the theory (remember, the theory in this case is given by the conjunction of the premises and the negation of the theorem). In that case the whole set of clauses defining the theory is satisfiable, the negation of the theorem is consistent with the premises, and the proof of the theorem fails.
On the other hand, if all branches of a tableau are closed, then there is no satisfying interpretation and the theorem is valid. The set of branches constitutes the tableau. If at least one branch is open, the tableau is said to be open; if all branches are closed, the tableau is closed. As a (tableau) inference rule is applied (an example of this sort of inference rule will be given below), there is a transition from one tableau (a state of the search) to another tableau (the next state of the search). The proof is successful if a final state with a closed tableau is reached; as soon as the procedure generates an open branch in which no more inferences can be made, the proof fails. Tableaux are complete.

Model elimination tableaux will now be explained as an example of the inference procedures used in tableaux. Starting from the set S, defined as the union of the premises and the negation of the theorem to be proved, the inference procedure works roughly as follows (Bonacina, 1999). Assume the starting clause is Ci = L1 v L2 v … v Ln. This gives a first state of the search, represented by a tableau with Ci as its unique node. A branch named Lm is created for each literal Lm of Ci, representing the different ways this clause can be satisfied. Now suppose there is another clause in S, Cj = F1 v F2 v … v Fk, and that for some substitution σ: ~L1σ = F1σ. Then this clause Cj is added at the node corresponding to branch L1, closing the branch for F1 and applying σ to the literals in branch L1. (As is known, binary resolution could be applied to these two clauses, giving as resolvent the disjunction (L2 v … v Ln v F2 v … v Fk)σ.) This gives a new state of the search process: the original tableau with node Ci expanded by the second node Cj, which has branch F1 closed, and with the substitution σ applied. Repeating this procedure for the remaining literals in the nodes for Ci and Cj, the inference proceeds. A final state is reached either because an open branch where no more inferences can be made is found (a logical model of S given by this branch has been found, the negation of the theorem is satisfiable in it, and the proof fails) or because the tableau is closed (the proof succeeds).

Example 3-6. Assume S = {(1) ~P v ~Q v R, (2) P v R, (3) Q v R, (4) ~R}, where the premises are clauses (1)-(3) and the negation of the theorem is clause (4). A possible tableau is shown in Figure 3.2.

[Figure 3.2 shows the tableau for Example 3-6: clause (1) ~P v ~Q v R at the root, with branches for the literals ~P, ~Q, and R; clause (2) is resolved against the ~P branch, clause (3) against the ~Q branch, and clause (4) ~R closes each remaining R branch, so that every branch is closed.]

Figure 3.2. A tableau for Example 3-6
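The closure test behind Example 3-6 can be made concrete. The following Python fragment is an illustrative sketch only: the encoding of literals as signed integers, the function name `tableau_closed`, and the recursion strategy are assumptions made here, not part of any prover discussed in the text. It expands the branches of a propositional clausal tableau and reports whether they all close:

```python
# A branch is the set of literals chosen along it; a literal is a signed
# integer (P = 1, Q = 2, R = 3; negation flips the sign).
def tableau_closed(clauses, branch=frozenset()):
    """Return True iff every branch of the tableau for `clauses` closes."""
    for clause in clauses:
        # A clause whose literals all contradict the branch closes it.
        if all(-lit in branch for lit in clause):
            return True
    for clause in clauses:
        # Branch only on clauses not yet satisfied on this branch.
        undecided = [lit for lit in clause
                     if lit not in branch and -lit not in branch]
        if undecided and not any(lit in branch for lit in clause):
            # One sub-branch per way of satisfying the clause.
            return all(tableau_closed(clauses, branch | {lit})
                       for lit in undecided)
    return False  # an open, saturated branch: a logical model exists

# Example 3-6: S = {~P v ~Q v R, P v R, Q v R, ~R}
S = [[-1, -2, 3], [1, 3], [2, 3], [-3]]
print(tableau_closed(S))  # True: the tableau closes, so S is unsatisfiable
```

If the negated theorem ~R is removed from S, the procedure instead finds an open saturated branch and returns False, corresponding to a logical model in which the proof fails.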

3.5.2.2 Prolog-based Logical Models

Two theorem-provers written as extensions of Prolog will be considered: SATCHMO (SATisfiability CHecking by [logical] MOdel generation) and PTTP (Prolog Technology Theorem-Prover). Both of them are based on logical model search, and their most relevant aspects will be considered. First, common characteristics will be enumerated; then particular aspects of SATCHMO will be reviewed, as it presents more similarities with OTTER and with the way an exploration of simulation trajectories using SDML’s facilities will be implemented.

Both result from attempts to extend Prolog with a complete procedure to manage non-Horn clauses (i.e., not only Horn clauses, as is usual in Prolog). A difference between them is that while SATCHMO is written on top of Prolog, PTTP is an extension compiled along with Prolog’s source code. Being implemented on top of Prolog gives SATCHMO flexibility and portability. Both are tableaux-style implementations. They introduce inference procedures additional to those Prolog has, are semantically oriented, and use splitting and backtracking to generate alternative paths in the search.

SATCHMO is based on the introduction of a new set of rules in order to be able to manage clauses not allowed in Prolog. These rules are fired in a forward-chaining manner. In fact, the primary idea behind SATCHMO comes from the observation that range-restricted clauses can be exploited by the inference procedure. A range-restricted clause is a clause in which every variable occurs in at least one negative literal. If a clause is written in the form of a rule with the negative literals in the antecedent, the clause is range-restricted if every variable occurs in at least one literal in the antecedent. The idea is then that, given the rule a implies b (or the clause ~a | b), if a is valid but b is not, then b can be taken as deduced via this rule and added to the database. SATCHMO is based on hyperresolution.
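The forward-chaining use of range-restricted rules can be sketched as follows. This Python fragment is illustrative only: the names `forward_chain`, `match`, and `substitute` are assumptions, antecedents are restricted to a single literal, and the variable convention (upper-case names) is invented for the sketch; SATCHMO itself works on Prolog clauses. Because the rule is range-restricted, matching the antecedent against the database of ground facts binds every variable of the consequent, so only ground facts are ever added:

```python
def match(pattern, fact):
    """Match e.g. ('parent', 'X', 'Y') against ('parent', 'ann', 'bob')."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    binding = {}
    for p, f in zip(pattern[1:], fact[1:]):
        if p.isupper():                        # convention: capitals are variables
            if binding.setdefault(p, f) != f:  # same variable, same value
                return None
        elif p != f:
            return None
    return binding

def substitute(template, binding):
    return tuple(binding.get(t, t) for t in template)

def forward_chain(facts, rules):
    """Fire rules forward until no new ground facts can be added."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for fact in list(facts):
                binding = match(antecedent, fact)
                if binding is not None:
                    new = substitute(consequent, binding)
                    if new not in facts:
                        facts.add(new)
                        changed = True
    return facts

facts = {('parent', 'ann', 'bob')}
rules = [(('parent', 'X', 'Y'), ('ancestor', 'X', 'Y'))]
print(forward_chain(facts, rules))
```

The fixed point contains both the original fact and the derived ground fact `('ancestor', 'ann', 'bob')`; a range-unrestricted rule, by contrast, would leave a consequent variable unbound, which is exactly the case requiring instantiation over the Herbrand universe discussed below.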
Splitting and backtracking are introduced in SATCHMO by allowing disjunctions in the consequent of rules. Other problems Prolog’s search faces are also solved: the infinite generation of clauses caused by recursive clauses under unbounded search is worked out by using subsumption (Rainer et al., 1988). A drawback of SATCHMO is that range-unrestricted clauses might need full instantiation of some variables over the whole Herbrand universe, which might lead to an ‘explosion’ in the number of ground instances. This is prevented by an appropriate transformation of range-unrestricted clauses into range-restricted clauses, which is always

possible. See, for example, CPUHR tableaux (Abdennadher et al., 1997), an adaptation for managing constraints and existentially quantified variables of PUHR (Positive Unit Hyperresolution) tableaux (Bry et al., 1996), the tableaux used to formalise SATCHMO.

A logical model in SATCHMO is given by a set of ground terms, more concretely by those ground terms that are satisfied in the logical model (the remaining ones are assumed violated by default). This notion of determining all facts in a logical model is slightly different in simulation, and especially so when proving tendencies, for practical reasons. For example, not all facts of a trajectory are generated when searching for a tendency; once the tendency is found in the trajectory, the program backtracks, leaving the remaining facts (irrelevant for the proof) as unknown.

3.6 Classification of Theorem-provers

3.6.1 Criteria

The classification will be based on the character of the search, which might be:

1. Forward- vs. backward-chaining (i.e., assumption expansion vs. sub-goal reduction). As is already known, in forward-chaining the antecedent of a rule is checked and, if it matches the database, the rule fires and the consequent, with the corresponding instantiations of variables, is added to the database. In the case of disjunctions in the consequent, different branches will appear, each one labelled with an assumption. On the other hand, if the search is backward-chaining, the consequent of a rule is checked and, if the goal is a conjunction of several sub-goals, a branch is opened to check each of them. In this case, the sub-goals label the generated branches.

2. Semantic vs. syntactic focus: interpretation- vs. clausal-driven search. Resolution is basically syntactic, or clause oriented; it generates a clause from several other clauses considering basically the form of the clauses rather than the meaning of the terms (see above).
Alternatively, tableaux are in essence semantically oriented, as the branches generated at a node are precisely the alternative interpretations that make certain clauses valid.

3. Implicit vs. explicit splitting of the search space. Alternative solutions in the search may appear for different reasons, for example owing to a disjunction in the antecedent and/or in the consequent of a clause, or owing to existentially quantified variables in the consequent (see, for example, Abdennadher et al., 1997). In a splitting search, one branch is investigated at a time. This sort of search will be called explicit search. In this case, if attempting to prove unsatisfiability, the program backtracks

once a contradiction appears in a branch. Alternatively, in an implicit search the inference is carried out over all paths simultaneously. Branches in an implicit search will be called implicit branches. To illustrate this, compare OTTER and tableaux methods. In the newer versions of OTTER both kinds of search can be implemented (the older versions accept only implicit search), while in tableaux only explicit search can be followed. OTTER does not give explicit information about the closed branches in an implicit search. That is different when applying an explicit search: in this case OTTER provides a summary for a closed branch at the moment the branch is closed, and then backtracks to explore another branch. One of the main advantages of an implicit search compared with splitting is that it is easier to prune the search tree when similar paths are found. On the other hand, one of its main disadvantages is that it needs more memory, as all paths are kept in memory simultaneously.

Nevertheless, a classification based on these criteria is somewhat arbitrary, as almost any computational logic program is a combination of syntactic, semantic, forward-, and backward-chaining strategies. For illustration, consider OTTER. It is predominantly syntactic, as it is based on clause manipulation, and forward-chaining oriented, but it also implements semantic ideas, such as the separation of clauses into two groups (the sets of support and usable) and the use of demodulators and paramodulators (see above).

3.6.2 Bonacina’s Taxonomy of Theorem-prover Strategies

Bonacina’s classification (Bonacina, 1998 and 1999) is based on the third criterion above. She considers an implicit branch search as clausal oriented, and a splitting one as goal oriented. The two main categories in her classification are, on the one hand, those theorem-provers with a ‘sub-goal reduction strategy ...
if one considers the single object they work on as the goal, each step consists in reducing the goal to sub-goals’ (Bonacina, 1998, p. 257) (e.g., tableaux). On the other hand, she considers those theorem-provers using what she calls ‘ordering-based strategies’. She characterises these strategies as working ‘with a set of objects [e.g., clauses]; they use a well-founded ordering to order the objects, and possibly delete [e.g., using subsumption] objects that are greater than and entailed by others’ (idem, p. 256). OTTER is in this group. However, it is confusing to consider clausal orientation as an implicit branch search and splitting as a goal-oriented search. For example, as noted above, in the newer versions of OTTER splitting has been implemented even though OTTER is clausal oriented.
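The sub-goal reduction style can be illustrated with a toy procedure. The Python sketch below is an assumption made for exposition (the name `prove`, the restriction to ground Horn rules, and the tuple encoding of rules are not taken from any prover in the text): to prove a goal, it is reduced to the sub-goals in the body of a rule whose head matches, and each sub-goal is proved in turn.

```python
# Sub-goal reduction over ground Horn rules: a rule is (head, [sub-goals]).
def prove(goal, rules, facts):
    if goal in facts:                      # the sub-goal is a known fact
        return True
    for head, body in rules:
        # Reduce the goal to the body's sub-goals and prove each branch.
        if head == goal and all(prove(sub, rules, facts) for sub in body):
            return True
    return False

facts = {'p', 'q'}
rules = [('r', ['p', 'q']),                # r :- p, q
         ('s', ['r'])]                     # s :- r
print(prove('s', rules, facts))            # True: s is reduced to r, then to p and q
```

A real sub-goal reduction prover additionally needs unification for non-ground goals and depth control (or iterative deepening) to avoid looping on recursive rules; both are omitted here to keep the contrast with the forward-chaining, ordering-based style visible.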


Bonacina’s depiction of the search process seems similar to the description of a simulation. She illustrates the search process as a chain of transition states, where a transition is generated by an application of an inference rule. Each state is a step towards a desired final state, where the proof is successful. In the case of a logical model search using tableaux, each state is a tableau. Given one state, a new state is generated by the application of a transition rule over this tableau and the generation of a new branch. The new state will be the tableau given by the addition of the new branch to the old tableau. The desired final state is a closed tableau. In the case of a clausal search, a state is given by the set of clauses in a search step. This set might consist of subsets of clauses, for example the sets ‘usable’ and ‘support’ in OTTER. Additional subsets might be given by the implicit branching of the search. A transition step consists in the application of an inference rule, giving a new set of clauses defining the new state. The proof is successful once a desired final state is reached, that is, once every subset of clauses representing a search path contains a contradiction.

3.6.3 Additional Considerations about Bonacina’s Taxonomy: Classifying Generation of Trajectories in a Simulation

The criteria given in section 3.6.1 will be used to add details about the search process in the theorem-provers SATCHMO and OTTER, and to include in this classification logical model generation in a simulation, namely event-driven simulation and the MAS-based simulation scheme adopted in SDML.

1. SATCHMO. This uses the sub-goal (backward-chaining) strategy of Prolog combined with a logical model (semantic) search with splitting.

2. OTTER. Originally OTTER was purely an assumption-based forward-chaining procedure using a clausal (syntactic) orientation with implicit path generation. Newer versions of OTTER also allow the explicit use of splitting.

3. Event-driven simulation.
Usually simulation is forward-chaining and logical model-oriented. It is rare to find simulation languages implementing splitting, either implicitly or explicitly. When experimenting with the dynamics of a simulation, individual trials have to be generated by explicitly choosing the different combinations of the factors for each trial. As a logical model search, a simulation can be interpreted as a tableau, but note that in a simulation not all facts have to be deduced and some are left as unknown. Although it is similar to tableaux (since it is logical model-oriented), it is different, as inference is forward-chaining rather than


backward-chaining. In theorem-provers like OTTER, the order of transition rules is given by certain clause weightings, while in event-driven simulation this order is given by the future event list.

4. SDML. Like most simulation languages, this is a forward-chaining, logical model-oriented language, but it enjoys facilities for splitting and backtracking and is not necessarily committed to generating all valid clauses in a logical model. State transition rules follow an order given by the hierarchy of agents, the hierarchy of time levels, and, finally, a partitioning of the space of rules according to their dependencies.

3.7 Constraint-based Search

The idea is to develop search and inference procedures that allow a more flexible manipulation of the semantics than traditional logic-programming and forward-chaining systems do. The call has been for techniques that allow a semantics-driven search more adaptable to the search process than unification (Frühwirth et al., 1992).

A first answer came from logic-programming, in the form of constraint logic-programming, commonly using Prolog both as a platform and as the programming style (Frühwirth et al., 1992). It is basically a backward-chaining inference system. A second answer came from rule-based forward-chaining systems; examples are SATCHMO and the CPUHR-tableaux calculus (Abdennadher, 1995; idem, 1997; see also Frühwirth, 1994; Abdennadher et al., 1999). Among the advantages of these forward-chaining systems over constraint logic-programming are the introduction of user-defined constraints and meta- and higher-order reasoning via the rewriting of rules.

Constraint logic-programming can be characterised by the interaction of the logic-programming inference engine and a constraint-solver algorithm. Constraint-solvers specialise in managing constraints in certain domains (e.g., integers, booleans). The logic-programming engine calls the constraint-solver with a set of constraints, gets as the answer a normalised solution for it, and then prunes the search space in accordance with this solution. A disadvantage constraint-solvers face is that they are not open to the user, making their modification and application to new domains difficult. Some rule-based forward-chaining systems allow user manipulation of the constraint-solver: there will exist both user-defined and built-in constraints. The first group of constraints will be managed by rules the user gives, while the second will be managed by constraint-solvers. For illustrative purposes, in the next paragraphs aspects of CHR (Constraint Handling Rules) will be considered in more detail.


‘CHRs define determinate conditional rewrite systems that express how conjunctions of constraints propagate and simplify … CHRs define simplification and propagation over user-defined constraints. Simplification replaces constraints by simpler constraints … Propagation adds new constraints which are logically redundant but might cause useful further simplification’ (Frühwirth, 1994, pp. 90-91). In what follows, a CHR operational semantics will be considered to some extent in order to give an idea of how it works.

In a CHR system, the inference engine is characterised by the interaction of a logic-programming engine, a constraint-solver (working on built-in constraints), and the rules working on user-defined constraints. Using Frühwirth’s notation, a state in the search is given by the tuple of sets <GS, CU, CB>, where GS is the set of goals, and CU and CB are constraint stores for user-defined and built-in constraints, respectively. A constraint store is a set of constraints. An initial state might consist of <GS, {}, {}>, i.e., the constraints are all embedded in the set of clauses defining the problem and in the goal. In the case of logical model generation, a final state in a branch of the search is given by <{}, CU, CB> if the search succeeds, or by <GS, CU, {false}> if it fails (i.e., the branch is closed).

Consider how a state transition works. Suppose the state is <{C} ∪ GS, CU, CB>; that is, a subset of constraints {C} has appeared as a new goal during the search process. Calling the constraint-solver, if (C ∧ CB) is logically equivalent to CB', the next step produces <GS, CU, CB'>. Similar modifications are generated by CHR rules: in fact, CHRs will either generate simplifications in CU while adding constraints to GS, or add into CU constraints logically equivalent to some constraints in GS while leaving this constraint store unchanged (constraint propagation) (Frühwirth, 1994).
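The solver transition just described can be given a toy rendering. In the Python sketch below, everything is an assumption made for illustration (the name `solve_step`, the tuple encoding of states, and the choice of integer bounds on a single variable as the built-in domain); a real CHR system handles arbitrary built-in theories. The solver consumes the next built-in constraint C from the goal store and replaces CB by a normal form of C conjoined with CB, or signals {false}:

```python
import math

# One 'Solve' transition on a CHR-style state <GS, CU, CB>.  The built-in
# store CB is kept normalised as an integer interval (lo, hi) on one variable.
def solve_step(state):
    goals, user, builtin = state
    constraint, *rest = goals            # next goal, e.g. ('<', 'T', 1994)
    lo, hi = builtin
    op, _, k = constraint
    if op == '<':
        hi = min(hi, k - 1)              # integer domain: T < k means T <= k - 1
    elif op == '<=':
        hi = min(hi, k)
    elif op == '>=':
        lo = max(lo, k)
    if lo > hi:                          # C conjoined with CB is unsatisfiable
        return (rest, user, 'false')     # failed final state <GS, CU, {false}>
    return (rest, user, (lo, hi))        # new state <GS', CU, CB'>

top = (-math.inf, math.inf)              # the unconstrained store
# Consistent branch: T <= 1996 then T >= 1996 normalises to T = 1996.
print(solve_step(solve_step(([('<=', 'T', 1996), ('>=', 'T', 1996)], set(), top)))[2])
# Inconsistent branch: T <= 1990 then T >= 1996 closes the branch.
print(solve_step(solve_step(([('<=', 'T', 1990), ('>=', 'T', 1996)], set(), top)))[2])
```

Each call consumes one goal and returns the whole successor state, so chaining the calls traces exactly the sequence of states <{C} ∪ GS, CU, CB> → <GS, CU, CB'> described above.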
Example 3-7. The sort of reasoning exposed above will be illustrated using a simplification of an example borrowed from Abdennadher et al. (1997). Problem:

- John has enrolled in courses either before 1994 or after 1995


- John was enrolled in cs30 before 1997.

In clausal form, this is expressed as:

enrolled(john, A, B) → (B < 1994) v (B >= 1996)

(A, C) = ({enrolled(john, cs30, T)}, {T <= 1996})

Using the notation for the operational semantics given above, this produces:

goal: enrolled(john, A, B), constraints: (B < 1994) v (B >= 1996)

Applying logic-programming, two sub-goals or branches are found:

(A, {T <= 1996, T < 1994}) and (A, {T <= 1996, T >= 1996})

Then, applying the constraint-solver at each branch, these are simplified into:

(A, {T < 1994}) and (A, {T = 1996}), respectively.

For the second branch a final state is achieved: ({enrolled(john, cs30, 1996)}, {}).

In this example it is assumed there is only one set of constraints to be solved by the constraint-solver. For more details about CHR, see Frühwirth (1994) and Abdennadher (1999), or look at http://www.pst.informatik.uni-muenchen.de/~fruehwir/.

3.8 Meta-Level Reasoning and Proving Tendencies

The sort of meta-reasoning certain reasoning languages enjoy is quite convenient when proving tendencies. Examples of languages that have explicit mechanisms for meta-reasoning are SATCHMO and SDML. SATCHMO allows the writing and elimination of clauses not only in the database but also in the rulebase, using ‘assert’ and ‘retract’ (Prolog’s built-in functions). Additionally, since it is written on top of Prolog, SATCHMO itself can be easily modified. On the other hand, SDML enjoys similar benefits from a meta-agent being allowed to read and write rules in other agents. These supplementary facilities permit reasoning about the inference system itself, e.g., writing rules depending on how the search develops. An example would be: given only hints about a theorem related to an overall tendency defined in terms of certain output, the theorem might be elaborated by continuously updating an envelope for the output in accordance with the simulation results. Also, after all logical models have been found, relevant tendencies might be searched for, either over the whole space of logical models or over a subspace of it.
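The envelope idea can be sketched in a few lines. The Python below is an illustration, not SDML code: the function name `update_envelope` and the per-time-step min/max representation are assumptions. Each newly generated trajectory widens the hypothesised bounds, so the candidate theorem ‘the output always lies within the envelope’ remains consistent with all simulation results seen so far:

```python
def update_envelope(envelope, trajectory):
    """Widen per-step (lo, hi) bounds so the envelope covers this trajectory."""
    for t, value in enumerate(trajectory):
        lo, hi = envelope.get(t, (value, value))
        envelope[t] = (min(lo, value), max(hi, value))
    return envelope

# Three hypothetical trajectories of one simulation output.
envelope = {}
for trajectory in ([3, 5, 4], [2, 6, 4], [3, 4, 5]):
    update_envelope(envelope, trajectory)
print(envelope)   # {0: (2, 3), 1: (4, 6), 2: (4, 5)}
```

Once every trajectory of the finite subset has been explored, the final envelope is exactly the strongest bound of this form provable over that subset, which is the sense in which the theorem is ‘elaborated’ during the search.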


4

Chapter 4 - Understanding Phenomena and Simulation Dynamics

4.1 Introduction

In this chapter the development of the ‘world view’ introduced in Chapter 2 will be continued. The main aim is to set up a framework for a conception of the emergence of tendencies, which is a central notion in this thesis. A conception of the emergence of tendencies seems to be strongly linked to a subject’s knowledge and to an object’s complexity. A notion of the object-subject dichotomy will be emphasised as a useful distinction once the subject’s bounded rationality has been recognised.

The interest is not only in tendencies observed in an empirical system but also in a simulation. For instance, MAS-based simulation seems to be very helpful for understanding some complex systems, for example social systems. There, aspects of the complexity of the ‘real system’, from the point of view of the scientific community studying such a system, are recreated in the simulation. The importance of observing and explaining tendencies in social simulation has been pointed out by researchers in this community (see, for example, Edmonds et al., 1998; Gilbert, 1995; Conte et al., 1997; Carley, 1998). The re-creation of properties of target complex systems in the computer laboratory is what makes the simulation interesting in these cases, as it is not possible either to implement this sort of experiment with the target system or to get satisfactory knowledge of these tendencies via models of a different nature (mathematical models or historical studies, for example).

In applications such as social simulation, where the number of agents is large and/or the interrelations are subtle and highly contingent on the dynamics of each simulation trajectory, analysis of the design seems to be of limited use and a post hoc study becomes necessary. It is common to find structural change or fluctuation in these systems, making their analysis difficult (the simulation of systems undergoing structural change was discussed in sections 2.5 and 2.9).
Some of the difficulties involved in studying structural change, and hence complex systems, are that structural change develops differently in different simulation trajectories, that it is difficult to foresee how it will unfold (e.g., what the new structure of the system will be after it happens), and that it is not easy to identify the factors triggering it.

The notion of emergent tendencies will be seen in this thesis as relative to a modeller’s difficulties in understanding an observed system, and as linked, on the one hand, to the modeller’s cognitive model and bounded rationality and, on the other hand, to the observed system’s ‘objective complexity’. The discussion will start by considering the usefulness of the distinction between subject (observer) and object (observed phenomena) for understanding emergent tendencies. Then the central discussion in this chapter will be addressed. First, a picture will be drawn of a hierarchy of systems at different levels of phenomena, in accordance with their potential for complexity (which will be called objective complexity); this is based on Heylighen’s ideas. Second, a subjective notion of complexity in accordance with Edmonds’s conception will be introduced. Third, using these two notions, what an emergent tendency might be and what its causes are will be discussed. Finally, aspects of a simulation related to emergent tendencies, for a modeller observing a simulation, will be examined.

4.2 Subject’s Bounded Rationality and the Subject-Object Dichotomy

A basic notion in the view taken in this thesis is the object-subject dichotomy. How this dichotomy is understood, and its relation to the idea of a subject’s bounded rationality, will be discussed. Bounded rationality is a central concept in social simulation; it is understood as the existence of significant bounds in an agent’s cognitive model for gathering information from its environment and for processing it (Edmonds, 1999b).

A pragmatic position will be taken for defining the emergence of tendencies, a stance useful for the purposes of this thesis; in particular, a position from which a subject’s understanding will be conceived as conditioned by the interrelation between the subject’s language and the observed object’s complexity. It is not claimed that this notion has general philosophical or theological validity. Instead, it is based on the assumption that the agents and subjects of interest have bounded rationality, and on the notion of the object-subject dichotomy to be discussed below.

It seems that the less bounded the rationality of an agent is with respect to relevant aspects of the environment (for its purposes), the more deterministic the world (its surroundings) will be for this agent. Here ‘relevant’ refers to those macro regularities the agent perceives in its environment that are useful for reasoning and taking decisions in terms of some goals. Bounded rationality means that many aspects of the world will be effectively indeterministic. Different agents with perfect rationality (assumed deterministic in itself) and the same goals will share the same perfect knowledge and will take the same decisions under similar ‘environmental’ conditions.

In general, the behaviour of a population of agents seems to be non-deterministic even if they have perfect rationality. This is because of their differences in goals and experience (or the particular environmental circumstances they have become involved in). If a group


of agents with perfect rationality collaborate to achieve a common goal, and the environment has no intention against them (e.g., there is no other kind of rational agent in that environment whose goals conflict with theirs), then the ‘resulting world’ might be deterministic for these agents. The behaviour of one of these agents will be predictable by another agent (or by a modeller) having the same information. Examples of agents with perfect rationality include the agent in a market as conceived in classical economics (Edmonds et al., 1998). These economic agents do not necessarily have the same goals or collaborate. Economists use certain ‘game theory’ rules according to which, under certain conditions, the behaviour of their (classical economic) agents is predictable. Unless the opposite is explicitly said, henceforth by ‘agent’ is meant an agent with bounded rationality.

There seem to be even more reasons for the behaviour of agents with bounded rationality to be indeterministic. Bounded rationality means that aspects other than goals and experience might make agents behave differently. These differences are rooted in the limited information an agent can capture from its environment and in the limited capabilities the agent has for processing the collected information. For each agent there will be a different ‘world view’, and different agents will behave differently under the same circumstances. Agents with bounded rationality use different languages, though the divergences might lie only in the beliefs used as data to feed their ‘inference procedure’. This might happen even if they have similar experiences (it is assumed agents are able to learn from experience). Each agent (a subject) has models of its surrounding environment, almost certainly different from other agents’ models. As a consequence, there is a subjective ‘world’ for each one. There will be shared and unshared beliefs among agents.
As more agents share a belief over time and space, it can be said that this belief is less subjective. Similarities among agents’ beliefs introduce the notion of knowledge with different degrees of subjectivity. Indirect justifications of subjects’ knowledge, such as experimentation, theoretical developments, and modelling, might influence subjects’ beliefs. For example, it is possible for human beings to decrease the degree of subjectivism of their knowledge by improving their understanding of phenomena via modelling (e.g., simulation), theory development, and historical discussion (see Figure 4.1). A decrease in the degree of subjectivism of an agent’s beliefs is understood in this thesis as an increase in the degree of objectivism of those beliefs. A decrease in the degree of subjectivism of a belief will hopefully bring that belief closer to the ‘real’ (hypothetical) characteristic of the object the subject refers to with that belief.

[Figure 4.1 consists of two diagrams, a before and an after view, each showing Subject-1’s beliefs and Subject-2’s beliefs with a region of shared beliefs between them; the shared region grows as the degree of subjectivism in the subjects’ knowledge decreases via, e.g., historical and theoretical discussion, and modelling.]

Figure 4.1. Increasing objectivism in subjects’ beliefs

In this discussion, it has been assumed implicitly that an agent reasons about its internal model of the ‘world or surroundings’ rather than about the world itself. On the one hand, there is a sort of subjective world of the agent, given by its mental models, goals, perceived phenomena, beliefs, language, and every other ‘mental’ construct it has. On the other hand, there is the ‘real world’ or empirical world, the source of phenomena. That is, there is a ‘real’ subject-world interaction, in which the actions of this and other subjects, on the one hand, and the new circumstances arising in the world as a result of those actions, on the other, turn up in part as the consequences of this subject’s actions; and there is a perception by this subject of this interaction and of the consequences of its actions.

Such a pragmatic ‘division’ between a subject’s perception of the world, mental models, and language, and the empirical world itself is what in this thesis is called the subject-object dichotomy. The notion of subject is thus linked to this mental model, language, and perception, while that of object is linked to the phenomena where the ‘real’ subject-world interaction is supposed to take place. These cognitive agents (called subjects) will focus on those parts of their surroundings whose behaviour is relevant to their intentions or goals, as this is supposed to be useful for understanding phenomena, for modelling their surroundings, and for taking better decisions. Hence, a subject-object dichotomy has been identified, in which the subject uses mental models to understand the object (which is a system). In the later discussion about emergent tendencies and complexity, this notion of an object-subject dichotomy will be exploited and recognised as originating from both subjective and objective factors.
For example, a sort of trade-off between these subjective and objective factors is associated with the likelihood that a subject finds emergent tendencies in an object. It will have an effect on a subject’s recognition of emergence and


complexity either in its environment or in a simulation. Then, in the following section, the factors related to an objective view of complexity in an object, namely the levels of comple