
Analysis of Mechanical Properties of Fiber-reinforced Composites with Random Microstructure
by Jan Zeman

Czech Technical University Faculty of Civil Engineering 1999

Approved by (Chairperson of Supervisory Committee)

Program Authorized to Offer Degree



TABLE OF CONTENTS

List of Figures
List of Tables
List of Algorithms

Chapter 1: Introduction and historical review
  1.1 Concept of a representative volume element
  1.2 Description of microstructure morphology
  1.3 Genetic algorithms and simulated annealing method
  1.4 Effective properties of composite materials
  1.5 Present work
  1.6 Notation

Chapter 2: Quantification of microstructure morphology
  2.1 Basic concepts and hypotheses
    2.1.1 Concept of an ensemble
    2.1.2 Ergodic hypothesis
    2.1.3 Statistical homogeneity
    2.1.4 Statistical isotropy
  2.2 Microstructure description
    2.2.1 Microstructure description for general composites
    2.2.2 Microstructure description for particulate composites
    2.2.3 Relating S and ρ functions for particulate microstructures
  2.3 Determination of microstructural statistics for real materials
    2.3.1 Functions Sn
    2.3.2 Functions ρn
    2.3.3 Microstructural statistics for some theoretical models of microstructures
  2.4 Analysis of real microstructure
    2.4.1 Testing ergodic hypothesis
    2.4.2 Test of statistical isotropy
    2.4.3 Microstructure describing functions
    2.4.4 Conclusions

Chapter 3: Construction of the periodic unit cell
  3.1 Objective function and problem definition
  3.2 Deterministic algorithms
    3.2.1 Powell's method (POWELL)
    3.2.2 Dynamic Hill Climbing (DHC)
    3.2.3 BFGS method
    3.2.4 Test example
  3.3 Stochastic optimization methods
    3.3.1 Principle of genetic algorithm
    3.3.2 Sampling mechanisms
    3.3.3 Genetic operators
    3.3.4 Genetic algorithm I (GA I)
    3.3.5 Genetic algorithm II (GA II)
    3.3.6 Genetic algorithm III (GA III)
    3.3.7 Hybrid genetic algorithm (HGA)
    3.3.8 Differential evolution (DE)
    3.3.9 Augmented simulated annealing (AUSA)
    3.3.10 Test example
  3.4 Some additional improvements
    3.4.1 Adaptive operator probabilities
    3.4.2 Problem-dependent operators
    3.4.3 Sampling grid refinement
  3.5 Determination of periodic unit cell

Chapter 4: Effective properties of periodic composite medium
  4.1 Basic relations
    4.1.1 Hill's lemma
    4.1.2 Stress and strain concentration factors
  4.2 Mori-Tanaka method
  4.3 Finite element formulation
  4.4 Numerical results

Chapter 5: Conclusions and future work

Bibliography

Appendix A: Evaluation of M(r) for impenetrable cylinders
Appendix B: Parameter settings for optimization algorithms



List of Figures

1.1 A real micrograph of a transverse plane section of the graphite fiber tow
1.2 Principle of simulated annealing method
2.1 Example of sampling template
2.2 Correction factor
2.3 Unit cell for hexagonal lattice of fibers
2.4 Function K(r) for hexagonal packing of fibers
2.5 Microstructural functions for fully penetrable cylinders
2.6 Idealized binary image of a transverse cross-section of a fiber tow
2.7 Selected members of the sample space
2.8 Two-point matrix probability function Smm(x − x′) and variation coefficient v(φ) of Smm(r, φ) plotted as a function of r/R
2.9 Two-point matrix probability Smm(r) and unique relations for the general two-point probability function Srs(r)
2.10 Second order intensity function K(r) and radial distribution function g2(r)
2.11 The two-point matrix probability function
3.1 Geometry of the periodic unit cell
3.2 An admissible unit cell
3.3 Allowable position of particle
3.4 Periodic unit cells with corresponding second order intensity functions
3.5 Variation of the objective function with the number of particles in the PUC
3.6 Evolution of the 10 particles PUC
3.7 Periodic unit cells: (a) Hexagonal lattice, (b) 2-fibers PUC, (c) 5-fibers PUC, (d) 10-fibers PUC
4.1 Example of Finite Element meshes: (a) Hexagonal lattice, (b) 2-fibers PUC, (c) 5-fibers PUC, (d) 10-fibers PUC



List of Tables

2.1 Relation of two-point probability functions
3.1 Minimum values of function
3.2 Number of function evaluations for deterministic optimizers
3.3 Characteristics of the best individual
3.4 Number of function evaluations for stochastic optimizers
3.5 Initial operator probabilities resulting from adaptation procedure
3.6 AUSA algorithm with adaptive operator probabilities
3.7 AUSA algorithm with problem-dependent operators
4.1 Material properties of T30/Epoxy system
4.2 Components of the effective stiffness matrix
4.3 Variation of effective stiffness tensor for five ten-particle PUC resulting from optimization
4.4 Variation of effective stiffness tensor for five randomly picked ten-particle PUC
B.1 Deterministic algorithms parameters
B.2 Parameter settings for GA I
B.3 Parameter settings for GA II
B.4 Parameter settings for HGA
B.5 Parameter settings for GA III
B.6 Parameter settings for DE
B.7 Parameter settings for AUSA
B.8 Adaptivity parameters setting
B.9 Parameter settings for AUSA with problem-dependent operators


List of Algorithms

3.1 Golden Section search
3.2 Principle of deterministic optimizations
3.3 The Distant point algorithm
3.4 Principle of genetic algorithm
3.5 Augmented Simulated Annealing

I would like to express my deepest gratitude and thanks to my advisors, Ing. Miroslav Černý, CSc., and especially to Ing. Michal Šejnoha, Ph.D., for their support, endless patience, continuous encouragement and inspiration during the whole course of this work. In addition, I would like to thank Prof. Ing. Jiří Šejnoha, DrSc., for his numerous valuable suggestions and inspiring ideas. I extend my appreciation to Ing. Karel Matouš and Mr. Matěj Lepš for the fruitful discussions and useful guesses from the field of evolutionary computation.

Jan Zeman
Prague, December 1999


Czech Technical University Faculty of Civil Engineering Abstract

Analysis of Mechanical Properties of Fiber-reinforced Composites with Random Microstructure
by Jan Zeman

The effective elastic properties of a graphite-epoxy unidirectional fiber composite with fibers randomly distributed within the transverse plane section are found. To enhance the efficiency of the numerical analysis, the complicated real microstructure is replaced by a material representative volume element consisting of a small number of particles, which statistically resembles the real microstructure. First, various statistical descriptors suitable for the microstructure characterization of random media are considered. Several methods for the determination of these descriptors are proposed and tested for some simple theoretical models of microstructures. Moreover, the validity of various statistical hypotheses usually accepted for random heterogeneous media is checked for the present material as well. Subsequently, the unit cell is derived from an optimization procedure formulated in terms of these statistical descriptors. A variety of deterministic as well as stochastic optimization algorithms is examined to solve this problem. It is shown that for this particular application a stochastic method based on a genetic algorithm combined with the simulated annealing method is superior to the other approaches. Finally, estimates of the elastic properties are found for the resultant unit cells using the Finite Element Method. The results suggest that the proposed approach, which effectively exploits the knowledge of the material's statistics, is more reliable than various averaging techniques or simple unit cell models based on a regular fiber distribution.


Chapter 1: Introduction and historical review

The elastic as well as the inelastic behavior of composite materials is strongly dependent on the geometrical arrangement of the distinct phases of the composite – the microstructure. In contrast, conventional methods for the prediction of the thermo-mechanical response of composite materials are either based on limited microstructural information, such as the volume fractions of phases (see e.g. [Gibson, 1994]), or deal with a regular distribution of phases (see e.g. [Aboudi, 1991, Teplý and Dvořák, 1988]).

Figure 1.1: A real micrograph of a transverse plane section of the graphite fiber tow

Unfortunately, the microstructure of real composite materials is typically quite complicated. To illustrate this fact, we present a high-contrast micrograph of a part of a graphite fiber tow impregnated by the polymer matrix, Fig. 1.1. Evidently, such a composite can hardly be characterized by a simple periodic unit cell commonly used for the rectangular and hexagonal


arrangement of fibers. The characterization of the microstructure by the volume fractions of fiber and matrix alone does not capture the evident clustering of fibers. On the other hand, it appears possible to select such a micrograph directly as the object of analysis. However, once we realize that this particular sample contains approximately 300 particles, it becomes clear that the resulting analysis would be extremely difficult, especially in the inelastic regime. To tackle this problem, one can substitute the whole complex microstructure by its representative volume element (RVE) and perform the analysis on the RVE instead of on the whole sample.

1.1 Concept of a representative volume element

The crucial question now is how to determine such a volume element. In his classical work [Hill, 1963], Hill states that:

This phrase (representative volume) will be used when referring to a sample that (a) is structurally entirely typical of the whole mixture on average, and (b) contains sufficient number of inclusions for the apparent overall moduli to be effectively independent of the surface values of traction and displacement, so long as these values are 'macroscopically uniform'.

The above conditions imply that the RVE should contain a large number of inhomogeneities to include all possible configurations appearing in the composite. This leads to a general, quite vague, suggestion that the RVE should be "sufficiently large" compared to the typical size of the inhomogeneity. Another, perhaps more rigorous, approach proposed by [Povirk, 1995] relies on the description of the original microstructure by a certain statistical function and then requires the RVE to approximate the original function as well as possible. In order to provide a "sufficient" number of particles and meet the aforementioned conditions, periodicity of such an RVE is often considered – we presume that the RVE is surrounded by periodic replicas of itself. This procedure enables us to substitute the complicated original microstructure by a periodic


unit cell, which consists of a small number of reinforcements only but still carries non-trivial statistical information and is capable of capturing the real response of the composite material. This approach is used in this work.

1.2 Description of microstructure morphology

The literature offers a number of statistical descriptors of random composite media. The most suitable for the present purpose are the n-point probability function Sn defined in [Torquato and Stell, 1982], which gives the probability of simultaneously finding n points in the matrix or fiber phase, and the generic n-particle probability density ρn [Boublík, 1996, Quintanilla and Torquato, 1997], which gives the probability density of finding particle centers at given n points. Unfortunately, obtaining these descriptors for values of n higher than two is prohibitively difficult for real materials, so the higher-order correlation functions are of little practical importance. The pioneering work devoted to the determination of Sn-type functions for micrographs of real media is attributed to [Corson, 1974]. Despite the fact that the proposed procedure was extremely cumbersome (it was based on manual evaluation of photographs taken from samples of the real material), the principle itself (evaluation of the Sn function on some sampling grid followed by averaging of the corresponding values) was used and extended in subsequent works. In [Berryman, 1984], Corson's procedure was automated by the use of image processing techniques and, finally, [Smith and Torquato, 1988] proposed a simple simulation method for the determination of the functions Sn, which is suitable for non-digitized media as well. The approach for obtaining the ρn-type functions used in this work is based on the second order intensity function K(r) defined by [Ripley, 1977], which can easily be determined for particulate microstructures. Unlike the preceding method, this procedure is not simulation-based and therefore K(r) can be evaluated very rapidly for a reasonable number of reinforcements.
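To illustrate how directly K(r) can be computed from particle centers, the sketch below implements the naive estimator K(r) = (A/N²) × (number of ordered pairs of distinct centers at distance ≤ r); the function name, the toy data and the omission of edge corrections are assumptions of this example, not part of [Ripley, 1977].

```python
import numpy as np

def ripley_k(centers, r_values, area):
    """Naive estimate of Ripley's second order intensity function K(r):
    K(r) = (area / N^2) * number of ordered pairs (i, j), i != j, whose
    distance does not exceed r.  Edge corrections are omitted here."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    # pairwise distances between all particle centres
    diff = centers[:, None, :] - centers[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)          # exclude self-pairs
    return np.array([(dist <= r).sum() for r in r_values]) * area / n ** 2

# usage: for a completely random (Poisson) pattern, K(r) is close to pi*r^2
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(500, 2))   # 500 centres in a 10 x 10 window
k = ripley_k(pts, [0.5, 1.0], area=100.0)
```

For a clustered microstructure such as the fiber tow of Fig. 1.1, the estimate would lie above the Poisson reference curve at short distances.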



1.3 Genetic algorithms and simulated annealing method

Genetic algorithms (GAs) are adaptive methods which may be used to solve general search and optimization problems. The basic idea behind these methods is taken from the genetic processes of biological organisms – namely the evolution and self-organization of natural systems over many generations according to the principle of natural selection and "survival of the fittest". In nature, individuals of a population compete with each other for natural resources and for attracting a mate. Those which succeed in obtaining resources and attracting mates will have a large number of offspring, while individuals performing poorly will produce fewer descendants. Thus the abilities of successful individuals are more likely to be passed on to the following generations, resulting in the best adaptation to the environment. GAs closely mimic these natural processes – they work with a population of individuals, each of which represents a solution to a given problem. A score or fitness is assigned to each individual depending on how good a solution it represents. The individuals with higher fitness are more likely to reproduce by crossover and to pass their characteristics on to their offspring. Moreover, an individual can also undergo a mutation if appropriate. In this way, over a number of successive generations, the good characteristics of individuals are mixed and spread over the whole population, so the population converges to the optimal solution. The earliest work on this subject was in fact Darwin's Origin of Species, where the aforementioned principles were stated. The basic principles of GAs were first rigorously laid down by [Holland, 1975]; their detailed discussion is given e.g. in [Beasley et al., 1993a, Goldberg, 1989, Michalewicz, 1992]. In their classical formulation, GAs work with the binary representation of objective variables.
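The loop described above can be sketched in a few lines. The following toy binary-coded GA (roulette-wheel selection, one-point crossover, bit-flip mutation) is only an illustration of the classical formulation on a trivial test problem; it is not one of the GA variants examined later in this work, and all parameter values are arbitrary choices of this example.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_cross=0.8, p_mut=0.02):
    """Toy binary-coded GA: fitness-proportionate (roulette) selection,
    one-point crossover and bit-flip mutation, with simple elitism."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]
        # roulette wheel: fitter individuals are sampled more often
        parents = random.choices(pop, weights=scores, k=pop_size)
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            if random.random() < p_cross:          # one-point crossover
                cut = random.randrange(1, n_bits)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            children += [a, b]
        # bit-flip mutation applied gene by gene
        pop = [[bit ^ (random.random() < p_mut) for bit in ind] for ind in children]
        best = max(pop + [best], key=fitness)      # keep the best-so-far
    return best

# usage: maximise the number of ones (the "one-max" toy problem)
random.seed(1)
solution = genetic_algorithm(fitness=sum)
```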
Michalewicz in [Michalewicz, 1992] proposed GAs which use the floating-point representation of search variables and showed their superiority for real-valued function optimization. The simulated annealing (SA) method is based on physical rather than biological principles – the algorithm effectively exploits the analogy between an optimization problem and the annealing process of solid bodies. In the physical process of annealing, the temperature of





Figure 1.2: Principle of simulated annealing method

a solid is kept rather high initially and is then decreased sufficiently slowly, so that the individual particles have the possibility to attain the state of minimal energy at a given constant temperature. As the temperature gradually decreases, the energy of the whole body decreases as well and finally reaches its minimal value. The SA algorithm works on the same principle: an initial solution is generated randomly, an artificial parameter called temperature is set to an initial value, and new solutions are randomly generated. If the new solution is better than the preceding one in terms of fitness, it is accepted automatically and replaces the original solution. However, if the new solution is worse than the preceding one, it still has a chance to replace the original solution – the acceptance probability depends on the difference of the objective function values and on the actual temperature – which enables the solution to escape from a local minimum (see Fig. 1.2). This procedure is repeated a number of times at constant temperature, and then the temperature is decreased, until it reaches a certain prescribed minimal value. This version of the algorithm was first proposed by [Kirkpatrick et al., 1983] and independently by [Cerny, 1985]. A detailed discussion of the basic principles and further aspects of the simulated annealing method can be found e.g. in [Ingber, 1993, Ingber, 1995, Kvasnička, 1999]. In [Mahfoud and Goldberg, 1992] an efficient combination of the GA and SA methods was proposed. It exploits the essentials of GAs (a population of chromosomes, rather than a single point in space, is optimized) together with the basic SA concept guiding the search towards minimal energy states. Some additional suggestions and methods for implementation are given in [Kvasnička, 1993].

1.4 Effective properties of composite materials

The pioneering work devoted to the problem of the determination of effective properties of composite media is attributed to [Voigt, 1887] and [Reuss, 1929]. However, most of the essential discoveries of the last decades are based on the famous Eshelby's finding [Eshelby, 1957] that the stress field in an ellipsoidal inclusion subjected to a uniform remote stress field is also uniform. This result allows one to estimate the stress and strain fields in a composite aggregate in terms of the phase elastic properties and certain stress and strain concentration tensors. These quantities can generally be derived either by employing rigorous bounding procedures or by using some approximation techniques. The first approach to the determination of elastic properties uses the well-known energy principles of elasticity to obtain bounds on the effective elastic properties. The most noticeable contributors are [Hashin and Shtrikman, 1963, Beran, 1968, Milton, 1982], to name but a few. A variety of approximation methods has been proposed in the past 40 years. The most common procedures include the dilute approximation, the self-consistent method [Hill, 1965] and the Mori-Tanaka method [Mori and Tanaka, 1973, Benveniste, 1987]. In particular, the Mori-Tanaka method seems to be the most popular one due to its simplicity and its explicit estimates of the overall elastic moduli of composites. When periodic media are under consideration, it is possible to determine the local fields with the aid of a certain numerical method such as the Finite Element Method. In this case, however, some constraints must be imposed on the objective fields to take into account the periodicity of the media (see e.g. [Teplý and Dvořák, 1988, Suquet, 1987] and the overview [Michel et al., 1999]). Once the local fields are known, the procedure of the determination of effective properties is quite simple and straightforward.

1.5 Present work

The main objective of the present work is to apply the previously introduced principles to the analysis of a unidirectional fiber composite with carbon fibers embedded in a polymer matrix, i.e. a two-phase material generated by identical parallel fibers of circular cross section. The morphology of such a material is uniquely determined by its cross-section only, so that the examined medium can be considered two-dimensional for the statistical description as well as for the subsequent numerical analysis.

Chapter 2 presents basic statistical descriptors for two-phase media. First, the concept of an ensemble is briefly discussed and the principle of ensemble averaging is outlined. Subsequently, various hypotheses commonly applied to random composite materials are presented and discussed. Microstructural descriptors suitable for the characterization of two-phase two-dimensional media are presented, and methods for their determination for given materials are discussed and tested for some elementary theoretical configurations of particles. Finally, some properties of the statistical descriptors are tested and the descriptors are determined for the material displayed in Fig. 1.1.

Chapter 3 outlines the determination of the RVE with the help of a certain minimization problem. A variety of deterministic and stochastic optimization algorithms is presented and their performance for the given problem is examined.

Chapter 4 briefly describes the evaluation of effective properties of a heterogeneous material. The Finite Element Method is employed to calculate the volume averages of local fields, which provide the effective elastic properties of the resulting composite media. The quality of the unit cells resulting from optimization is compared with the results for the hexagonal array of fibers and with unit cells selected as random cuts of the real microstructure.




1.6 Notation

Standard tensor and vector/matrix notation is used throughout the thesis. Tensor quantities are denoted by lightface letters with subscripts, e.g. σij, Lijkl; (3 × 1) and (6 × 1) vectors are denoted by lower-case boldface italic letters, e.g. σ; (6 × 6) and (3 × 3) matrices are denoted by upper-case boldface Roman letters, e.g. L; and scalars are denoted by lightface letters, e.g. r. The inverse of a non-singular matrix L is denoted by L⁻¹ and Lᵀ stands for the transpose of L. Standard contracted notation is adopted where appropriate, thus 1 ≡ 11, 2 ≡ 22, 3 ≡ 33, 4 ≡ 23, 5 ≡ 13 and 6 ≡ 12.
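In a numerical implementation the contracted notation amounts to a small lookup table mapping tensor index pairs to matrix indices. The helper below is a hypothetical illustration of this mapping, not code from this thesis.

```python
# Contracted (Voigt) notation: tensor index pairs -> matrix indices,
# following 1 = 11, 2 = 22, 3 = 33, 4 = 23, 5 = 13, 6 = 12;
# symmetric pairs (e.g. 32 and 23) map to the same entry.
VOIGT = {(1, 1): 1, (2, 2): 2, (3, 3): 3,
         (2, 3): 4, (3, 2): 4,
         (1, 3): 5, (3, 1): 5,
         (1, 2): 6, (2, 1): 6}

def contract(i, j, k, l):
    """Map the stiffness tensor component L_ijkl to the matrix entry L_mn."""
    return VOIGT[(i, j)], VOIGT[(k, l)]
```

For example, the tensor component L_1123 lands in row 1, column 4 of the (6 × 6) stiffness matrix.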

Chapter 2: Quantification of microstructure morphology

This Chapter provides an introduction to the basic principles and methods of the statistical description of random two-phase composite materials. Initially, two basic approaches to the microstructure description are outlined; in each case the microstructure of random media is described by certain general n-th order statistical moments. Then, functions which incorporate statistical information up to the two-point level are examined in more detail. Despite the limitations of such a description, these functions still carry a sufficient amount of statistical information to provide an efficient tool for fulfilling the objective of this work – the definition of the RVE. Section 2.1 explores basic concepts and hypotheses for the microstructure description of random composites. Section 2.2 introduces the microstructural functions used in this work; their evaluation for different types of microstructures is presented in Section 2.3. Moreover, the connection between the different approaches to microstructure classification is examined and the proposed methods are tested for some simple models of microstructures. Finally, these principles are applied in Section 2.4 to the analysis of micrographs taken from the real material.

2.1 Basic concepts and hypotheses

Consider again the micrograph displayed in Fig. 1.1. In this particular case, the arrangement of phases is exactly known (to the level of the microscope resolution) and one may assume that it completely describes the morphology of the whole composite. Unfortunately, when micrographs are taken from other parts of the specimen, the microstructure of each sample is


substantially different (see Fig. 2.7). At this point, the situation becomes quite complicated – the question is which of these micrographs is the most suitable for the subsequent analysis. To answer this question, one has to recognize the random nature of the geometrical arrangement of phases – the particular microstructure of a given sample represents only one possible arrangement of phases. Therefore, instead of determining the exact value of some quantity at a given point (which depends on the actual sample), attention has to be turned to its expected, averaged or macroscopic value, which incorporates information from all samples taken from the material.

2.1.1 Concept of an ensemble

To reflect the random character of the media under consideration, it is convenient to introduce the concept of an ensemble – the collection of a large number of systems which differ in their microscopic details but are entirely identical from the point of view of macroscopic behavior (see e.g. [Beran, 1968, Boublík, 1996]). Within this concept, the effective or expected value of some quantity corresponds to the process of averaging it through all systems forming the ensemble. To formalize this idea, let us introduce a sample space U, identify individual members of this space by α and define p(α) as the probability density of α in U ([Drugan and Willis, 1996, Kohler and Papanicolaou, 1982]). Then the ensemble average of function F(x, α) at point x is given by

⟨F⟩(x) = ∫_U F(x, α) p(α) dα. (2.1)


In the context of the quantification of microstructure morphology, the ensemble represents a collection of material micrographs taken from different samples of the material. Following the above definitions literally, the experimental determination of the ensemble average of function F(x, α) at a given point x would lead to the cumbersome procedure of manufacturing a large number of samples (which form the sample space U), measuring F(x, α) for every sample and then averaging over all samples. It therefore appears meaningful to introduce some hypotheses about the ensemble average which substantially simplify this task.

2.1.2 Ergodic hypothesis

This hypothesis demands that all states available to an ensemble of systems be available to every member of the ensemble as well ([Beran, 1968]). Once this hypothesis is adopted, the spatial or volume average of function F(x, α), given by

F̄(x, α) = lim_{V→∞} (1/V) ∫_V F(x + y, α) dy, (2.2)

is independent of α and identical to the ensemble average,

⟨F⟩(x) = F̄(x). (2.3)

This hypothesis allows us to examine only one arbitrary member of the sample space, provided that the sample is "sufficiently large" (see Eq. (2.2)). One possible way to fulfill this condition is to assume a periodic composite with a unit cell Ω. Then,

lim_{V→∞} (1/V) ∫_V F(x + y, α) dy = (1/Ω) ∫_Ω F(x + y, α) dy, (2.4)

so for the ergodic periodic composite medium, the ensemble average is equal to the volume average taken over the unit cell.

2.1.3 Statistical homogeneity

Let us consider the situation when function F depends on n vectors x1, . . . , xn rather than on a single vector x. Then, the statistical homogeneity assumption means that the value of the ensemble average is independent of the position of the coordinate system origin [Beran, 1968, Torquato and Stell, 1982], so the expression

⟨F⟩(x1, . . . , xn) = ⟨F⟩(x1 − y, . . . , xn − y) (2.5)

holds for an arbitrary value of y. The most common choice is to set y = x1, so

⟨F⟩(x1, . . . , xn) = ⟨F⟩(0, x2 − x1, . . . , xn − x1) = ⟨F⟩(x12, . . . , x1n), (2.6)

where xij = xj − xi. Therefore, if the statistical homogeneity condition holds, the ensemble average of F is a function of n − 1 variables only. Note that for periodic media this condition is satisfied automatically.

2.1.4 Statistical isotropy

When making the statistical isotropy assumption, we further extend the statistical homogeneity hypothesis. In this case, the ensemble average is not only independent of the position of the coordinate system origin but also of the rotation of the coordinate system. Under this hypothesis, the ensemble average depends only on the absolute values of the vectors x12, . . . , x1n:

⟨F⟩(x12, . . . , x1n) = ⟨F⟩(r12, . . . , r1n), (2.7)

where rij = ‖xij‖.

2.2 Microstructure description

This Section presents basic statistical functions suitable for the description of the microstructure of random media. Although two different approaches to the microstructure characterization are employed here, in both cases the statistical descriptors are obtained by a similar procedure – initially, some fundamental random function relevant to the microstructure configuration is presented, and then statistical moments of this function are identified as descriptors of the microstructure morphology. Properties of the first and second order moments are thoroughly discussed in the sequel. Finally, the connection between these two approaches is demonstrated.

2.2.1 Microstructure description for general composites

Fundamental function and statistical moments. The stepping stone of a general description of a random composite medium is the introduction of a random stochastic function, the characteristic or indicator function χr(x, α), which equals one when the point x lies in phase r in the sample α and zero otherwise [Beran, 1968, Torquato and Stell, 1982],

χr(x, α) = 1 if x ∈ Dr(α),  0 otherwise.    (2.8)

The symbol Dr(α) denotes the domain occupied by the r-th phase; r is further assumed to take the value m for the matrix phase and f for the fiber phase, respectively. The characteristic functions χf(x, α) and χm(x, α) are related, since the point x must be located either in the matrix or in the fiber phase,

χm(x, α) + χf(x, α) = 1.    (2.9)

With the aid of this function, the general n-point probability function Sr1,...,rn can be found as the ensemble average of the product of characteristic functions [Torquato and Stell, 1982, Beran, 1968, Drugan and Willis, 1996],

Sr1,...,rn(x1, ..., xn) = ⟨χr1(x1, α) · · · χrn(xn, α)⟩.    (2.10)

Thus, Sr1,...,rn gives the probability of finding the n points x1, ..., xn, randomly thrown into the medium, located in the phases r1, ..., rn.

Functions of the first and second order. Hereafter we limit our attention to functions of order one and two, since higher-order functions are quite difficult to determine in practice. The description of the random medium will therefore be provided by the one-point probability function Sr(x),

Sr(x) = ⟨χr(x, α)⟩,    (2.11)

which simply gives the probability of finding phase r at x, and by the two-point probability function Srs(x, x′),

Srs(x, x′) = ⟨χr(x, α)χs(x′, α)⟩,    (2.12)

which denotes the probability of finding phase r at x and, simultaneously, phase s at x′.


Moreover, these functions can be further simplified by employing the assumptions of statistical homogeneity, isotropy, and ergodicity presented in Section 2.1. In particular, for statistically homogeneous media we have

Sr(x) = Sr,    (2.13)
Srs(x, x′) = Srs(x − x′).    (2.14)

Suppose further that the medium is statistically isotropic. Then Srs(x − x′) reduces to

Srs(x − x′) = Srs(‖x − x′‖).    (2.15)

The ergodicity assumption makes it possible to replace the one-point correlation function of a statistically homogeneous medium by its volume average, i.e., the volume concentration or volume fraction cr of the r-th phase,

Sr = cr.    (2.16)
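A tiny Monte Carlo illustration of Eq. (2.16) (a sketch under our own assumptions, not taken from the thesis): for a model medium whose matrix phase is the unit square minus a single circular fiber, the point-sampling estimate of Sm should converge to the matrix volume fraction cm = 1 − πR².

```python
import math
import random

def estimate_S_m(R, n_throws, seed=0):
    """Monte Carlo estimate of the one-point matrix probability S_m for a
    unit square containing one circular fiber of radius R at (0.5, 0.5)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        x, y = rng.random(), rng.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 > R ** 2:
            hits += 1          # point landed in the matrix phase
    return hits / n_throws

R = 0.25
S_m = estimate_S_m(R, 50_000)
c_m = 1.0 - math.pi * R ** 2   # exact matrix fraction for this geometry
```

With 50,000 throws the statistical error of the estimate is a fraction of a percent, which matches the accuracy reported later in Section 2.3.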

Limiting values. In addition, the two-point probability function Srs reduces to the one-point probability function Sr for certain values of its arguments:

for x′ → x:  Srs(x, x′) = δrs Sr(x),    (2.17)
for ‖x − x′‖ → ∞:  lim Srs(x, x′) = Sr(x)Ss(x′),    (2.18)

where the symbol δrs stands for Kronecker's delta. Relation (2.17) states that the probability of finding two different phases at one point is equal to 0 (see also Eq. (2.9)), or is given by the one-point probability function if the phases are identical. Equation (2.18) expresses that points x and x′ separated by a large distance are statistically independent; this relation is often referred to as the no-long-range-order hypothesis (see e.g. [Willis, 1977]). Finally, according to Eq. (2.9) we may determine the one- and two-point probability functions for all phases provided that these functions are given for one arbitrary phase. For the one-point probability function of a statistically homogeneous and ergodic medium, this relation assumes the trivial form

cm = 1 − cf.    (2.19)


Relations among the two-point probability functions of a statistically isotropic and ergodic medium are summarized in Table 2.1. Note that the symbol r, which stands for ‖x − x′‖ in Table 2.1, should not be mistaken for the subscript r used as a phase index heretofore.

Known function   Smm(r)             Smf(r)            Sff(r)
Smm(r)           Smm(r)             cm − Smm(r)       cf − cm + Smm(r)
Smf(r)           cm − Smf(r)        Smf(r)            cf − Smf(r)
Sff(r)           cm − cf + Sff(r)   cf − Sff(r)       Sff(r)

Table 2.1: Relations among the two-point probability functions
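The algebra behind Table 2.1 can be sketched in a few lines (function name ours, for illustration): knowing one two-point function and the volume fractions, the remaining ones follow immediately.

```python
def two_point_functions(S_mm, c_m, c_f):
    """Derive (S_mm, S_mf, S_ff) at some separation r from a known S_mm,
    using the relations of Table 2.1."""
    S_mf = c_m - S_mm                # matrix-fiber
    S_ff = c_f - c_m + S_mm          # fiber-fiber
    return S_mm, S_mf, S_ff

# Check against the no-long-range-order limit, Eq. (2.18): for large r the
# two-point functions factorize, so S_mm -> c_m**2, and the table relations
# should then recover S_mf = c_m*c_f and S_ff = c_f**2.
c_m, c_f = 0.6, 0.4
S_mm, S_mf, S_ff = two_point_functions(c_m**2, c_m, c_f)
```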


2.2.2 Microstructure description for particulate composites

Another approach to microstructure characterization can be adopted when the composite material is particulate, i.e., it consists of particles of identical specific shape (e.g. ellipsoids, cylinders, etc.) embedded in a matrix phase. The microstructure morphology of such a composite can be described using only the centers of the particles. This approach is frequently used in the statistical mechanics of liquids (see e.g. [Boublík, 1996, Chapter 6]); its principles will be applied here.

Fundamental function and statistical moments. To be more specific, suppose that each sample α in U is formed by N distinguishable particles with centers located at distinct points x_α^1, ..., x_α^N placed in a matrix of volume (area) V. Such a system of random points can be described by the random field density function ψ(x, α) (see [Markov, 1998, Ponte Castañeda and Willis, 1995] and references herein),

ψ(x, α) = Σ_{i=1}^{N} δ(x − x_α^i),    (2.20)

where δ(·) stands for Dirac's function.


The foregoing procedure is similar to that of Section 2.2.1. The generic n-particle probability density function¹ ρn(x1, ..., xn) ([Boublík, 1996, Quintanilla and Torquato, 1997]) for distinct points x1, ..., xn is found by ensemble averaging the product of the basic functions,

ρn(x1, ..., xn) = ⟨ψ(x1, α) · · · ψ(xn, α)⟩,    (2.21)

which gives the probability density of finding a particle center at each of the points x1, ..., xn randomly thrown into the material.

Functions of the first and second order. As in the previous Subsection, we turn our attention to one- and two-particle functions only. The description of a material will thus be carried out by the generic one-particle probability density function

ρ1(x) = ⟨ψ(x, α)⟩,    (2.22)

which gives the probability density of finding an inclusion centered at x, and by the generic two-particle probability density function provided by

δ(x − x′)ρ1(x) + ρ2(x, x′) = ⟨ψ(x, α)ψ(x′, α)⟩,    (2.23)

which denotes the probability density of finding one inclusion centered at x and a second one at x′. The first term in Eq. (2.23) is added to account for the situation when the points x and x′ coincide (see also the discussion below). Again, under the assumptions of statistical homogeneity, isotropy, and ergodicity these functions simplify substantially. For statistically homogeneous media,

ρ1(x) = ρ1,    (2.24)
ρ2(x, x′) = ρ2(x − x′).    (2.25)

¹The generic n-particle probability density function can be derived via the specific n-particle probability density function [Boublík, 1996, Quintanilla and Torquato, 1997]. However, this procedure is purely formal and of no practical use, so it is omitted here.


Furthermore, if the material is statistically isotropic, ρ2(x − x′) reduces to

ρ2(x − x′) = ρ2(‖x − x′‖).    (2.26)

Finally, assuming ergodicity of the medium, one may equate the one-point correlation function of a statistically homogeneous medium to the number of particle centers per unit volume,

ρ1 = ρ = lim_{V→∞} N/V.    (2.27)

Limiting values. For this approach it is meaningless to investigate the situation x′ → x for the function ρ2(x, x′), since one point x cannot be occupied by the centers of two distinguishable particles. However, recall that ⟨ψ(x, α)ψ(x′, α)⟩ gives the probability density of finding particle centers at points x and x′ randomly thrown into the sample. When the points x and x′ coincide, this probability density is simply equal to the probability density of finding a particle center at point x. Therefore,

for x′ → x:  ⟨ψ(x, α)ψ(x′, α)⟩ = ⟨ψ(x, α)⟩ = ρ1(x).    (2.28)

The two-particle probability density function ρ2(x, x′) of a statistically homogeneous medium attains, under the no-long-range-order hypothesis, a form similar to that of the two-point probability functions:

for ‖x − x′‖ → ∞:  lim ρ2(x, x′) = ρ1(x)ρ1(x′).    (2.29)

Since for a statistically isotropic and ergodic medium ρ1(x)ρ1(x′) = ρ², it is convenient to normalize this function to one for large distances between the points x and x′. This requirement yields the definition of the radial or pair distribution function g2(x − x′) [Axelsen, 1995, Boublík, 1996, Pyrz, 1994],

g2(x − x′) = ρ2(x − x′) / ρ².    (2.30)

Sometimes it is useful to describe the medium by a function which tends to zero, rather than to one, for large distances between the points x and x′. The total correlation function h(x − x′) [Boublík, 1996], given by

h(x − x′) = g2(x − x′) − 1,    (2.31)

is an example of such a function.

The definition of the function ρn as a probability density makes its direct determination for given microstructures complicated. To overcome this problem, [Ripley, 1977] defined, for statistically isotropic and ergodic media, the second-order intensity function K(r), which gives the number of further points expected to lie within a radial distance r of an arbitrary point, divided by the number of particles (fibers) per unit area. This function is related to the radial distribution function by

g2(r) = (1/(2πr)) dK(r)/dr.    (2.32)

Finally, let us briefly discuss the situation when the composite medium under consideration is formed by non-overlapping particles. This condition implies that ρ2(x, x′) is equal to zero for (x − x′) ∈ Ωd; that is, a certain "secure area" Ωd cannot be occupied by another particle center. For example, if we consider a unidirectional fiber composite with identical circular fibers of radius R, Ωd is a cylinder with center at point x and radius 2R.

2.2.3 Relating S and ρ functions for particulate microstructures
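As a quick numerical illustration of Eq. (2.32) (a sketch of ours, not the thesis code): given sampled values of K(r), the pair distribution function g2(r) can be recovered by a central-difference derivative. For a Poisson point field, where K(r) = πr², the recovered g2 must equal one.

```python
import math

def g2_from_K(r, K, dr):
    """Central-difference estimate of g2(r) = (1/(2*pi*r)) dK/dr, Eq. (2.32).
    r and K are equally spaced samples with step dr; endpoints are skipped."""
    g2 = []
    for i in range(1, len(r) - 1):
        dK = (K[i + 1] - K[i - 1]) / (2.0 * dr)
        g2.append(dK / (2.0 * math.pi * r[i]))
    return g2

# Poisson (fully penetrable) case: K(r) = pi*r**2 implies g2(r) = 1.
dr = 0.05
r = [dr * i for i in range(1, 100)]
K = [math.pi * ri**2 for ri in r]
g2 = g2_from_K(r, K, dr)
```

The central difference is exact for the quadratic K of this test case; for sampled data the result depends on the step size, as discussed in Section 2.4.3.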

As noted before, the description of a medium by the functions ρn is suitable for material systems consisting of particles of specific shape, while the definition of the functions Sn needs no such initial assumption. The obvious question is how to relate these two approaches. To answer it, it is necessary to establish the connection between the characteristic function of one phase, χr(x, α), and the probability density function ψ(x, α). It is easy to notice that for a composite formed by impenetrable particles the following relation holds [Ponte Castañeda and Willis, 1995]:

χf(x, α) = ∫ ψ(y, α) m(x − y) dy,    (2.33)

where m(y) is the characteristic function of one inclusion with center at the origin of the coordinate system.


Performing the ensemble averaging and using Eq. (2.23) gives

Sff(x1, x2) = ∫∫ m(x1 − x3) m(x2 − x4) [∫ ψ(x3, α)ψ(x4, α) p(α) dα] dx3 dx4
            = ∫∫ m(x1 − x3) m(x2 − x4) [δ(x3 − x4)ρ1(x3) + ρ2(x3, x4)] dx3 dx4.

Considering a statistically homogeneous ergodic medium and using the total correlation function h(x3 − x4) rather than ρ2(x3, x4), we arrive at the expression

Sff(x1 − x2) = ρ Ωint(x1 − x2) + ρ²Ω² + ρ² ∫∫ h(x3 − x4) m(x1 − x3) m(x2 − x4) dx3 dx4,    (2.34)

where Ωint(x1 − x2) stands for the intersection volume (area) of two particles with centers located at points x1 and x2, and Ω is the volume (area) of one particle. Once the function Sff is known, all the remaining functions may be obtained with the help of the relations summarized in Table 2.1. For example, after some elementary manipulation, the two-point matrix-matrix probability function is provided by

Smm(x1 − x2) = 1 − ρ Ωu(x1 − x2) + ρ²Ω² + ρ² ∫∫ h(x3 − x4) m(x1 − x3) m(x2 − x4) dx3 dx4,    (2.35)

where Ωu(x1 − x2) = 2Ω − Ωint(x1 − x2) is the union volume (area) of two particles with centers located at points x1 and x2. This relation is equivalent to formulas obtained by Torquato in [Torquato and Stell, 1985] using a slightly different procedure.

To make Eq. (2.35) as simple as possible, let us consider a statistically isotropic two-dimensional composite formed by identical circles of radius R. For this particular case, Eq. (2.35) yields

Smm(r12) = 1 − ρ Ωu(r12) + ρ²Ω² + ρ² ∫∫ h(r34) m(r13) m(r24) dx3 dx4,    (2.36)

where Ω equals πR². Further, the functions m(x) and Ωu(x) depend only on r = ‖x‖ and are given by

m(r) = 1 for r ≤ R,  0 otherwise,    (2.37)

Ωu(r) = 2R² [π − arccos(r/2R) + (r/2R)√(1 − (r/2R)²)] for r < 2R,  2πR² for r ≥ 2R.    (2.38)
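A small self-contained check of these geometric formulas (a sketch; the function names are ours): Ωu(0) must reduce to the area πR² of a single circle, and Ωu(r) must reach the disjoint-circle value 2πR² at r = 2R.

```python
import math

def m(r, R):
    """Characteristic function of a single circle of radius R, Eq. (2.37)."""
    return 1.0 if r <= R else 0.0

def omega_u(r, R):
    """Union area of two circles of radius R with centers a distance r apart,
    Eq. (2.38)."""
    if r >= 2.0 * R:
        return 2.0 * math.pi * R**2
    t = r / (2.0 * R)
    return 2.0 * R**2 * (math.pi - math.acos(t) + t * math.sqrt(1.0 - t**2))

R = 1.5
```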




Thus, the only unknown term in Eq. (2.35) is now the integral

M(r12) = ∫∫ h(r34) m(r13) m(r24) dx3 dx4.    (2.39)

Evaluation of this expression becomes simple once we realize that M(r) is a double convolution. In such a case, the Fourier transform of M is given as the product of the Fourier transforms of the individual functions (see e.g. [Rektorys, 1995a, Chapter 28]),

M̃(t) = h̃(t) m̃(t) m̃(t).    (2.40)

After some manipulations, which are outlined in Appendix A, we arrive at

m̃(t) = 2πR J1(Rt)/t,    (2.41)

h̃(t) = 2π ∫₀^∞ h(r) J0(rt) r dr = t ∫₀^∞ K̄(r) J1(rt) dr,    (2.42)

where J0(·) and J1(·) in the above equations are the Bessel functions of the first kind of order zero and one, respectively, and K̄(r) = K(r) − πr² is a counterpart of the function h(r). Finally, the inverse Fourier transform can then be written as

M(r) = (1/2π) ∫₀^∞ M̃(t) J0(rt) t dt.    (2.43)

2.3 Determination of microstructural statistics for real materials

This Section presents the individual methods suitable for determining the microstructure descriptors defined in the previous Section. For the n-point probability functions a simple simulation technique is proposed, while the radial distribution function is determined by a deterministic procedure. Finally, the precision of these methods is tested on simple two-dimensional models of microstructure. For all procedures, the medium is supposed to be formed by N identical circles of radius R, with centers located at positions x1, ..., xN, embedded in a rectangle of dimensions ⟨0; H1⟩ × ⟨0; H2⟩. Furthermore, periodicity of the medium is assumed.


Figure 2.1: Example of sampling template


2.3.1 Functions Sn

The procedure for determining the Sn functions is based directly on their definition. Recall that for a statistically homogeneous medium, e.g., the function Sm gives the chance of finding a randomly placed point located in the matrix phase. To determine this quantity, a simple Monte Carlo simulation can be performed: we throw a point randomly into the microstructure and count the successful "hits" of the matrix phase. The value of the function Sm can then be estimated as

Sm ≈ n′/n,    (2.44)

where n′ is the number of successful hits and n denotes the total number of throws. A similar procedure can be used to determine the values of Smm(x).

In [Smith and Torquato, 1988] a faster procedure for the determination of the function Smm(x) was proposed. Instead of tossing a line corresponding to x into the medium, a sampling template is formed (see Fig. 2.1). The center of the sampling template is thrown randomly into the medium and the corresponding successful hits are counted. Furthermore, if the medium under consideration is statistically isotropic, the values found for points located on the same circumference can be averaged as well, which allows a large number of tests to be performed within one placement of the template.

Figure 2.2: Correction factor

2.3.2 Functions ρn

As stated in Section 2.2.2, the definition of the ρn-type functions as probability densities makes their determination quite complicated. On the other hand, a function given as an integral of ρn can be determined easily. An example of such a function is the second-order intensity function of Eq. (2.32). According to its definition, the function K(r) can be computed as [Axelsen, 1995, Pyrz, 1994]

K(r) = (A/N²) Σ_{k=1}^{N} Ik(r)/wk,    (2.45)

where Ik(r) is the number of points within a circle with center at particle k and radius r, N is the total number of particles (fibers) in the sample, A is the sample area, and wk stands for a correction factor, which accounts for points outside the sampling area if they are not used in the calculation of Ik(r). The correction factor wk is equal to 1 if periodicity of the microstructure is assumed; otherwise it can be determined as (see Fig. 2.2)

wk = Ak/(πr²),    (2.46)

where Ak is the part of the area of the circle of radius r centered at particle k that lies within the sample.
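A brute-force O(N²) evaluation of Eq. (2.45) for a periodic medium, where wk = 1 and distances are taken in the minimum-image sense, can be sketched as follows (ours, for illustration); for a square lattice of points the result is easy to check by hand:

```python
def K_function(centers, H1, H2, r):
    """Second-order intensity K(r), Eq. (2.45), for a periodic cell
    H1 x H2 (w_k = 1); distances use the minimum-image convention."""
    N = len(centers)
    A = H1 * H2
    count = 0
    for k, (xk, yk) in enumerate(centers):
        for j, (xj, yj) in enumerate(centers):
            if j == k:
                continue
            dx = abs(xj - xk); dx = min(dx, H1 - dx)
            dy = abs(yj - yk); dy = min(dy, H2 - dy)
            if dx * dx + dy * dy <= r * r:
                count += 1
    return A * count / N**2

# 10 x 10 square lattice with unit spacing in a 10 x 10 periodic cell:
# each point has exactly 4 neighbours closer than r = 1.1, so
# K(1.1) = A/N^2 * (N * 4) = 100/10000 * 400 = 4.
lattice = [(i, j) for i in range(10) for j in range(10)]
```

The projection method mentioned below replaces the inner loop by a sorted sweep; the brute-force version shown here is sufficient for small samples.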


As is evident from Eq. (2.45), to obtain the value of the function K for a given microstructure and fixed r, one needs to determine the number of points within a distance r from every particle center in the sample. Provided that the particle centers are sorted with respect to one coordinate, this task can be performed by a very simple projection method with computational complexity O(N[log2(N) + k]), where k ≤ N (see e.g. [Hudec, 1999, Chapter 4]). This procedure can therefore perform rather poorly for a very large number of particles, but for the samples under consideration it was found to be sufficiently fast. Once the function K(r) is known, the radial distribution function can be determined according to Eq. (2.32) by numerical differentiation of K. The principal disadvantage of the functions K(r) and g2(r) is that this procedure determines them only for statistically isotropic and ergodic media. The validity of these hypotheses must therefore be confirmed before using these functions for the description of a random medium.

2.3.3 Microstructural statistics for some theoretical models of microstructures

We now test the proposed methods on two simple statistically isotropic models of microstructure: the hexagonal arrangement of particles and the model of equisized fully penetrable cylinders. For both configurations the functions K(r) are known in closed form, while an expression for the function Smm(r) is available only for the model of fully penetrable cylinders. In both cases the microstructures are assumed to be periodic.

Hexagonal array of cylinders. The hexagonal arrangement of particles in the matrix can be uniquely specified by the periodic unit cell displayed in Fig. 2.3. The dimensionless constant ξ is given by

ξ⁴ = 4π²/(3 c_f²).    (2.47)
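Eq. (2.47) can be sanity-checked numerically (a sketch of ours, under the assumption, not stated explicitly in the text, that the periodic cell of Fig. 2.3 has dimensions ξR√3 × ξR and contains two fibers of radius R):

```python
import math

def xi_from_cf(c_f):
    """Dimensionless cell constant of Eq. (2.47): xi**4 = 4*pi**2/(3*c_f**2)."""
    return (4.0 * math.pi**2 / (3.0 * c_f**2)) ** 0.25

# Recover the fiber volume fraction from the assumed cell geometry:
# two fibers of area pi*R**2 in a cell of area sqrt(3)*xi**2*R**2.
c_f = 0.5
R = 1.0
xi = xi_from_cf(c_f)
recovered_cf = 2.0 * math.pi * R**2 / (math.sqrt(3.0) * xi**2 * R**2)
```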


Figure 2.3: Unit cell for hexagonal lattice of fibers

By a simple geometric consideration, the function K(r) will have jumps corresponding to the regular distances between particles, denoted ¹r (corner fibers) and ²r (central fibers),

¹r² = ξ²R²(3i² + j²),    (2.48)
²r² = (ξ²R²/4)[3(2i + 1)² + (2j + 1)²],    (2.49)

where i, j = 0, ..., ∞. Because exactly six additional fibers are found for every fiber at a given jump of the function, Eq. (2.45) gives the value of ΔK for every jump in the form

ΔK = 6πR²/c_f.    (2.50)

With the aid of these relations, the value of the function K(r) can be determined for every r. However, to obtain correct results one must carefully inspect the values of ¹r and ²r in order to avoid taking one jump into consideration twice. The graph of the function K for different values of c_f is displayed in Fig. 2.4; solid lines correspond to values predicted by the aforementioned procedure, while dots denote discrete values obtained by the numerical procedure.

Figure 2.4: Function K(r) for hexagonal packing of fibers

Fully penetrable cylinders. This model assumes that the positions of the particle centers are totally independent. Thus, the value of ρ2(r) is constant and equal to ρ²; then g2(r) = 1 and, from Eq. (2.32), K(r) is trivially equal to πr². To determine the value of Smm(x, x′) one needs to find the probability that the union of two cylinders with radius R and centers located at points x and x′ is not occupied by any particle center. Once we recognize that this type of microstructure is described by the Poisson probability distribution (see e.g. [Rektorys, 1995a, p. 662]) with intensity ρ Ωu(x, x′), we arrive at the relation

Smm(x, x′) = exp(−ρ Ωu(x, x′)).    (2.51)

For a statistically isotropic medium, this expression reduces to [Torquato, 1991]

Smm(r) = exp(−ρ Ωu(r)),    (2.52)

where Ωu(r) is the union area of two identical cylinders with radius R and centers separated by a distance r. Finally, recognizing that the matrix volume fraction is, according to Eq. (2.17), given by

Smm(r = 0) = cm = exp(−ρ πR²),    (2.53)


we may determine the number of particles per unit area for a prescribed value of cm. To test the individual methods we generated 20 different configurations containing 100 particles each, for a fixed value of cm. For each configuration the value of the function Smm was determined by the template of concentric circles displayed in Fig. 2.1, with the number of random throws set to 5,000; K(r) was determined by the aforementioned procedure. The results in Fig. 2.5 display the averaged values of the functions found for the particular configurations.
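The closed-form prediction of Eqs. (2.52) and (2.53) can be sketched as follows (ours, not the thesis code): given cm and R, the intensity follows as ρ = −ln(cm)/(πR²), and Smm(r) must interpolate between cm at r = 0 and cm² beyond r = 2R, where the two points become statistically independent.

```python
import math

def omega_u(r, R):
    """Union area of two circles of radius R a distance r apart, Eq. (2.38)."""
    if r >= 2.0 * R:
        return 2.0 * math.pi * R**2
    t = r / (2.0 * R)
    return 2.0 * R**2 * (math.pi - math.acos(t) + t * math.sqrt(1.0 - t**2))

def S_mm_penetrable(r, R, c_m):
    """Two-point matrix probability for fully penetrable cylinders, Eq. (2.52)."""
    rho = -math.log(c_m) / (math.pi * R**2)   # intensity from Eq. (2.53)
    return math.exp(-rho * omega_u(r, R))

R, c_m = 1.0, 0.5
```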

Figure 2.5: Microstructural functions for fully penetrable cylinders.

To conclude, the results displayed in Fig. 2.4 and Fig. 2.5 show that the proposed methods are sufficiently exact to provide reliable values of the given statistical functions.

2.4 Analysis of a real microstructure

In this Section, the preceding procedures are applied to the analysis of the micrograph shown in Fig. 1.1. As a first step, an idealized binary image of the micrograph is produced using the LUCIDA software to determine the radius and position of all particles in the sample. In the following analysis, the assumption of statistical homogeneity is applied, so every considered micrograph is assumed to be surrounded by periodic replicas of itself.


Figure 2.6: Idealized binary image of a transverse cross-section of a fiber tow

Before analyzing the present micrograph, the validity of the ergodic assumption and of the statistical isotropy hypothesis is checked. Provided that these tests justify the assumptions, all previously defined statistical functions can be used for the description of the present medium.

2.4.1 Testing the ergodic hypothesis

To test the ergodic hypothesis it is necessary to form the ensemble space U. When sampling individual members of U we started with three micrographs of the fiber tow taken from three specimens at approximately the same location. Each member of the ensemble was then found through a random cut of a part of a given micrograph, subject to the condition of the "same" fiber volume fraction. This condition actually compensates for the lack of infinite extent of our composite medium. Fig. 2.7 shows six such individuals generated from the micrograph displayed in Fig. 2.6. In view of the above comments we shall only require that

cr = (1/n) Σ_{i=1}^{n} Sri,   r = f, m,    (2.54)

where n is the number of members in the ensemble. The functions Sri can be derived by randomly placing a point in the member i a certain number of times while counting the number








Figure 2.7: Selected members of the sample space



Figure 2.8: Two-point matrix probability function Smm(x − x′) and variation coefficient v(φ) of Smm(r, φ) plotted as a function of r/R.

of hits in the phase r and then dividing by the total number of throws. Setting the number of throws equal to 500, we found Sf = 0.42, which agrees well with the average fiber volume fraction cf = 0.435; better agreement can be expected for larger n. Although an ultimate justification of the ergodic assumption would require proving equality of the higher moments as well, we argue that the presented results are sufficient for the medium to be considered ergodic, provided the medium is indeed statistically homogeneous. In the sense of the ergodic assumption, we suggest that a single micrograph can be used hereafter for the evaluation of the required statistical descriptors.

2.4.2 Test of statistical isotropy

To check the statistical isotropy of the medium under consideration we plotted the distribution of the two-point matrix probability function Smm for a statistically uniform medium as a function of the absolute difference of the points x and x′ and of the orientation φ, Fig. 2.8(a). These results were obtained by 100,000 random throws of the sampling template shown in Fig. 2.1. Should the material be statistically isotropic (independent of orientation), the variation coefficient v(φ) of Smm(φ)|r/R for a given ratio r/R would be equal to zero. For the purpose of our next study, however, the material shall be treated as statistically isotropic, and the maximum 5% variation evident from Fig. 2.8(b) is taken as an acceptable error of this assumption.

Figure 2.9: Two-point matrix probability Smm(r) and unique relations for the general two-point probability function Srs(r).

2.4.3 Microstructure describing functions

Once all the assumptions have been verified to some extent, all microstructure describing functions can be determined for the current microstructure.

Two-point probability functions. First, the effect of the number of random throws on the value of the function Smm was inspected. Results appear in Fig. 2.9(a). It is evident that for the present microstructure 5,000 repetitions are sufficient, which represents approximately one minute of computation on a PC with an Intel Pentium 200MMX processor. Fig. 2.9(b) further shows certain unique relationships for the isotropic and ergodic medium, based on the relations given in Table 2.1. As is evident, the proposed relations hold very well; the obtained functions are therefore sufficiently precise.


Figure 2.10: Second-order intensity function K(r) (determined using the correction factor and assuming periodicity) and radial distribution function g2(r) (obtained from K(r) with steps Δr = 0.125R, 0.25R, and 0.5R)

The two-particle probability density based functions. We start with the determination of the function K(r). Results for K(r) determined with the help of the correction factor and with the periodicity condition are shown in Fig. 2.10(a). The slight deviation of the curves shows that the results obtained with the correction factor are partially biased, especially for larger values of r; even bigger differences are to be expected for micrographs with a smaller number of particles. The function g2(r) was determined from the function K(r) by numerical differentiation. The results displayed in Fig. 2.10(b) were found by the central difference formula with various step sizes. As can be anticipated, the results depend strongly on the step size; moreover, for smaller values of Δr the function oscillates substantially.

Finally, the relation between the function Smm and the functions K(r) and g2(r) is inspected. To evaluate the integrals (2.42) and (2.43), an adaptive Simpson integration scheme was used (see [Burden and Faires, 1989]) with a prescribed tolerance of 10⁻⁴. The results shown in Fig. 2.11 were obtained from the K(r) or g2(r) functions and from a simulation with 50,000 repetitions of the sampling



Figure 2.11: The two-point matrix probability function

procedure. The coincidence of the individual curves is evident. It is necessary to note that this procedure took only about 1 second on a PC with an Intel Pentium 200MMX processor, with most of the computation time consumed by the numerical integration; the time needed for the determination of the function K(r) itself was negligible.

2.4.4 Conclusions

To conclude, the medium under consideration was tested, to some extent, for the ergodic assumption and for statistical isotropy, and these hypotheses were confirmed for the current micrograph of a unidirectional fiber tow. The relations between the presented functions make it possible to choose any of them as the basic statistical descriptor; in the following optimization procedure the function K(r) will be used as the basic microstructure describing function, because its evaluation is the fastest of all the functions mentioned heretofore.

In Chapter 2, microstructure characterizing functions were identified and their applicability to the description of the current microstructure was examined. This Chapter is devoted to the principal objective of this work: determination of a periodic unit cell which possesses statistical properties similar to those of the original microstructure. The similarity of the two materials in terms of microstructure configuration is measured by an objective function incorporating a certain statistical descriptor. Section 3.1 defines such an objective function, using the second-order intensity function K(r) as the basic statistical descriptor of the medium and certain geometrical parameters of the unit cell (the locations of the particle centers and the dimensions of the unit cell) as its free parameters. The process of finding the minimum of the objective function is divided into two steps: finding the optimal positions of the fibers for fixed dimensions of the unit cell, and then finding the optimal dimensions of the unit cell. The second step represents a one-dimensional minimization problem, which can easily be solved by an elementary optimization procedure. The first step, however, requires the minimization of a multi-dimensional, multi-modal function with a large number of local minima, which is a very difficult task in itself. A variety of deterministic and stochastic algorithms is tested in Sections 3.2 and 3.3, respectively. Section 3.4 presents some additional improvements of the stochastic optimization procedures. Finally, Section 3.5 is devoted to the determination of the resultant unit cells.



3.1 Objective function and problem definition

Figure 3.1: Geometry of the periodic unit cell

Consider the periodic unit cell consisting of N particles displayed in Fig. 3.1. The geometry of such a unit cell is determined by the dimensions H1 and H2 of the unit cell and by the x and y coordinates of all particles. The principal problem is now to select these values in such a way that the material formed by this periodic unit cell is as similar as possible to a given original microstructure. In order to quantify the similarity between two distinct microstructures, it is convenient to introduce an objective function which incorporates characteristics pertinent to both microstructural configurations. In the present context, the relevant characteristics can be provided by any of the microstructure describing functions presented in Chapter 2. Here, the objective function is chosen as

F(xN, H1, H2) = Σ_{i=1}^{Nm} [ (K0(ri) − K(ri)) / (πri²) ]²,    (3.1)

where the vector xN = {x1, y1, ..., xN, yN} stands for the positions of the particle centers of the periodic unit cell, xi and yi correspond to the x and y coordinates of the i-th particle, H1 and H2 are the dimensions of the unit cell, and K0(ri) is the value of the K function corresponding to


the original medium evaluated at the point ri, and Nm is the number of points at which both functions are evaluated.

The choice of the function K(r) for the optimization problem is primarily attributed to its simplicity and very rapid evaluation for a reasonable number of particles N. In addition, the selected form of the objective function F, Eq. (3.1), namely the "normalization" term πr², is quite useful, since it serves directly as a "natural" penalization when particles happen to overlap. Therefore, no additional algorithmic labor for avoiding unacceptable solutions [Povirk, 1995] is needed if the material is formed by impenetrable particles.

To find the optimal periodic unit cell, the values of xN, H1, and H2 which minimize the function F must be determined. Solving this problem in its full generality is prohibitively difficult; it is therefore convenient to split the minimization procedure into two separate problems, namely:

Optimal fiber configuration. For a given number of fibers N, dimensions H1 and H2 of the unit cell, and values of the original function K0(r) evaluated at points ri, i = 1, ..., Nm, find the configuration of particle centers xN(H1, H2) such that

xN(H1, H2) = arg min_{xN ∈ S} F(xN, H1, H2),    (P1)

where S denotes a set of admissible vectors xN. A noteworthy feature of this formulation is that the set S of admissible vectors can be defined in different ways, as is advantageous for a given optimization procedure:

• When we take into account the periodicity of the unit cell, the individual components of the vector xN can take arbitrary values, so the problem can be treated as an unconstrained one.

• Another possibility is to restrict the positions of the fibers to the unit cell,

  0 ≤ xi ≤ H1,  0 ≤ yi ≤ H2,  i = 1, ..., N,    (3.2)

so the problem becomes bound-constrained.


• Finally, in Section 3.4.2 the impenetrability condition will be imposed, which, with the use of some additional tricks, results in a substantial improvement of the optimization procedure's performance.

Once this problem is solved, the only unknown parameters are the values of H1 and H2. Furthermore, if we require that the periodic unit cell has the same volume fraction of phases cr as the original microstructure, the problem reduces to finding the optimal ratio η = H1/H2 only. Thus, the second problem can be stated as

Optimal ratio H1/H2. For known values of xN(η) and for a fixed volume fraction of phases, find the value of the ratio ηN such that:

    ηN = arg min_{η ∈ ⟨ηa; ηb⟩} F(xN(η)),        (P2)


where the values ηa and ηb should be chosen to cover all reasonable dimensions of the unit cell. While problem (P2) presents a one-variable optimization problem only, solving problem (P1) requires locating the global minimum of a multi-dimensional function. Moreover, for the present problem the function F is multi-modal and noisy, with a large number of plateaus and local minima (see Fig. 3.2(b)), which further complicates the situation. Therefore, we first turn our attention to the solution of problem (P2); the various approaches developed for solving problem (P1) are discussed in Sections 3.2–3.4. Let us suppose for the moment that we are able to solve problem (P1). Then the simplest algorithm suitable for one-dimensional optimization is the Golden Section search method [Press et al., 1992, Chapter 10.1]. Roughly speaking, given an initial triplet of points a, b, c, this algorithm generates a new point which divides the interval ⟨a; b⟩ or ⟨b; c⟩ in a prescribed ratio – the Golden section. The new point then replaces one of the points a, b, c according to its function value and position. The principle of this method is shown in Algorithm 3.1.


1  supply values a < b < c such that f(a) > f(b) ∧ f(c) > f(b)
2  while ((c − a) > ε) {
3    determine new point d
4    if (f(d) < f(b))
5      b = d, update a, c with respect to step 1
6    else
7      if (d < b) a = d
8      else c = d
9  }

Algorithm 3.1: Golden Section search
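To make the one-dimensional search concrete, Algorithm 3.1 can be sketched in a few lines of Python. This is a minimal illustration, not the thesis implementation; the quadratic test function below merely stands in for the (P1) minimum as a function of the side ratio and is purely hypothetical.

```python
import math

def golden_section(f, a, b, c, eps=1e-3):
    """Golden Section search (Algorithm 3.1): a < b < c must bracket a
    minimum, i.e. f(a) > f(b) and f(c) > f(b)."""
    w = (3.0 - math.sqrt(5.0)) / 2.0     # Golden-section fraction ~0.382
    fb = f(b)
    while (c - a) > eps:
        # place the new point d inside the larger of the two intervals
        if (b - a) > (c - b):
            d = b - w * (b - a)
        else:
            d = b + w * (c - b)
        fd = f(d)
        if fd < fb:                      # d becomes the new central point
            if d < b:
                c = b
            else:
                a = b
            b, fb = d, fd
        elif d < b:                      # keep b, shrink from the left
            a = d
        else:                            # keep b, shrink from the right
            c = d
    return b, fb

# usage: minimum of a smooth test function bracketed on (1, 2)
eta, F_min = golden_section(lambda x: (x - 1.5)**2, 1.0, 1.4, 2.0)
```

The bracketing invariant f(a) > f(b) < f(c) is preserved at every step, so the interval always contains a minimum and shrinks geometrically.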

Step 1 The initial points a and c correspond in the present context to the values ηa and ηb in (P2), and the function value f(·) represents here the minimum value of the function F found for the optimization problem (P1) for the given side ratio.

Step 3 The new point is located in the larger of the two intervals ⟨a; b⟩ and ⟨b; c⟩; its distance from point b is (3 − √5)/2 times the length of the larger interval.

For all computations performed here, the parameters ηa and ηb were set to 1.0 and 2.0, respectively. To check whether the values ηa and ηb are consistent with the requirements of step 1 of Algorithm 3.1, let us recall the results obtained in Section 2.3.3 for the function K(r) associated with the hexagonal packing of fibers. As was shown there, every regularity of the microstructure results in a jump of the function K(r), with an amplitude corresponding to the number of fibers located at the same distance from a given particle. Therefore, for the side ratios η = 1.0 and η = 2.0, e.g. for values of r = iH1, i = 1, . . . , ∞, four other fibers are always found, while this situation does not occur for intermediate values of η. Because the function K(r) relevant to the original microstructure does not possess such jumps (see Fig. 2.10), the value of the function F for η ∈ (1; 2) should always be smaller than at the interval endpoints.

3.2 Deterministic algorithms

In the previous Section, the procedure for the determination of the periodic unit cell was established, provided that we are able to find the optimal fiber configuration for a given size of the unit cell. Some commonly used deterministic optimization procedures are presented here to obtain the solution of problem (P1). The most widely used approach for finding the global minimum relies on restarting the optimization procedure from widely varying points [Press et al., 1992, p. 394] and then identifying the lowest function value found as the global minimum. In [Yuret, 1994] a simple method was proposed to enhance this procedure. Instead of determining the starting points randomly, it explores the portions of the search space far from previously found local minima in order to avoid being trapped in the same minimum again. The method is summarized in Algorithm 3.2, where X corresponds to the set of obtained solutions, and L and U correspond to the vectors of lower and upper bounds on the components of x. The Local optimize procedure in step 4 of Algorithm 3.2 corresponds to a given local optimization algorithm. In the present work, three methods were tested and implemented – namely Powell's method, the Dynamic Hill Climbing method and the BFGS algorithm. Because all these local optimizers are formulated for unconstrained optimization problems, the periodicity of the unit cell is taken into account to allow the individual components of vector xN to take arbitrary values. Finally, the self-explanatory Algorithm 3.3 shows the procedure for generating the distant points used in this work.


1  X = {L, U}
2  while (not termination-condition) {
3    x0 = Distant point(X)
4    x = Local optimize(x0)
5    X = X ∪ x
6  }

Algorithm 3.2: Principle of deterministic optimizations

1  for each dimension d {
2    sort xi ∈ X by xi[d]
3    find j : (xj[d] − xj−1[d]) is maximal
4    x[d] = u(xj−1[d], xj[d])
5  }

Algorithm 3.3: The Distant point algorithm
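A minimal Python sketch of the Distant point procedure (Algorithm 3.3) follows; the data layout (a list of previously found points, seeded with the bound vectors L and U) is an illustrative assumption, not the thesis code.

```python
import random

def distant_point(X, dims):
    """Algorithm 3.3: pick a new starting point far from the solutions
    collected so far.  X is a list of previously found points, initially
    containing only the lower and upper bound vectors L and U."""
    x = [0.0] * dims
    for d in range(dims):
        coords = sorted(p[d] for p in X)
        # find the largest gap between neighbouring coordinates in dim d
        j = max(range(1, len(coords)),
                key=lambda i: coords[i] - coords[i - 1])
        # sample the new coordinate uniformly inside that gap
        x[d] = random.uniform(coords[j - 1], coords[j])
    return x

# usage: with only the bounds L = (0, 0) and U = (10, 10) recorded,
# the first distant point falls somewhere inside the whole box
x0 = distant_point([[0.0, 0.0], [10.0, 10.0]], dims=2)
```

Once a local minimum at, say, (2, 2) has been recorded, every subsequent distant point avoids its neighbourhood by construction, since the largest per-dimension gap lies away from it.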



3.2.1 Powell’s method (POWELL)

The principle of Powell's method is to perform a sequence of one-dimensional line minimizations (similar to the one given by Algorithm 3.1) along a set of given directions, typically taken as unit vectors. However, in the numerical tests performed here, a modification of this classical approach based on a heuristic choice of the minimization direction (see [Press et al., 1992, Chapter 10.5] for details) was used.

3.2.2 Dynamic Hill Climbing (DHC)

Another non-derivative method is the Dynamic Hill Climbing procedure proposed by Yuret. This method combines certain heuristic rules to enhance the performance of the optimization – a variable step length, which expands when better solutions are found and shrinks when no progress is made, combined with a short-term memory. For more information see [Yuret, 1994].

3.2.3 BFGS method

Finally, the BFGS method was chosen as a representative of gradient-based methods. The implementation of this method was again taken from [Press et al., 1992, Chapter 10.7]. The values of the gradients were obtained by the forward difference formula with step Δh = εxi or Δh = εyi, respectively.

3.2.4 Test example

To test the individual methods we assumed a square periodic unit cell consisting of 10 fibers with the same volume fraction as the real specimen. The functions K0(r) and K(r) were fitted at five points (Nm = 5), spaced by the fiber diameter. Each algorithm, with the parameters displayed in Appendix B, Table B.1, was run 20 times. The iteration process was terminated if one of the following conditions was met:

• The algorithm returned a value F(x) ≤ ε = 6 × 10−5.



Figure 3.2: An admissible unit cell

• The number of function evaluations exceeded 250,000.

• The number of discovered local minima exceeded 10,000.

For each run, the number of function evaluations was recorded together with the minimum attained value of the objective function. To demonstrate the complexity of the problem, an admissible unit cell consisting of 10 fibers together with an example of the objective function F is shown in Fig. 3.2. The coordinates x1 and y1 in Fig. 3.2(b) represent the location of the filled fiber center; the positions of the remaining fibers are fixed. The minimum of the function F for the present configuration is marked by a hollow circle.

Results and discussion. Table 3.1 shows the minimum, maximum, and average of the minima found by the different local optimization procedures. These values show that the performance of all methods on the current problem is quite poor. The worst result is delivered by the BFGS method, which was unable to locate the prescribed value of the objective function in any run. This is probably caused by the presence of plateaus in the objective function (see Fig. 3.2(b)), where its derivatives vanish. The performance of Powell's method is slightly better, since the returned values of the objective function are substantially smaller than for the BFGS algorithm. However, this method still succeeded only once in finding the required value of the objective function. The DHC method is clearly the most successful of all the tested methods. Still, a roughly 1:3 chance of locating the prescribed value of the objective function is simply not satisfactory. Moreover, to obtain such results, quite a large number of function evaluations must be performed (see Table 3.2). To reduce this quantity it is necessary to provide more promising starting points x0 than those supplied by Algorithm 3.3.

3.3 Stochastic optimization methods

The results obtained in the previous Section imply that, to locate the global minimum of the current objective function, it is necessary to explore the search space S more thoroughly instead of starting a deterministic optimizer from quasi-randomly generated points. A variety of stochastic methods use such an approach. In the sequel, a number of evolutionary algorithms are formulated to solve the given optimization problem; namely, four different modifications of a simple genetic algorithm and the method of differential evolution are addressed. Moreover, an algorithm based on the simulated annealing procedure is proposed here. At the end, the performance of these algorithms is compared with those presented in the previous Section. To define the search space S one may now adopt the second option mentioned in Section 3.1


Algorithm    Number found    Returned value ×10^5
                             Min        Avg       Max
POWELL       1/20            6.0        34.9      122.3
DHC          7/20            5.9        9.7       25.4
BFGS         0/20            2030.8     6992.5    11617.3

Table 3.1: Minimum values of function F



Algorithm    Number of evaluations
             Min        Avg        Max
POWELL       244,258    244,258    244,258
DHC          99,128     147,706    216,586
BFGS         -          -          -

Table 3.2: Number of function evaluations for deterministic optimizers

and restrict the position of the fibers to the area of the unit cell only.

3.3.1 Principle of genetic algorithm

Genetic algorithms (GAs), as stated in the introductory part, are formulated using a direct analogy with the evolution processes observed in nature, which is the source of the fundamental difference between traditional optimizers and GAs. Genetic algorithms, in contrast to the traditional methods presented in the previous Section, work simultaneously with a population of individuals, exploring a number of new areas of the search space in parallel and thus reducing the probability of being trapped in a local minimum. As in nature, individuals in a population compete with each other for survival, so that fitter individuals tend to progress into new generations, while the poor ones usually die out. This process is briefly described in Algorithm 3.4, which provides the basic steps of a single GA cycle: the reproduction phase (step 5), recombination (step 6), and selection of a new population (step 7). In the next Subsection we first explore the mechanisms of selection of the new individuals which undergo the genetic process (step 5), and then we proceed with the basic operators controlling step 6. Steps 5 and 7 will be explained in more detail when formulating the various algorithms for solving the optimization problem.


1  t = 0
2  generate P0, evaluate P0
3  while (not termination-condition) {
4    t = t + 1
5    select Mt from Pt−1                 (apply sampling mechanism)
6    alter Mt                            (apply genetic operators)
7    create Pt from Mt and evaluate Pt   (insert new individuals into Pt)
8  }

Algorithm 3.4: Principle of genetic algorithm
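To make the cycle concrete, the following minimal Python sketch instantiates Algorithm 3.4 with illustrative stand-ins for each step: a binary tournament for step 5, uniform crossover plus a small Gaussian mutation for step 6, and elitist replacement for step 7. These concrete choices and all names are ours, not the thesis implementation.

```python
import random

def genetic_algorithm(f, dims, pop_size=20, generations=100, seed=0):
    """Sketch of Algorithm 3.4, minimizing f on the box [0, 1]^dims."""
    rng = random.Random(seed)
    P = [[rng.random() for _ in range(dims)] for _ in range(pop_size)]
    for t in range(generations):
        # step 5: fill the mating pool Mt by binary tournaments
        M = []
        for _ in range(pop_size):
            a, b = rng.sample(P, 2)
            M.append(a if f(a) < f(b) else b)
        # step 6: alter Mt by uniform crossover and Gaussian mutation
        offspring = []
        for i in range(0, pop_size, 2):
            x, y = M[i], M[(i + 1) % pop_size]
            child = [xi if rng.random() < 0.5 else yi
                     for xi, yi in zip(x, y)]
            child = [min(1.0, max(0.0, c + rng.gauss(0.0, 0.05)))
                     for c in child]
            offspring.append(child)
        # step 7: new population = best survivors plus the offspring
        P = sorted(P, key=f)[:pop_size - len(offspring)] + offspring
    return min(P, key=f)

# usage: minimize a simple quadratic with minimum at (0.5, 0.5)
best = genetic_algorithm(lambda x: sum((xi - 0.5)**2 for xi in x), dims=2)
```

The elitist step 7 guarantees that the best individual found so far is never lost, so the best objective value is monotonically non-increasing over generations.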


Sampling mechanisms

In step 5 of Algorithm 3.4, the individuals selected for reproduction are copied to the "mating" pool according to their relative performance, referred to as their "fitness" or "figure of merit". In the case of function optimization it is simply equal to the function value, or rather its inverse when solving a minimization problem. The expected number of copies each individual should receive in the mating pool Mt is given by

    ei = (si / Σ_{j=1}^{P} sj) M,    si = 1/(δ + fi),    fi ≥ 0,        (3.3)


where P is the number of individuals in the population, M is the number of individuals in the mating pool Mt, and fi is the function value associated with the i-th individual; si = fi when solving a maximization problem. The parameter δ is a small positive number which prevents division by zero. The procedure which allocates the new individuals into the mating pool Mt according to Eq. (3.3), and thus turns the expected number of offspring (a real number) into the actual number of offspring (an integer), is called a sampling mechanism. In the sequel, the procedures used to perform this step are summarized.
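Eq. (3.3) can be sketched in a few lines of Python; this is a minimal illustration with our own names, covering only the minimization case.

```python
def expected_copies(f_values, M, delta=1e-6):
    """Expected number of mating-pool copies per individual, Eq. (3.3),
    for a minimization problem: s_i = 1/(delta + f_i) and
    e_i = s_i / sum_j(s_j) * M.  delta guards against division by zero."""
    s = [1.0 / (delta + fi) for fi in f_values]
    total = sum(s)
    return [si / total * M for si in s]

# usage: the fittest individual (lowest f) gets the most expected copies
e = expected_copies([0.1, 0.2, 0.7], M=6)
```

The sampling mechanisms described next differ only in how these real-valued expectations are rounded into an integer number of copies.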


Fitness scaling. Usually it is not desirable to sample individuals according to their raw fitness. In such a case the best individuals may receive a large number of copies in a single generation, so after a small number of GA cycles all individuals start to look alike and the algorithm usually converges prematurely to a local minimum. To compress the range of fitness, a linear scaling (shifting) of the fitness function is incorporated into the sampling procedures [Goldberg, 1989]. The relation between the raw fitness s and the scaled fitness s′ is simply s′ = as + b. The standard procedure for the determination of the constants a and b is to require the average raw fitness savg and the average scaled fitness s′avg to be the same. The second condition is then provided by prescribing the maximal scaled fitness as s′max = Cmult savg, where values Cmult ∈ ⟨1.2; 2⟩ are usually used.

Roulette wheel selection. In this method the probability of selecting the i-th individual is set to pi = si / Σ_{j=1}^{P} sj. As the first step of a selection, a real number p with uniform distribution on the interval ⟨0; 1⟩ is generated, and then the first individual for which the cumulative probability Σ_{i=1}^{j} pi exceeds p is inserted into the mating pool. This procedure is repeated until the mating pool is full. More details can be found in [Goldberg, 1989, Kvasnička, 1993]. Note that this procedure is often described as an analogy of a single spin of a roulette wheel with one marker, where the parts of the wheel corresponding to the individual members are allocated with respect to the values pi.

Stochastic universal sampling (SUS). Stochastic universal sampling is a generalization of the previous selection method proposed by Baker [Baker, 1987]. The standard roulette wheel is marked with equally spaced pointers indicating the individuals selected for reproduction. The number of pointers equals the number of desired individuals, which are all obtained by a single spin of the wheel.

Remainder Stochastic Independent Sampling (RSIS) [Baker, 1987]. This method first allocates individuals according to the integer part of their ei. Then, to sample the


fractional parts, we first generate a random real number r uniformly distributed within the range ⟨0; 1⟩. Then we check whether the fractional part of ei for the i-th individual is greater than r. If yes, the individual is added to the mating pool and its fractional part is set to zero. If not, we move to the next individual. These steps are repeated until the number of individuals in the mating pool equals M.

For the sake of completeness we also briefly mention two selection methods which are not based on the expected number of copies each individual should receive in the mating pool. The main advantage of this approach is that it essentially prevents the best individuals from causing premature convergence of the whole population.

Tournament selection. Whenever an individual is to be selected for the mating pool, two individuals are picked randomly from the original generation, and the one with the higher fitness is copied into the mating pool. This procedure is repeated until the mating pool is full. More details about this procedure can be found in [Beasley et al., 1993a].

Normalized geometric ranking. In this method, the probability of an individual's selection is based on its rank, or position in the population, rather than on the value of its fitness. For this particular method (see [Houck et al., 1995]), probabilities are assigned to the individuals exponentially and are normalized so that the cumulative probability of selection equals 1. Thus, the probability of selecting the i-th individual is pi = q′(1 − q)^(r−1), where q is the probability of selecting the best individual in the population, r is the rank of the i-th individual with respect to its raw fitness, and q′ = q/(1 − (1 − q)^P).

3.3.3 Genetic operators

Let us now proceed with step 6 of Algorithm 3.4, where individual members of the population breed in order to combine their (we hope) good characteristics to produce (we hope) a better


offspring. To accomplish this, the solution vector xN must be transformed into genetic information – a chromosome formed by a sequence of genes. The mating process is then provided by applying genetic operators to the individual members of the population. In the classical version of the genetic algorithm [Goldberg, 1989] the chromosomes are represented by binary numbers and only two basic operators are used – single-point crossover and mutation. Unfortunately, this representation is not suitable for the considered problem, where the solution is formed by multi-dimensional real vectors. To encode such a vector into a binary representation with reasonable precision, a huge chromosome must be used, which leads to poor performance of the algorithm.

In view of this general conclusion we now abandon the binary representation of parameters and turn our attention to a floating-point representation of genes instead. This step brings a number of advantages. First of all, using real numbers easily allows representation to machine precision. In addition, the search operators work directly in the search domain, so no mapping between the representation space and the search space is required. This is a direct consequence of the floating-point implementation, where each chromosome is coded as a vector of floating-point numbers of the same length as the solution vector. In particular, each individual – a configuration of particles in the unit cell – is represented by a real-valued chromosome X = {x1, . . . , x2N}. The individual components of this vector are related to the actual fiber centers by x2i−1 = xi and x2i = yi for i = 1, . . . , N, where N is the number of fibers within the unit cell and xi and yi represent the x and y coordinates of the i-th particle. Moreover, the different representation of genes calls for a suitable choice of genetic operators. Michalewicz in [Michalewicz, 1992, Michalewicz et al., 1994] proposed a group of real-valued genetic operators which are formulated for a convex search space S, so they can be directly used for the solution of the considered problem. The description of these operators follows.

Let Li and Ui represent the lower and upper bound for each variable xi, respectively. Further assume that vector X represents a parent, whereas vector X′ corresponds to an offspring; u(a, b) is a real number and u[a, b] is an integer number with uniform distribution defined on the closed interval ⟨a; b⟩. The following operators can now be defined:

Uniform mutation: Let j = u[1, 2N] and set:

    x′i = u(Li, Ui)    if i = j,
    x′i = xi           otherwise.

Boundary mutation: Let j = u[1, 2N], p = u(0, 1) and set:

    x′i = Li    if i = j, p < .5,
    x′i = Ui    if i = j, p ≥ .5,
    x′i = xi    otherwise.

Non-uniform mutation: Let j = u[1, 2N], p = u(0, 1) and set:

    x′i = xi + (Li − xi)f(t)    if i = j, p < .5,
    x′i = xi + (Ui − xi)f(t)    if i = j, p ≥ .5,
    x′i = xi                    otherwise,

where f(t) = u(0, 1)(1 − t/tmax)^b, t is the current generation, tmax is the maximum number of generations and b is the shape parameter. This operator allows for local tuning, as it searches the space uniformly at early stages and very locally at later stages.
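As an illustration, the non-uniform mutation just defined can be sketched in Python; the rng plumbing and function names are ours, and the shape parameter default is an arbitrary example value.

```python
import random

def non_uniform_mutation(x, L, U, t, t_max, b=2.0, rng=random):
    """Non-uniform mutation: one randomly chosen gene j is shifted towards
    its lower or upper bound by a fraction f(t) = u(0,1)*(1 - t/t_max)**b
    that shrinks as the generation counter t approaches t_max."""
    xp = list(x)
    j = rng.randrange(len(x))
    f_t = rng.random() * (1.0 - t / t_max) ** b
    if rng.random() < 0.5:
        xp[j] = xp[j] + (L[j] - xp[j]) * f_t   # move towards lower bound
    else:
        xp[j] = xp[j] + (U[j] - xp[j]) * f_t   # move towards upper bound
    return xp

# usage: late in the run (t close to t_max) the perturbation is tiny,
# so the operator only fine-tunes the current solution
x_new = non_uniform_mutation([0.5, 0.5], [0.0, 0.0], [1.0, 1.0],
                             t=99, t_max=100)
```

Because the gene moves a fraction f(t) ≤ 1 of the distance to a bound, the offspring always stays inside ⟨Lj; Uj⟩.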

Multi-non-uniform mutation: Non-uniform mutation applied to all variables of X.

Simple crossover: Let j = u[1, 2N] and set:

    x′i = xi    if i < j,        y′i = yi    if i < j,
    x′i = yi    otherwise;       y′i = xi    otherwise.

Simple arithmetic crossover: Let j = u[1, 2N], p = u(0, 1) and set:

    x′i = pxi + (1 − p)yi    if i = j,        y′i = pyi + (1 − p)xi    if i = j,
    x′i = xi                 otherwise;       y′i = yi                 otherwise.

Whole arithmetic crossover: Simple arithmetic crossover applied to all variables of X.

Heuristic crossover: Let p = u(0, 1) and set:

    X′ = (1 + p)X − pY,
    Y′ = X,

where X is the better of the two individuals in terms of fitness. If X′ ∉ S, then a new random number p is generated until the feasibility condition (X′ ∈ S) is met or the maximum allowed number of heuristic crossover applications is exceeded.

Now that the principal steps of genetic algorithms have been explored, we proceed with four particular examples of a genetic algorithm. For each of these algorithms, the relevant steps of Algorithm 3.4 are discussed more thoroughly.

3.3.4 Genetic algorithm I (GA I)

Let us start with the simplest one, usually termed the Steady State GA. Reproduction is implemented through the weighted roulette wheel, and only one or two offspring are created within each generation. For better understanding we now review the relevant steps:


Step 5 By spinning the roulette wheel, select the r individuals from population Pt−1 required for mating (one individual when a mutation operator is applied, two individuals when a crossover operator is applied). These individuals are temporarily stored in the mating pool Mt. Linear scaling is used to reduce the common threat of premature convergence to a local optimum.

Step 6 Alter Mt by applying either crossover or mutation operators. In our case, the mutation operators are used twice as often as the crossover operators.

Step 7 Based on the number of new offspring created, select a corresponding number of individuals from Pt−1 to die, using the inverse roulette wheel. Insert the new offspring into Pt−1 to create Pt.

3.3.5 Genetic algorithm II (GA II)

This algorithm closely resembles the simple genetic algorithm described in [Goldberg, 1989] with only minor changes. To reduce the statistical errors associated with roulette wheel selection, the Remainder Stochastic Independent Sampling procedure is employed in this case. As for GA I, we now review the important steps of Algorithm 3.4:

Step 5 By applying the RSIS, sample individuals from Pt−1 and copy them into the mating pool Mt. Note that precisely P individuals are selected for reproduction; this sampling method thus falls into the category of preservative and generational selections according to the classification of [Michalewicz, 1992]. Similar actions as in GA I are taken to deal with premature convergence.

Step 6 Genetic operators are applied to all individuals in Mt. Each operator is used a prescribed number of times, depending on the population size, and the new individuals are placed into a temporary population P′t. Parents for breeding are selected uniformly.

Step 7 Create a new population Pt by successively replacing the worst individuals of Pt−1 by the individuals from the temporary population P′t.



3.3.6 Genetic algorithm III (GA III)

This algorithm is essentially a replica of Michalewicz's modGA algorithm presented in [Michalewicz, 1992, p. 59]. It employs the stochastic universal sampling mechanism, because it allows us to select an arbitrary number of individuals for the mating pool by a single wheel spin. This is particularly valuable in the modGA, which is characterized by the following steps:

Step 5a Using the SUS, select a subset of n individuals from Pt−1 for reproduction and copy them to Mt. Note that each member of Mt can appear only once in the reproduction cycle.

Step 5b Again using the SUS, select exactly P − n individuals from Pt−1 and copy them to the new population Pt.

Step 6 Select parents uniformly from Mt to produce exactly n offspring (as in GA II, but in this case the genetic operators act only on the n individuals stored in the mating pool).

Step 7 Add the new offspring to population Pt.

3.3.7 Hybrid genetic algorithm (HGA)

GAs are generally very efficient in finding promising areas of the search space. On the other hand, they may perform rather poorly when shooting for the exact solution with a high degree of precision (premature convergence, convergence to a local minimum, etc.). Therefore it appears logical (see also the discussion in Section 3.2.5) to combine a GA exploring the search space initially with a deterministic optimizer exploiting promising solutions locally. As is natural for GAs, this procedure can be implemented in a number of ways. When experimenting with this approach, various ideas suggested to date were combined, which eventually led to a reliable and efficient algorithm. It works with relatively small population


sizes, which makes it computationally feasible to restart the genetic algorithm after a given convergence criterion is met. Each restart is associated with a certain number of new members entering the initial population to maintain a sufficient diversity among the chromosomes. Consequently, mutation operators can be excluded from reproduction. The individual steps of this algorithm are now discussed in the sequel:

Step 2 Randomly generate a small population.

Steps 5&6 Perform standard genetic operations until convergence, or until the maximum number of generations is exceeded. To select chromosomes for reproduction, the stochastic tournament selection scheme is applied. Only crossover operators are used.

Step 7a Select the n best individuals for local search. We adopt Yuret's Dynamic Hill Climbing method (Section 3.2.2) to seek the desired optimum with a starting point provided by the GA. When the local optimizer converges, copy the new individuals into Pt.

Step 7b Add P − n randomly generated individuals to fill population Pt. This ensures diversity among the chromosomes. Go to step 5 and restart the GA.

3.3.8 Differential evolution (DE)

Differential evolution is a very simple stochastic optimization method proposed by Storn and Price [Storn and Price, 1995]. This method is reported to give very good results in various minimization problems [Storn, 1996]. To reveal its similarities with and differences from the previous algorithms, the principle of this method will be demonstrated within the particular steps of Algorithm 3.4.¹

¹ In this work only one variant – called DE1 in [Storn and Price, 1995] – was tested. The interested reader can inspect [Storn and Price, 1995] for additional variants of this method.


Step 2 The initial population should be spread as much as possible over the solution space S. The size of the initial population should be ≈ 10D, where D is the dimension of the problem.

Step 5 Individuals subjected to the operation are picked from the population sequentially; no attention is paid to the value of their fitness.

Step 6 First, a new perturbed vector Y is generated according to

    Y = Xj + F(Xk − Xl),

where j, k, l are mutually different indexes generated as integer random numbers with uniform distribution on the interval ⟨1; P⟩, F is a parameter of the method, and the superscripts refer to the position of an individual in the population. Then this new individual is subjected to a two-point crossover with starting point n = u[1, D] and endpoint (n + L)%D, where L is the length of the exchange interval and % stands for modulo, such that the probability Pr(L ≥ n) = CR^(n−1), where the crossover rate CR is the second user-defined parameter (see [Storn and Price, 1995] for more details). The new individual is then given by:

    x′i = yi    for i = n, . . . , (n + L − 1)%D,
    x′i = xi    otherwise.

Step 7 The new individual is inserted into the new population only if it is better in terms of fitness.

3.3.9 Augmented simulated annealing (AUSA)

When talking about stochastic optimization algorithms, it would be unfair not to mention another popular method using random choice to explore the solution space, namely the Augmented simulated annealing method presented in [Mahfoud and Goldberg, 1992]. This method effectively exploits the essentials of GAs (a population of chromosomes, rather than a single point in space, is optimized) together with the basic concept of the simulated annealing method guiding the search towards minimum energy states.


If we wish to put GAs and the AUSA on the same footing, we may relate the AUSA to the group of Steady state and On the fly methods [Michalewicz, 1992], in which an offspring replaces its parents immediately. The replacement procedure is controlled by the Metropolis criterion, which allows a worse child to replace its better parent only with a certain probability. The probability of accepting a worse solution is reduced as the procedure converges to the "global" minimum. The following algorithm describes an implementation of the AUSA:


 1  T = Tmax, t = 0
 2  generate P0, evaluate P0
 3  while (not termination-condition) {
 4    counter = success = 0
 5    while (counter < countermax ∧ success < successmax) {
 6      counter = counter + 1, t = t + 1
 7      select operator O
 8      select individual(s) It from Pt
 9      modify It by O,
10      creating new individual(s) I′t
11      p = exp((F(It) − F(I′t))/T)
12      if (u(0, 1) ≤ p) {
13        success = success + 1
14        insert I′t into Pt instead of parents
15        evaluate Pt
16      }
17    }
18    decrease T
19  }

Algorithm 3.5: Augmented Simulated Annealing
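The Metropolis replacement test of Algorithm 3.5 can be sketched as follows; this is a minimal Python illustration with our own names, not the thesis code.

```python
import math
import random

def metropolis_accept(F_parent, F_child, T, rng=random):
    """Metropolis criterion: a better offspring (lower F) always replaces
    its parent, while a worse one is accepted only with probability
    p = exp((F(It) - F(I't))/T), which decays as T is lowered by the
    cooling schedule T_{i+1} = T_mult * T_i."""
    if F_child <= F_parent:
        return True                      # improvement: always accept
    p = math.exp((F_parent - F_child) / T)
    return rng.random() <= p

# usage: at a low temperature, worse offspring are almost never accepted
rng = random.Random(0)
accepted = sum(metropolis_accept(1.0, 2.0, T=0.1, rng=rng)
               for _ in range(200))
```

Choosing Tmax so that roughly half of all proposed replacements pass this test (as recommended for step 11) makes the early search behave like a random walk, while the final low-temperature phase behaves like a greedy descent.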

Step 7 It is recommended to choose mutation operators with much higher probabilities than crossovers. In [Kvasnička, 1993] a ratio of ≈ 0.1 is proposed.

Step 8 New individuals are selected using the normalized geometric ranking method.


Step 11 The temperature Tmax should be chosen such that the ratio of accepted solutions to all solutions is ≈ 50%.

Step 18 This step is called the cooling schedule. We use a very simple form of cooling schedule, Ti+1 = Tmult Ti. In this step we also perform reannealing if necessary: if the actual temperature is lower than a given parameter Tmin, we set T = Tmax and copy one half of the current population into the new one, while the remaining part of the new population is generated randomly.

3.3.10 Test example

To test the performance of the proposed methods, the same problem as in Section 3.2.4 was considered. As in the previous case, each algorithm was run independently 20 times with the parameters given in Appendix B. Again, the minimum, maximum and average number of function evaluations and the value of the objective function of the best chromosome in the population were recorded. The initial population was generated purely randomly in all runs. Moreover, for all algorithms a simple procedure was implemented to maintain a sufficient diversity in the population. Before inserting an offspring X′ into the population, we first search for an individual X which satisfies

    F(X′) = F(X)  ∧  max |x′i − xi| < ε,  i = 1, . . . , 2N,

where ε is set here to 1 × 10−5. If such an individual exists, it is replaced by X′; otherwise the individual X′ enters the population following step 7 of Algorithm 3.4.

Results and conclusions. As is evident from Table 3.3, the results obtained by the various genetic algorithms and by the augmented simulated annealing are substantially better than the results obtained by the deterministic algorithms and by the method of differential evolution, which performs rather poorly in this particular example. Of the different versions of the genetic algorithms, only GA I failed (twice) to find the objective function value 6 × 10−5.
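The diversity-maintenance rule described above admits a compact sketch; the Python data structures below are illustrative assumptions, not the thesis implementation.

```python
def insert_with_diversity(population, f_values, child, f_child, eps=1e-5):
    """Before the regular insertion of step 7, look for an existing
    individual X with F(X) == F(X') whose genes all lie within eps of the
    offspring X'; if one exists, overwrite it instead of keeping a
    near-clone in the population."""
    for k, (x, fx) in enumerate(zip(population, f_values)):
        if fx == f_child and max(abs(a - b) for a, b in zip(x, child)) < eps:
            population[k] = child        # replace the near-duplicate
            return True
    return False                         # fall through to the normal step 7

# usage: a near-identical chromosome with the same objective value gets
# overwritten, while a distant one with the same value does not
pop, fv = [[0.1, 0.2], [0.5, 0.5]], [3.0, 1.0]
```

Both conditions matter: requiring only equal objective values would merge genuinely different configurations that happen to score the same, while requiring only close genes would discard useful fine-grained variation.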



Algorithm    Number found    Returned value ×10^5
                             Min       Avg       Max
GA I         18/20           6.0       7.2       16.4
GA II        20/20           5.9       6.0       6.0
GA III       20/20           5.9       6.0       6.0
DE           0/20            106.0     210.7     295.9
HGA          20/20           5.9       6.0       6.0
AUSA         20/20           5.9       6.0       6.0

Table 3.3: Characteristics of the best individual

It can be seen that the performance of the algorithms GA II, GA III, HGA and AUSA is similar. However, from the point of view of the number of function evaluations shown in Table 3.4, the HGA and AUSA algorithms are slightly better than the other ones. Finally, it is interesting to compare the results of the DHC listed in Table 3.1 with those of the HGA given in Table 3.3. It is evident that the combination of a genetic algorithm with a local optimizer is very advantageous – the algorithm was able to find the required value of the objective function 6 × 10−5 in all cases, and the average number of function evaluations is only about 5% of the original value.

3.4 Some additional improvements

In this section, we try to further improve the performance of the stochastic optimizers by implementing more advanced techniques used for genetic algorithms. In particular, adaptive operator probabilities and the incorporation of problem-dependent information into the genetic operators are examined.


Algorithm    Number of evaluations
             Min       Avg       Max
GA I         8,896     74,562    193,600
GA II        6,956     17,270    55,296
GA III       4,484     12,037    26,224
DE           –         –         –
HGA          1,613     8,856     24,404
AUSA         3,490     8,709     26,314

Table 3.4: Number of function evaluations for stochastic optimizers


3.4.1 Adaptive operator probabilities

One of the most difficult problems faced when using a stochastic algorithm is the proper setting of the algorithm's parameters. As shown in Appendix B, this task is far from trivial; e.g., the AUSA method requires seventeen different parameters to be set. Moreover, it is well known that the role of the different operators changes during the optimization run (see the discussion in [Michalewicz et al., 1994] for the present operators). In [Davis, 1989], a method for the determination of adaptive operator probabilities was established. The key idea of this approach is that operators which produce better offspring should be used more often than those performing poorly. Further, besides rewarding the operator which produced a good offspring, the operator responsible for creating its parent should be rewarded as well. Let us briefly outline how this information is gathered. Whenever a new member is added to a population in step 7 of Algorithm 3.4, a pointer is set to its parent(s) and to the operator which created it. Further, a check is performed whether the new individual is the best one in the current population. If so, the amount by which it is better is stored as a "local delta". This information is kept in an "adaptation window" which contains the last W inserted members, with parents considered M generations back. Then, after each


I inserted new individuals, the performance of each operator is computed. In the first step, every offspring in the adaptation window passes a portion P of its local delta to its parent(s), each parent passes the portion P of the received value back to its own parents, and so on. Then the deltas are summed over all members in the adaptation window for each operator, and the operator probabilities are updated: each operator keeps a portion S of its original value and the remaining part is redistributed according to the operators' performance. For more information see [Davis, 1989].

This procedure was used for the AUSA algorithm. As the first step, the initial operator probabilities were determined by the procedure suggested in [Davis, 1989]: initially, every operator was given the same probability and the parameter S was set to 0.01. Then the algorithm was run 20 times until the first adaptation took place. The adapted values from the 20 runs were averaged to obtain the initial probabilities of the various operators. The setting of the adaptation parameters is given in Table B.8 and the resulting operator probabilities are displayed in Table 3.5. It is evident that in the initial parts of the optimization process most of the progress is made by the boundary mutation and the simple arithmetic crossover.

Operator                       Initial probability
Uniform mutation               0.150
Boundary mutation              0.278
Non-uniform mutation           0.063
Multi-non-uniform mutation     0.048
Simple crossover               0.040
Simple arithmetic crossover    0.233
Whole arithmetic crossover     0.097
Heuristic crossover            0.088

Table 3.5: Initial operator probabilities resulting from adaptation procedure
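The redistribution step of the adaptation procedure can be sketched as follows. This is a simplified illustration of Davis' scheme: the bookkeeping of parents and of the adaptation window is omitted, and the function name and data layout are our assumptions.

```python
def adapt_probabilities(probs, rewards, S=0.7):
    """Redistribution step of adaptive operator probabilities (sketch).

    Each operator keeps the portion S of its current probability; the
    remaining 1 - S is redistributed in proportion to the summed "local
    deltas" (rewards) the operators earned over the adaptation window."""
    total_reward = sum(rewards.values())
    new_probs = {}
    for op, p in probs.items():
        if total_reward > 0:
            share = rewards.get(op, 0.0) / total_reward
        else:
            share = 1.0 / len(probs)   # no progress at all: share evenly
        new_probs[op] = S * p + (1.0 - S) * share
    return new_probs
```

Note that the update conserves the total probability mass: the kept portions sum to S and the redistributed shares sum to 1 − S.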

Then the algorithm was run 20 times using these initial values of the operator probabilities. The adaptivity setting was the same as in the previous example, only the value of S was set to 0.7.


Number found    Returned value ×10⁵        Number of evaluations
                Min     Avg     Max        Min      Avg      Max
12/20           5.9     11.8    36.9       2,092    6,358    25,316

Table 3.6: AUSA algorithm with adaptive operator probabilities

Results of this test are shown in Table 3.6. Unfortunately, this approach does not seem to be useful for our particular problem. The reason is obvious: the optimization procedure gets trapped in a local minimum. This is primarily caused by the fact that during the optimization run the probability of the crossover operators grows higher and higher (because they produce good offspring), so the diversity of the population decreases quickly. On the other hand, when this algorithm does succeed in locating the required value, it does so considerably faster than the stand-alone algorithm.

3.4.2 Problem-dependent operators

It is no secret that the performance of genetic algorithms can be substantially improved by using knowledge of the problem nature in the chromosome coding or for developing problem-specific operators (see [Beasley et al., 1993b, Michalewicz, 1992]). The latter approach is explored in this section. Suppose that the parent configuration we selected for reproduction meets the impenetrability constraint imposed on the fiber arrangement. In the recombination step we wish to introduce only such operators which yield an offspring configuration complying with the impenetrability condition as well. To state this in a more formal manner, we define the search space as

S = \left\{ x^i \in \langle 0; H_1 \rangle,\ y^i \in \langle 0; H_2 \rangle : \sqrt{(x^i - x^j)^2 + (y^i - y^j)^2} \geq 2R;\ i, j = 1, \ldots, N \right\}.  (3.4)

The simplest way to fulfill the above condition may rely on randomly generating a vector x until the condition x ∈ S is met (a "death penalty" applied to an infeasible individual, see e.g.


Figure 3.3: Allowable position of particle

[Povirk, 1995, Yeong and Torquato, 1998]). Clearly, this process may become prohibitively expensive, especially for a higher concentration of fibers. To deal with this problem, it is more efficient to first determine the set of possible locations of all particles and successively generate a new particle or particles which belong to this set. Unfortunately, solving this problem in its full generality is rather complicated. Thus, instead of identifying the allowable locations of all particles, we limit our attention to the set of possible locations of one particle permitted to translate either in the x or the y direction, while the coordinates of the remaining fibers are kept fixed. We denote these sets as

S|_{y^i} = \{ x^i \in \langle 0; H_1 \rangle : x \in S \},   S|_{x^i} = \{ y^i \in \langle 0; H_2 \rangle : x \in S \}.  (3.5)

To construct the above sets, imagine a collection of identical cylinders, each surrounded by a concentric protective cylindrical surface of radius 2R (see also the discussion in Section 2.2.2). In view of the impenetrability constraint, the protective cylinder cannot be occupied by another fiber's center. In particular, to identify the set S|_{y^i} for the i-th particle we draw a line y = y^i, which intersects the protective surfaces around the remaining fibers in n − 1 intervals ⟨a_j; b_j⟩, j = 1, …, (n − 1). Then the set S|_{y^i} of allowable locations


of the particle i is given by (see Fig. 3.3)

S|_{y^i} = \langle 0; H_1 \rangle \setminus \bigcup_{j=1}^{n-1} \langle a_j; b_j \rangle = \bigcup_{j=1}^{n} \langle \bar{a}_j; \bar{b}_j \rangle.  (3.6)

Similarly, we write

S|_{x^i} = \langle 0; H_2 \rangle \setminus \bigcup_{j=1}^{n-1} \langle a_j; b_j \rangle = \bigcup_{j=1}^{n} \langle \bar{a}_j; \bar{b}_j \rangle.  (3.7)
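The construction of these sets amounts to interval subtraction, which can be sketched as follows. Function and variable names are illustrative, and periodic images of the fibers are ignored for brevity.

```python
from math import sqrt

def admissible_intervals(i, coords, H, R):
    """Sketch of constructing S|y^i: admissible x-positions of fiber i
    when its y-coordinate and all other fibers are held fixed.

    `coords` is a list of (x, y) fiber centers, H the cell width and R
    the fiber radius.  The protective circle of radius 2R around fiber j
    cuts the line y = y_i in the interval <x_j - c, x_j + c> with
    c = sqrt((2R)^2 - (y_i - y_j)^2); these blocked intervals are
    subtracted from <0, H>.  (Periodic images are ignored here.)"""
    xi, yi = coords[i]
    free = [(0.0, H)]                      # start with the whole side <0, H>
    for j, (xj, yj) in enumerate(coords):
        if j == i or abs(yi - yj) >= 2.0 * R:
            continue                       # the line misses this circle
        c = sqrt((2.0 * R) ** 2 - (yi - yj) ** 2)
        blocked = (xj - c, xj + c)
        # subtract the blocked interval from every remaining free piece
        free = [piece
                for lo, hi in free
                for piece in ((lo, min(hi, blocked[0])),
                              (max(lo, blocked[1]), hi))
                if piece[0] < piece[1]]
    return free
```

Sampling a mutation position then reduces to picking one of the returned intervals and drawing a uniform number inside it.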


In the spirit of Eqs. (3.6) and (3.7) we can now define the following set of problem-specific operators (see also [Michalewicz et al., 1994]):

Uniform mutation Generate an integer i = u[1, N] with uniform distribution on the closed interval ⟨1; N⟩, select the x or y coordinate, evaluate S|_{x^i} or S|_{y^i}, select k = u[1, n] and set

x'^i = u(\bar{a}_k, \bar{b}_k)   or   y'^i = u(\bar{a}_k, \bar{b}_k),

while fixing the remaining coordinates.

Non-uniform mutation Select i = u[1, N] and the x or y coordinate, evaluate S|_{x^i} or S|_{y^i}. Select k such that x^i \in \langle \bar{a}_k; \bar{b}_k \rangle or y^i \in \langle \bar{a}_k; \bar{b}_k \rangle, generate a real number p = u(0, 1) with uniform distribution on the closed interval ⟨0; 1⟩ and set

x'^i = \begin{cases} x^i + (\bar{b}_k - x^i)\, f(t), & p < 0.5, \\ x^i + (\bar{a}_k - x^i)\, f(t), & p \geq 0.5, \end{cases}
or
y'^i = \begin{cases} y^i + (\bar{b}_k - y^i)\, f(t), & p < 0.5, \\ y^i + (\bar{a}_k - y^i)\, f(t), & p \geq 0.5, \end{cases}

while fixing the remaining coordinates. The number f(t) is the same as in Section 3.3.3.

Multi-non-uniform mutation Non-uniform mutation applied successively to all coordinates of X.


Simple two-point crossover Select i = u[1, n] and j = u[1, n] such that i < j. Repeat for k = i, …, j:

{}^1x'^k = {}^1x^k (1 - \alpha) + {}^2x^k \alpha,   {}^1y'^k = {}^1y^k (1 - \alpha) + {}^2y^k \alpha,
{}^2x'^k = {}^1x^k \alpha + {}^2x^k (1 - \alpha),   {}^2y'^k = {}^1y^k \alpha + {}^2y^k (1 - \alpha),

where e.g. {}^2x^k corresponds to the x coordinate of the k-th particle of the second parent. The parameter α is set to 1 initially. If x'^k ∉ S|_{y^k} or y'^k ∉ S|_{x^k}, respectively, α is decreased by 1/n_d, where n_d is a user-defined parameter (the number of decreasing steps). This procedure is repeated until x'^k ∈ S|_{y^k} or y'^k ∈ S|_{x^k}, respectively, holds (this condition is always met for α = 0).

These operators were implemented in the AUSA algorithm and the proof test was run again. The initial population consisted of non-penetrating particles. Parameters of the method are given in Appendix B. The results in Table 3.7 show that this approach is very efficient, since the required value of the objective function was found in all 20 runs and the average number of evaluations was reduced by 50%.

Number found    Returned value ×10⁵       Number of evaluations
                Min     Avg     Max       Min     Avg      Max
20/20           5.9     6.0     6.0       543     4,403    11,516

Table 3.7: AUSA algorithm with problem-dependent operators


3.4.3 Sampling grid refinement

To check the quality of the resultant unit cell, let us plot the second order intensity function of the original microstructure (K₀(r)) against the one associated with a unit cell chosen



Figure 3.4: Periodic unit cells with corresponding second order intensity functions (panel (b) plots K(r/R) against r/R for the original microstructure and for the unit cells obtained with Nm = 5 and Nm = 10)

as the minimum from 20 independent runs. The results, which appear in Fig. 3.4(b), were again derived via the AUSA method. Evidently, both functions agree well at the sampled points. Unfortunately, a significant deviation is distinct at all other points. To improve the coincidence of both curves, the number of sampled points Nm must be increased. Fig. 3.4 shows the results of such a procedure. It is interesting to note that when running, for example, the AUSA for Nm = 10 from the beginning, we arrived at the minimum equal to 1.63 × 10⁻⁴ after approximately 173,000 function evaluations. However, when starting with Nm = 5 and then continuing with Nm = 10, we reached the desired minimum after only 115,000 function evaluations. Thus, using the results obtained for a coarser grid of sampling points as an initial population for the subsequent optimization can save a large number of function evaluations.

3.5 Determination of periodic unit cell

In this section, the method used for the determination of a periodic unit cell is described in more detail, and the results obtained by this procedure are demonstrated and discussed.

Finding the optimal fiber configuration. The optimal fiber configuration was determined by the AUSA method (Algorithm 3.5) with the problem-specific operators (see Section


Figure 3.5: Variation of the objective function with the number of particles in the PUC.

3.4.2). Parameter settings are given in Appendix B, Table B.9. Optimization was first performed only for Nm = 4 with fibers spaced by one fiber diameter. Then the value of Nm was raised to 8, 16, 24 and 32. The sampled points were biased towards the value r = 2R in order to capture an apparent correlation of individual space points within the sample. The best 50% of individuals were copied into the new population to provide good initial guesses for the subsequent optimization.

Termination conditions. For each value of Nm, the iteration was terminated when one of the following conditions was met:

• The algorithm returned a value F(x^N, H1, H2) < ε = 10⁻⁴.

• The number of function evaluations exceeded 250,000.

The first condition is usually fulfilled for small values of Nm, while the second one takes effect in the remaining cases. The result obtained for Nm = 32 was used as the value of the objective function F(x^N, H1, H2) for given H1 and H2.


Figure 3.6: Evolution of the 10-particle PUC: (a) unit cells obtained for N = 8, 16 and 24; (b) normalized second order intensity function versus r/R for the unit cells with Nm = 8, 16 and 24 (K evaluated at 8, 16 and 24 points, respectively) and for the original micrograph

Finding the optimal side ratio. This step was performed by the Golden Section search method (Algorithm 3.1) with the values ηa = 1.0, ηb = 2.0 and a tolerance of 0.05. Results for different numbers of particles within the unit cell are shown in Fig. 3.5. The values of the objective function are relatively high; e.g., an objective function value smaller than 10⁻² is first reached for the 10-fiber periodic unit cell. To explain this fact, let us consider the results obtained for the periodic unit cell with 10 particles (Fig. 3.6). The evolution of the unit cell with increasing accuracy of the solution is plotted in Fig. 3.6(a), whereas Fig. 3.6(b) displays the variation of the normalized second order intensity function K(r)/πr² corresponding to various stages of the solution process. From this figure, two facts are evident. First, for all values of Nm there is a jump of the intensity function located at the value of r corresponding to the side length of the unit cell, which results in a quite large deviation of the two functions. Secondly, the function corresponding to the periodic medium agrees well at the sampled points but is quite noisy at the neighboring points. This is probably caused by the relatively small number of particles in the unit cell compared to the original microstructure. Nevertheless, to judge the quality of unit cells, one should take into account the effective behavior of the resultant periodic composite material rather than measure the quality of the obtained unit cells by an artificially defined objective function. This is the objective of the following chapter.

Finally, some examples of the resulting unit cells are given in Fig. 3.7, together with the hexagonal lattice shown for comparison. It is evident that the periodic unit cells resulting from the optimization capture, to some extent, the clustering of particles in the original micrograph.
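The Golden Section step mentioned above can be sketched as follows. This is a generic textbook implementation, not a transcription of Algorithm 3.1; the function name and the assumption that f is unimodal on the bracket are ours.

```python
from math import sqrt

def golden_section(f, a, b, tol=0.05):
    """Golden Section search (sketch): minimize a unimodal f on <a, b>
    by shrinking the bracket until it is shorter than tol."""
    invphi = (sqrt(5.0) - 1.0) / 2.0          # 1/phi, approximately 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while (b - a) > tol:
        if f(c) < f(d):
            b, d = d, c                        # minimum lies in <a, d>
            c = b - invphi * (b - a)
        else:
            a, c = c, d                        # minimum lies in <c, b>
            d = a + invphi * (b - a)
    return 0.5 * (a + b)
```

Each iteration shrinks the bracket by the factor 1/φ ≈ 0.618 while reusing one of the two interior points, so only one new function evaluation per step is strictly needed (the sketch above re-evaluates for clarity).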






Figure 3.7: Periodic unit cells: (a) hexagonal lattice, (b) 2-fiber PUC, (c) 5-fiber PUC, (d) 10-fiber PUC.

This chapter is devoted to testing the resultant periodic unit cells obtained by the optimization procedure of Chapter 3 from the point of view of the effective elastic behavior. The effective elastic properties of the material formed by the periodic unit cell are obtained by the Finite Element Method and compared with results provided by the Mori-Tanaka method and by the periodic unit cell with a hexagonal arrangement of fibers. Section 4.1 outlines the basic relations pertinent to the evaluation of the effective elastic properties of unidirectional composites. Then, in Sections 4.2 and 4.3, two specific procedures for determining the effective properties, namely the Mori-Tanaka method and a numerical procedure based on Finite Element analysis, are discussed. Finally, Section 4.4 presents various numerical results obtained for different periodic unit cells.

4.1 Basic relations

A common approach when dealing with composite structures is to replace their inhomogeneous microstructure with some homogenized or effective material which exhibits the same overall behavior as the non-homogeneous one. Since in this work only the elastic behavior of the composite material is considered, the homogenized medium is uniquely characterized by its effective material stiffness or compliance tensor.


The elastic behavior of such a material is determined by the displacement, strain and stress fields given by

\epsilon_{ij}(x) = \frac{1}{2} \left( \frac{\partial u_i(x)}{\partial x_j} + \frac{\partial u_j(x)}{\partial x_i} \right),  (4.1)

\sigma_{ij}(x) = L_{ijkl}(x)\, \epsilon_{kl}(x), \qquad \epsilon_{ij}(x) = M_{ijkl}(x)\, \sigma_{kl}(x),  (4.2)

\frac{\partial \sigma_{ij}(x)}{\partial x_j} + f_i(x) = 0,  (4.3)

where u_i is the displacement field, \epsilon_{ij} and \sigma_{ij} stand for the strain and stress fields, L_{ijkl} is the tensor of material stiffness, M_{ijkl} is the tensor of material compliance such that L^{-1}_{ijkl} = M_{ijkl}, and f_i is the vector of body forces.

When analyzing unidirectional fiber composite systems it is usually sufficient to use the generalized plane strain conditions. In such a state, the only non-zero components of the strain and stress tensors are \epsilon_{11}, \epsilon_{12}, \epsilon_{22}, \epsilon_{33} and \sigma_{11}, \sigma_{12}, \sigma_{22}, \sigma_{33}, respectively. Note that due to the perfect bonding between the individual phases, the components \epsilon_{33} and \sigma_{33} attain constant values.

Employing Hill's notation [Hill, 1964], the material stiffness tensor assumes the form

L = \begin{pmatrix} (k+m) & (k-m) & 0 & l \\ (k-m) & (k+m) & 0 & l \\ 0 & 0 & m & 0 \\ l & l & 0 & n \end{pmatrix},  (4.4)

where the constants k, m, n and l are related to the engineering constants of the material by

k = -\left[ 1/G_T - 4/E_T + 4\nu_A^2/E_A \right]^{-1}, \qquad l = 2 k \nu_A,
n = E_A + 4 k \nu_A^2 = E_A + l^2/k, \qquad m = G_T.
Then, to determine the effective properties of the homogenized medium, one should relate the average stress and strain fields in the spirit of Eq. (4.2). Thus, prescribing the average strain \langle \epsilon(x) \rangle = \epsilon^0 or the average stress \langle \sigma(x) \rangle = \sigma^0, the effective characteristics of the composite medium are provided by

\langle \sigma(x) \rangle = \langle L(x) : \epsilon(x) \rangle = L : \epsilon^0,  (4.5)
\langle \epsilon(x) \rangle = \langle M(x) : \sigma(x) \rangle = M : \sigma^0,  (4.6)

where \langle \cdot \rangle stands for the spatial average of the particular field and L and M are the effective stiffness and compliance tensors of the heterogeneous material.

4.1.1 Hill's lemma

The principal step in determining the effective properties is provided by Hill's relation. He proved that for a compatible strain field and an equilibrated stress field the following relation holds [Hill, 1963]:

\langle \sigma(x) : \epsilon(x) \rangle = \langle \sigma(x) \rangle : \langle \epsilon(x) \rangle.  (4.7)

This relation in fact states that the average of the "microscopic" internal work is equal to the macroscopic work done by the internal forces. Then, again prescribing the average strain \langle \epsilon(x) \rangle = \epsilon^0 or the average stress \langle \sigma(x) \rangle = \sigma^0, the effective properties can alternatively be defined using Eqs. (4.5, 4.6) as

\epsilon^0 : L : \epsilon^0 = \langle \epsilon(x) : L(x) : \epsilon(x) \rangle,  (4.8)
\sigma^0 : M : \sigma^0 = \langle \sigma(x) : M(x) : \sigma(x) \rangle.  (4.9)

4.1.2 Stress and strain concentration factors

Sometimes it is convenient to establish the connection between the average stresses and strains in the individual phases and the overall prescribed values of stress or strain [Hill, 1963]:

\langle \epsilon \rangle_r = A_r : \epsilon^0,  (4.10)
\langle \sigma \rangle_r = B_r : \sigma^0,  (4.11)

where A_r and B_r are the strain and stress concentration factors of the r-th phase, and \langle \epsilon \rangle_r and \langle \sigma \rangle_r stand for the average values of the strain and stress fields in the r-th phase, r = f, m. Noticing that

\epsilon^0 = c_m \langle \epsilon \rangle_m + c_f \langle \epsilon \rangle_f,  (4.12)
\sigma^0 = c_m \langle \sigma \rangle_m + c_f \langle \sigma \rangle_f,  (4.13)

the following relations can be established for the individual concentration tensors of a two-phase medium:

c_m A_m + c_f A_f = I,  (4.14)
c_m B_m + c_f B_f = I.  (4.15)

Thus, once the concentration factor for one phase is known, it is possible to express the concentration factor of the second phase. Finally, recognizing that \langle \sigma \rangle_r = L_r \langle \epsilon \rangle_r and \langle \epsilon \rangle_r = M_r \langle \sigma \rangle_r, the effective material properties can be expressed as

L = c_m L_m A_m + c_f L_f A_f,  (4.16)
M = c_m M_m B_m + c_f M_f B_f.  (4.17)

4.2 Mori-Tanaka method

Let us consider an isolated fiber embedded in an infinite matrix subjected to a far-field load σ⁰. It follows from the work of Eshelby [Eshelby, 1957] that the stress field inside the fiber is constant and can be uniquely determined from the values of L_m, L_f and Eshelby's tensor S. In the reformulation of the Mori-Tanaka method by Benveniste [Benveniste, 1987], the isolated inclusion is loaded by the average stress in the matrix. Then the stress inside the fiber is provided by

\langle \sigma \rangle_f = W_f \langle \sigma \rangle_m,  (4.18)

where the partial concentration factor W_f is given by

W_f = L_f \left[ I + S L_m^{-1} (L_f - L_m) \right]^{-1} L_m^{-1}.  (4.19)

The tensor S can be expressed via the polarization tensor P as P = S L_m^{-1}, which for cylindrical fibers assumes the form [Sejnoha, 1999]:

P_{22} = P_{33} = \frac{k_m + 4 m_m}{8 m_m (k_m + m_m)}, \qquad P_{23} = P_{32} = \frac{-k_m}{8 m_m (k_m + m_m)},
P_{44} = \frac{k_m + 2 m_m}{2 m_m (k_m + m_m)}, \qquad P_{55} = P_{66} = \frac{1}{2 p_m},  (4.20)

where the indexes of P correspond to the engineering notation (see e.g. [Bittnar and Šejnoha, 1992]). With the help of Eq. (4.13), the total stress concentration factors are obtained as

B_f = W_f (c_m I + c_f W_f)^{-1}, \qquad B_m = (c_m I + c_f W_f)^{-1}.  (4.21)

Therefore, using Eq. (4.17), the effective compliance tensor of the material reads

M = [c_f M_f W_f + c_m M_m] (c_m I + c_f W_f)^{-1}.  (4.22)

Finally, the effective stiffness tensor can be obtained as L = M⁻¹.

4.3 Finite element formulation

When employing the Finite Element Method, it is very advantageous to exploit the periodicity of the medium. Since the microstructure of the composite medium is periodic with period Ω (the periodic unit cell), the stress and strain fields as well as the material characteristics appearing in Eqs. (4.1–4.3) are periodic as well. Let us suppose that our unit cell is loaded by a prescribed value of \epsilon^0.

For the subsequent analysis it is convenient to split the field u(x) into a periodic part and a part corresponding to the overall strain, in the form

u(x) = \epsilon^0 \cdot x + u^*(x),  (4.23)

where the first term corresponds to the affine displacement due to the prescribed uniform strain \epsilon^0 [Michel et al., 1999] and the second term is the Ω-periodic part of u(x). Once u^*(x) is known, the strain and stress fields are given by

\epsilon(x) = \epsilon^0 + \epsilon^*(x),  (4.24)
\sigma(x) = L(x) : \epsilon(x),  (4.25)

where \epsilon^*(x) represents the fluctuating part of the strain field, which vanishes upon volume averaging.

To find u^*, the principle of minimum potential energy [Bittnar and Šejnoha, 1992] is usually employed. This yields

2 W(\epsilon(u(x))) = \langle \epsilon(u(x)) : L(x) : \epsilon(u(x)) \rangle \leq \langle \epsilon(v(x)) : L(x) : \epsilon(v(x)) \rangle,  (4.26)

where W(·) represents the potential energy of the system for a given strain field and v is any kinematically admissible displacement field of the form v(x) = \epsilon^0 \cdot x + v^*(x), where v^*(x) again corresponds to an Ω-periodic fluctuating part of the displacement field v.

From this point, the standard displacement-based Finite Element procedure can be employed [Bittnar and Šejnoha, 1992]. We start with the discretization of u,

u(x) = \epsilon^0 \cdot x + N(x)\, r,

where N(x) stands for the matrix of basis functions and r is the vector of unknown degrees of freedom. The strain field is then provided by

\epsilon(x) = \epsilon^0 + B(x)\, r.
Minimization of Eq. (4.26) with respect to r ?nally yields the system of equilibrium equations Kr = f , where K =


Ke fe

where where

Ke =

1 ? 1 ?


BT Le B dAe BT Le

f =

fe = ?


dAe ,


where K is the sti?ness matrix of the system, f is a vector of global nodal forces resulting from the loading by

e stands for number of elements, Ae is the area of element e and ?

is the area of periodic unit cell.


The solution of this system of equations determines u^* up to a rigid body motion, which must be restricted by imposing appropriate boundary conditions. Due to the periodicity of the microstructure, these supports can be provided by fixing the corners of the unit cell. The second problem is implementing the periodicity condition on r. A large number of approaches can be adopted to take this restriction into account (see [Michel et al., 1999] for details). In this work, this condition was simply fulfilled by assigning the same code numbers to the corresponding degrees of freedom of the nodes on opposite sides of the unit cell.

To obtain the effective stiffness tensor, the periodic unit cell is successively loaded by strain fields \epsilon^0 which have one component equal to 1 and the others equal to 0. Performing the preceding procedure, one may find the corresponding value of u^* and determine \langle \sigma(x) \rangle. This value then corresponds to one row of the effective stiffness tensor L. By this algorithm, all components of the effective stiffness tensor can be obtained.

4.4 Numerical results

The aforementioned procedure was applied to the equivalent unit cells resulting from the optimization procedure as well as to the original sample displayed in Fig. 2.6. Each of the micrographs was discretized using CST triangle elements; the corresponding FEM meshes displayed in Fig. 4.1 were created by the Triangle 1.3 program. Finally, Table 4.1 shows the considered material properties of the matrix and the fiber.

Phase     E_A [GPa]    E_T [GPa]    G_T [GPa]    ν_A
Fiber     386          7.6          2.6          0.41
Matrix    5.5          5.5          1.96         0.40

Table 4.1: Material properties of T30/Epoxy system


This program was obtained from





Figure 4.1: Example of Finite Element meshes: (a) hexagonal lattice, (b) 2-fiber PUC, (c) 5-fiber PUC, (d) 10-fiber PUC.

First, to prove the applicability of the proposed method, the elastic moduli derived for the original microstructure, Fig. 2.6, are compared with those found for the periodic unit cells displayed in Fig. 3.7. Selected components of the effective stiffness matrix L are stored in Table 4.2. Results obtained for the hexagonal arrangement of fibers and by the Mori-Tanaka method are provided for additional comparison. Evidently, the effective elastic properties obtained for the periodic unit cells, unlike those found for the hexagonal lattice and by the Mori-Tanaka method, which correspond to a transversally isotropic medium, are capable of capturing the slight anisotropy associated with the real microstructure. In addition, the results in Table 4.2 also promote the PUC consisting of only 5 fibers as the smallest one


Unit cell             L11      L22      L33      L44      c_f
Original              10.76    10.73    2.215    177.2    0.44
2-fiber PUC           10.78    10.75    2.202    177.2    0.44
5-fiber PUC           10.76    10.73    2.215    177.2    0.44
10-fiber PUC          10.76    10.73    2.215    177.2    0.44
Hexagonal array       10.74    10.74    2.213    177.3    0.44
Mori-Tanaka method    10.74    10.74    2.216    177.3    0.44

Table 4.2: Components of the effective stiffness matrix

we should consider for the evaluation of the effective elastic material properties. Recall that the smallest number of fibers providing a reasonable value of the objective function F in Chapter 3 was established to be 10.

To confirm our theoretical expectations, we investigated the influence of the proposed optimization technique on the effective moduli computed for 10-fiber PUCs derived from five independent optimization runs. As shown in Fig. 3.6, the periodic unit cell changes substantially during the optimization run; moreover, the resulting fiber configurations are not identical in different optimization runs. Nevertheless, the results stored in Table 4.3 show that

Modulus    Mean value [GPa]    Standard deviation [GPa]    Variation coefficient [%]
L11        10.76               0.013                       0.12
L22        10.73               0.013                       0.12
L33        2.215               0.003                       0.13

Table 4.3: Variation of the effective stiffness tensor for five ten-particle PUCs resulting from optimization


Modulus    Mean value [GPa]    Standard deviation [GPa]    Variation coefficient [%]
L11        10.73               0.32                        2.97
L22        10.71               0.38                        3.54
L33        2.210               0.07                        3.48

Table 4.4: Variation of the effective stiffness tensor for five randomly picked ten-particle PUCs

the final moduli are not sensitive to the particular fiber configuration (each optimization run provides a slightly different fiber arrangement having, however, the same material statistics up to the two-point probability function). Finally, to support the present approach, the mean value and standard deviation of the effective stiffnesses derived from five independent runs for unit cells defined through a random cut of the original micrograph, Fig. 2.6, were found. The dimensions of such a unit cell were selected so as to comply with the dimensions found for the PUC consisting of 10 particles. The results given in Table 4.4 show that the differences in the moduli are not severe; however, the variation coefficient is more than an order of magnitude higher than for the unit cells derived from the optimization procedure.

A simple and intuitive approach based on microstructural statistics was proposed to determine periodic unit cells. This procedure was applied to the analysis of a real composite material, represented here by a bundle of graphite fibers surrounded by an epoxy matrix.

In Chapter 2, two different approaches suitable for the description of a unidirectional fiber composite microstructure were introduced. For each approach, a detailed discussion of the microstructural characteristics up to the two-point level was presented, together with the methods for their determination. Then, the validity of the statistical isotropy and ergodicity assumptions for the composite material microstructure was confirmed by a series of tests. These results allowed us to use the second order intensity function as the basic statistical descriptor in the subsequent numerical computations.

Chapter 3 was concerned with the generation of a unit cell with the same statistics (up to the two-point level) as possessed by the real microstructure. First, the objective function describing the similarity of the original material and the material formed by the periodic unit cell was defined in terms of the second order intensity function. The optimal geometrical arrangement of fibers within the periodic unit cell was then determined by minimizing the proposed objective function. The optimization procedure itself was performed in two stages: the Augmented Simulated Annealing method with problem-suited operators was found to be the most efficient tool for the determination of the optimal fiber configuration, while the dimensions of the unit cell were found by the Golden Section search method. It is worth noting that the second order intensity function, although evaluated under the assumption of statistical isotropy, can capture the clustering character of the real microstructure.

The effective properties of the material formed by the periodic unit cell and of the original


material were obtained by the Finite Element Method in Chapter 4 and compared with the results provided by the Mori-Tanaka method and by the unit cell with a hexagonal arrangement of fibers. The results show that the corresponding periodic unit cells exhibit the slight anisotropy possessed by the original material. Further, the invariance of the effective properties of unit cells resulting from different optimization runs was verified. However, although the resulting effective properties are not far from those corresponding to the hexagonal packing of fibers, an appreciable difference may appear when the contrast between the phases (the ratio of the transverse Young's modulus of the harder phase to the transverse Young's modulus of the softer phase) increases.

As shown in Chapter 4, the replacement of the complex microstructure of the real material by the substantially simpler periodic unit cell is very convenient from the point of view of the numerical determination of the material behavior. This approach can be extremely useful when examining the inelastic response of the composite material, such as plasticity, debonding or damage, and may provide useful insight into the effective behavior of composite materials. Moreover, since the microstructural characteristics are known for a given composite medium, the results obtained by the numerical procedure can be compared with rigorous bounds and estimates available for the inelastic behavior of the material. These topics will be the subject of further investigation.

[Aboudi, 1991] Aboudi, J. (1991). Mechanics of composite materials: A unified micromechanical approach, volume 29 of Studies in Applied Mechanics. Elsevier.

[Axelsen, 1995] Axelsen, M. (1995). Quantitative description of the morphology and microdamage of composite materials. PhD thesis, Aalborg University.

[Baker, 1987] Baker, J. E. (1987). Reducing bias and inefficiency in the selection algorithm. In Grefenstette, J., editor, Proceedings of the First International Conference on Genetic Algorithms, pages 101–111. Lawrence Erlbaum Associates.

[Beasley et al., 1993a] Beasley, D., Bull, D., and Martin, R. (1993a). An overview of Genetic Algorithms: Part 1, Foundations. University Computing, 15(2):58–69.

[Beasley et al., 1993b] Beasley, D., Bull, D., and Martin, R. (1993b). An overview of Genetic Algorithms: Part 2, Research topics. University Computing, 15(4):17–69.

[Benveniste, 1987] Benveniste, Y. (1987). A new approach to the application of Mori-Tanaka theory in composite materials. Mechanics of Materials, 6:147–157.

[Beran, 1968] Beran, M. J. (1968). Statistical Continuum Theories. Monographs in Statistical Physics. Interscience Publishers.

[Berryman, 1984] Berryman, J. G. (1984). Measurement of spatial correlation functions using image processing techniques. Journal of Applied Physics, 57(7):2374–2384.


[Bittnar and Šejnoha, 1992] Bittnar, Z. and Šejnoha, J. (1992). Numerické metody mechaniky, volume 1. ČVUT.

[Boublík, 1996] Boublík, T. (1996). Statistická termodynamika. Academia.

[Burden and Faires, 1989] Burden, R. and Faires, J. D. (1989). Numerical Analysis. PWS-Kent Publishing Company, 4th edition.

[Cerny, 1985] Černý, V. (1985). Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Opt. Theory Appl., 45:41–51.

[Corson, 1974] Corson, P. B. (1974). Correlation functions for predicting properties of heterogeneous materials. I. Experimental measurement of spatial correlation functions in multiphase solids. Journal of Applied Physics, 45(7):3159–3164.

[Davis, 1989] Davis, L. (1989). Adapting operator probabilities in genetic algorithms. In Schaffer, J. D., editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 61–69. Morgan Kaufmann.

[Drugan and Willis, 1996] Drugan, W. and Willis, J. (1996). A micromechanics-based nonlocal constitutive equation and estimates of representative volume element size for elastic composites. Journal of the Mechanics and Physics of Solids, 44(4):497–524.

[Eshelby, 1957] Eshelby, J. D. (1957). The determination of the elastic field of an ellipsoidal inclusion and related problems. Proceedings of the Royal Society, Series A, 241:376–396.

[Gibson, 1994] Gibson, R. F. (1994). Principles of composite material mechanics. McGraw-Hill.

[Goldberg, 1989] Goldberg, D. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley.


[Hashin and Shtrikman, 1963] Hashin, Z. and Shtrikman, S. (1963). A variational approach to the theory of elastic behavior of multiphase materials. Journal of the Mechanics and Physics of Solids, 11:127–140.

[Hill, 1963] Hill, R. (1963). Elastic properties of reinforced solids: Some theoretical principles. Journal of the Mechanics and Physics of Solids, 11:357–372.

[Hill, 1964] Hill, R. (1964). Theory of mechanical properties of fibre-strengthened materials: I. Elastic behaviour. Journal of the Mechanics and Physics of Solids, 12:199–212.

[Hill, 1965] Hill, R. (1965). A self-consistent mechanics of composite materials. Journal of the Mechanics and Physics of Solids, 13:213–222.

[Holland, 1975] Holland, J. H. (1975). Adaptation in Natural and Artificial Systems. MIT Press.

[Houck et al., 1995] Houck, C. R., Joines, J. A., and Kay, M. G. (1995). A Genetic Algorithm for function optimization: A Matlab implementation. Technical Report NCSU-IE TR 95-09, North Carolina State University.

[Hudec, 1999] Hudec, B. (1999). Programovací techniky. ČVUT.

[Ingber, 1993] Ingber, L. (1993). Simulated annealing: Practice versus theory. Mathematical and Computer Modelling, 18(11):29–57.

[Ingber, 1995] Ingber, L. (1995). Adaptive simulated annealing (ASA): Lessons learned. Control and Cybernetics, 25(1):33–54.


[Kirkpatrick et al., 1983] Kirkpatrick, S., Gelatt, Jr., C., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220:671–680.

[Kohler and Papanicolaou, 1982] Kohler, W. and Papanicolaou, G. C. (1982). Bounds for the overall conductivity of random media. In Burridge, R., Childress, S., and Papanicolaou, G., editors, Macroscopic Properties of Disordered Media, volume 24 of Lecture Notes in Physics, pages 111–130. Courant Institute, Springer-Verlag, Berlin.

[Kvasnička, 1993] Kvasnička, V. (1993). Stochastické metody optimalizace funkcií N premenných. In Analýza dat, Pardubice.

[Kvasnička, 1999] Kvasnička, V. (1999). Evolučné algoritmy. In preparation.

[Mahfoud and Goldberg, 1992] Mahfoud, S. and Goldberg, D. E. (1992). A genetic algorithm for parallel simulated annealing. In Männer, R. and Manderick, B., editors, Parallel Problem Solving from Nature, volume 2, pages 301–310. North-Holland, Amsterdam.

[Markov, 1998] Markov, K. Z. (1998). On the cluster bounds for the effective properties of microcracked solids. Journal of the Mechanics and Physics of Solids, 46(2):357–388.

[Michalewicz, 1992] Michalewicz, Z. (1992). Genetic Algorithms + Data Structures = Evolution Programs. AI Series. Springer-Verlag, New York.

[Michalewicz et al., 1994] Michalewicz, Z., Logan, T. D., and Swaminathan, S. (1994). Evolutionary operators for continuous convex parameter spaces. In Sebald, A. and Fogel, L., editors, Proceedings of the 3rd Annual Conference on Evolutionary Programming, pages 84–97, River Edge, NJ. World Scientific Publishing.

[Michel et al., 1999] Michel, J., Moulinec, H., and Suquet, P. (1999). Effective properties of composite materials with periodic microstructure: A computational approach. Computer Methods in Applied Mechanics and Engineering, 172:109–143.

[Milton, 1982] Milton, G. W. (1982). Bounds on the elastic and transport properties of two-component composites. Journal of the Mechanics and Physics of Solids, 30(3):177–191.

[Mori and Tanaka, 1973] Mori, T. and Tanaka, K. (1973). Average stress in matrix and average elastic energy of elastic materials with misfitting inclusions. Acta Metallurgica, 21:571.

[Ponte Castañeda and Willis, 1995] Ponte Castañeda, P. and Willis, J. (1995). The effect of spatial distribution on the effective behavior of composite materials and cracked media. Journal of the Mechanics and Physics of Solids, 43(12):1919–1951.

[Povirk, 1995] Povirk, G. L. (1995). Incorporation of microstructural information into models of two-phase materials. Acta metall. mater., 43(8):3199–3206.

[Press et al., 1992] Press, W. H., Teukolsky, S. A., Vetterling, W. T., and Flannery, B. P. (1992). Numerical Recipes in C. Cambridge University Press, 2nd edition.

[Pyrz, 1994] Pyrz, R. (1994). Correlation of microstructure variability and local stress field in two-phase materials. Materials Science and Engineering, A177:253–259.

[Quintanilla and Torquato, 1997] Quintanilla, J. and Torquato, S. (1997). Microstructure functions for a model of statistically inhomogeneous random media. Physical Review E, 55(2):1558–1565.

[Rektorys, 1995a] Rektorys, K., editor (1995a). Přehled užité matematiky, volume 2. Prometheus, 6th edition.


[Rektorys, 1995b] Rektorys, K., editor (1995b). Přehled užité matematiky, volume 1. Prometheus, 6th edition.

[Reuss, 1929] Reuss, A. (1929). Berechnung der Fließgrenze von Mischkristallen auf Grund der Plastizitätsbedingung für Einkristalle. Z. Ang. Math. Mech., 9:49.

[Ripley, 1977] Ripley, B. (1977). Modeling of spatial patterns. Journal of the Royal Statistical Society, Series B (Methodological), 39(2):172–192.

[Šejnoha, 1999] Šejnoha, M. (1999). Micromechanical analysis of unidirectional fibrous composite plies and laminates, volume 3 of CTU Reports. Czech Technical University in Prague.

[Smith and Torquato, 1988] Smith, P. and Torquato, S. (1988). Computer simulation results for the two-point probability function of composite media. Journal of Computational Physics, 76:176–191.

[Storn, 1996] Storn, R. (1996). On the usage of differential evolution for function optimization. In NAFIPS 1996, pages 519–523. Berkeley.

[Storn and Price, 1995] Storn, R. and Price, K. (1995). Differential Evolution: A simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, International Computer Science Institute, Berkeley.

[Suquet, 1987] Suquet, P. (1987). Elements of homogenization for inelastic solid mechanics. In Sanchez-Palencia, E. and Zaoui, A., editors, Homogenization Techniques for Composite Media, volume 272 of Lecture Notes in Physics, pages 194–278. Springer-Verlag, Berlin.


[Teplý and Dvořák, 1988] Teplý, J. L. and Dvořák, G. J. (1988). Bounds on overall instantaneous properties of elastic-plastic composites. Journal of the Mechanics and Physics of Solids, 36(1):29–58.

[Torquato, 1991] Torquato, S. (1991). Random heterogeneous media: Microstructure and improved bounds on effective properties. Applied Mechanics Reviews, 44(2):37–76.

[Torquato and Stell, 1982] Torquato, S. and Stell, G. (1982). Microstructure of two-phase random media. I. The n-point probability functions. Journal of Chemical Physics, 77(4):2071–2077.

[Torquato and Stell, 1985] Torquato, S. and Stell, G. (1985). Microstructure of two-phase random media. V. The n-point matrix probability functions for impenetrable spheres. Journal of Chemical Physics, 82(2):980–987.

[Voigt, 1887] Voigt, W. (1887). Theoretische Studien über die Elastizitätsverhältnisse der Krystalle. Abh. Kgl. Ges. Wiss. Göttingen, Math. Kl., 34:47.

[Willis, 1977] Willis, J. R. (1977). Bounds and self-consistent estimates for the overall properties of anisotropic composites. Journal of the Mechanics and Physics of Solids, 25:185–202.

[Yeong and Torquato, 1998] Yeong, C. L. Y. and Torquato, S. (1998). Reconstructing random media. Physical Review E, 57(1):495–506.

[Yuret, 1994] Yuret, D. (1994). From genetic algorithms to efficient optimization. Technical Report 1569, Massachusetts Institute of Technology, Artificial Intelligence Laboratory.

The function M(r) appears in the relation for the two-point matrix probability function of particulate media (see Eq. (2.39)). For statistically isotropic media it takes the form
\[
M(r_{12}) = \int \int h(r_{34})\, m(r_{13})\, m(r_{24})\, \mathrm{d}\boldsymbol{r}_3\, \mathrm{d}\boldsymbol{r}_4. \tag{A.1}
\]
To evaluate this function, the two-dimensional Fourier transform of the individual members of the previous equation must be found. The two-dimensional Fourier transform is given by
\[
\widetilde{F}(\boldsymbol{t}) = \int F(\boldsymbol{x})\, e^{-i \boldsymbol{x}\cdot\boldsymbol{t}}\, \mathrm{d}\boldsymbol{x}, \tag{A.2}
\]
and the inverse Fourier transform by
\[
F(\boldsymbol{x}) = \frac{1}{4\pi^2} \int \widetilde{F}(\boldsymbol{t})\, e^{i \boldsymbol{x}\cdot\boldsymbol{t}}\, \mathrm{d}\boldsymbol{t}. \tag{A.3}
\]

Let us start with the evaluation of \(\widetilde{h}(\boldsymbol{t})\), given by
\[
\widetilde{h}(\boldsymbol{t}) = \int h(\boldsymbol{x})\, e^{-i \boldsymbol{t}\cdot\boldsymbol{x}}\, \mathrm{d}\boldsymbol{x}. \tag{A.4}
\]
Rewriting (A.4) in polar coordinates, with \(\boldsymbol{x} = (r, \varphi)\) and \(\boldsymbol{t} = (t, \theta)\), leads to
\[
\widetilde{h}(t, \theta)
= \int_0^{\infty} \int_0^{2\pi} h(r)\, e^{-irt(\cos\theta\cos\varphi + \sin\theta\sin\varphi)}\, r\, \mathrm{d}r\, \mathrm{d}\varphi
= \int_0^{\infty} \int_0^{2\pi} h(r)\, e^{-irt\cos(\theta-\varphi)}\, r\, \mathrm{d}r\, \mathrm{d}\varphi
= \int_0^{\infty} h(r)\, r \left[ \int_0^{2\pi} e^{-irt\sin\alpha}\, \mathrm{d}\alpha \right] \mathrm{d}r, \tag{A.5}
\]
where the substitution \(\alpha = \theta - \varphi + \pi/2\) was used in the last step; the shift is immaterial, since the integrand is \(2\pi\)-periodic in \(\varphi\). Noticing that the bracketed term can be rewritten as (see e.g. [Rektorys, 1995b, p. 663])
\[
\int_0^{2\pi} e^{-irt\sin\alpha}\, \mathrm{d}\alpha = 2\pi J_0(rt), \tag{A.6}
\]
where \(J_0\) is the Bessel function of the first kind and zeroth order, we arrive at the expression
\[
\widetilde{h}(t) = 2\pi \int_0^{\infty} h(r)\, J_0(rt)\, r\, \mathrm{d}r. \tag{A.7}
\]
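The identity (A.6) is easy to check numerically. The sketch below (plain SciPy quadrature, illustrative only and not part of the thesis computations) integrates the angular term directly and compares it with \(2\pi J_0(rt)\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def angular_integral(z):
    """Evaluate the real part of int_0^{2 pi} exp(-i z sin(a)) da.

    The imaginary part, -int_0^{2 pi} sin(z sin(a)) da, vanishes by
    antisymmetry of the integrand about a = pi.
    """
    value, _ = quad(lambda a: np.cos(z * np.sin(a)), 0.0, 2.0 * np.pi)
    return value

# Compare with 2*pi*J0(z) for a few sample arguments z = r*t
for z in (0.5, 1.7, 4.2):
    assert abs(angular_integral(z) - 2.0 * np.pi * j0(z)) < 1e-9
```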


Furthermore, integral (A.7) can be evaluated using the function K(r) instead of h(r). Recall how the function \(g_2(r)\) is derived from K(r) (Eq. (2.32)) and proceed in a similar fashion. Thus,
\[
\widetilde{h}(t)
= 2\pi \int_0^{\infty} h(r)\, r\, J_0(rt)\, \mathrm{d}r
= 2\pi \int_0^{\infty} \frac{1}{2\pi r}\frac{\mathrm{d}\overline{K}(r)}{\mathrm{d}r}\, r\, J_0(rt)\, \mathrm{d}r
= \int_0^{\infty} \frac{\mathrm{d}\overline{K}(r)}{\mathrm{d}r}\, J_0(rt)\, \mathrm{d}r
= t \int_0^{\infty} \overline{K}(r)\, J_1(rt)\, \mathrm{d}r + \Big[ J_0(rt)\, \overline{K}(r) \Big]_0^{\infty}, \tag{A.8}
\]
where the function \(\overline{K}(r) = K(r) - \pi r^2\) is a counterpart of the function h(r), \(J_1\) is the Bessel function of the first kind and first order, and the last step follows from integration by parts together with the identity \(\mathrm{d}J_0(z)/\mathrm{d}z = -J_1(z)\). The last term in Eq. (A.8) vanishes, because \(\overline{K}(0) = 0\), while for \(r \to \infty\), \(J_0(rt) \to 0\) and \(\overline{K}(r) \to \mathrm{const}\). The value of \(\widetilde{m}(t)\) is provided by
\[
\widetilde{m}(t) = 2\pi \int_0^{\infty} H(R - r)\, J_0(rt)\, r\, \mathrm{d}r = 2\pi \int_0^{R} J_0(rt)\, r\, \mathrm{d}r, \tag{A.9}
\]
where H is the Heaviside step function. Note that (see [Rektorys, 1995b, p. 664])
\[
\frac{\mathrm{d}\left(r J_1(r)\right)}{\mathrm{d}r} = J_1(r) + r\frac{\mathrm{d}J_1(r)}{\mathrm{d}r} = J_1(r) + \left( -J_1(r) + r J_0(r) \right) = r J_0(r), \tag{A.10}
\]
so integral (A.9) yields
\[
\widetilde{m}(t) = \left[ \frac{2\pi r\, J_1(rt)}{t} \right]_0^{R} = \frac{2\pi R\, J_1(Rt)}{t}. \tag{A.11}
\]
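The closed form (A.11) can likewise be verified by direct quadrature of (A.9). The following sketch (SciPy, illustrative only; the radius R below is an arbitrary test value, not a thesis parameter) compares both sides:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

R = 1.3  # arbitrary test radius of the circular inclusion

def m_tilde_numeric(t):
    """Right-hand side of (A.9): 2*pi * int_0^R J0(r t) r dr."""
    value, _ = quad(lambda r: j0(r * t) * r, 0.0, R)
    return 2.0 * np.pi * value

def m_tilde_closed(t):
    """Closed form (A.11): 2*pi*R*J1(R t)/t."""
    return 2.0 * np.pi * R * j1(R * t) / t

for t in (0.3, 1.0, 5.0):
    assert abs(m_tilde_numeric(t) - m_tilde_closed(t)) < 1e-8
```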


Using a similar procedure and employing relation (A.7), the inverse transform of the function \(\widetilde{M}(t)\) can be obtained as
\[
M(r) = \frac{1}{2\pi} \int_0^{\infty} \widetilde{M}(t)\, t\, J_0(rt)\, \mathrm{d}t. \tag{A.12}
\]
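As a consistency check of the radial forms (A.7) and (A.12) of the transform pair (A.2)–(A.3), one can push a smooth radial function through the forward transform and recover it with the inverse. The sketch below uses the Gaussian \(h(r) = e^{-r^2}\), whose two-dimensional transform is the known pair \(\pi e^{-t^2/4}\); the function names and sample points are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def forward(f, t):
    """Radial (Hankel-type) form of the 2D Fourier transform, cf. (A.7)."""
    value, _ = quad(lambda r: f(r) * j0(r * t) * r, 0.0, np.inf)
    return 2.0 * np.pi * value

def inverse(f_tilde, r):
    """Radial form of the inverse transform, cf. (A.12)."""
    value, _ = quad(lambda t: f_tilde(t) * t * j0(r * t), 0.0, np.inf)
    return value / (2.0 * np.pi)

h = lambda r: np.exp(-r ** 2)                       # smooth test function
h_tilde = lambda t: np.pi * np.exp(-t ** 2 / 4.0)   # its known transform

# The forward transform reproduces the closed form ...
for t in (0.0, 1.0, 3.0):
    assert abs(forward(h, t) - h_tilde(t)) < 1e-7
# ... and the inverse transform recovers h
for r in (0.0, 0.8, 1.5):
    assert abs(inverse(h_tilde, r) - h(r)) < 1e-7
```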



Parameter    Value    Description
Step_init    R        Initial step length
Step_init    R        Initial step length
Step_min     0.001    Minimum step length
—            10^-2    Step for numerical differentiation

Table B.1: Parameters of the deterministic algorithms (POWELL and DHC)

Parameter    Value    Description
N_pop        64       Size of population
b            2        Shape parameter for non-uniform mutation
C_mult       2.       Ratio of maximum/average scaled fitness
N_mut/crs    2        Ratio of mutation/crossover operators
N_heu_max    10       Maximum number of heuristic crossovers in one cycle

Table B.2: Parameter settings for GA I


Parameter    Value    Description
N_pop        64       Size of population
b            2.       Shape parameter for non-uniform mutation
C_mult       1.2      Ratio of maximum/average scaled fitness
N_heu_max    10       Maximum number of heuristic crossovers in one cycle
N_uni_mut    8        Number of uniform mutations per generation
N_bnd_mut    8        Number of boundary mutations per generation
N_nun_mut    8        Number of non-uniform mutations per generation
N_mnu_mut    8        Number of multi-non-uniform mutations per generation
N_smp_crs    4        Number of simple crossovers per generation
N_sar_crs    4        Number of simple arithmetic crossovers per generation
N_war_crs    4        Number of whole arithmetic crossovers per generation
N_heu_crs    4        Number of heuristic crossovers per generation

Table B.3: Parameter settings for GA II

Parameter    Value    Description
N_pop        8        Size of population
N_pop_det    4        Number of individuals used for DHC
N_pop_max    20       Number of populations before application of DHC
N_heu_max    10       Maximum number of heuristic crossovers in one cycle
N_smp_crs    1        Number of simple crossovers per generation
N_sar_crs    1        Number of simple arithmetic crossovers per generation
N_war_crs    1        Number of whole arithmetic crossovers per generation
N_heu_crs    1        Number of heuristic crossovers per generation

Table B.4: Parameter settings for HGA


Parameter    Value    Description
N_pop        64       Size of population
b            2.       Shape parameter for non-uniform mutation
C_mult       1.2      Ratio of maximum/average scaled fitness
N_heu_max    10       Maximum number of heuristic crossovers in one cycle
N_uni_mut    3        Number of uniform mutations per generation
N_bnd_mut    3        Number of boundary mutations per generation
N_nun_mut    3        Number of non-uniform mutations per generation
N_mnu_mut    3        Number of multi-non-uniform mutations per generation
N_smp_crs    1        Number of simple crossovers per generation
N_sar_crs    1        Number of simple arithmetic crossovers per generation
N_war_crs    1        Number of whole arithmetic crossovers per generation
N_heu_crs    1        Number of heuristic crossovers per generation

Table B.5: Parameter settings for GA III

Parameter    Value    Description
N_pop        200      Size of population
F            .6       Scaling factor
CR           .9       Crossover rate

Table B.6: Parameter settings for DE


Parameter       Value    Description
N_pop           64       Size of population
q               .08      Probability of selecting the best individual
b               2.       Shape parameter for non-uniform mutation
N_heu_max       10       Maximum number of heuristic crossovers in one cycle
p_uni_mut       .225     Probability of uniform mutation
p_bnd_mut       .225     Probability of boundary mutation
p_nun_mut       .225     Probability of non-uniform mutation
p_mnu_mut       .225     Probability of multi-non-uniform mutation
p_smp_crs       .025     Probability of simple crossover
p_sar_crs       .025     Probability of simple arithmetic crossover
p_war_crs       .025     Probability of whole arithmetic crossover
p_heu_crs       .025     Probability of heuristic crossover
T_max           .1       Ratio of T_max and average F of the starting population
T_min           ε/10     Minimum temperature
T_mult          .9       Ratio of cooling
N_success_max   500      Maximum number of accepted individuals
N_counter_max   5000     Maximum number of iterations for fixed T

Table B.7: Parameter settings for AUSA
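The temperature entries in Table B.7 (T_max, T_min, T_mult) describe a geometric cooling schedule, with each temperature level held until either N_success_max individuals are accepted or N_counter_max iterations elapse. A minimal sketch of the schedule itself follows; the generator below illustrates the parameter semantics and is not the thesis implementation:

```python
def cooling_schedule(t_max, t_min, t_mult):
    """Yield the successive temperatures T, T*t_mult, T*t_mult**2, ...
    of a geometric cooling schedule until T drops below t_min."""
    t = t_max
    while t > t_min:
        yield t
        t *= t_mult

# With T_max = 0.1 and T_mult = 0.9 (Table B.7) and an illustrative
# T_min = 1e-3, the schedule is 0.1, 0.09, 0.081, ...
temps = list(cooling_schedule(0.1, 1e-3, 0.9))
assert abs(temps[1] - 0.09) < 1e-12
assert temps[-1] > 1e-3
```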

Parameter    Value    Description
W            1500     Length of adaptation window
I            1000     Adaptation interval
M            10       Generations to be considered back
P            .9       Proportion of … to pass back

Table B.8: Adaptivity parameter settings


Parameter       Value    Description
N_pop           64       Size of population
q               .08      Probability of selecting the best individual
b               2.       Shape parameter for non-uniform mutation
N_heu_max       10       Maximum number of heuristic crossovers in one cycle
p_uni_mut       .3       Probability of uniform mutation
p_nun_mut       .3       Probability of non-uniform mutation
p_mnu_mut       .3       Probability of multi-non-uniform mutation
p_s2p_crs       .1       Probability of two-point crossover
T_max           .1       Ratio of T_max and average F of the starting population
T_min           ε/10     Minimum temperature
T_mult          .9       Ratio of cooling
N_success_max   500      Maximum number of accepted individuals
N_counter_max   5000     Maximum number of iterations for fixed T

Table B.9: Parameter settings for AUSA with problem-dependent operators


