Introduction
Validation is an essential step in any modelling process when the predictions from the model are to be used to inform decisions. It is defined as ‘the process of determining the degree to which the model is an accurate representation of corresponding physical experiments from the perspective of the intended uses of the model’. 1 There is an extensive literature on the subject of model validation, which has recently been reviewed by Sargent and Balci 2 and also by Roungas et al. 3 In the context of computational solid mechanics, the American Society of Mechanical Engineers (ASME) produced a guide for verification and validation in 2006, 4 which has recently been revised into a standard, 1 while the European standards organisation, the Comité Européen de Normalisation (CEN), published a workshop agreement on validation in 2014. 5 The former provides principles and comprehensive definitions while the latter describes a practical approach to making quantitative comparisons between fields of measurements and predictions. These efforts are largely based on observations in research laboratories, and there are relatively few reports of the application of validation processes in industrial environments. This is important because Barlas and Carpenter 6 observed that ‘operators’ perceive validity in terms of the relative usefulness of models, whereas ‘observers’, whose work tends to dominate the scientific literature, tend to consider validation as a formal concept of logic rather than a pragmatic issue. This topic was to some extent addressed by an inter-laboratory study, or round-robin, in 2016, 7 which evaluated the efficacy of the methodology in the CEN workshop agreement using three test-cases, one being the predicted and measured deformation of a reflector antenna structure for space applications.
The other test-cases were an I-beam with holes in its web that was subject to three-point bending and a rubber block that exhibited large deformation under the action of a wedge-shaped metal indenter. The inter-laboratory study identified three issues that needed further consideration: (i) a measure of the quality of the predictions; (ii) matching of regions of interest (ROIs) from the fields of predictions and measurements; and (iii) the importance of designing experiments for the specific purpose of performing a validation of a model. In the intervening years, progress has been made towards addressing these issues. Dvurecenska et al. 8 have addressed issue (i) by developing a probabilistic metric for use in the validation of computational models that allows a statement to be made about the probability of a field of predictions being representative of reality, based on the quality of the measured data defined by its relative uncertainty. Meanwhile, Christian et al. 9 have addressed issue (ii) by developing a generalised decomposition method for fields of data that permits ROIs with irregular shapes to be decomposed and compared. The third issue, the requirement to design validation experiments, reinforces the advice found in the ASME guidance 1,4 and followed from difficulties caused by a lack of metadata relating to measurement data from experiments conducted some time earlier and/or for other purposes. This imposition of a requirement to design experiments specifically for the purpose of validation could be seen as part of the ‘observer’ approach to validation rather than a pragmatic solution that would be attractive to ‘operators’, particularly when economic factors are taken into consideration.
Thus, to address this issue, a group of ‘operators’ and ‘observers’ developed an extension of the validation flowcharts found in the ASME and CEN documents to include consideration of the use of historical data from physical tests and computational models in a validation process. 10
Since efforts have been made to address the three issues highlighted in the inter-laboratory study on validation, it is timely to evaluate the enhanced process using an industrial case study. Hence, this paper describes the first industrial implementation and results of the enhanced validation process performed on a section of a cockpit of an aircraft during ground-based tests at the manufacturer’s site. A region of interest behind the cockpit windows was identified, and predictions from a finite element model were compared quantitatively with measurements made using digital image correlation while the full-scale fuselage nose section was subject to internal pressurisation loads.
Methods
Preliminaries
The revised flowchart describing the validation process produced by Hack et al. 10 was used and is shown in Figure 1. The sub-flowcharts describing the individual stages in the validation process are included in the Supplemental Figures S1a–S1f. A graphical user interface (GUI) was developed to support the implementation of the overall flowchart. 11 In this case study, no historical data were available; however, an existing test set-up could be utilised. The route taken through the flowchart is shown by the heavy black lines in Figure 1. A double-blind procedure was implemented with separate teams for modelling, for conducting the experiments and for performing the quantitative comparison. All communication between the modellers and experimenters was via the comparison team, and results were not shared until the final outcome. Hence, the simulation and measurement activities were blind to one another’s results and could not be influenced by them.

Validation flowchart based on Hack et al. 10 with the route implemented in this case study shown by the heavy black lines; sub-flowcharts describing the procedures within the coloured process boxes are provided in the Supplemental Figures S1a–S1f and a graphical user interface (GUI) is available to support the use of the flowchart. 11
A finite element model of the fuselage nose section for an aircraft was supplied by Airbus and is shown in Figure 2. For the purposes of this case study, it was decided to perform a validation process for the predictions of the out-of-plane displacements, shown in Figure 3, for a region of interest immediately behind and adjacent to the left side cockpit windows when the nose section was subject to a series of loads due to internal pressurisation. In other words, the requirement was to determine the degree to which the model is an accurate representation of the corresponding physical experiment from the perspective of the intended use of the model, that is, to predict the deformation of the fuselage nose section under pressurisation loads expected in flight. The decision about the acceptability of the model and its predictions is not included in the definition of validation and does not form part of this case study, although it is reviewed in the discussion section.

Finite element model of the fuselage nose section of an aircraft showing a three-dimensional view (top) and the side elevation (bottom) of the region of interest.

Predicted Cartesian components of displacement for the region of interest due to internal pressurisation.
Modelling and simulation
A non-linear finite element model (Figure 2) was solved and post-processed by Airbus using the Abaqus commercial finite element code; it comprised 75,528 nodes and 78,872 elements, most of which were three-dimensional shell elements. The fuselage is constructed from aluminium, and appropriate properties were used for the different alloys present. The predictions of the Cartesian components of displacement due to an internal pressure are shown in Figure 3, although, for commercial reasons, the value of the pressure is not given. Predictions of displacements for the region of interest were obtained for a series of values of internal pressure up to almost twice atmospheric pressure.
Physical testing
The structural response of the cockpit in the ROI was measured using a stereoscopic digital image correlation system (Q-400 DIC system, Dantec Dynamics GmbH) consisting of a pair of Baumer VCXG-51M cameras with a resolution of 2448 × 2048 pixels fitted with Schneider-Kreuznach 1.4/12-0906 lenses with a focal length of 12 mm. The geometry of the DIC system was such that it had a stereo angle of 28° with a base length of 1.19 m. The set-up is shown in Figure 4.
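For reference, the stereo angle and base length quoted above determine the camera-to-object distance of the rig under simple triangulation geometry. The small calculation below is illustrative only; it assumes a symmetric set-up in which the two cameras converge equally on the surface, which is not stated in the text.

```python
import math

def working_distance(stereo_angle_deg, base_length_m):
    """Estimate the camera-to-object distance of a symmetric stereoscopic
    DIC rig from its stereo angle and base length.

    Assumes the two cameras converge symmetrically on the measured
    surface, so each optical axis makes half the stereo angle with the
    line bisecting the base.
    """
    half_angle = math.radians(stereo_angle_deg / 2.0)
    # Half the base length and the working distance form a right triangle
    # whose angle at the object is half the stereo angle
    return (base_length_m / 2.0) / math.tan(half_angle)
```

With the values reported here (a 28° stereo angle and a 1.19 m base), this geometry implies a working distance of roughly 2.4 m.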

Experimental set-up showing region of interest (left) with part of the speckle pattern applied, the digital image correlation system (right) and the calibration target (centre).
The speckle pattern was manually painted on the lower half of the ROI using a fibre pen (Marking 2300 permanent marker, BIC), as shown in Figure 5, and consisted of black speckles on a white primer. The pattern in the upper half was impression-printed using a stamp (horse-grooming brush, Oster). A calibration was performed using a calibration board (Dantec Dynamics GmbH, BD-50.0 mm-09 × 09), also shown in Figures 4 and 5, which provided the intrinsic and extrinsic parameters for the DIC system, as well as a field of measurement deviations, shown in Figure 6, obtained using the industrial calibration procedure developed by Siebert et al. 12
The method utilises the calibration target, commonly used with DIC systems to evaluate their intrinsic and extrinsic parameters, by imaging the object of interest and, separately, the calibration target placed in front of the object, both before and after relative motion between the measurement system and the object. These four sets of stereoscopic image pairs allow a quantitative comparison of the calculated and measured shape and location, from which the field of measurement deviations and the minimum measurement uncertainty can be obtained.

Speckle pattern on region of interest with the calibration plate in foreground; the pattern was manually applied on the lower half using a pen and impression-printed on the upper half for convenience.

Field of measurement deviations for the region of interest on the fuselage obtained using the method described by Siebert et al. 12
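A field of deviations such as that in Figure 6 can be summarised with simple statistics to check for systematic error. The sketch below is illustrative only; the function name and the particular statistics chosen are assumptions, not part of the calibration procedure of Siebert et al. 12

```python
import numpy as np

def summarise_deviations(deviations):
    """Summary statistics for a field of measurement deviations.

    A mean close to zero relative to the spread suggests the deviations
    are randomly distributed with no systematic error, while the maximum
    absolute value indicates the worst-case local deviation.
    """
    d = np.asarray(deviations, dtype=float).ravel()
    return {
        "mean": float(np.mean(d)),               # ~0 implies no systematic bias
        "rms": float(np.sqrt(np.mean(d ** 2))),  # overall spread of deviations
        "max_abs": float(np.max(np.abs(d))),     # worst-case local deviation
    }
```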

Typical measured displacements (millimetres) in the out-of-plane direction for the region of interest.
Quantitative comparison
A common axes system was established for both the simulation and experiment by arranging for the

The initial comparisons of the predicted fields of
where

Graphical comparison of coefficients from feature vectors representing the predicted and measured fields of out-of-plane displacement.
In addition, the validation metric described by Dvurecenska et al. 8 was applied to the datasets and the probability that the predictions belong to the same population as the measurements, given an uncertainty in the measurement data,
where
And the validation metric is given by
where || is an indicator function that takes a value of 1 when

Validation metric (VM) from equation (4) as a function of pressure difference in the cockpit where VM is expressed as the probability of the predictions being from the same population as the measurements given the uncertainty in the measurements is 0.1 mm.
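The validation metric is described above as an indicator function applied across the feature-vector coefficients, yielding the probability that the predictions belong to the same population as the measurements. A minimal sketch of a metric of this general form is given below; it is an illustrative reading, not the published formulation of Dvurecenska et al., 8 and the weighting of coefficients by the magnitude of the measured values is an assumption.

```python
import numpy as np

def validation_metric(predicted, measured, u_meas):
    """Sketch of a probabilistic validation metric: the weighted fraction
    of feature-vector coefficients for which the prediction-measurement
    difference lies within the measurement uncertainty u_meas.

    The weighting by |measured| is an illustrative assumption, intended
    to make coefficients that dominate the deformation field count more.
    """
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    # Indicator: 1 when the difference is indistinguishable from zero
    # given the measurement uncertainty, 0 otherwise
    indicator = np.abs(predicted - measured) <= u_meas
    weights = np.abs(measured)
    return float(np.sum(weights * indicator) / np.sum(weights))
```

A value of 1.0 indicates that every coefficient of the predicted feature vector is indistinguishable from its measured counterpart given the measurement uncertainty, while lower values give the proportion, by weight, of coefficients for which model and experiment disagree.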
Discussion
The purpose of this case study was to demonstrate, at a large scale and in an industrial environment, the effectiveness of the solutions to the issues raised by the inter-laboratory study 7 conducted in 2016 using the CEN workshop agreement on the validation of computational solid mechanics models. 5 These issues were: (i) the need for a measure of the quality of predictions, which has been addressed by the validation metric developed by Dvurecenska et al. 8 and used to generate the plot in Figure 10; (ii) the requirement to reliably match regions of interest in the fields of measurements and predictions, which has been addressed by using QR decomposition to process irregularly-shaped regions of interest, as described by Christian et al. 9 and implemented in a specially-written programme, THEON; and (iii) the importance of designing experiments for the specific purpose of performing a validation process. This last issue appears to have arisen from a lack of information about the measurements derived from historical experiments, that is, experiments performed in the past that were not designed for the validation process currently being undertaken. This does not imply that all measurements from historical experiments are unsuitable for use in a validation process; however, it does suggest that a careful evaluation of the metadata relating to historical measurements needs to be undertaken before attempting to use such measurements in a validation process. Hack et al. 10 have proposed additions to the flowcharts found in the ASME guidance 1,4 and CEN workshop agreement, 5 which allow this evaluation to be conducted for both historical measurement data and historical simulation data. These evaluations were conducted as part of this case study and led to the conclusion that no suitable historical measurement data were available but that an existing test article was suitable for the experiments.
Similarly, it was concluded that additional simulations needed to be conducted using the model that was the subject of the validation process. The flowcharts in Figure 1 and the Supplemental Material permitted these evaluations to be conducted and recorded independently, via a graphical user interface (GUI), 11 by the experimenters and modellers, who shared their conclusions with the comparison team. The comparison team coordinated the execution of the subsequent measurement campaign and of the additional simulations to yield the results shown in Figures 3, 6 and 7. The division of the group conducting the validation process into three teams allowed a double-blind approach to be taken, in which the team responsible for modelling did not have access to the data from the physical experiments and vice versa. The essential information that both teams required was provided via the comparison team, which also received the results from both teams and conducted the processes that resulted in Figures 8 to 10. The flowcharts and associated GUI were a key enabler in this double-blind process, ensuring that there was no cross-contamination of results.
The measurement uncertainty is a key component of the metadata for the measurements and is needed to assess the significance of the differences between the measurements and predictions using either the graphical approach recommended in the CEN workshop agreement 5 and illustrated in Figure 9, or the validation metric developed by Dvurecenska et al. 8 Siebert et al. 12 have recently described a method for measuring the spatial variation of the error in full-field displacements obtained using stereoscopic digital image correlation in an industrial environment. The method uses stereoscopic image pairs of both the object of interest and, separately, the conventional calibration target, acquired before and after a relative movement between the DIC system and the object of interest. This approach captures all sources of error, including those arising from the instrumentation, the environment, the test article and its speckle pattern. The method was utilised in this case study and provided the map of deviations shown in Figure 6. The deviations appear to be randomly distributed, implying that there is no systematic error, although it is noticeable that the deviations tend to be larger in the top right of the region of interest, where the curvature of the fuselage results in the surface being at the greatest angle to a plane normal to the optical axis of the DIC system. However, the largest deviations are less than 10% of the smallest displacements of interest.
The region of interest to the rear of the cockpit window was an irregular shape that could not be readily decomposed by orthogonal decomposition using Chebyshev polynomials, as recommended in the CEN workshop agreement. 5 The difficulties arise because Chebyshev polynomials are based on rectangular arrays of data, which means that extrapolation or interpolation must be used to fill cut-outs and holes in the data, or the data needs to be warped into a rectangular array; both approaches introduce errors in the representation of the data by the resultant feature vector and require substantial expert intervention. 9 In this case study, QR factorisation was used as the core of the decomposition process via an algorithm implemented in a specially-written software package, THEON, which had been previously tested on complex shapes. 9 The accuracy of the decomposition was evaluated by reconstructing both the measurement and simulation data fields from their corresponding feature vectors, and the results are shown in Figure 8. The root mean square of the absolute difference between the original and reconstructed data is 0.02 and 0.045 mm for the predictions and measurements, respectively, which is of the same order of magnitude as the deviations in the measurements in Figure 6, as would be expected because the guidelines provided by the CEN workshop agreement were followed in selecting the number of coefficients used in the decomposition process.
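A QR-based decomposition of this general kind can be sketched as follows. This is a schematic illustration under assumed choices, a two-dimensional monomial basis and its degree, and is not the algorithm implemented in THEON.

```python
import numpy as np

def qr_feature_vector(x, y, data, degree=4):
    """Decompose a scattered field of data into a feature vector.

    A two-dimensional polynomial basis is evaluated only at the points
    inside the (possibly irregular) region of interest, so cut-outs and
    holes need no filling or warping; QR factorisation then yields an
    orthonormal basis on that point set, and the feature vector is the
    projection of the data onto it. The monomial basis and its degree
    are illustrative assumptions.
    """
    cols = [x ** i * y ** j
            for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.column_stack(cols)
    Q, _ = np.linalg.qr(A)   # orthonormal columns spanning the basis
    coeffs = Q.T @ data      # feature vector of coefficients
    return coeffs, Q

def reconstruct(coeffs, Q):
    """Rebuild the field from its feature vector to assess accuracy."""
    return Q @ coeffs
```

The accuracy of such a decomposition can then be assessed, as in Figure 8, by comparing the reconstructed field with the original data, for example via the root mean square of their absolute difference.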
The measurement uncertainty was taken to be 0.1 mm when evaluating the validation metric shown in Figure 10.
The case study represents a further step on the path from the research laboratory, as represented by the initial work on the methodology by Sebastian et al. 13 and the inter-laboratory study reported by Hack et al., 7 via the large-scale study of an industrial component in a laboratory described recently by Dvurecenska et al., 14 to a ground-test on an aircraft fuselage at the manufacturer’s site. This represents a progressive rise in the technology readiness level of this approach to the validation process for computational solid mechanics models and the resolution of a number of issues that had been identified during this transition from research laboratory to industry. Some issues remain to be addressed, such as a method for utilising the complete information set describing the field of deviations in Figure 6 for the measurement system; however, the next challenge is the routine deployment of the validation process by industry, which is likely to reveal a further set of issues that were not apparent during this case study executed by a team skilled in the process.
Conclusions
A validation process for computational solid mechanics models has been demonstrated on a ground-test of an aircraft fuselage at the manufacturer’s site for the first time. This represents a transition of the validation process from a tool for use in a research laboratory to its industrial use on large-scale structures. The demonstration provided a showcase for a number of advances in the validation process that had been stimulated by an inter-laboratory study conducted approximately 5 years ago and that have been reported separately in the intervening period. These advances include a multi-level flowchart for evaluating the suitability of historical data from measurement and simulation campaigns, a decomposition process capable of handling irregularly-shaped data fields without expert inputs, and a validation metric for quantifying the extent to which a predicted field of data represents a field of measurements. The results of the case study demonstrate the efficacy of the validation process enhanced by the combination of these recent advances and the practicality of its implementation in an industrial environment. The finite element model of the aircraft cockpit was found to be an excellent representation of the measurements made adjacent to and to the rear of the cockpit window over a range of pressure differences.
Supplemental Material
Supplemental material, sj-pdf-1-sdj-10.1177_03093247211059084, for ‘Validation of a structural model of an aircraft cockpit panel: An industrial case study’ by Eann A Patterson, Ioannis Diamantakos, Ksenija Dvurecenska, Richard J Greene, Erwin Hack, George Lampeas, Marek Lomnitz and Thorsten Siebert in The Journal of Strain Analysis for Engineering Design.