Introduction
Researchers and professionals widely accept that the banking sector is the most important component of any financial system (Georgantopoulos & Tsamis, 2013). It follows that the stability of the banking system underpins the stability of economic growth. The financial crisis of 2007–2008 caused the biggest economic disruption since the Great Depression (Adebambo et al., 2015), shaking the financial world with a wave of massive losses (Zaghdoudi, 2013, p. 537).
According to the Federal Deposit Insurance Corporation (FDIC), 140 U.S. banks failed in 2009, on the heels of the collapse or near-collapse of high-profile institutions such as Bear Stearns, Citigroup, Lehman Brothers, Merrill Lynch, and Wachovia. This widespread failure exposed the weaknesses of the banking system (Ayadurai & Eskandari, 2018). Because today's economy is highly interconnected, this failure could have been transmitted to other sectors. Evaluating the financial performance of banks is therefore essential, because banks were at the center of that crisis (M. E. Barth & Landsman, 2010). In the wake of the financial crisis, stakeholders have become increasingly concerned with their firms' financial performance; bankers recognize the need to formulate better strategies to drive performance but may struggle to determine priorities.
Although an unprecedented number of banks collapsed or were bailed out by governments during the crisis (Erkens et al., 2012), not all banks across the world performed equally poorly; some banks performed better during the crisis (Beltratti & Stulz, 2012). To explore this phenomenon, several papers, such as Beltratti and Stulz (2012), Serrano-Cinca et al. (2014), Adebambo et al. (2015), Cox et al. (2017), and Avkiran et al. (2018), investigated the impact of the financial crisis on banks' performance from a variety of perspectives. Thus far, however, these studies have paid insufficient attention to the role that the prioritization of managerial decisions and policy activities played in precipitating the financial crisis. Prioritization at the strategic and operational levels is usually the difference between success and failure. This study helps address this gap in the literature by applying a technique that helps bankers determine the priority factors that should receive their attention to prevent a repeat of the failure.
In light of this consideration, importance-performance analysis, which is often called importance-performance map analysis (IPMA) (Ringle & Sarstedt, 2016), has been found to be a useful technique to identify the areas of improvement that should be addressed by management activities (Martilla & James, 1977). This approach allows managers to improve their management strategies because it indicates the main factors that require an immediate response (improvement) (Wyród-Wróbel & Biesok, 2017, p. 123).
The decision to apply this technique was driven by three motivations in particular: (a) IPMA facilitates more rigorous management decision-making; (b) IPMA is a powerful tool that can assist managers to set better priorities and better allocate scarce resources; and (c) having guidelines for performance assessment is as valuable to a firm as it is to the individuals who invest in it (particularly during and following a financial crisis; Streukens et al., 2018).
To be clear, the purpose of this study is not to analyze the effect of financial crises on banks' performance. Rather, it introduces the IPMA and shows how to use it to clarify where management should focus its attention. A review of the business literature shows that many papers have used IPMA in banking, such as Joseph et al. (2005), Ramayah et al. (2014), and Samar et al. (2017). These papers relied on end-user satisfaction surveys to measure customers' perceptions. This article is the first to apply IPMA to secondary data collected from financial statements to assess the performance of banks. IPMA was applied to a unique data set of the 140 failed U.S. banks that closed in 2009, compared with the same number of nonfailed banks, over the period 2006 to 2008. U.S. banking data were used as a case study because the crisis started in the United States, where many large banks lost most of their equity (Beltratti & Stulz, 2012). Because U.S. banks play a major role in today's global economy, their performance contributed more heavily to the crisis than that of other financial firms around the world.
IPMA is an extension of partial least squares structural equation modeling 1 (PLS-SEM; Ringle & Sarstedt, 2016). The first requirement of the empirical work is to develop the PLS path model. 2 The factors chosen for this model are those most relevant to success and failure: profit and risk. Because financial ratios are generally good predictors of business failure (Maricica & Georgeta, 2012), 15 ratios per bank were selected to build the initial model. Following the rules of thumb for model evaluation with PLS-SEM (Hair et al., 2016), only six ratios were employed for each year. The model combines 15 indicators with four predecessor constructs (i.e., profitability of 2006, profitability of 2007, risk of 2006, and risk of 2007) and one final target construct (i.e., profitability of 2008). Profitability and risk of 2007 mediate the path from profitability and risk of 2006 to profitability of 2008. The importance and performance values of the predecessor constructs create the importance-performance map for profitability of 2008.
The results plotted in the IPMA indicated that failed banks were predisposed to decreasing financial performance in 2008 because of their poor performance in 2006 and 2007. In contrast, the profitability of 2006 and 2007 positioned nonfailed banks for increasing financial performance in 2008. Although the IPMA can be conducted at the indicator level, this article limits its analysis to the construct level. 3 Finally, this research is, to date, the first to apply the IPMA to secondary data collected from financial statements.
The rest of this article begins by presenting previous related work. This is followed by the research method where the procedures for predictive model assessment are outlined in clear steps. Findings and discussion follow along with a brief conclusion.
Literature Review
The use of PLS-SEM has received significant attention across several business disciplines (J. R. Barth et al., 2018). The technique is ubiquitous in marketing and management information systems, but it is still in an embryonic state in the banking literature because its advantages have yet to be discovered by the banking discipline (Avkiran, 2018, p. 1). Only a few papers have used PLS-SEM in banking. 4
To the best of my knowledge, the first application of partial least squares discriminant analysis (PLS-DA) to predict the 2008 U.S. banking crisis was by Serrano-Cinca and Gutiérrez-Nieto (2013). They compared PLS-DA with eight algorithms commonly used in bankruptcy prediction and found that PLS-DA results resemble those of linear discriminant analysis and support vector machines. A particular advantage of PLS-DA is that it is designed to deal with multicollinearity and is therefore unaffected by it. They concluded that the interpretability of the PLS-DA model was satisfactory.
The study by Serrano-Cinca et al. (2014) applied a path model based on structural equations and logistic regression to investigate the financial symptoms that precede bankruptcy. They used low profitability, insufficient revenue, and low solvency ratios as proxies for symptoms, whereas loan growth (some of it risky), specialization (real estate concentration), and the pursuit of a turnover-driven strategy that neglected margin served as proxies for the causes of those symptoms. They found that 5 years before the crisis, distressed banks, compared with solvent banks, had higher loan growth, higher concentration in real estate loans, higher risk ratios, and higher turnover, but lower margins. Failed banks also showed a significant relationship between the percentage of real estate loans and risk; in successful banks, this relationship was negative. Their findings confirmed that successful banks granted real estate loans that were both fewer in number and higher in quality. I see this study as a good example of causal analysis (i.e., symptoms and causes), but the measurement invariance of the composite model (MICOM)—a critical step in cross-group investigation—was not tested in that study. Without measurement invariance, a study is susceptible to potential misspecification bias. This article therefore differs from the Serrano-Cinca study in that measurement invariance is tested before running a multigroup analysis to compare the path coefficients between failed and nonfailed banks.
To explain the drivers of bank soundness in G7 countries from 2003 to 2013, Ayadurai and Eskandari (2018) developed a model with 17 indicators, using six constructs as direct causes and eight as indirect causes. They found that banks placed high importance on off-balance-sheet and capital activities, thus taking on higher risk.
By using PLS-SEM models, Avkiran et al. (2018) monitored the transmission of systematic risk from shadow banks to regular banks. The results of the predictive model indicated that a substantial degree of the variation in systematic risk in the regulated banks was explained by micro-level and macro-level linkages that can be traced to shadow banking.
In terms of obtaining forecasts for net charge-off rates for banks, J. R. Barth et al. (2018) used a PLS model to extract target-specific factors. They included more than 200 predictor variables, drawing on 250 quarterly macroeconomic series covering 1987:Q1 to 2016:Q4. The empirical results showed that PLS outperformed benchmark models. They concluded that their modeling approach would assist banks in determining which variables cause failures and in containing losses to manageable levels.
Although these papers presented the advantages of applying PLS compared with traditional statistical methods, they did not apply the IPMA in this context. The present work, therefore, supplements the literature by reviewing the application of IPMA as an extension of PLS-SEM and testing the MICOM.
Research Method
Path Model Estimation
The main goal of banks is to serve customers and earn a profit at the same time. To achieve this objective, banks must balance risk and return (J. R. Barth et al., 2018). Taking on high risk might cause failure, whereas earning consecutively low profits puts management under pressure from shareholders. Lending to customers is usually associated with riskier loan practices, which can be measured by risk ratios; if the customers fail to repay those loans, profitability will suffer. The initial model of this article is based on the following assumption: If the years preceding the crisis were highly profitable, a bank would not fail, because that profitability could be used to absorb future losses; in other words, it could handle the risk.
The research model, which is depicted in Figure 1, combines five latent variables—profitability of 2006 (Prof2006), risk of 2006 (Risk2006), profitability of 2007 (Prof2007), risk of 2007 (Risk2007), and profitability of 2008 (Prof2008)—as reflective constructs. For each reflective construct variable, three manifests (indicators) were assigned. Profitability and risk of 2007 mediate the path of profitability and risk of 2006 and profitability of 2008.
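As a reading aid, the path model in Figure 1 can be sketched as a simple adjacency structure. The construct and indicator names follow the article; the exact set of structural arrows is inferred from the text and should be checked against Figure 1, so treat this as an illustrative layout rather than the study's specification.

```python
# Hypothetical sketch of the Figure 1 path model; names follow the article,
# the dict layout and the exact arrow set are illustrative assumptions.

constructs = {
    "Prof2006": ["Nimy2006", "ROA2006", "ROE2006"],
    "Risk2006": ["Rbc1aaj2006", "Rbc1rwaj2006", "Rbcrwaj2006"],
    "Prof2007": ["Nimy2007", "ROA2007", "ROE2007"],
    "Risk2007": ["Rbc1aaj2007", "Rbc1rwaj2007", "Rbcrwaj2007"],
    "Prof2008": ["Nimy2008", "ROA2008", "ROE2008"],
}

# Structural paths (source -> targets): Prof2007 and Risk2007 mediate the
# effect of the 2006 constructs on the final target construct, Prof2008.
paths = {
    "Prof2006": ["Prof2007", "Risk2007", "Prof2008"],
    "Risk2006": ["Prof2007", "Risk2007", "Prof2008"],
    "Prof2007": ["Prof2008"],
    "Risk2007": ["Prof2008"],
}

n_indicators = sum(len(v) for v in constructs.values())
print(n_indicators)  # prints 15: three reflective indicators per construct
```

Each of the five reflective constructs carries three indicators, matching the 15-indicator count stated above.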

Direct and indirect (mediating) effects.
Figure 1 presents both direct and indirect (mediating) effects that can be responsible for a bank’s distress. The profitability and risk in a previous year would affect the profitability of the following year, and so on. The predictive model covered the 3 years prior to the banks’ closing in 2009.
As mentioned in the introduction, the IPMA focuses mainly on the key target constructs of interest in the PLS path model. Figure 1 shows that the profitability of 2008 is the final target construct, whereas the predecessor constructs are profitability of 2006, profitability of 2007, risk of 2006, and risk of 2007. Starting at the back end of Figure 1, banks' profitability in 2008 was positively or negatively influenced by the predecessor constructs. With PLS-SEM estimated in SmartPLS 3 software (Hair et al., 2016), the model results can be used to calculate the importance scores. The importance-performance values of the final target construct (i.e., profitability of 2008) were created from the importance-performance values of the predecessor constructs (i.e., profitability of 2006, profitability of 2007, risk of 2006, and risk of 2007).
Figure 1 presents several paths (i.e., H1, H2, H3, H4, H5, H6, H7, and other indirect paths). In keeping with general IPMA procedures, these path coefficients must be shown to be significant at a conventional confidence level before the required conditions for carrying out the IPMA can be considered established. 5
Data Collection and Analysis Process
According to the FDIC, 140 American banks failed in 2009 because of the financial crisis. To explore the reasons behind this failure and to contribute to the existing body of knowledge, this work analyzes the entire population of banks that failed in 2009. This article compares those failed banks (service type = 0) with an equal number of nonfailed banks.
The financial data of this study were collected from the FDIC and are available to the public. For each bank, 15 financial ratios were selected to build the model. Procedures were followed to evaluate the predictive model using PLS-SEM in SmartPLS 3 software (Hair et al., 2016). Accordingly, the financial ratios were filtered by eliminating those with a weak effect and retaining only the ratios with a strong effect in predicting failure. Following the rules of thumb for model evaluation with PLS-SEM, only six ratios, as defined in Table A1, were employed for each year. Net interest margin (Nimy), return on assets (ROA), and return on equity (ROE) are proxies for profitability, whereas the core capital (leverage) ratio (Rbc1aaj), Tier 1 risk-based capital ratio (Rbc1rwaj), and total risk-based capital ratio (Rbcrwaj) are proxies for the capital strength of the bank. To evaluate the univariate discriminatory power of each ratio, a test of mean differences was computed.
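The univariate mean test for a single ratio can be sketched as follows. This is a minimal illustration using Welch's t statistic on synthetic ROA values (the group means, spreads, and the choice of Welch's test are assumptions for demonstration, not the study's actual data or procedure).

```python
import numpy as np

# Illustrative univariate mean test (Welch's t) for one ratio across the two
# groups; the ROA values below are synthetic stand-ins for the FDIC data.
def welch_t(x, y):
    """Welch's t statistic for a mean comparison with unequal variances."""
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return (x.mean() - y.mean()) / np.sqrt(vx / len(x) + vy / len(y))

rng = np.random.default_rng(0)
roa_failed = rng.normal(-0.5, 1.0, 140)      # hypothetical ROA, failed banks
roa_nonfailed = rng.normal(1.0, 0.8, 140)    # hypothetical ROA, nonfailed banks

t = welch_t(roa_failed, roa_nonfailed)
print(t < -1.96)  # |t| > 1.96 -> significant difference at the 5% level
```

The same comparison would be repeated for each of the six ratios per year to assess univariate discriminatory power.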
Model Assessment Utilizing PLS-SEM
Before reporting the findings of this work, reliability and validity must be assessed as a prerequisite for accurate estimation in SEM (Nitzl, 2016). Although previous works offer various criteria for assessing partial model structures, this article follows the guidelines of Hair et al. (2016). To prevent a common mistake in model assessment, the reliability and validity criteria for reflective indicators should not be applied to formative indicators (L. Lee et al., 2011).
The measurement model of this study has five latent constructs, as depicted in Figure 1. Results 6 of reliability analyses using composite reliability show that all of these constructs (profitability and risk) obtained values above the 0.7 threshold (Chin et al., 2012). Table 1 shows that all indicator loadings are greater than 0.7 for both groups except for three indicators (Nimy2006 for failed banks, and Nimy2007 and Nimy2008 for nonfailed banks), whose loadings range from 0.5 to ≈0.7. In exploratory studies, loadings higher than 0.4 are acceptable (Avkiran et al., 2015; Hair et al., 2013, p. 6). This confirms that the internal consistency requirements were met.
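For readers unfamiliar with the composite reliability statistic referenced here, it can be computed directly from standardized outer loadings. The loadings below are hypothetical values for a single construct, not the study's estimates.

```python
# Composite reliability from standardized outer loadings:
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each indicator's error variance is 1 - loading^2.
loadings = [0.82, 0.91, 0.76]  # illustrative values for one construct

num = sum(loadings) ** 2
cr = num / (num + sum(1 - l**2 for l in loadings))
print(round(cr, 3))  # prints 0.871, above the 0.7 threshold
```

Any construct whose CR falls below 0.7 would warrant revisiting its indicators before interpreting the structural model.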
Evaluation Results of the Measurement Model.
To validate the measurement model, two types of validity were checked: convergent validity and discriminant validity. The average variance extracted (AVE) was used as a criterion of convergent validity, whereas the Fornell–Larcker criterion and cross-loadings were utilized as criteria of discriminant validity.
Convergent validity was established by requiring AVE values greater than 0.5 (Chin et al., 2012; Hair et al., 2016; Rasoolimanesh et al., 2017). Findings presented in Column 4 of Table 1 indicate that all AVE values, which range from 0.625 to 0.969, meet this requirement. The results also showed that the loadings of Nimy2006, Nimy2007, and Nimy2008 ranged from 0.4 to 0.7, validating the decision to retain those indicators in the model.
Finally, the article assesses discriminant validity by comparing the square root of the AVE for each latent variable with the correlations among the latent variables, following the Fornell–Larcker criterion. Table 2 shows that the square root of the AVE for each latent variable (in bold) exceeds the absolute values of its correlations with the other latent variables. Together with the cross-loading analysis, this indicates that all discriminant validity requirements are met.
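The AVE computation and the Fornell–Larcker comparison can be sketched in a few lines. The loadings and the between-construct correlation below are hypothetical numbers chosen only to show the mechanics.

```python
import numpy as np

# AVE per construct = mean of squared standardized loadings; the
# Fornell-Larcker check requires sqrt(AVE) to exceed the construct's
# correlations with every other construct. All numbers are illustrative.
loadings = {"Prof2007": [0.82, 0.91, 0.76], "Risk2007": [0.88, 0.95, 0.90]}
ave = {c: float(np.mean(np.square(l))) for c, l in loadings.items()}

corr = 0.45  # hypothetical correlation between the two constructs
fornell_larcker_ok = all(np.sqrt(a) > abs(corr) for a in ave.values())
print({c: round(a, 3) for c, a in ave.items()}, fornell_larcker_ok)
```

Both AVE values clear the 0.5 convergent-validity threshold, and both square roots exceed the 0.45 inter-construct correlation, so this toy pair would pass both checks.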
Discriminant Validity (Fornell–Larcker Criterion).
Although the Fornell–Larcker criterion and cross-loadings are widely used for evaluating discriminant validity, SmartPLS provides yet another reliable criterion called the heterotrait-monotrait ratio (HTMT). In using the HTMT criterion to evaluate discriminant validity, one must determine the HTMT value for all pairs of reflective constructs. If the value of HTMT is below .85 (the threshold value), then discriminant validity is established (Henseler et al., 2015, 2016); however, if the value of the HTMT is greater than this threshold, there is a lack of discriminant validity (Ngah et al., 2018). Table 3 shows that all HTMT values were lower than the threshold value.
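The HTMT ratio described above can be computed from the indicator correlation matrix: it is the mean of the between-construct (heterotrait-heteromethod) correlations divided by the geometric mean of the average within-construct (monotrait) correlations. The 6×6 correlation matrix below is fabricated for illustration, with three indicators per construct.

```python
import numpy as np

def htmt(R, na):
    """HTMT for two constructs: first na indicators belong to construct A,
    the rest to construct B. R is the indicator correlation matrix."""
    A, B = slice(0, na), slice(na, R.shape[0])
    hetero = R[A, B].mean()                      # between-construct block
    iu = np.triu_indices(na, k=1)
    mono_a = R[A, A][iu].mean()                  # within-construct corrs, A
    Rb = R[B, B]
    mono_b = Rb[np.triu_indices(Rb.shape[0], k=1)].mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical correlations: rows/cols are A1-A3 then B1-B3.
R = np.array([
    [1.00, 0.70, 0.75, 0.30, 0.30, 0.30],
    [0.70, 1.00, 0.72, 0.30, 0.30, 0.30],
    [0.75, 0.72, 1.00, 0.30, 0.30, 0.30],
    [0.30, 0.30, 0.30, 1.00, 0.80, 0.78],
    [0.30, 0.30, 0.30, 0.80, 1.00, 0.82],
    [0.30, 0.30, 0.30, 0.78, 0.82, 1.00],
])
value = htmt(R, 3)
print(round(value, 2))  # prints 0.39, well below the .85 threshold
```

In a full model, this calculation is repeated for every pair of reflective constructs, as in Table 3.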
Discriminant Validity (HTMT Criterion).
In addition to the validity evaluation, this work checks for multicollinearity using the variance inflation factor (VIF) and for prediction relevance (Q²).
Measurement of Interaction Through Effect Size
Unlike
To measure the magnitude of
Results of Invariance Measurement Testing Using Permutation.
Measurement Invariance and Multi-Group Analysis
Measurement invariance, also known as measurement equivalence, is a crucial step in cross-group investigation (Ruzzier et al., 2014). This technique enables researchers to identify whether the parameters of the structural model and measurement model are equivalent (i.e., invariant) across two or more groups (Chin et al., 2012). Measurement invariance is needed to ensure that observed differences are not attributable to measurement model differences across the groups (Kock, 2017). If invariance is not established, it is difficult to determine whether the observed differences are true differences (Chin et al., 2012, p. 1).
Before running a multigroup analysis (MGA) to compare the path coefficients between failed and nonfailed banks, it is important to employ invariance testing to avoid potential misspecification bias and misleading results. The MICOM test is required in PLS-SEM to be confident that group differences in the model “do not result from either distinctive content or meanings of the latent variables across groups or from the measurement scale” (Sarstedt et al., 2018, p. 209). The MICOM analysis consists of three steps (Henseler et al., 2016, p. 412): (a) configural invariance, the first and weakest level of measurement (Kim et al., 2013, p. 502); (b) compositional invariance; and (c) the equality of composite mean values and variances. If configural and compositional invariance are established, partial measurement invariance is confirmed, and researchers can then compare the path coefficient estimates between groups. In addition, if partial measurement invariance is established and the composites have equal mean values and variances (Step 3) “across the groups, full measurement invariance is confirmed, which supports the pooled data analysis” (Henseler et al., 2016, p. 413). Pooled data analysis examines whether a common core of relationships exists across the groups (Durvasula et al., 1993).
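The logic of the compositional invariance step (Step 2) can be sketched with a permutation test: form composite scores with each group's outer weights, compute their correlation c, and compare c against a permutation distribution obtained by reshuffling the group labels. Everything below is synthetic; the crude correlation-free weighting function is a stand-in for real PLS outer weights, so this shows only the mechanics, not the SmartPLS implementation.

```python
import numpy as np

# Simplified sketch of MICOM step 2 (compositional invariance).
rng = np.random.default_rng(1)
X = rng.normal(size=(280, 3))          # pooled indicator data (synthetic)
group = np.repeat([0, 1], 140)         # failed vs nonfailed labels

def weights(Xg):
    # crude stand-in for group-specific PLS outer weights
    w = np.abs(Xg).mean(axis=0)
    return w / np.linalg.norm(w)

def c_stat(X, g):
    """Correlation between composites built with each group's weights."""
    w0, w1 = weights(X[g == 0]), weights(X[g == 1])
    return np.corrcoef(X @ w0, X @ w1)[0, 1]

c = c_stat(X, group)
perm = [c_stat(X, rng.permutation(group)) for _ in range(200)]
# compositional invariance holds if c is not below the 5% permutation quantile
invariant = c >= np.quantile(perm, 0.05)
```

In SmartPLS, this comparison (with proper PLS weights and many more permutations) is what the MICOM permutation output in Table 5 reports.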
In accordance with the MICOM analysis, Table 5 shows that partial measurement invariance is established, which is a requirement to compare groups and test any significant differences.
Robustness Check
To ensure that validity measurement is established when using PLS-SEM, checking for unobserved heterogeneity is very important (Hair et al., 2018; Sarstedt et al., 2018; Schlagel & Sarstedt, 2016). Although the data of this study are obtained from the same population (the banking industry), an assumption of homogeneity is unrealistic because individuals are not homogeneous in their perceptions and evaluations of unobserved constructs (Ansari et al., 2000). This article applies the finite mixture partial least squares (FIMIX-PLS) method, which was proposed by Hahn et al. (2002), advanced by the extensive work of Ringle, Sarstedt, and Mooi (2010) and Ringle, Wende, and Will (2010), and implemented in the SmartPLS software (Sarstedt et al., 2011, p. 35). This approach combines a finite mixture procedure with an expectation-maximization (EM) algorithm (Loureiro & Miranda, 2011). SmartPLS 3 (Hair et al., 2018) was used to segment the sample based on the estimated latent variable scores. According to the FIMIX-PLS results presented in Table 6, the analysis considered two-, three-, and five-segment solutions. Applying the relevant assessment criteria of Hair et al. (2018), a two-segment solution was the appropriate choice. Because the optimal solution is the number of segments with the lowest information-criterion values (bold numbers) and the highest entropy (EN) values (Hair et al., 2018, p. 196), Segments 1 and 4 were eliminated. No single criterion indicates the same number of segments, but the EN values range between 0.683 and 0.831, above the 0.5 threshold (Hair et al., 2018, p. 197; Schlagel & Sarstedt, 2016, p. 640). Panel B of Table 6 indicates that Segment 5 has a very small segment size; to warrant valid analysis, Segment 5 was dropped. Panel C of Table 6 confirms that the
Model Selection.
IPMA
IPMA—also called importance-performance matrix, impact-performance map, and priority map analysis (Ringle & Sarstedt, 2016, p. 1866)—was first proposed by Martilla and James (1977). The IPMA approach examines not only the performance of an item but also the importance of that item. The objective of this analysis is to identify the (unstandardized) total effect, or importance, of a predecessor construct (e.g., profitability of 2006) in anticipating a specific target endogenous construct (e.g., profitability of 2008) (Hair et al., 2016, p. 276; Hair et al., 2018, p. 105). The total effect demonstrates a construct's importance, whereas the mean value of its latent variable scores (rescaled to range from 0, the lowest, to 100, the highest) reflects its performance (Höck et al., 2010, p. 201). The interpretation of IPMA is that a one-unit increase in the predecessor's performance (e.g., Prof2006) increases the performance of the target construct (Prof2008) by the size of the predecessor's unstandardized total effect (Hair et al., 2016, p. 278). The IPMA technique thus has two dimensions, importance and performance (Jaafar et al., 2016; Y.-C. Lee et al., 2008): the goal is to determine each predecessor construct's importance in terms of its total effect on the target endogenous construct's performance.
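The two IPMA ingredients can be illustrated with a short sketch: importance is the unstandardized total effect on the target construct, and performance is the mean latent variable score rescaled to 0–100. The total effects, the measurement range, and the scores below are all made-up numbers; in practice the rescaling uses the indicators' actual minimum and maximum scale values.

```python
import numpy as np

def performance(scores, lo, hi):
    """Rescale latent-variable scores from their measurement range [lo, hi]
    to 0..100 and return the mean (the IPMA performance value)."""
    return float(((scores - lo) / (hi - lo) * 100).mean())

# Hypothetical unstandardized total effects on Prof2008 (the importance side)
importance = {"Prof2006": -0.36, "Prof2007": 0.25,
              "Risk2006": 0.10, "Risk2007": 0.05}

rng = np.random.default_rng(2)
perf = {c: performance(rng.uniform(1, 7, 140), 1, 7) for c in importance}

for c in importance:
    # a one-unit gain in c's performance shifts the target's performance
    # by c's total effect
    print(c, importance[c], round(perf[c], 1))
```

Plotting each construct's (importance, performance) pair yields the map; the lower-right quadrant (high importance, low performance) is where managerial attention pays off most.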
Figure 2 shows that the final target construct—profitability of 2008—was affected both directly and indirectly by the profitability and risk of 2006 and 2007.

Importance-performance matrix and path model with results.
Starting at the back end of Figure 2, when we look at the final target construct (i.e., profitability of 2008), profitability of 2007 has a relatively high positive importance, with a path coefficient of 0.456, whereas profitability of 2006 has lower importance, with a path coefficient of 0.056. The path coefficients, depicted as arrows, demonstrate relative importance, whereas the performance values, depicted as circles, are the average latent variable scores on a scale of 0 to 100. Note that a score closer to 100 indicates a higher-performing latent variable (Hair et al., 2016).
The results of IPMA presented in Figure 2 are also depicted in Figure 3 (failed banks) and Figure 4 (nonfailed banks) and discussed in more depth later. A comprehensive understanding of how to read and use the results plotted in those figures can assist management in improving low performance by focusing on high importance (Höck et al., 2010).

Importance-performance map of the profitability of 2008 (failed banks).

Importance-performance map of the profitability of 2008 (nonfailed banks).
The IPMA approach must meet two requirements before any application: (a) all indicators must have the same orientation, and (b) the outer weights must not be negative (Hair et al., 2016, p. 208; Hair et al., 2018, p. 123; Ringle & Sarstedt, 2016, p. 1868). These requirements have been met, 9 as shown in Tables B2 and B3.
Results
Descriptive Statistics
Table 7 presents the descriptive statistics of our full sample, as well as of subsamples of financial performance. Results reported in Table 7 confirm that all indicators (ratios) are significantly different across the two groups (failed and nonfailed banks) except for one indicator: Nimy.
Summary Statistics of the Two Groups (Failed and Nonfailed) and the Whole Sample.
Descriptive analysis depicted in Table 7 confirms that average profitability was higher for nonfailed than for failed banks. Unlike distressed banks, nonfailed banks never had negative performance, which protected them from bankruptcy. Failed banks had lower average risk ratios than nonfailed banks. Hence, low profitability can be a symptom of failure.
Test of Significant Paths
As with any other analytical technique, IPMA needs to be driven by statistical power considerations (Streukens et al., 2018, p. 380). Primary data analysis confirms interesting findings in six significant paths related to the main objective of this article. The results are summarized in Table 8 for each subgroup. It is important to consider the sign of the path coefficient (positive or negative), as it can be the reason those banks went bankrupt (e.g., negative profit). The path coefficients were tested using 5,000 bootstrap subsamples with a one-tailed test (for a more detailed description of when to utilize one-tailed or two-tailed tests, see Kock, 2015).
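The bootstrap test of a path coefficient can be sketched as follows. An OLS slope on synthetic data stands in for the PLS path estimate, and the effect size, noise level, and sample size are assumptions for illustration only.

```python
import numpy as np

# Minimal bootstrap sketch (5,000 resamples) for one path's significance.
rng = np.random.default_rng(3)
n = 140
x = rng.normal(size=n)               # stand-in for Prof2007 scores
y = 0.5 * x + rng.normal(size=n)     # stand-in for Prof2008 scores

def slope(x, y):
    """OLS slope, standing in for the PLS path coefficient."""
    return np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)

boots = np.empty(5000)
for b in range(5000):
    idx = rng.integers(0, n, n)      # resample banks with replacement
    boots[b] = slope(x[idx], y[idx])

t = slope(x, y) / boots.std(ddof=1)  # bootstrap t-value
print(t > 1.645)                     # one-tailed test at the 5% level
```

The bootstrap standard deviation of the resampled estimates serves as the standard error in the t-value, mirroring how SmartPLS reports bootstrap significance.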
Results of Hypothesis Testing.
The study found that the profitability in a given year was significantly correlated to the profitability of a following year. In failed banks, it was found that the profitability of 2006 (Prof2006 → Prof2007, β = −0.233,
Furthermore, in both groups, there was a positive and significant effect of the profitability of 2007 on the profitability of 2008 (Prof2007 → Prof2008, β = 0.223,
In terms of risk effect, the current study indicated several significant paths. First, the path coefficient for Prof2007 → Risk2006 was significantly negative (β = −0.151,
The indirect (mediating) effect’s findings illustrated that one of the three mediating paths was significant. Findings illustrated in Table 8 show that the indirect path for Prof2006 → Prof2007 → Prof2008 was significantly negative for failed banks (β = −0.054,
To measure the difference between the two groups (failed and nonfailed banks), the study employed Henseler's MGA, the parametric test, and the Welch–Satterthwaite test. Findings illustrated in Table 9 reveal significant differences between failed and nonfailed banks in five paths. First, the significant difference for Prof2006 → Prof2007 (H1) was 1.161. This value is the absolute difference between the group-specific path coefficients, and it need not be lower than 1.0: for example, if the path coefficient is 0.5 for one group and −0.6 for the other, the difference between the two groups is 1.1 (Christian Ringle, personal communication, April 17, 2018). Second, the path coefficient for Prof2007 → Prof2008 (H3) differed between the two groups by 0.723, which was significant according to the
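The parametric group comparison can be sketched with the H3 numbers reported above. The path coefficient 0.223 and the 0.723 difference come from the text; the group standard errors (and hence the second group's coefficient of 0.946) are hypothetical values inserted only to show the arithmetic.

```python
import math

# Parametric MGA sketch for Prof2007 -> Prof2008 (H3). The SEs are made up;
# in practice they come from each group's bootstrap distribution.
b_failed, se_failed = 0.223, 0.06       # failed banks (SE hypothetical)
b_nonfailed, se_nonfailed = 0.946, 0.08 # nonfailed banks (both hypothetical)

diff = abs(b_failed - b_nonfailed)      # group difference in the path
t = diff / math.sqrt(se_failed**2 + se_nonfailed**2)  # Welch-style t
print(round(diff, 3), round(t, 2))      # prints 0.723 7.23
```

A t-value this large would be judged against a t distribution with Welch–Satterthwaite degrees of freedom; Henseler's MGA instead compares the bootstrap distributions directly.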
Results of Hypothesis Testing (MGA-PLS).
Results of IPMA
With bootstrapping (5,000 subsamples) confirming that the relevant path coefficients were statistically significant, and with the requirements for carrying out the IPMA met, the findings of the IPMA can finally be discussed. These results are plotted in Figure 3 (failed banks) and Figure 4 (nonfailed banks). In each importance-performance map, the analysis concentrates on the lower-right area, because items plotted in that area have high importance but low performance. Concentrating constructive action in this area will produce maximum results (Martilla & James, 1977, p. 78). For failed banks, the results confirmed that the profitability of 2007 and risk of 2007 had a particularly high impact on the profitability of 2008.
It is evident from Figure 3 that the profitability of 2007 had a large impact on the profitability of 2008 in failed banks, and thus represented a major opportunity for improvement that could have been addressed by bankers' activities. More precisely, the profitability of 2007 had a positive total effect on 2008: a one-unit increase in the performance of the profitability of 2007 increased the performance of the profitability of 2008 by 0.254, whereas a one-unit change in the risk of 2007 tended to increase the profitability of 2008 by 0.047. This indicated that there was indeed room to improve the performance of 2008. As seen in Figure 3, the profitability of 2006 had the strongest negative total effect on the profitability of 2008; this variable was very important in affecting the performance of 2008 because its total effect was significant. This study found that the total indirect effect of the profitability of 2006 on the performance of 2008, through 2007, was negative (–0.056). Both direct and indirect total effects confirmed that these banks' performance collapsed in 2008 because of their financial performance in 2006.
It is evident from Figure 4 that for nonfailed banks, the highest-performing construct was the profitability of 2007. This factor was confirmed to be a significant predictor of the profitability of 2008. Unlike for the failed banks, the IPMA showed that a one-unit increase in the performance of the profitability of 2007 increased the performance of the profitability of 2008 by 0.887.
Although Figure 4 shows that the profitability of 2006 had the second-highest importance, it was not a significant predictor of the performance of 2008.
Discussion and Managerial Implications
The findings suggest that there is a significant difference between failed and nonfailed banks regarding the effect of the financial crisis in 2007 and 2008. Research on banks’ performance during the crisis (Beltratti & Stulz, 2012) showed that banks were affected differentially because they had different balance sheets and profitability before the crisis. For failed banks, negative profitability measured by ROA and a fall in ROE appeared 3 years before failure. Bankers should have dealt with this trend as a symptom of failure. This result is consistent with Beltratti and Stulz (2012), who confirmed that bank profitability in 2006 played a more significant role in determining bank performance during the crisis than did other factors such as bank governance and bank regulation. This was not apparent in the case of nonfailed banks whose profitability was never negative. For nonfailed banks, increasing the income tended to increase the return, which was reflected in profitability ratios as a consequence of healthy growth. This result is in line with the accounting theory that assumes that firms can use their net income to absorb future losses.
Turning to performance and risk-taking, the risk taken in 2006 led to the poor performance of failed banks over the next 2 years, whereas the risk taken by nonfailed banks in 2006 showed no correlation with bank outcomes. Strikingly, the average risk-based capital measures (core capital (leverage) ratio, Tier 1 risk-based capital ratio, and total risk-based capital ratio) of failed banks were lower than those of nonfailed banks, and this thinner capital cushion was the cause of U.S. bank failures during the financial crisis. This finding agrees with Beltratti and Stulz (2012), who found that large banks with more Tier 1 capital and more deposit financing at the end of 2006 had significantly higher returns during the crisis. It is also consistent with Serrano-Cinca et al. (2014), who reported that nonfailed banks compensated for increases in risk by strengthening their core capital. Likewise, Cox et al. (2017) found that banks failed during the 2008–2010 financial crisis because they accepted more risk, specifically by carrying higher financial leverage, and Serrano-Cinca et al. (2014) showed that 5 years before the crisis, failed banks had higher loan growth, higher concentration in real estate loans, higher risk ratios, and higher turnover, but lower margins. It can be concluded that the risk taken a few years before the crisis had a significant effect on a bank's performance. It remains to be determined why the risk taken in 2007 was not significantly associated with profitability in 2008.
This article extends the application of PLS-SEM by using IPMA to determine priority factors, and should be useful to managers seeking to improve banking performance (especially managers of banks that failed). The predictive model developed for this study indicated that failed banks were predisposed to declining financial performance in 2008 because of poor performance in 2006. Conversely, the profitability of 2006 and 2007 positioned nonfailed banks for improving financial performance in 2008. For failed banks, financial performance in 2006 was the most important negative factor that bankers should have addressed to avoid failure. Failed banks should have taken advantage of the performance of 2007 by using their net income either to recover previous losses or to absorb future ones.
In particular, this study underscores that the most detrimental managerial decisions and policy activities of failed banks concerned high-risk loans. The risk taken in 2007 should have received significant attention because it had a positive total effect on the profitability of 2008 and thus represented an opportunity for failed banks to increase 2008 profitability. Unfortunately, bankers missed this opportunity to avert disaster before the crisis arrived; as a consequence, loans went unpaid, profitability fell, and banks closed. Applying IPMA allows companies to determine the best allocation of their resources. The comparison between failed and nonfailed banks can be interpreted with confidence because measurement invariance was tested to avoid potential misspecification bias.
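The prioritization logic behind IPMA can be sketched briefly: each predictor's importance is its total effect on the target construct, its performance is the mean of its construct scores rescaled to 0–100, and predictors with above-average importance but below-average performance form the priority quadrant for managerial attention. The factor names and all numbers below are hypothetical, chosen only to illustrate the mechanics.

```python
# Hedged sketch of importance-performance map analysis (IPMA) prioritization.
# Importance = total effect on the target construct (here, 2008 profitability);
# performance = mean construct score rescaled to 0-100.
# All values are hypothetical, for illustration only.

factors = {
    # name: (importance, performance)
    "profitability_2007": (0.60, 75.0),
    "risk_2007":          (0.45, 40.0),
    "profitability_2006": (0.10, 60.0),
}

mean_importance = sum(imp for imp, _ in factors.values()) / len(factors)
mean_performance = sum(perf for _, perf in factors.values()) / len(factors)

# Priority quadrant: above-average importance, below-average performance.
priorities = [
    name for name, (imp, perf) in factors.items()
    if imp > mean_importance and perf < mean_performance
]
print("priority factors:", priorities)
```

In this toy configuration the hypothetical risk factor is flagged as the priority: it matters strongly for the target yet currently performs poorly, which is exactly the combination the study argues failed banks' managers overlooked.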
In closing, managers who apply IPMA will obtain useful conceptual insights by overlaying importance and performance to prioritize their financial decision-making.
Conclusion
Overall, this research provides an overview of the application of IPMA as a useful technique for extending the analysis of PLS-SEM results. It develops a model to help researchers who wish to apply IPMA in banking and finance. The IPMA results indicated that failed banks were predisposed to declining financial performance in 2008 because of poor performance in 2006, whereas the profitability of 2006 and 2007 positioned nonfailed banks for improving financial performance in 2008. It must be borne in mind that this research covered only a small group of failed and nonfailed U.S. banks over a short period. Further research is thus needed to investigate the performance of large banks around the world during the crisis before generalized conclusions can be drawn.
